Ethical Considerations for Using AI in Child Welfare: What You Need to Know

As AI becomes part of our child welfare systems, it will change how we think about our work. It is important, therefore, that we keep the fundamental ethics of that work in view as we strive to achieve better outcomes for children and families. In this post, we discuss three central ethical concepts to keep in mind as you continue the conversations about AI technology within your teams, departments, and agencies.

 

  1. Accuracy and Reliability of Data

Regularly review processes that impact data quality.

The value of AI systems depends entirely on the data on which they are trained. Models can be “poisoned” by bad data, which can lead to decreased accuracy, biases in the distribution of services, and an overall erosion of trust in these models. Agencies should actively audit all of the systems that supply training data, which may include the case management systems, billing systems, and decision tools your agency uses. Third-party experts can be helpful in the auditing process to ensure objectivity and fairness. Be especially careful with historical data, which may have been generated under different procedures and in a different context than the data your systems produce today.
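To make this kind of audit concrete, here is a minimal sketch in Python of an automated data-quality check that could run against records exported from a case management system. The `CaseRecord` fields and the specific rules are hypothetical and would need to match your agency's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record shape exported from a case management system.
@dataclass
class CaseRecord:
    case_id: str
    opened: Optional[date]  # date the case was opened, if recorded
    county: str

def audit_records(records):
    """Flag obvious data-quality problems before records reach model training.

    Returns a list of (case_id, issue) pairs; an empty list means the
    batch passed these (illustrative) checks.
    """
    issues = []
    for r in records:
        if r.opened is None:
            issues.append((r.case_id, "missing open date"))
        elif r.opened > date.today():
            issues.append((r.case_id, "open date in the future"))
        if not r.county.strip():
            issues.append((r.case_id, "missing county"))
    return issues
```

Quarantining the flagged records, and tracking how often each rule fires over time, gives auditors a simple signal that upstream data-entry procedures have changed.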

 

  2. Privacy and Data Security

Be intentional and deliberate about what information is shared.

Regularly review and update your agency’s security protocols to address emerging threats and ensure that PII and PHI remain secure. This data often includes health records, abuse allegations, family histories, and other confidential details that, if exposed, could lead to severe emotional, psychological, or physical harm to the individuals involved. Understand, at a granular level, which data your AI systems can access as they create outputs and kick off other processes. Protocols such as role-based access for employees and contractors are critical, but the best results come when every individual who shares data understands how doing so may affect your AI systems and what second-order effects may follow.
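As a sketch of what role-based access can look like at the data level, the Python snippet below filters a record down to only the fields a given role is allowed to pass to an AI system. The role names and fields here are hypothetical; a real policy would live in your identity and access-management configuration, not in application code.

```python
# Hypothetical mapping from staff roles to the record fields each role
# may share with an AI system.
ROLE_FIELDS = {
    "caseworker": {"name", "family_history", "health_records"},
    "billing": {"name", "service_codes"},
    "analyst": {"service_codes"},  # de-identified analytics only
}

def visible_fields(role: str, record: dict) -> dict:
    """Return only the fields of `record` that this role may share."""
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}
```

Defaulting unknown roles to an empty set, rather than to full access, is the key design choice: anyone not explicitly granted a field cannot feed it to an AI system.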

 

  3. Humans and Machines

Use AI to assist, not replace, human decision-making.

AI systems would not exist without the ingenuity of human beings. They excel at processing vast amounts of data quickly, identifying patterns, and making predictions that would be impossible for a human to accomplish alone; they are effective at scaling impact. Humans, on the other hand, bring intuition, creativity, and ethical reasoning, and they are effective at handling the nuance that comes with human interactions and relationships. By better understanding the strengths and weaknesses of AI systems, we can better understand where humans fit into the overall processes that drive the outcomes our agencies achieve. Training ourselves to work with these systems will improve them and free us to give attention to the cases where machines cannot provide the necessary interventions.

 

By keeping these high-level considerations in mind as we create and modify our AI systems, we can positively impact their effectiveness. Finding the right balance of caution and innovation is essential for any agency that wants to keep improving its care delivery, and it requires the ability to monitor the details of what is happening while keeping the overall ethical considerations in view. If we approach it responsibly, AI can help us deliver better outcomes for children, families, and the agencies that serve them.