Child protection services to stop using an algorithm trained to detect child abuse

At Dijon University Hospital, researchers are currently testing an algorithm to detect child abuse by identifying diseases and injuries when very young children are admitted to hospital. In the United States, screening tools are already used by child protection services in many states, but they have been shown to be harmful: trained on data such as mental health records, drug addiction, and prison stays, they disproportionately target Black families. Despite its conviction that artificial intelligence can help, the state of Oregon has just announced that it is abandoning the algorithm it currently uses to decide whether a family investigation is necessary.

When child abuse or neglect is reported, social workers are required to conduct an investigation to preserve that child’s life.
In the United States, as child protection agencies use or consider implementing algorithms, an Associated Press (AP) investigation has highlighted issues related to transparency, reliability, and racial disparities in the use of AI, including its ability to reinforce bias in the child protection system.

Algorithm from Allegheny County, Pennsylvania

The algorithm currently used in Oregon was inspired by the Allegheny County algorithm, which was studied by a team of researchers from Carnegie Mellon University. The Allegheny algorithm flagged a disproportionate number of Black children for a “mandatory” neglect investigation compared to white children. The independent researchers also found that social workers disagreed with about a third of the risk scores produced by the algorithm.

The algorithm was trained to predict the risk that a child will be placed in foster care within two years of the investigation, using detailed personal data drawn from birth, health insurance, substance use, mental health, jail, and probation records, among other government data sets. It then calculates a risk score from 1 to 20: the higher the number, the higher the risk. Neglect, which this algorithm was trained on, can cover many criteria ranging from inadequate housing to poor hygiene. Yet similar tools can be deployed in other child protection systems with minimal or no human intervention, in much the same way that algorithms have been used to make decisions in the US criminal justice system, and they can thus reinforce existing racial disparities in the child protection system.
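To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a screening score can be produced: a classifier is trained on historical administrative records to predict the probability of foster-care placement within two years, and that probability is mapped onto a 1-to-20 score. The feature names, synthetic data, and binning below are illustrative assumptions, not the actual Allegheny or Oregon model.

```python
# Hypothetical sketch of a child-welfare screening score: NOT the actual
# Allegheny/Oregon model. Features, data, and binning are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the kinds of administrative records described
# in the article (health insurance, substance use, mental health, jail,
# probation), encoded as simple 0/1 flags.
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),   # public_insurance_flag
    rng.integers(0, 2, n),   # substance_use_record
    rng.integers(0, 2, n),   # mental_health_record
    rng.integers(0, 2, n),   # jail_stay_record
    rng.integers(0, 2, n),   # probation_record
])
# Synthetic label: foster-care placement within two years of the referral.
y = rng.integers(0, 2, n)

# Train a simple classifier on the historical records.
model = LogisticRegression().fit(X, y)

def screening_score(features: np.ndarray) -> int:
    """Map the predicted placement probability onto a 1-20 score
    (higher number = higher assessed risk), as the article describes."""
    prob = model.predict_proba(features.reshape(1, -1))[0, 1]
    return int(np.clip(np.ceil(prob * 20), 1, 20))

example_family = np.array([1, 0, 1, 0, 0])
print("Risk score:", screening_score(example_family))
```

Because both the label and the features come entirely from past agency and government records, any disparity in who appears in those records is carried straight into the score, which is the feedback loop critics describe.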

A member of the research team said:

“If the tool had acted on its own to screen in a similar rate of calls, it would have recommended that two-thirds of Black children be investigated, compared with about half of all other children reported.”

Oregon abandons the algorithm

A few weeks after those findings, the Oregon Department of Social Services announced to its employees via email last May that, after a “comprehensive analysis,” the agency’s hotline workers would stop using the algorithm in late June, in order to reduce disparities among families investigated for child abuse and neglect by child protection services.
Lacey Andersen, director of the agency, said:

“We are committed to continuous improvement in quality and equity.”

Oregon Democratic Senator Ron Wyden says he is concerned about the increasing use of AI tools in child protection services.
He said in a statement:

“Making decisions about what should happen to children and families is far too important a task to leave to untested algorithms. I am glad that the Oregon Department of Social Services is taking the concerns I raised about racial bias seriously and is suspending the use of its screening tool.”
