Sofia Brooke, who was with us as an intern for six months as part of the GOWales initiative, has written a blog post below with some reflections about her research whilst working with us.
Child Sexual Exploitation: Can a Computer Help Identify Victims? A Reflection Upon My Time As a Research Assistant
By Sofia Brooke
The digital era has brought new challenges to welfare, security, and our services. The increasing digitalisation of the systems used to organise important information has created challenges for society and shows how the poorest and most vulnerable can be discriminated against if systems are not monitored or challenged. An example of this is the use of an Automated Decision-making System in the benefits system to detect fraudulent claims by finding patterns in data and alerting investigators to them. Regardless of intent, this use of data systems puts a large number of vulnerable people under scrutiny. The issue is that such systems are built by people, and from data, that may carry ingrained prejudices, which can be unknowingly built into an algorithm and create discrimination. These Automated Decision-making and AI systems also affect people’s work and personal lives in major ways. Because of this, it is necessary to understand why and how certain systems are built, as well as what can be done to help them make sensitive decisions in a fairer way.
Drawing on the research knowledge I have gained during this work experience placement, and on the issues around data and automated decision-making raised above, I will assess the use of a data system in Bristol as a way of reflecting on my time as a Research Assistant at the Data Justice Lab.
Context
Bristol has created its own system, built from several different pieces of software, to keep track of families affected by poverty, drug use, and poor school attendance. The system can help the First Response team know where to direct families who want help from services in Bristol, and can help professionals know who to reach out to.
They have also consulted with Barnardo’s BASE project partners about people who are known victims of child sexual exploitation. The system takes “negative data”, such as records of domestic abuse and low school attendance, alongside the judgement of professionals who may identify people at risk in person, and produces a score suggesting whether a child is exhibiting the same characteristics as someone who has been sexually exploited. Data is shared with other services to help identify those who could be at risk. The data can also be used to identify schools with specific risk patterns – which sounds like a positive step in safeguarding children. This information can inform professionals and help them decide which families to visit.
This kind of system is similar to both Automated Decision-making Systems, which often study patterns in data to help professionals make decisions, and Predictive Analytics, which predicts outcomes or risks.
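To make the idea of such a score more concrete, here is a minimal sketch of how a rule-based risk score could, in principle, be assembled from “negative data” indicators and compared against a threshold. It is purely illustrative: the indicator names, weights, and cut-off below are invented for this example and do not describe Bristol’s actual model, which is not public.

```python
# Purely illustrative toy example -- NOT Bristol's actual model.
# It only shows the general shape of rule-based risk scoring: each
# "negative data" indicator contributes a weight, and the total is
# compared to a threshold that flags a case for professional review.

# Hypothetical indicators and weights, invented for illustration.
INDICATOR_WEIGHTS = {
    "domestic_abuse_record": 3,
    "low_school_attendance": 2,
    "professional_concern_raised": 4,
}

REVIEW_THRESHOLD = 5  # hypothetical cut-off for flagging a case


def risk_score(case_indicators: dict) -> int:
    """Sum the weights of the indicators present in a case record."""
    return sum(
        weight
        for indicator, weight in INDICATOR_WEIGHTS.items()
        if case_indicators.get(indicator, False)
    )


def flag_for_review(case_indicators: dict) -> bool:
    """Flag a case for professional follow-up if its score meets the threshold."""
    return risk_score(case_indicators) >= REVIEW_THRESHOLD


if __name__ == "__main__":
    case = {"domestic_abuse_record": True, "low_school_attendance": True}
    print(risk_score(case), flag_for_review(case))  # 5 True
```

Even in this toy form, it is easy to see how a case falling just below the threshold could be treated as lower priority – which is exactly the kind of concern discussed below.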
However, with child sexual exploitation being a personal human issue – is there a space or a need for the use of computer systems which are anything but personal and human? What are the issues that come up with using them?
What Do We Need to Consider?
The model was built to predict the risk of child sexual exploitation, and its key purpose is to help professionals know where to go. It relies on professionals’ judgement and on known risk factors in order to work effectively.
However, there is a risk that when an initial assessment is made, the score given may label some children experiencing abuse as not a priority case. Concerns have previously been raised by data scientists who believe professionals could disregard their own instincts if a computer displaying a score “says” there is little risk, even if they are aware that the system is only making ‘educated guesses’. There is an option to add context explaining why a score has been attributed, but this still seems like a risky way of attaching a subjective value to a human issue. Will this become a matter of training, in which anyone using the system has to learn to understand the score as a subjective attribute? Is it ethical to treat vulnerable people as numbers or scores? Can we judge human suffering in a way where we say who “deserves” help more than someone else?
Data in a system can also be skewed; incorrect arrest records are an example of this. Small details which might not seem important to some people in the grand scheme of things – someone giving a police officer a different name in the moment – might matter more than we think if we are trying to arrange data in a meaningful way, and could stop a system like Bristol’s from being as effective in safeguarding.
Another issue is that child exploitation is an inherently human issue that computers do not understand. A common assumption would be that people know more about human behaviour and its nuances. Are there things a professional might be able to understand, perhaps from a gut feeling or previous knowledge of these kinds of cases? Is it possible to apply context in real life and accurately judge who is at risk based on a combination of negative data, what a child discloses about their personal life, and their body language, which may be hard to translate into negative data?
Perhaps this is a kind of logic that a computer cannot access, although as a way of organising such data, computers are excellent.
Furthermore, money is an issue in creating and maintaining these systems. The system in Bristol is updated weekly, and if it is not as effective as it was six months ago, it will have to be rebuilt. Could this money and time be better spent pursuing more cases or improving quality of care? The main argument against this is that the system does help professionals become more organised, which is certainly a huge benefit when dealing with such sensitive issues. There is long-term potential for this system to be very useful, and to have a hugely positive impact on vulnerable people if it helps professionals tackle issues better and more quickly. However, it is not ideal to wait and see, because the system deals with such sensitive issues, which have very long-term consequences for victims and their families.
Related to issues of money, many children’s services are deemed inadequate and are underfunded. Is using AI a way of saving money? A lack of money means a lack of resources, and there may be situations where certain people are prioritised over others. The question is whether this means some people cannot access services as easily as others.
Lastly, are our services protecting children immediately from domestic violence? Witnessing domestic violence is a form of child abuse, and this data (in the form of reports) is collected and used in Bristol’s system. The question is whether this data really needs to be collected and organised, or whether more immediate action is needed instead to protect children. Are these instances immediately reported to the appropriate professionals?
Data Sharing
The information from Bristol’s system is shared across public services, with the exception of health. However, this is changing as many health professionals recognise that there are links between health and social issues.
One concern is that many children who are abused may not initially appear to have complex issues. Can these children be adequately identified as being at risk by a computer?
Furthermore, there is a concern as to whether the data is secure and being handled responsibly. Who is looking at this data? Is it ethical to hold so much data about the public in order to predict behaviour? It does seem like an invasion of privacy! Have people consented to their data being used in this way, and are they even aware that it is? These questions would need to be asked going forward, as the practice seems unethical otherwise.
Also, will the wrong people be identified, and what does this mean? In other areas of crime prevention, ethnic minorities have often been identified as targets in certain types of crime, and this has perpetuated institutional racism, especially as bias has often been built into AI systems. Will a system like Bristol’s ensure that ethnic minority groups are not unfairly targeted?
In the UK, families from an ethnic minority are more likely than White British families to live in deprived areas and in poverty, so it is a concern that poverty is identified as a risk factor in Bristol’s system. Not only does this have the potential to stigmatise all poorer families, who are already vulnerable, but there is also an added risk of the authorities stigmatising those from an ethnic minority, especially when there are already plenty of harmful stereotypes about people from ethnic minorities and their family environments.
There are also cultural barriers, and disabilities, that people supplying information to a system may not be aware of or fully understand, and this may stop AI systems from identifying victims of abuse. This would need to be handled sensitively, as it has the potential to negatively affect vulnerable people. Would such data need to be understood on a human level before it could be entered into a system?
Furthermore, there is a stigma against social work and child protective agencies in many areas of the UK, so there is a worry that incorrectly identifying victims could hinder the work of these essential organisations!
Conclusion
In conclusion, systems such as Bristol’s need time and money to be built. This is not ideal, as cases of child sexual exploitation and abuse are very sensitive human issues. However, the intention of organising sensitive information may help professionals know who needs help from different services, or who is at higher risk of certain issues.
The risk of using scoring systems is that we judge people’s traumas in an inhumane way, and we also have the potential to target already vulnerable people. The ethical issue is that we take human experience, turn it into data, and judge who is worthy of help. The data issue is that large amounts of data are stored, and the general public might not consent to their data being used if they were more aware of how it is being used. There would need to be ongoing consideration of ethics in maintaining systems like these.
Bristol’s system is an example of how data can be used to make decisions and predict information in a way that could also negatively affect and stigmatise both professionals and families. The use of this system shows how AI and digital technology are increasingly being used to manage services and government and to predict crime and risk, and it raises questions that can also be asked of other Predictive Analytics and Automated Decision-making systems in place.