How AI Is Being Used in Criminal Cases
You may have seen recent headlines about prosecutors and law enforcement using an AI tool called Cybercheck in criminal proceedings. Cybercheck uses machine learning to help prosecutors and law enforcement agencies build criminal cases.
The tool combs through vast amounts of web data, collecting “open source intelligence”: social media profiles, email addresses, and other publicly available information. Its primary aim is to pinpoint potential suspects' physical locations and to gather other data relevant to serious crimes such as homicides, human trafficking, and cold cases.
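Cybercheck's actual methodology has not been publicly disclosed (a point that, as discussed below, has become central to the legal fight over the tool). Purely as a hedged illustration of what automated open-source collection looks like in general, here is a minimal Python sketch that fetches a public web page and pattern-matches email addresses and social media handles. The URL, patterns, and function name are illustrative assumptions, not Cybercheck's method:

```python
import re
import requests

# Toy illustration of automated open-source collection.
# This is NOT Cybercheck's method, which has not been publicly disclosed;
# it only shows the general idea of scanning public pages for identifiers.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
HANDLE_RE = re.compile(r"(?:twitter|x|instagram)\.com/([A-Za-z0-9_.]+)")

def collect_public_identifiers(url: str) -> dict:
    """Fetch a public page and extract emails and social media handles."""
    page = requests.get(url, timeout=10)
    page.raise_for_status()
    text = page.text
    return {
        "emails": sorted(set(EMAIL_RE.findall(text))),
        "social_handles": sorted(set(HANDLE_RE.findall(text))),
    }

if __name__ == "__main__":
    # Hypothetical URL, for illustration only.
    print(collect_public_identifiers("https://example.com"))
```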
According to its creator, Adam Mosher, Cybercheck has an accuracy rate exceeding 90% and performs automated research that would otherwise take human analysts hundreds of hours. By last year, Cybercheck had been used in nearly 8,000 cases across 40 states, involving close to 300 different agencies.
Why Cybercheck Has Come Under Fire
Despite its touted benefits, Cybercheck has faced significant criticism from defense lawyers and legal experts. Two primary concerns are the tool's lack of transparency and its unverified accuracy claims. Defense attorneys argue that the methodology behind Cybercheck remains obscure and has yet to undergo independent verification, which raises questions about the reliability of the evidence the tool generates.
In a noteworthy New York case, a judge barred Cybercheck evidence, finding that prosecutors had failed to demonstrate the tool's reliability and its acceptance within the scientific community. Similarly, an Ohio judge blocked a Cybercheck analysis after Mosher declined to reveal the software's underlying methodology.
These instances underscore the growing skepticism and legal challenges surrounding the use of AI in criminal justice, highlighting the need for more rigorous standards and transparency in deploying such technologies.
Other Problems AI Presents in Legal Settings
While AI technologies like Cybercheck have transformed data analysis, they also introduce a range of complications, especially in legal settings. These problems include deep fakes, false or misleading circumstantial evidence, and AI hallucinations.
Deep fakes are highly realistic, artificially generated images, audio, or video that can make someone appear to say or do something they never did. The proliferation of this technology poses a significant threat to the courts: fabricated evidence, presented knowingly or otherwise, could unjustly sway juries, damage reputations, and lead to the conviction of innocent people.
False or biased evidence generated by AI is another concern. Machine learning algorithms, particularly those used for pattern recognition and predictive analytics, are susceptible to biases present in their training data. These biases can produce prejudiced outcomes, such as disproportionately flagging certain racial or ethnic groups, further exacerbating existing inequalities within the criminal justice system.
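To make that mechanism concrete, here is a hedged toy example, using entirely synthetic data and a deliberately simple model, of how a system trained on biased historical labels reproduces that bias. None of the numbers or group names refer to any real dataset or tool:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (synthetic)
risk = rng.normal(size=n)          # true risk: identical for both groups

# Historical labels reflect biased enforcement: group B was flagged at a
# much lower bar (threshold 0.0) than group A (threshold 1.0).
labels = (risk > np.where(group == 1, 0.0, 1.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, risk]), labels)
preds = model.predict(np.column_stack([group, risk]))

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: {preds[group == g].mean():.1%} predicted 'high risk'")
# Despite identical true risk, the model flags group B roughly three times
# as often, because it faithfully reproduces the bias in its training labels.
```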
Circumstantial evidence suggests a fact indirectly, through inference, rather than providing direct proof. AI-generated circumstantial evidence can be especially difficult for juries, and even legal professionals, to interpret accurately. This complexity can lead to misunderstandings or misinterpretations that overshadow direct evidence and produce unjust, discriminatory outcomes.
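A short, hypothetical back-of-the-envelope calculation shows how such an inference can mislead (the classic "prosecutor's fallacy"). Suppose an AI tool reports a "match" that occurs in only 1% of innocent people; in a pool of 10,000 potential suspects containing one true perpetrator, that match alone still points at roughly 100 innocent people. All figures below are invented for illustration:

```python
# Toy base-rate calculation (the "prosecutor's fallacy"), made-up numbers.
pool = 10_000            # hypothetical pool of potential suspects
guilty = 1               # one true perpetrator, assumed to always match
false_match_rate = 0.01  # the AI "match" occurs in 1% of innocent people

innocent_matches = (pool - guilty) * false_match_rate   # ~100 people
p_guilty_given_match = guilty / (guilty + innocent_matches)
print(f"Innocent people matched: {innocent_matches:.0f}")
print(f"P(guilty | AI match): {p_guilty_given_match:.2%}")  # about 1%
```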
AI hallucinations, where a model confidently generates false or fabricated output, are yet another issue. Hallucinations can lead to erroneous information being presented as fact in investigations and courtroom settings, causing misunderstandings and potentially wrongful convictions. The lack of a robust mechanism for verifying AI-generated evidence amplifies these risks, necessitating stringent validation processes and oversight.
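No standard verification mechanism exists today. As a minimal sketch of what one basic safeguard could look like, assuming a workflow in which every AI-generated claim must cite a source document, the toy check below flags any claim whose cited source does not actually contain it. The function, data, and substring-matching rule are all hypothetical simplifications, not a forensic-grade solution:

```python
# Minimal sketch of a verification gate for AI-generated findings.
# Entirely hypothetical: real validation would need forensic-grade provenance.

def verify_claims(claims: list[dict], sources: dict[str, str]) -> list[dict]:
    """Flag any AI claim whose cited source doesn't actually contain it."""
    results = []
    for claim in claims:
        src_text = sources.get(claim.get("source_id", ""), "")
        supported = claim["text"].lower() in src_text.lower()
        results.append({**claim, "supported": supported})
    return results

sources = {"doc1": "Account @user123 posted from the Elm St area on May 2."}
claims = [
    {"text": "posted from the Elm St area on May 2", "source_id": "doc1"},
    {"text": "the suspect lives on Elm St", "source_id": "doc1"},  # hallucinated
]
for r in verify_claims(claims, sources):
    print(r["text"], "->", "supported" if r["supported"] else "UNSUPPORTED")
```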
The Risks of AI Usage in Criminal Cases Are Great
Collectively, these challenges emphasize the critical need for comprehensive guidelines, transparency, and ethical standards when integrating AI into the legal framework. Without proper safeguards, deploying AI tools in criminal justice risks undermining the core principles of fairness and justice.
If you are worried about how the use of AI in Massachusetts courts may impact your criminal case, contact our firm. We are here to provide our clients with the guidance and legal support they need during these difficult times.