Trust in Robo-Lawyers

"Surely anyone who has ever been elected to public office understands that one commodity above all others, namely the trust and confidence of the people, is fundamental in maintaining a free and open political system." – Hubert H. Humphrey

Recently, robo-lawyers have been making headlines, with DoNotPay’s AI lawyer reportedly set to be the first in history to assist a defendant in court (https://www.bbc.co.uk/news/business-58158820).

However, the rise of AI in the legal sector has always raised the same concern: are we humans ready to trust artificial intelligence? While there are many technical and ethical concerns around it, today I will talk only about the concept of trust in systems. The above quote by H. H. Humphrey comes to mind when trying to understand why people find it difficult to trust a machine over a human lawyer. We live in a society fundamentally built on trust, and that is why politicians are always trying to win the trust of the people in order to stay in power.

The same applies to the law. We obey the law, the judges, and the jury only because we trust in the system. Once that trust is gone, the system collapses. But how is this trust built? Certainly not overnight. Trust requires transparency, accountability, and rules that uphold the principles of fairness.

To understand this complex concept of trust, I asked myself: if I were in a legal battle, whom would I trust, and why? My immediate thought was someone with a good educational background, solid experience, and a strong reputation. As a junior lawyer, I wouldn’t trust myself with a complicated legal case. Expertise in law (and the implied trust it carries) does not only mean proficiency in legal texts; it also depends on institutional names, fame, and years of practice. Older lawyers are therefore preferred, and if they come from law schools like Harvard or Oxford, that adds another level of gravitas.

When we talk about robot lawyers and their reportedly high accuracy rates in deciding a matter, they do not yet have the other ingredients of trust in their pockets. This may be one of the reasons for the apprehension about trusting AI tools in the legal sector. I recently came across an interesting study by Stanford University about the perceived legitimacy of content moderation systems, and it corroborated my theory on trust (https://hai.stanford.edu/policy-brief-algorithms-and-perceived-legitimacy-content-moderation). The study stated that “participants perceive expert panels as a more legitimate content moderation process than algorithmic decision-making.” The accuracy of the algorithms did not matter to the perceived sense of legitimacy.

But this is only the case for robots in the legal field. In science, machines are trusted more than humans to handle intricate mathematical calculations. The idea of trust is clearly different in the scientific field, and this is something that should be remembered. Despite the exponential developments happening in AI technology, there has been no corresponding progress in building trust.

I am not saying that robots should be sent to law school and made to graduate, but if they are going to start representing people in court, why not start bridging the gap between scientific and societal trust? I truly believe that we are making progress by using robo-lawyers in courts. But this process of trust-building is far from complete and will surely take a lot of time. Policymakers, politicians, engineers, lawyers, and most importantly the common people need to voice their opinions and work together to create an environment where we can start to trust robo-lawyers.

If you liked this blog, check out our other blogs on AI.
