The What and Why of XAI

How do you solve a problem like machine learning?

“Never trust anything that can think for itself if you can't see where it keeps its brain.”

- Mr Weasley

For many in law, the challenge of adopting machine learning is a problem of trust. You trust that the algorithm is pulling the right information out of the data you give it, trust that it can infer enough of the context to solve the problem, and, ultimately, trust that it’s working towards the same goal as you. All that, and the algorithm has no skin in the game. As it stands, AI doesn’t have the emotional range to care about your client. Nor, as of writing this blog, can it be fired. You can’t rely on those incentives to keep the AI honest. In this one-sided relationship, we have to resort to more creative approaches.

Know thy enemy algorithm

Machine learning models can look like black boxes to the outsider and, for complex models, sometimes even to their designers. You plug in some data, the model does something, and you are left with its results: an answer to a question, a remediated document, a selection of the most relevant case data. What you don’t have is an explanation for that result. For low-stakes use cases, that may be fine. If you’re just trying to count the number of ships in an image or predict what may be lying beneath the dirt at an excavation site, you likely won’t need to know the ins and outs of how that prediction is made. It’s a different story when you’re preparing contracts for a client and your model is suggesting which clauses to include. You and your clients will want to know why.

What is explainable AI?

Explainable Artificial Intelligence (XAI) is a form of artificial intelligence where the processes behind decisions can be understood by humans. XAI can approach this in a number of ways.

Machine Learning models can be designed to be intrinsically understandable to a human. Decision trees work like this. You could graph the decisions made by the algorithm at every 'fork' and follow how it moves from input to output. Suppose you were employing a decision tree model for contract remediation. For each clause, the model could ask a series of questions that lead to the clause being amended or not: “Does this clause include a reference to an outdated benchmark?” and so on. However, such an approach limits the effectiveness of the model, because every decision it makes is reduced to a chain of binary choices.
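To make that concrete, here is a minimal sketch of an intrinsically interpretable model: a shallow decision tree trained to flag clauses for amendment. The feature names, toy clause data and labels are hypothetical, invented purely for illustration.

```python
# A minimal sketch of an intrinsically interpretable model: a shallow decision
# tree that decides whether a clause should be amended. Features and data are
# illustrative, not drawn from any real remediation workflow.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical yes/no features extracted from each clause.
feature_names = [
    "references_outdated_benchmark",
    "missing_governing_law",
    "uses_superseded_template",
]

# Toy training data: each row is one clause, each label is 1 (amend) or 0 (leave as is).
X = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
y = [1, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints every 'fork' the model uses, so a reviewer can follow
# exactly how a clause moves from input to 'amend' or 'leave as is'.
print(export_text(tree, feature_names=feature_names))
```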

If you wanted to use a more complex model for a more complex problem, it may be better to try a 'post-hoc' method. This is where you use techniques to analyse the model and understand it after it has been trained. An example would be to use a permutation feature importance method to understand the relative impact each feature has on the result. You could use such an approach to understand what aspects of a client’s case most strongly influenced a prediction of a ‘win’ result at court.
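As a rough illustration of the post-hoc approach, the sketch below applies scikit-learn's permutation importance to a random forest predicting a 'win'/'lose' outcome. The feature names and data are synthetic stand-ins, not real case attributes.

```python
# A minimal sketch of a post-hoc explanation: permutation feature importance
# applied to a more complex model (a random forest) predicting case outcomes.
# Feature names and data below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["claim_value", "num_prior_disputes", "jurisdiction_score", "evidence_strength"]

# Synthetic data standing in for historical cases and their outcomes.
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffling one feature at a time and measuring the drop in the model's score
# shows how much the predictions rely on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.3f}")
```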

What else can explainable AI do?

So XAI can help establish a base of trust between lawyers and their assistive Machine Learning tools, but it can also help strengthen trust between the lawyers who employ these tools and their clients. In 2021, The Law Society published a report on the principles and ethics of LegalTech, drawn from interviews with the biggest law firms in the UK, which outlines transparency as a key principle of client care.

Regulatory bodies also require it where personal data is concerned. Articles 13 to 15 of the GDPR provide for a right to “meaningful information about the logic involved” in automated decisions.

With the transparency of XAI, it is possible to identify where errors are being made in the model and how you might correct them. For example, in contract remediation, if your training data over-represents certain clauses and therefore gives them undue importance in the review process, you may want to omit those clauses from consideration. Perhaps in your sample set of contracts you were always updating clauses concerning two named companies. For the broader exercise, the names of those companies are irrelevant, but the model has picked them out as important. If your AI is providing explanations for its choices, you will be able to tell what it’s doing, and you could then remove the company names from the data as part of pre-processing before training your model. Improving the accuracy of your model reduces the amount of human time spent correcting its results.
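As a rough sketch of that pre-processing step, the snippet below masks known company names before the clauses are fed to a model. The company names and clause text are hypothetical.

```python
# A minimal sketch of the pre-processing described above: masking named
# companies so the model cannot latch onto them. Names and text are assumed,
# purely for illustration.
import re

COMPANY_NAMES = ["Acme Holdings Ltd", "Borealis Capital LLP"]  # hypothetical

def mask_company_names(clause_text: str) -> str:
    """Replace known company names with a neutral placeholder before training."""
    for name in COMPANY_NAMES:
        clause_text = re.sub(re.escape(name), "[PARTY]", clause_text, flags=re.IGNORECASE)
    return clause_text

print(mask_company_names("Acme Holdings Ltd shall indemnify Borealis Capital LLP."))
# -> "[PARTY] shall indemnify [PARTY]."
```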

By using XAI, you also allow Machine Learning to play a more strategic role. If you’re using a Machine Learning model to predict the outcome of a case if it goes to trial, it’s not enough for the model to predict that you will win. The lawyers arguing the case need to understand which aspects of it, and of the related case law, informed that ‘win’ prediction. For the model to be useful, it needs to inform the strategy taken to court. XAI can provide this transparency, highlighting the clauses most relevant to the case.

Another component of strategy is knowing when to employ information and when to ignore it. By understanding how models arrive at their conclusions, lawyers are empowered to make these decisions instead of jumping in on blind faith and simply trusting that the maths is infallible. In this way, Machine Learning can act as an accompaniment to the lawyers’ skills rather than as a crystal ball trying to guess at the future.

Whose concern is it anyway?

It would be easy for lawyers to write off the inner workings of Machine Learning as a 'tech problem' to be solved by IT staff in basements while the lawyers are busy providing services and solving client problems. But lawyers can’t effectively sell their services without fully understanding the limitations and benefits of those services.

If you're pricing a contract remediation package for a client, your costs will not match reality if you don't understand where you can and cannot employ your tools. If the models you are using require thousands of contracts in a highly structured format, but your client is a small business with only ten contracts, all written in different formats by different people, your ML solution is not fit for purpose, and you can't offer automation-driven discounts when you still have paralegals to pay.

Clients also look to their lawyers for explanations and guarantees, not to the IT staff. If you're not able to understand why your models are giving the results that they are, you won't be able to justify those results to your clients, nor assure them of a high quality of work.

Machine Learning issues are not tech issues alone when it is the lawyers who will be held to account for the decisions that are made, and who will be put on the spot to explain them.

In the convolution of computer processing, you’re not likely to see where the algorithm is keeping its brain. If you’re using cloud computing, you might not even be able to see the server that contains it. But understanding its brain is the next best thing.
