NLP Text Summarisation in the Legal Sector

From summarising The Adventures of Tom Sawyer in school to summarising documents for a senior colleague at work, everyone needs summary-writing skills. This ability is especially useful in the legal world, since it is no mystery that lawyers love drafting excruciatingly long documents. But their love of writing does not extend to reading lengthy drafts produced by other lawyers. It is at this juncture that junior associates with a flair for writing summaries make an appearance. With an ever-growing pile of cases and long hours, who has the time to peruse the latest court decisions or comb through documents for due diligence purposes?

Luckily, advances in Natural Language Processing (NLP) have made it possible to automate text summarisation, making it far quicker and more efficient to identify key information without losing the essence of the text. NLP carries out text summarisation through two different methods. The first, extractive summarisation, identifies the key sentences or phrases in the original text and stitches them together into a summary; it acts like a highlighting tool. The second, abstractive summarisation, generates summaries that rephrase and condense the original text, much as a human would.
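To make the distinction concrete, the sketch below illustrates the extractive approach with a deliberately simple heuristic: score each sentence by the frequency of its words and keep the highest-scoring ones. This is only a minimal illustration of the idea, not how production legal-tech tools work; the function name, the scoring rule, and the sample judgment text are invented for the example.

```python
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Return the highest-scoring sentences of `text`, in their original order."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Score words by how often they appear, ignoring very short words.
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)
    # A sentence's score is the sum of its word frequencies.
    scores = {
        s: sum(freq.get(w, 0) for w in re.findall(r"[a-z]+", s.lower()))
        for s in sentences
    }
    # Keep the top-scoring sentences, re-ordered as they appeared in the text.
    top = sorted(scores, key=scores.get, reverse=True)[:num_sentences]
    return " ".join(sorted(top, key=sentences.index))

if __name__ == "__main__":
    judgment = (
        "The claimant entered into a lease with the defendant in 2015. "
        "The lease contained a break clause exercisable on six months' notice. "
        "The court held that the notice served by the claimant was defective "
        "because it was not served on the registered office of the defendant. "
        "Accordingly, the lease continued and the claim was dismissed."
    )
    print(extractive_summary(judgment, num_sentences=2))
```

An abstractive summary, by contrast, would normally come from a pretrained sequence-to-sequence model rather than a hand-written heuristic; the Hugging Face transformers library, for instance, offers a ready-made summarisation pipeline that paraphrases the input rather than copying its sentences.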

Text summarisation will immensely aid the legal profession, as lawyers are always looking for the fine print in case law, contracts, or any other document that can make or break their argument in court or during a negotiation. Since time is of the essence in such situations, the use of NLP can cut costs, save time, increase productivity, and reduce manual errors.

However, this technology does not come without challenges. Even though machine learning is making strides in understanding natural language, it is far from interpreting the law accurately. Legal texts are highly complex and technical, and most of the time a lawyer requires a summary of case law to understand its applicability to a particular client's situation. Since NLP algorithms typically select the most relevant sentences, they might miss contextual details that matter to that client's situation; at other times the summary may even be misleading. Additionally, NLP models may not comply with the privacy, security, and other regulatory and ethical standards of the legal sector.

While lawyers who read the summaries created by a bot can identify these issues fairly easily, they would be far less evident to a non-lawyer. Consider OpenAI's latest language model, GPT-3. It is creating a buzz all over the internet, and rightly so. This NLP model can summarise court cases, write academic articles, and even compose poetry! With such sophisticated writing abilities, it is said to have come close to passing the Turing test of whether Artificial Intelligence can fool an actual human being. Now, if people use this tool to understand complex legal theories through text summarisation, there is no guarantee that it will produce accurate results. Moreover, there is no way to hold the bot liable for wrong legal advice.

Lawyers spend their lives reading texts by reputed authors to broaden their knowledge, while NLP models like GPT-3 are trained on datasets openly available on the internet, including unreliable sources. Therefore, even though NLP text summarisation is a helping hand for lawyers, its use should be restricted to lawyers who understand the limitations and ethical challenges of such technology.
