Is the full potential of generative AI in legal practice being inhibited?

Allen & Overy recently announced that it is integrating the innovative AI programme Harvey into everyday practice at the firm, after testing it in beta since November 2022.

Harvey’s founders include former lawyers, and the company was partly funded by OpenAI’s Startup Fund. The programme takes the form of a chatbot built on OpenAI’s GPT technology, designed to assist lawyers with document drafting and legal research by analysing data and applying natural language processing and machine learning to produce results.

The use of Harvey within the firm comes with a caveat: all output must be checked by a human for accuracy. An issue with generative AI to date has been its tendency to “hallucinate”, that is, to produce results which are misleading, or just plain wrong.

That aside, this is a significant leap for the legal sector, which is often accused of being slow on the uptake of technology. But could the patchwork of legislation and regulatory requirements around AI prevent such innovations from being used to their full potential?

Various pieces of legislation apply to AI. For example, UK data protection laws refer specifically to automated decision-making and the processing of personal data, which means that chatbots like Harvey cannot process client data without clients first opting in to having their information used in this way.

The information used to train AI could also be problematic if it contains sensitive client information or is subject to copyright.

There is also regulation to consider. The Legal Services Act 2007 provides that certain legal activities are “reserved legal activities” that may only be carried out by authorised individuals or firms, and those individuals and firms remain responsible for the level of service they provide to clients, even where part of that service is delivered by a chatbot.

There are also questions around how firms can meet their regulatory obligations, for example, the duty of confidentiality. However, the Solicitors Regulation Authority (SRA) has advised that it is unlikely to take action against a firm over an error or flaw in an AI system, provided the firm can show that it did everything within reason to prevent such issues.

Work is underway to provide some clarity and guidance in this evolving area. Chatbots are specifically intended to fall within the scope of the Online Safety Bill, which is currently making its way through Parliament, in cases where content generated by chatbots interacts with user-generated content on social media.

The Government released its National AI Strategy in September 2021, a 10-year strategy aiming to invest in, plan for and support the growth of AI technology in the UK, as well as to ensure that any regulation in this area is fit for purpose.

Further to this Strategy, the Office for AI published its policy paper, “Establishing a pro-innovation approach to regulating AI”, in July 2022, detailing the principles it proposes should underpin a “pro-innovation framework for regulating AI”. It suggests that any regulation should be context-specific, pro-innovation, risk-based, proportionate and adaptable, and that, rather than legislating on this subject, regulators should be given overarching guidance which they can then interpret and implement in their specific areas.

It is hoped that following these principles will allow the Government and the industry to react quickly when required, compared with the slow-moving process of drafting legislation.

For the time being at least, it seems that emerging AI technologies may raise as many issues as they resolve.
