How do we build ethical Artificial Intelligence (AI)?

How do we build ethical Artificial Intelligence (AI)? A recent webinar on The Future of AI and Ethics in LawTech, organized by Barclays, delved into this question in the context of the ethical problems posed by AI in the legal sector. The panel comprised speakers from different specialisms: Tanja Podinic, a lawyer and global director of innovation at Dentons; Rafie Faruq, an AI specialist and co-founder of Genie-AI; and Rebekah Tweed, an ethics and public policy specialist and the program director at All Tech is Human.

Tanja kicked off the webinar by addressing why the rate of adoption of AI in the legal sector is lower than in most other industries. She made an interesting comparison between the legal sector and the entertainment industry: ‘while a wrong movie recommendation by Netflix might only cause annoyance to users, a wrong piece of legal advice might result in changing the course of a person’s life.’ The risk of using AI in LegalTech is higher than in other businesses because ethics lies at the heart of law and is one of the core principles of justice. Since most AI systems do not guarantee 100% accuracy, lawyers are apprehensive about using AI-based LegalTech. As a result, the only LegalTech tools that lawyers tend to adopt are those with minimal risk, such as tools for e-discovery, due diligence, and document management. These tools are not free from risk; AI can still miss important documents or passages during e-discovery, but these risks are considerably lower than if AI were used in legal advice or contract negotiations.

While bias is one of the most talked-about ethical issues, Rafie pointed out that AI ethics also includes safety, quality, intellectual property considerations, and explainability. From a LegalTech perspective, these considerations should be kept in mind while designing policies around ethical AI. Moreover, these policies should take a bottom-up approach, meaning that ethics should be embedded at every stage of a LegalTech tool’s development, from hiring the right people through to testing the product.

How do ethical considerations begin at the hiring stage? Well, since ethics largely depends on perspective, a diverse team provides a more holistic picture of AI’s ethical impact on the legal sector. To achieve this, Rebekah Tweed raised the need to diversify the tech pipeline, both by hiring candidates from underrepresented groups and by bringing in diverse specialisms. At the development stage, Rafie highlighted the need for organisations to specify their priorities in terms of the accuracy rate (even a ballpark figure) that a LegalTech solution should achieve for a particular ethical issue. For instance, some organisations might require their LegalTech solutions to reach x% accuracy on privacy-related ethical concerns. This can help in designing solutions that are an ethical fit for that organisation.
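One way such organisation-specific accuracy targets could be operationalised is as an automated release gate: the tool only ships if its measured accuracy meets the stated threshold for each ethical concern. The sketch below is a hypothetical illustration, not anything described in the webinar; the threshold values and the `evaluate` stub are invented for the example.

```python
# Hypothetical release gate for a LegalTech tool: each ethical
# concern has an accuracy threshold set by the organisation, and
# the release is approved only if every measured score meets its
# target. Thresholds and scores here are illustrative assumptions.

THRESHOLDS = {
    "privacy": 0.99,              # e.g. redaction of personal data
    "e_discovery_recall": 0.95,   # missing documents is costly
}

def evaluate(concern: str) -> float:
    """Placeholder for a real evaluation harness that measures
    the tool's accuracy on a labelled test set for one concern."""
    measured = {"privacy": 0.995, "e_discovery_recall": 0.97}
    return measured[concern]

def release_gate(thresholds: dict) -> bool:
    """Return True only if every concern meets its target."""
    return all(evaluate(c) >= t for c, t in thresholds.items())

if __name__ == "__main__":
    print("release approved:", release_gate(THRESHOLDS))
```

In practice the `evaluate` function would run the model against a curated test set for each concern, and the gate could sit in a continuous-integration pipeline so that an ethics regression blocks deployment automatically.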

All the speakers agreed that the bottom-up approach requires the formation of an ethics board whose approval should be required at every stage of development. Additionally, the board should be formed in collaboration with different sectors, comprising not just lawyers, technical specialists, and business teams but also philosophers, sociologists, and psychologists.

Although these steps will ensure a certain degree of ethical standard, they can never provide a perfect solution. Ethics is anything but certain; it is more about perspectives than perfection. Organisations need to constantly strive to be more ethical, because society will always have biases, with new ones created every day, and AI will simply reflect the imperfections of society. But the more we collaborate, and the more we listen, the better our ethical framework for LegalTech will be.
