A report by LexisNexis Legal & Professional, Lawyers cross into the new era of generative AI, published this week, reveals the legal sector’s adoption of AI tools is accelerating rapidly.
LexisNexis surveyed more than 1,200 legal professionals in January 2024, finding 62% of law firms have already made changes to their day-to-day operations as a result of generative AI. These changes include launching AI-powered products for internal use, providing AI training for staff, hiring AI experts and developing policies on the use of generative AI.
Stuart Greenhill, senior director of segment strategy at LexisNexis, says the appetite for generative AI in the legal sector is ‘unprecedented’.
‘Lawyers from all backgrounds are jumping at the chance to make the most of its time-saving potential. However, the demand is growing in the legal sector for generative AI tools that are grounded and trained on legal sources and can provide a higher level of transparency for all responses generated.’
Asked which tasks they would use AI for, nine out of ten respondents cited time-saving work such as drafting documents and researching matters. Some 73% would use it for communication tasks such as writing emails.
Roughly half the respondents say they anticipate using generative AI for more complex tasks such as contract analytics, case management or real-time comparisons of law across jurisdictions.
Nevertheless, lawyers express concern about potential risks and are therefore eager to find AI that is grounded in trusted legal content and capable of providing transparency for all responses generated.
Chief among these concerns is generative AI’s tendency to hallucinate, fabricating answers based on inaccurate information, followed by security issues. As the report notes, AI can ‘leak confidential data and construct biases’.
Despite these fears, however, the majority of respondents—two-thirds overall and 73% from large law firms—said they would be somewhat or completely confident using AI if it were grounded in reliable legal content sources with linked citations to the case, legislation or content informing the response.
According to Rachita Maker, global head of legal ops, tech and consulting at DWF, who is quoted in the report, lawyers can safeguard against hallucinations by relying on trusted datasets. Maker says: ‘We are using our own documents, which make the output of generative AI significantly more reliable.’
LexisNexis recently launched Lexis+ AI, a generative AI system grounded in the LexisNexis database that provides verifiable, authority-backed results.
View the full report here.