AI explainer

From the Turing Test to closed AI models, Ian McDougall sets out what lawyers need to know about AI
  • Explains in easy-to-understand terms what AI is, the different types, how it works and why lawyers should embrace closed AI models.

We are seeing the words ‘artificial intelligence’ thrown around in an increasingly casual manner. It is the new buzzword. Anyone who has a computer program that is even remotely complicated is now describing it as AI. That is understandable in a social media world, where you must shout louder than anyone else to be heard. But let’s first take a step back and examine what the buzz is all about.

Clearly, AI is an expression that is easily confused and regularly misused. However, a leading test of what amounts to AI was introduced by Alan Turing in his seminal 1950 paper ‘Computing Machinery and Intelligence’. The Turing Test was, and is, basically: ‘Can a computer imitate a human so that a human cannot say whether they are engaging with a machine or another human?’ The test, originally called ‘the imitation game’ by Turing, does not reference specific technology, such as machine learning, analytics, natural language processing or neural nets. It is focused on the user experience. It is not a test of complexity. My chess game is complicated, but it isn’t AI. Tesla cars are almost entirely made by very complicated robots and computer programs, but those aren’t AI either.

But I think we may be presumptuous enough to add to the Turing Test by noting that some elements are clearly necessary to pass it: the ability to interact using natural language; the ability to create something new; and the ability to adapt and learn. I think it is reasonable to treat these elements as necessary to imitate a human.

Generative AI/large language models

The relatively recent development, generative AI, has produced a lot of publicity. Rightly so. It will fundamentally change the way all the ‘cognitive’ industries work. It represents a huge advance in technology. But first, let me explain its precursor, extractive AI.

Please note that I am enormously oversimplifying for convenience. Extractive AI can be thought of as a system that extracts relevant data points from a library it has been trained on. It combines them, creates logical links between those various data points, and uses that huge mass of connectivity data to reproduce parts of the library in response to natural language questions.
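
To make that idea concrete, here is a deliberately toy sketch in Python of the extractive approach: answer a question by finding and returning the best-matching passage from a fixed library. The library entries and the crude word-overlap score are my own inventions for illustration; they are not how any real system is built.

    # Toy sketch of 'extractive' AI: answer a question by returning the
    # most relevant passage from a fixed library. The library entries and
    # the scoring method are invented purely for illustration.
    library = [
        "The Turing Test asks whether a machine can imitate a human.",
        "Generative AI produces new text rather than quoting a source.",
        "A closed AI model is trained on a curated, trusted data set.",
    ]

    def score(question: str, passage: str) -> int:
        # Crude relevance measure: how many words do the two texts share?
        return len(set(question.lower().split()) & set(passage.lower().split()))

    def extract(question: str) -> str:
        # Return the library passage that best matches the question.
        return max(library, key=lambda passage: score(question, passage))

    print(extract("What does the Turing Test ask?"))
    # -> The Turing Test asks whether a machine can imitate a human.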

The recent advance is generative AI. This technology builds on the ideas of extractive AI to generate newly created, human-understandable responses. That may involve creating new photos or videos (as we see in the world of ‘deepfakes’). Or it may be creating natural language responses to natural language questions or instructions. Most importantly, it is trained on, and requires, massive amounts of data. Thus, instead of answering your query by simply referring you to relevant information from the library it has been trained on, it will produce an answer of its own based on that library of data.

Limitations of large language models

My focus for our purposes, and especially for the cognitive industry we commonly refer to as the legal profession, is on the use of language by generative AI systems, also referred to as large language models (LLMs). The key thing to note is that these models do not understand words and language in the sense understood by humans, be they philosophers or psychologists. The LLMs are really playing ‘the imitation game’. An LLM is a statistical modelling system: it looks for the patterns in language that are most often used and then uses those patterns to predict the likely next word.

I don’t wish to minimise the amazing achievement that this is. Huge advances in processing power and connectivity speeds have made it possible to combine literally billions and billions of data points to create the complexity required to deliver a coherent answer at speed. However, because the model has no true understanding of word meanings and concepts, there are some important things it cannot do.

It is not on a quest for truth and morality. It is not able to make value judgements. It assesses likelihood. In other words, it uses statistical analysis to predict the most sequentially probable next word and then produces it.
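
As a miniature illustration of ‘likelihood, not truth’, here is a toy next-word predictor in Python. The training sentence and the simple word-pair counting are my own inventions; real LLMs operate at a vastly larger scale, but the principle (frequencies, not facts) is the same.

    # Toy sketch of statistical next-word prediction: count which word most
    # often follows each word in some training text, then predict on that
    # basis alone. The model knows frequencies, not facts.
    from collections import Counter, defaultdict

    training_text = ("the court held that the contract was void "
                     "the court found that the claim was statute barred")

    follows = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        # Return the statistically likeliest next word, not the 'true' one.
        candidates = follows[word]
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(predict_next("the"))  # -> 'court' (most frequent follower of 'the')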

As it is a statistics-based model (and not a truth model), it requires an enormous amount of data to build up those patterns. Models are trained by going out on to the open web and ingesting whatever they can find. (Discussion of copyright issues is for another day!) This means that the model ingests all the prejudice, bias, fake news and sheer nonsense that exists (dare I say, in huge quantities) on the open internet. This leads to the increasingly well-known problem of ‘hallucinations’: in plain language, making up things that are not true. This is because the model is assessing probable words, not truthful words. It is assessing likelihood, not value. The more rubbish you ingest, the more rubbish you will egest. Believe me, there is a lot of rubbish on the internet.

For expert advice, ask an expert

However, there are developments that can help minimise this hallucination problem. I use the terms ‘open AI models’ (not to be confused with the company ‘OpenAI’) and ‘closed AI models’. As previously mentioned, an open AI model goes out to the internet and vacuums up all the word patterns it can find. When asked a question, it will use that data to provide the likeliest-sounding answer. My analogy is that this is the human equivalent of walking into a bar and asking a question of the first person you meet. You might get an accurate answer. You might get a load of drunken nonsense. But you take your chances. You rely upon that information at your peril. In other words, it is good for idle, bar-type chit-chat, but for goodness’ sake, don’t rely upon it when it is crucial to you. You will reap what you sow!

A closed AI model is one that has been trained and refined on a specific, closed data set, such as the LexisNexis database, for example. Its answers will reflect the content and quality of that database: a high-quality, well-curated database of the kind I just mentioned will supply you with high-quality and reliable answers. There have been recent examples of lawyers using open AI models to produce court pleadings and the like. A New York federal judge recently sanctioned lawyers who submitted a legal brief written by the artificial intelligence tool ChatGPT, which included citations of non-existent court cases. In addition to a $5,000 fine, the attorneys, Peter LoDuca and Steven Schwartz, and their firm, Levidow, Levidow & Oberman, were ordered to notify each judge falsely identified as the author of the bogus case rulings about the sanction.
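
The contrast between the two approaches can be sketched in a few lines of Python. The curated entries and the refusal behaviour below are invented for illustration (this is not how any real product is built); the point is the closed-model principle: answer only from a curated source set, and decline rather than guess when no source matches.

    # Toy sketch of the 'closed model' principle: answers come only from a
    # curated source set, and the system declines rather than inventing an
    # answer (or a citation) when nothing relevant is found.
    curated_sources = {
        "turing test": "The Turing Test asks whether a machine can imitate a human.",
        "closed model": "A closed AI model is trained on a curated, trusted data set.",
    }

    def closed_answer(question: str) -> str:
        for topic, answer in curated_sources.items():
            if topic in question.lower():
                return f"{answer} (source: curated entry '{topic}')"
        return "No supporting source found; no answer given."

    print(closed_answer("What is the Turing Test?"))     # answers, with source
    print(closed_answer("Who won the 2030 World Cup?"))  # declines to guess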

They figuratively went into the bar and asked the first person they came across to draft their briefs, and then relied upon the answers. You wouldn’t do it with humans; don’t do it with AI. When you want expert advice, you go to an expert human. I suggest that when you want expert support, you go to an expert AI. I promise you: you will reap what you sow!

Embrace it

The legal profession is traditional, conservative and reluctant to change. However, change in the legal profession will be dominated by AI, and refusal to adapt to the technology will lead to failure. Can you imagine any law firm saying today, ‘Oh, we don’t use the internet; we prefer traditional methods’? That is what the discussion of the use of AI tools will become. It isn’t an option; it will be core. But you must use the right systems for the right purpose. Recommending that you don’t go to your doctor for tax advice seems obvious, but that is exactly what I am suggesting about AI systems: match the tool to the task.

Doing things in the same old way is no longer an option for the legal profession. Those who are prepared to innovate and embrace the 4th Industrial Revolution are the ones who will survive and prosper. Those who don’t, won’t.

Professional services of all kinds will begin to see the greatest transformational change in the near to medium term because of this coming Industrial Revolution. This is because those professions are the cognitive industries, where previously human judgement prevailed, but where data and analysis will eventually dominate. However, it is possible these new challenges will open up a range of new opportunities. Old skills will go or change, and new ones will emerge: skills and roles that cannot even be imagined today.

In conclusion, the impact of AI on the legal profession is not disastrous if you are prepared for it. It will be amazing if you embrace it. But be careful: human intelligence must use AI appropriately, or you will reap what you sow!

Ian McDougall, EVP & General Counsel, LexisNexis—Legal & Professional.
