Tech Tip 1: Using Generative AI
Generative AI has received plenty of bad press of late. Even Yann LeCun, Chief AI Scientist at Meta, thinks this technology “really sucks” compared with the innate learning capabilities of even the simplest animals. Sure, Large Language Models (LLMs), a subset of Gen AI, can predict the next word based on preceding input, but they lack the ability to genuinely understand context. At the moment, at least. Does that mean that, until and unless Gen AI transforms into a system capable of understanding, predicting and interacting with the world with a depth akin to living beings, lawyers should avoid it like the plague?
Gen AI has proven “passable” at summarising law and related guidance, particularly in areas of law that are well known and extensively discussed on the internet. Linklaters’ LinksAI English Law benchmark[1] tested the ability of several LLMs to answer legal questions, and demonstrated that the current generation of LLMs should not be used for legal advice without human supervision.
If you are planning to use Gen AI to undertake legal research, remember to:
- Independently verify the output for accuracy, reliability and currency, since LLMs are notoriously prone to “hallucinations”;
- Be extremely vigilant not to share any legally privileged or confidential information or any personal data, since prompt injection attacks are one of the most widely reported weaknesses in LLMs, where an attacker creates an input designed to make the model reveal confidential information; and
- Test and refine your prompts to see what works and what doesn’t if you don’t get the desired response right away.