Embracing AI in Practice: Considerations for Singapore’s Legal Professionals
SPONSORED CONTENT
BY JESSICA LOW
Artificial Intelligence (AI) tools are increasingly being used to automate routine tasks, analyse large volumes of data, and even predict legal outcomes. The legal profession in Singapore, like many others worldwide, is not immune to this wave of digital transformation. AI represents an exciting opportunity for lawyers to streamline their practices, save clients money, and provide better quality representation. However, it is also fraught with risks. For this reason, lawyers should exercise caution in entrusting tasks to AI and, if and when they do, scrutinise the work it produces.
The Legal Profession (Professional Conduct) Rules 2015 and the relevant ethical principles do not require that lawyers shun AI technology. In fact, the use of AI in some circumstances has been encouraged. [1] However, a lawyer's role remains to check and verify AI-generated work and to exercise independent judgment on complex legal matters. The following sections explore the relationship between AI and the legal industry, provide a brief overview of the ethical implications of using AI in legal practice, and consider the role of lawyers in shaping AI governance and ethical standards.
I. AI and the Legal Industry
AI is a branch of computer science that has been around since the mid-20th century; however, it is only in recent decades that it has come into the spotlight. This is largely due to advances in technology, such as increased computational power and the availability of large datasets, which have enabled more sophisticated algorithms. At its core, AI aims to create systems capable of performing tasks — e.g. learning from experience, understanding natural language, recognising patterns, and making decisions — that would normally require human intelligence.
The legal industry is ruled by billable hours, and so speed is pivotal. With the help of AI tools, routine tasks can be automated, freeing up lawyers to focus on the more complex and strategic aspects of their work. For instance, AI can streamline the document review process using Technology-Assisted Review (TAR) programmes. A TAR programme analyses documents that human reviewers have marked as responsive or non-responsive, and then feeds the reviewers further documents of the same type, allowing lawyers to accelerate their review. [2] After the AI has been "trained" to a certain level, lawyers can also choose to have the TAR programme make its own determinations as to which documents are responsive or non-responsive.
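The core of the TAR feedback loop described above can be sketched as a simple relevance ranker: train on reviewer-labelled examples, then surface the unreviewed documents most similar to the responsive set. The term-frequency approach below is purely illustrative, and the sample documents are invented; commercial TAR platforms use far more sophisticated machine-learning models.

```python
from collections import Counter
import math

def tokens(text):
    """Very naive tokeniser: lowercase words with punctuation stripped."""
    return [w.lower().strip(".,;:") for w in text.split()]

def centroid(docs):
    """Average term-frequency profile of a set of documents."""
    counts = Counter()
    for d in docs:
        counts.update(tokens(d))
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

def similarity(doc, cent):
    """Cosine similarity between a document and a centroid profile."""
    tf = Counter(tokens(doc))
    num = sum(tf[t] * w for t, w in cent.items())
    denom = (math.sqrt(sum(v * v for v in tf.values()))
             * math.sqrt(sum(w * w for w in cent.values())))
    return num / denom if denom else 0.0

def rank_unreviewed(responsive, nonresponsive, unreviewed):
    """Order unreviewed documents by how much more closely they
    resemble the responsive set than the non-responsive set."""
    r = centroid(responsive)
    n = centroid(nonresponsive)
    return sorted(unreviewed,
                  key=lambda d: similarity(d, r) - similarity(d, n),
                  reverse=True)

# Toy example: seed documents labelled by human reviewers
responsive = ["breach of contract damages claim",
              "contract termination dispute"]
nonresponsive = ["office party catering menu",
                 "holiday schedule reminder"]
unreviewed = ["lunch menu for friday",
              "dispute over contract damages"]

ranked = rank_unreviewed(responsive, nonresponsive, unreviewed)
# The contract-related document is surfaced for review first.
```

The human-in-the-loop element matters: as reviewers label the newly surfaced documents, the centroids are recomputed and the ranking improves with each iteration.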
Similar technology can also be used to identify key documents, make privilege determinations, and group documents by category. Other uses of AI in the legal industry include, but are not limited to:
• Understanding the data involved in legal matters and using that insight to drive a smarter eDiscovery process from the get-go;
• Reviewing case documents to draft deposition questions;
• Reviewing legal bills; and
• Assisting in contract drafting and brief writing.
ChatGPT, a popular AI chatbot, has also been used by lawyers to analyse a legal scenario and identify the available causes of action. In March 2023, GPT-4 passed the US bar exam with a score in approximately the 90th percentile of test takers. [3]
Specifically, in Singapore, legal practitioners have the following exciting developments to look out for:
• LTP x Copilot for SG Law Firms: Copilot for SG Law Firms is your everyday AI assistant for organising and managing legal work. It connects the magic of Microsoft Copilot with legal-specific workflows in the Legal Technology Platform. The integration aims to help lawyers scope smarter, spot risks, get organised, manage workload, improve profitability and more. This collaboration is brought to Singapore lawyers under the Legal Technology Platform Initiative (LTPI) by the Ministry of Law, Singapore, Lupl, and Microsoft. [4]
• GPT-Legal: GPT-Legal, an upcoming AI tool developed by the Infocomm Media Development Authority (IMDA) in partnership with the Singapore Academy of Law (SAL), aims to revolutionise the accessibility and efficiency of legal research in Singapore. [5]
II. Attorney Scepticism?
A survey administered by Thomson Reuters found that a majority of lawyers (82%) believe that ChatGPT and generative AI can be readily applied to legal work. [6] However, in another survey administered by LexisNexis, the findings showed that lawyers aware of the technology remain sceptical about the transformative impact of AI on law practice. 60% of surveyed lawyers also reported that they have “no plans to use [the technology] at this time.” [7]
This scepticism could perhaps reflect caution rather than outright rejection, since “half of the lawyers aware of generative AI had already used it for their work or planned to”. [8]
That said, the scepticism could be attributed to several factors, including challenges such as AI hallucinations, [9] ethical implications, and the lack of frameworks or guidance on the use of AI. Regulation relating to the use of AI is still very much in its infancy. [10]
In Singapore, the IMDA released the Model AI Governance Framework (Generative AI) in May 2024. [11] The framework advocates for the responsible use and design of generative AI technologies, covering topics such as safety and fundamental human rights, including addressing the risks of bias and algorithmic discrimination.
III. AI and Legal Profession (Professional Conduct) Rules 2015 (PCR 2015)
The Legal Profession (Professional Conduct) Rules 2015 (PCR 2015) and the ethical principles raise several important considerations for the use of AI in the profession. For instance, how can lawyers ensure the confidentiality of client information when using AI tools? How can they avoid conflicts of interest when AI is used to automate certain legal tasks? How can they ensure that they are providing competent representation when relying on AI?
a. Client Confidentiality Must Be Protected
The Confidentiality Principle stipulates that a legal practitioner’s duty to act in the best interests of their client includes a responsibility to maintain the confidentiality of any information acquired in the course of their professional work. [12]
This principle naturally applies when a lawyer is using AI while representing their client.
For AI to be utilised in various legal tasks such as document coding, legal conclusion formulation, or legal document generation, it is necessary for client information to be entered into the AI system. This client information, once entered into the AI system, is typically accessible to the system’s vendors and/or developers. For instance, ChatGPT retains personal user data and conversation details, which are accessible to its developers for the purpose of system enhancement. [13]
Likewise, documents uploaded to a document review platform — which could potentially consist of millions of documents in large-scale litigation cases — are subject to the security measures that the platform has in place. These security measures may be strong or weak. When an AI programme is used to generate legal documents — wills, incorporation documents, real estate documents, loan agreements, promissory notes, contracts, among others — it gathers and stores highly sensitive personal or business information to create the final product.
Lawyers must ensure that any AI tools they use comply with confidentiality rules. As good practice, they should be aware of the risks associated with data breaches and take appropriate measures to protect their clients’ information. Accordingly, lawyers should take proactive steps to understand or acquire working knowledge of the operative security policies of the system they are using. This includes understanding the extent of document retention, the duration for which they are preserved, the encryption technology used, the parties within the vendor organisation that have access to the information, and contingency plans in the event of a data breach.
This leads to the importance of ongoing education and training for legal professionals not just for AI but for all technology tools that are being used regularly.
b. Lawyer’s Obligation to Remain Competent
The Competence Principle requires that a legal practitioner must have the requisite knowledge, skill, and experience to provide competent advice and representation to their client. [14]
AI is, without a doubt, one of the most “significant technologies” of our era. It can assist lawyers in identifying frequent errors such as:
• Citing legislation that has been repealed or overturned;
• Misquoting sources of law; and
• Inconsistent usage of terms in a contract.
As these issues and more can be identified and rectified with the click of a button, lawyers may increasingly find it difficult to justify a refusal to use AI. A lawyer who chooses to perform such tasks manually might risk losing work to those who utilise AI and related technologies to accomplish the same more efficiently and cost-effectively.
Lawyers do not need to become AI experts, but they should understand the capabilities and limitations of the AI tools they use to ensure they are providing competent representation.
c. Accountability - Lawyers Must Oversee Any Work Done by AI
Under the PCR 2015, legal practitioners in a supervisory role are required to "exercise proper supervision over the staff working under the legal practitioner in the law practice". [15] This rule could be said to extend to practitioners who utilise AI tools; lawyers are responsible for the quality of the results generated by AI systems. As such, merely attributing errors, inconsistencies, or conclusions informed by improper context to an AI system is not adequate. The same is true if a lawyer attempts to evade responsibility by claiming a lack of knowledge of the general workings of the AI system or the derivation of its conclusions.
Inherent to the responsibility to be competent, as discussed above, is the obligation to ensure that work product generated by AI is coherent, defensible, consistent, and reflective of sound legal knowledge. Relying on AI-produced or informed work product that does not meet this standard may be seen as not adhering to the duty to be competent. This can lead to consequences that can be problematic at best and catastrophic at worst.
Work product or conclusions generated by AI must be overseen with human judgement to ensure that the use of AI does not compromise the quality of a lawyer's work or their professional responsibilities. Where AI is used to generate a legal document, a lawyer should closely review the language and the law implicated to ensure that both the relevant facts and the law were considered.
Mata v Avianca [16] is an example of the use of generative AI for legal work gone wrong. Lawyers from a New York law firm who represented a client in a personal injury case had used generative AI to prepare a court filing. The case citations and other supporting material in the brief turned out to be fabricated — a phenomenon known as AI hallucination.
AI hallucination, if left unchecked, can be a serious problem for lawyers. It is also a poignant reminder of the importance of human oversight in AI development and application.
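One simple safeguard against fabricated citations is to mechanically extract case names from a draft and flag any that cannot be matched against a trusted source. The sketch below is a hypothetical illustration: the verified set and the naive "X v. Y" pattern are assumptions for demonstration only, and a real workflow would query an authoritative citator rather than a hard-coded list.

```python
import re

# Hypothetical set of citations already confirmed against a trusted
# database; in practice this would be a lookup in an authoritative
# citator, not a hard-coded set.
VERIFIED_CASES = {"Mata v. Avianca"}

# Naive pattern for simple "X v. Y" case names; real citation formats
# are far more varied than this.
CASE_PATTERN = re.compile(r"\b[A-Z][a-z]+ v\. [A-Z][a-z]+\b")

def flag_unverified(draft_text):
    """Return case names found in the draft that are absent from the
    verified set, in order of appearance."""
    found = CASE_PATTERN.findall(draft_text)
    return [case for case in found if case not in VERIFIED_CASES]

draft = ("As the court observed in Mata v. Avianca, reliance on "
         "Varghese v. China was misplaced.")
print(flag_unverified(draft))  # → ['Varghese v. China']
```

Such a check does not replace a lawyer reading the cited authorities; it only narrows down which citations need manual verification first.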
d. Improve Access to Justice
On a positive note, by automating routine tasks and reducing costs, AI has the potential to improve access to justice as it allows lawyers to deliver more efficient legal services. [17] This translates into savings for both law firms and clients, making legal services more accessible to those who need it most. Furthermore, AI’s capability to analyse large volumes of data swiftly can expedite legal processes, thereby increasing the efficiency of the justice system.
AI tools such as chatbots can provide round-the-clock legal assistance, ensuring that help is available at any time. Basic legal aid can also be provided to those who cannot afford a lawyer: for example, chatbots can answer simple legal questions, guide individuals through legal processes, or assist them in filling out legal forms. Moreover, AI can educate the public about their legal rights and responsibilities, thereby increasing legal literacy.
By leveraging AI, the gap between the legal system and those who have traditionally been unable to access it due to barriers such as cost, complexity, and lack of knowledge can be bridged. However, it is important to ensure that these benefits are realised in a way that is ethical and fair.
IV. Practical Tips for Singapore Law Practitioners Using AI
Here are some practical tips for practitioners to use AI tools effectively while adhering to ethical principles:
• Stay Updated with AI Technologies: Familiarise yourself with the latest AI tools and technologies that can improve your practice. Regularly attend workshops, webinars, and conferences on legal tech to stay abreast of the latest developments.
• Understand the Ethical Implications: Before using an AI tool, consider its ethical implications. This includes understanding how the tool handles data privacy and confidentiality, how it might impact your duty of competence, and whether it could potentially create conflicts of interest.
• Ensure Regulatory Compliance: Make sure that your use of AI complies with the prevailing code of conduct for Singapore law practitioners and the AI governance framework established by IMDA. Familiarise yourself with the testing framework and software toolkit provided by AI Verify.
• Participate in Shaping AI Governance: Lawyers have a role to play in shaping AI governance and ethical standards. You can contribute to policy discussions, provide legal advice on AI-related issues, and advocate for the responsible use of AI.
• Transparency with Clients: Be transparent with your clients about your use of AI tools. Explain how these tools are used in your practice and how they can benefit the client.
• Regular Audits: Conduct regular audits of your AI tools to ensure they are functioning as expected, and not producing biased or unfair results.
The integration of AI into the legal profession is a transformative development that brings both opportunities and challenges. For law practitioners in Singapore, understanding AI and its ethical implications is crucial. By staying updated with the latest AI technologies, understanding the ethical implications, ensuring regulatory compliance, and participating in shaping AI governance, lawyers can harness the benefits of AI while upholding the highest standards of legal ethics.
This article was sponsored by LUPL.
[1] Lawyers should learn AI but must be aware of its ethical risks: Chief Justice Menon, The Straits Times (21 August 2023), Available: https://www.straitstimes.com/singapore/courts-crime/lawyers-should-learn-ai-but-must-be-aware-of-its-ethical-risks-chief-justice-menon.
[2] How to Use Human Centered AI in Legal Document Review, The Reveal Blog (2 February 2023), Available: https://www.revealdata.com/blog/artificial-intelligence-in-legal-document-review.
[3] Latest version of ChatGPT aces bar exam with score nearing 90th percentile, ABA Journal (16 March 2023), Available: https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile.
[4] Copilot for SG Law Firms, Lupl, see: https://sg.lupl.com/copilot/.
[5] Legal research to become more efficient with new large language model contextualised to domestic law, Singapore Academy of Law (29 May 2024), Available: https://www.sal.org.sg/node/1856.
[6] New report on ChatGPT & generative AI in law firms shows opportunities abound, even as concerns persist, Thomson Reuters (17 April 2023), Available: https://www.thomsonreuters.com/en-us/posts/technology/chatgpt-generative-ai-law-firms-2023/.
[7] Shock survey reveals most lawyers shunning game-changing AI technology, JD Journal (27 March 2023), Available: https://www.jdjournal.com/2023/03/27/shock-survey-reveals-most-lawyers-shunning-game-changing-ai-technology/.
[8] Shock survey reveals most lawyers shunning game-changing AI technology, JD Journal (27 March 2023), Available: https://www.jdjournal.com/2023/03/27/shock-survey-reveals-most-lawyers-shunning-game-changing-ai-technology/.
[9] AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries, Stanford HAI (23 May 2024), Available: https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries.
[10] Several jurisdictions have enacted legislation to regulate applications of AI in various sectors: jurisdictions that have enacted omnibus legislation regulating various applications of AI across sectors include the European Union, Canada and the USA. In Singapore, the United Kingdom, Japan and Australia, the stance taken is that of regulating products and services in a technology-neutral way without enacting AI-specific legislation.
[11] Model AI Governance Framework (Generative AI), Infocomm Media Development Authority (30 May 2024), Available: https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024/gen-ai-and-digital-foss-ai-governance-playbook.
[12] s.6 Legal Profession (Professional Conduct) Rules 2015.
[13] What is ChatGPT?, OpenAI Help Center (Accessed 11 July 2024), Available: https://help.openai.com/en/articles/6783457-what-is-chatgpt.
[14] s.5 Legal Profession (Professional Conduct) Rules 2015.
[15] s.32 Legal Profession (Professional Conduct) Rules 2015.
[16] Mata v. Avianca, Inc., 22-cv-1461 (PKC) (S.D.N.Y. Jun. 22, 2023), Available: https://casetext.com/case/mata-v-avianca-inc-2.
[17] s.4(e) Legal Profession (Professional Conduct) Rules 2015.