AI & Law – Singapore Academy of Law, Chief Executive
In this interview, Zee Kin shares his insights on the legal challenges in the Era of Advanced AI
00:21 – How are the concerns around AI different this time?
00:28 – Concerns such as content, child protection, data security, etc. persist with the latest AI innovations
02:06 – Increased accessibility and scalability of threats make detecting fake images harder
02:42 – What does the Getty vs Stability AI case reveal about Gen AI’s unique challenges in the legal context?
04:22 – One interesting aspect of this case involves the use of copyrighted data for AI training; the challenge itself, however, is not new
05:26 – Data lineage and the provenance of data have always been important in legal contexts
06:37 – Another case from a few years ago involves Clearview AI, a facial recognition software maker
07:09 – Legal basis required to use internet-sourced personal data to train the facial recognition model
Zee Kin highlighted that with the latest AI innovations, the responsibility and legal issues remain largely consistent, but the tools and technology introduce different challenges.
For instance, he shared that concerns around content, child protection, intermediary behavior, data security, data protection, and cybercrime remain, while challenges such as the detection of fake content have intensified due to increased tool accessibility and the scalability of threats.
Referring to the “Getty vs. Stability AI” case, he shared that the interesting question is the use of copyrighted data to train AI models – which is not new, and the key is to establish a proper legal basis for using such data. Data lineage and the provenance of data have always been important in legal contexts.
He also noted that these concerns have surfaced in recent governmental responses around the world to the latest AI innovations.
Zee Kin also highlighted the challenge of defining terms such as “fairness,” “transparency,” and “repeatability,” whose meanings vary by context: expectations and priorities for AI differ based on its use, such as safety and predictability in medicine, and bias and fairness in personal data applications.
Repeatability poses an additional challenge in Generative AI because every iteration of an image or summary will vary, owing to Generative AI’s statistical, predictive nature.
Zee Kin also shares his views on AI’s impact on job security, noting that there will be emerging opportunities for lawyers to use AI tools for efficiency and error reduction.
———-
Recorded at TechLaw Fest 2023, 21 Sept 2023, 3.30pm, Marina Bay Sands, Singapore.
———-
Mr Yeong Zee Kin holds a Master of Laws from Queen Mary University of London and completed his undergraduate law degree at the National University of Singapore. His experience as a Technology, Media and Telecommunications lawyer spans both the private and public sectors. He has spoken and published in areas relating to electronic evidence and intellectual property, as well as legal issues relating to Blockchain and AI deployment.
Zee Kin is an internationally recognized expert on AI ethics. He spearheaded the development of Singapore’s Model AI Governance Framework, which won the UN ITU WSIS Prize in 2019. He is currently a member of the OECD Network of Experts on AI (ONE AI). In 2019, he was a member of the AI Group of Experts at the OECD (AIGO), which developed the OECD Principles on AI; these principles were endorsed by the G20 in 2019. He was also an observer participant at the European Commission’s High-Level Expert Group on AI, which fulfilled its mandate in June 2020.
Zee Kin is also a well-regarded expert on data privacy issues. He has contributed to publications on legal issues relating to data privacy and has spoken at many well-recognised international and domestic platforms on this topic.
———-
Stay with us:
LinkedIn ➡️ / lojane
YouTube ➡️ https://cutt.ly/U2B0yVi
#misscyberpenny
#cybersecurity
#cybercrime
#deepfake #aiethics #generativeai #legaltech #ailawyer #datasecurity