Applying Ethical Principles for Artificial Intelligence in Regulatory Reform

About the project

There is a growing consensus that the increasing deployment of AI systems across society, while bringing benefits, must also be human-centred and built on strong ethical foundations. That, in turn, has implications for the laws and regulatory interventions that enable, guide and constrain how and where AI is deployed.

In that context, the Singapore Academy of Law’s Law Reform Committee (‘LRC’) has produced a report identifying issues that law and policy makers may face in promoting ethical principles when reforming laws and regulations to adapt to AI, and offering examples of human-centred approaches that could be taken to address them.

Specifically, the report discusses the following ethical principles:

  • Law and fundamental interests
  • Considering AI systems’ effects
  • Wellbeing and safety
  • Risk management
  • Respect for values and culture
  • Transparency
  • Accountability
  • Ethical use of data

The report does not seek to advance a specific means or level of intervention, which will necessarily vary depending on the technology and sector in question. Rather, its purpose is to provide a framework for broader consideration and discussion of the best means of achieving human-centred, ethical norm-making and the calibration of regulatory responses regarding AI.

 

Project status: Completed

  • The report was published in July 2020. Click here to download a copy.
  • This report is part of the Law Reform Committee’s Impact of Robotics and Artificial Intelligence on the Law series. Further reports in this series are available here.

 

Areas of law

Technology Law

Robotics & Artificial Intelligence

Last updated 9 July 2020

 
