Report on Criminal Liability, Robotics and AI Systems


 

About the project

Robotic and Artificial Intelligence ('RAI') systems are being increasingly deployed across society, bringing many benefits but also the risk that those systems cause serious physical, psychological or economic harm. Against that backdrop, the Singapore Academy of Law’s Law Reform Committee (‘LRC’) has considered how criminal laws and penalties might be applied where such harms arise from the behaviour of RAI systems.

The LRC's report considers both harms committed intentionally using RAI systems and situations where serious harm results even though neither the user of the RAI system nor any other person can be said to have intended the harm to occur.

  • In the LRC's view, existing criminal laws and frameworks should generally be well placed to deal with, or be extended to cover, instances of intentional harm caused by RAI systems. Greater difficulties arise, however, in relation to non-intentional harms, particularly those that result even though there was no negligence on the part of a human user (or where the system is fully autonomous and no human controls or oversees its operation at all).
  • Criminal liability may not always be appropriate in such scenarios, and regulatory tools and sanctions may be a more appropriate way to promote safety without chilling innovation.
  • Where criminal liability is considered appropriate, alternative or supplementary approaches may be necessary, given the challenges of applying existing frameworks such as criminal negligence.
  • The report evaluates various such alternatives, including conferring legal personhood on RAI systems, introducing new offences similar to those previously considered by the Penal Code Review Committee, and imposing liability through duties similar to those under existing workplace safety legislation.

The diversity of RAI systems and their potential applications means there is no 'one-size-fits-all' approach, and the report does not seek to advance one approach over others. However, the LRC hopes that its report will assist policymakers in assessing where future legal and regulatory challenges may arise in seeking to hold someone criminally accountable for serious harms caused by RAI systems, and in identifying possible approaches to addressing those difficulties.

Project status: Completed

  • The report was published in February 2021.
  • This report is part of the Law Reform Committee’s Impact of Robotics and Artificial Intelligence on the Law series. Further reports in this series are available here.
  • Report (pdf)
  • Quick Guide (pdf)

 

Areas of law

  • Technology Law
  • Robotics & Artificial Intelligence
  • Criminal Law & Procedure

 


 


 

Last updated 10 February 2021

 
