The media and academic dialogue surrounding high-stakes decisionmaking by robotics applications has been dominated by a focus on morality. But the tendency to dwell on moral questions while overlooking the role that legal incentives play in shaping the behavior of profit-maximizing firms risks marginalizing the field of robotics and rendering many of the deepest challenges facing today’s engineers utterly intractable. This Essay attempts to both halt this trend and offer a course correction. Invoking Justice Oliver Wendell Holmes’s canonical analogy of the “bad man . . . who cares nothing for . . . ethical rules,” it demonstrates why philosophical abstractions like the trolley problem—in their classic framing—provide a poor means of understanding the real-world constraints robotics engineers face. Using insights gleaned from the economic analysis of law, it argues that profit-maximizing firms designing autonomous decisionmaking systems will be less concerned with esoteric questions of right and wrong than with concrete questions of predictive legal liability. Until the conversation surrounding so-called “moral machines” is revised to reflect this fundamental distinction between morality and law, the thinking on this topic by philosophers, engineers, and policymakers alike will remain hopelessly mired. Step aside, roboticists—lawyers have this one.
Bryan Casey, CodeX Fellow, The Stanford Center for Legal Informatics. J.D. Candidate, Stanford Law School, Class of 2018. The author particularly thanks A. Mitchell Polinsky, the Josephine Scott Crocker Professor of Law and Economics at Stanford Law School, for his generous support.
Copyright 2017 by Bryan Casey
Cite as: Bryan Casey, Amoral Machines, Or: How Roboticists Can Learn to Stop Worrying and Love the Law, 111 Nw. U. L. Rev. Online 231 (2017), http://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1248&context=nulr_online.