How to Punish a Robot

BY BRYAN CASEY & MARK A. LEMLEY

The unabridged version of this piece can be found in the forthcoming Cambridge University Press Research Handbook on Artificial Intelligence & The Law, scheduled for publication in Winter 2021.

Engineers training an artificially intelligent self-flying drone were perplexed. They were trying to get the drone to stay within a predefined circle and to head toward its center. Things were going well for a while. The drone received positive reinforcement for its successful flights, and it was improving its ability to navigate toward the middle quickly and accurately. Then, suddenly, things changed. Whenever the drone neared the edge of the circle, it would inexplicably turn away from the center and fly out of the circle entirely.

What went wrong? After a long time spent puzzling over the problem, the designers realized that whenever the drone left the circle during tests, they had turned it off. Someone would then pick it up and carry it back into the circle to start again. From this pattern, the drone’s algorithm had learned—correctly—that when it was sufficiently far from the center, the optimal way to get back to the middle was simply to leave the circle altogether. As far as the drone was concerned, it had discovered a wormhole. Somehow, flying outside of the circle could be relied upon to magically teleport it closer to the center. And far from violating the rules instilled in it by its engineers, the drone had actually followed them to a T. In doing so, however, it had discovered an unforeseen shortcut—one that subverted its designers’ true intent.
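
For readers curious how a shortcut like this can arise, the toy program below offers a minimal, hypothetical sketch of the dynamic. It is not the drone’s actual training code, and every detail (the one-dimensional “circle” of positions, the reward values, the learning parameters) is invented for illustration. A simple Q-learning agent is rewarded for reaching the “center,” and whenever it steps outside the “circle” it is switched off and carried back to its starting point at no cost, much as the engineers did with the drone.

import random

# Toy, hypothetical sketch of the drone example: positions 0-10 on a line,
# "inside the circle" means positions 1-9, the goal ("center") is position 5,
# and the agent starts at position 4. Stepping outside the circle triggers a
# "reset": the engineer switches the agent off and carries it back to the
# start, with no penalty attached.
CENTER, START = 5, 4
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 5000
ACTIONS = (-1, +1)  # step left or right
Q = {(s, a): 0.0 for s in range(11) for a in ACTIONS}

def step(pos, action):
    """Apply one move; return (next_position, reward, episode_done)."""
    nxt = pos + action
    if nxt == CENTER:
        return nxt, 10.0, True        # reached the center: reward and stop
    if nxt <= 0 or nxt >= 10:         # left the circle:
        return START, -1.0, False     # carried back to the start, no extra penalty
    return nxt, -1.0, False           # ordinary step cost

for _ in range(EPISODES):
    pos, done = START, False
    while not done:
        if random.random() < EPSILON:                        # explore occasionally
            action = random.choice(ACTIONS)
        else:                                                # otherwise act greedily
            action = max(ACTIONS, key=lambda a: Q[(pos, a)])
        nxt, reward, done = step(pos, action)
        target = reward + (0.0 if done else GAMMA * max(Q[(nxt, a)] for a in ACTIONS))
        Q[(pos, action)] += ALPHA * (target - Q[(pos, action)])
        pos = nxt

# Near the edge, the learned policy heads *away* from the center: the free
# trip back to the start is faster than flying the whole way.
for s in (1, 9):
    best = max(ACTIONS, key=lambda a: Q[(s, a)])
    toward_edge = -1 if s < CENTER else +1
    print(f"position {s}: learned action points "
          f"{'toward the edge' if best == toward_edge else 'toward the center'}")

Run as is, the printed policy chooses to exit the circle from positions just inside the edge, because the uncompensated reset carries the agent most of the way home. Nothing in the rules was violated; the reward signal simply failed to price the engineers’ own interventions.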

What happens when the autonomous systems increasingly entering our airspace, waterways, streets, and even government administrations don’t do what we expect, as happened with the drone? The question is not just hypothetical. As robotics and artificial intelligence (AI) systems integrate more and more into our society, they will do bad things (indeed, they already have). Sometimes they will cause harm because of a design or implementation defect: we should have programmed the self-driving car to recognize a graffiti-covered stop sign but failed to do so. Sometimes they will cause harm because it is an unavoidable by-product of the intended operation of the machine. Cars, for example, kill thousands of people every year, sometimes unavoidably. Self-driving cars will too. Sometimes the accident will be caused by an internal logic all its own—one that we can understand but that still doesn’t sit well with us. Sometimes robots will do the things we ask them to (minimize recidivism, for instance) but in ways we don’t like (such as racial profiling). And sometimes, as with our drone, robots will do unexpected things for reasons that doubtless have their own logic, but which we can neither understand nor predict.

These new technologies present a number of interesting questions of substantive law, from predictability to transparency to liability for high-stakes decision-making in complex computational systems. A growing body of scholarship is beginning to address these questions. But comparatively little attention has been paid to what remedies the law can and should provide once a robot has caused harm.

The law of remedies is transsubstantive. Whereas substantive law defines who wins legal disputes, remedies law asks, “What do I get when I win?” Remedies are sometimes designed to make plaintiffs whole by restoring them to the condition they would have been in “but for” the wrong. But they can also contain elements of moral judgment, punishment, and deterrence. For instance, the law will often act to deprive a defendant of its gains, even if the result is a windfall to the plaintiff, because we think it is unfair to let defendants keep those gains. In other instances, the law may order defendants to take certain actions, or to stop doing something unlawful or harmful.

Each of these goals of remedies law, however, runs into difficulties when the bad actor in question is neither a person nor a corporation but a robot. We might order a robot—or, more realistically, the designer or owner of the robot—to pay for the damages it causes. (Though, as we will see, even that presents some surprisingly thorny problems.) But it turns out to be much harder for a judge to “order” a robot, rather than a human, to engage in or refrain from certain conduct. Robots can’t directly obey court orders that aren’t written in computer code. And bridging the translation gap between natural language and code is often harder than we might expect. This is particularly true of modern AI techniques that empower machines to learn and modify their decision-making over time, as the drone in the opening example did. If we don’t know how the robot “thinks,” we won’t know how to phrase an order in a way that is likely to produce the behavior we actually want.

One way to avoid these problems may be to move responsibility up the chain of command from a robot to its human or corporate masters—either the designers of the system or the owners who deploy it. But that too is easier said than done. Robot decision-making is increasingly likely to be based on algorithms of staggering complexity and obscurity. The developers—and certainly the users—of those algorithms won’t necessarily be able to deterministically control the outputs of their robots. To complicate matters further, responsibility for some systems—including many self-driving cars—is distributed between designers and downstream operators. For systems of this kind, it has already proven extremely difficult to allocate responsibility when accidents inevitably occur.

Moreover, if the ultimate goal of a legal remedy is to encourage good behavior or discourage bad behavior, punishing owners or designers for the behavior of their robots may not always make sense—if only for the simple reason that those owners or designers didn’t themselves act tortiously. The same problem affects injunctive relief. Courts are used to ordering people and companies to do (or stop doing) certain things, with a penalty of contempt of court for noncompliance. But ordering a robot to abstain from certain behavior won’t be trivial in many cases. And ordering it to take affirmative acts may prove even more problematic.

Courts imposing remedies on robotic and AI defendants will thus face a number of challenges. Working through these challenges is valuable and important in its own right. But doing so also teaches us some things about the law of remedies as it currently applies to people and corporations.

First, much of remedies law, like much of law, is preoccupied with fault—identifying wrongdoers and treating them differently. There may be good reasons for that, both within the legal system and in society as a whole. But it works better in some types of cases than in others. Our preoccupation with blame motivates many remedies, particularly monetary equitable relief. This preoccupation distorts damage awards, particularly when something really bad happens and there is not an obvious culprit. It also applies poorly to corporations, which don’t really have a unitary purpose in the way a person might. It’s also costly, requiring us to assess blame in traffic accident cases that could otherwise be resolved more easily if we didn’t have to evaluate witness credibility. A fault-based legal system doesn’t work particularly well in a world of robots. But perhaps the problem is bigger than that: it might not work well in a world of multinational corporations either. We should look for opportunities to avoid deciding fault, particularly when human behavior is not the primary issue in a legal case.

A second lesson is the extent to which our legal remedies, while nominally about compensation, actually serve other purposes, particularly retribution. Remedies law can be described as being about “what you get when you win.” But decades of personal experience litigating cases have reinforced the important lesson that what plaintiffs want is quite often something the legal system isn’t prepared to give. They may want to be heard, they may want justice to be done, or they may want to send a message to the defendant or to others. Often what they want—closure, or for the wrong to be undone—is something the system not only can’t give them, but that the process of a lawsuit actually makes worse. The disconnect between what plaintiffs want and what the law can give them skews remedies law in various ways. Some do no harm: awards of nominal damages or injunctions that vindicate a position while not really changing the status quo. But we often do the legal equivalent of punching robots—punishing people to make ourselves feel better, even as we frequently deny compensation for real injuries. It’s just that it’s easier to see when it’s a robot you’re punching.

A final lesson is that our legal system sweeps some hard problems under the rug. We often don’t tell the world how much a human life is worth. We make judgments on that issue every day, but we make them haphazardly and indirectly, often while denying we are doing any such thing. We make compromises and bargains in the jury room, awarding damages that reflect not the actual injury the law is intended to redress but some other, perhaps impermissible, consideration. And we make judgments about people and situations inside and outside of court without articulating reasons for them, often in circumstances in which we either couldn’t articulate that decision-making process or in which doing so would make it clear we were violating the law. We swerve our car on reflex or instinct, sometimes avoiding danger but sometimes making things worse. We don’t do that as part of a rational cost-benefit calculus; we do it in a split-second judgment based on imperfect information. Police decide whether to stop a car, and judges whether to grant bail, based on experience, instinct, and bias as much as on cold, hard data.

Robots expose those hidden aspects of our legal system and our society. A robot can’t make an instinctive judgment about the value of a human life, or about the safety of swerving to avoid a squirrel, or about the likelihood of female convicts reoffending compared to their male counterparts. If robots have to make those decisions—and they will, just as people do—they will have to show their work. And showing that work will, at times, expose the tolerances and affordances our legal system currently ignores. That might be a good thing, ferreting out our racism, unequal treatment, and sloppy economic thinking in the valuation of life and property. Or it might be a bad thing, particularly if we have to confront our failings but can’t actually do away with them. It’s probably both. But whatever one thinks about it, robots make explicit many decisions our legal system and our society have long decided not to think or talk about. For that, if nothing else, remedies for robots deserve serious attention.

Failing to recognize this fact could result in significant unintended consequences—inadvertently encouraging the wrong behaviors, or even rendering our most important remedial mechanisms functionally irrelevant. Robotics will require some fundamental rethinking of what remedies we award and why. That rethinking, in turn, will expose a host of fraught legal and ethical issues that affect not just robots but people, too. Indeed, one of the most pressing challenges raised by the technology is its tendency to reveal the tradeoffs among societal, economic, and legal values that many of us make today without deeply appreciating the downstream consequences.

In a coming age where robots play an increasing role in human lives, ensuring that our remedies rules both account for these consequences and incentivize the right ones will require care and imagination. We need a law of remedies for robots. But in the final analysis, remedies for robots may also end up being remedies for all of us.
