
When the Rubber of Artificial Intelligence Meets the Road: Will We Ever Ride in Driverless Cars?

The driverless car stands at the pinnacle of future visions of a world powered by Artificial Intelligence. Futurists picture roads full of cars moving swiftly and safely while passengers send endless texts and watch streaming videos.


As strange as it may seem, a recent news report of an automobile accident can help us understand the likely limits of Artificial Intelligence as well as the importance of trust in our lives and our futures.

Consider the news from last week that a Google driverless car caused a wreck when it ran into a city bus in Mountain View, California. The accident stands as the first such incident among the 53 vehicles that have driven themselves for more than 1.4 million miles. Actually, they have not really been driving themselves, because two Google employees always ride along who can override the car’s own “thoughts” — and they have done so to avoid 13 accidents. There is no way to count how many accidents have been avoided by other drivers taking action to steer clear of the Google cars.


The accident occurred because the Google car thought that the bus, driven by a human, would yield to the computer-guided car. A Washington Post article noted that, “Google characterized the crash as a misunderstanding and a learning experience, saying its cars will learn that large vehicles are less likely to yield than other types of vehicles.” The Google analysis would have you believe that the “large vehicle” had a mind of its own. In fact, the accident occurred because the human bus driver moved the “large vehicle” in a way that confounded the Google car’s logic.


Of course, this interaction between human and machine challenges the entire premise of using Artificial Intelligence to drive cars. In the best case, all driverless cars would share the same logic system and reach the same conclusions. But is that likely when Google, Apple, GM, Ford and many others are each developing their own systems? More importantly, until all humans yield driving responsibilities to the machines, we will consistently face the dilemma from last week: the bus versus the Google car, the human reaction versus the computer reaction.


And that mismatch may mean that society will never even get to the point where many of us trust the driverless car in the first place.


Pilots operating Boeing 777s report that they spend less than seven minutes manually controlling the aircraft during a flight; Airbus pilots report only half as much “stick time.” Military airborne drone technology demonstrates that aircraft can be controlled remotely from halfway across the world without any onboard input. But ask yourself: would you get into an airplane without a pilot?


Robotic surgery is growing more common. But would you allow a robot to operate on you without a surgeon at the controls?


Here’s the thing: the role of humans will always be vital to riskier aspects of our lives because our willingness to participate in those activities depends on trusting other humans. And trust is a fundamentally human quality.


We insist that a pilot fly the plane even if technology does most of the work because we trust the ability of a human to deal with an emergency in flight. We want a surgeon to be present because we trust the doctor will know what to do in case the computer malfunctions. Dr. Mary Cummings, Director of the Humans and Autonomy Laboratory at Duke University and a former Navy F-18 pilot, was quoted in a recent New York Times article, “You need humans where you have humans.”


And, in the end, one must ask quite critically: will we ever trust anyone but a human to drive us down the highway at 80 miles per hour?

The lesson for lawyers is clear. As technology (including applied Artificial Intelligence) narrows the range of tasks we perform to those things that only humans can do — those matters requiring clients to trust in us because of the inherent risk to them — trust will become more important. Lawyers have to realize that our future lies in building that trust with clients by living an integrated set of values that places client interests first. By focusing on building trust, we will place ourselves in the best position for long-term success.


Society may never learn to trust the driverless car, the pilotless plane or the autonomous robot surgeon. As lawyers, we have to learn that our future depends upon continuously working to ensure that the trust we build in client relationships keeps us among the pilots, the surgeons and even the car drivers — the trust relationships that technology cannot replace.
