A recent study performed by Volkswagen showed that drivers invest a remarkable amount of trust in what they believe to be robotic cars. The cars in the study were actually controlled by a human driver who was hidden from view. Technology expert and blogger Brad Templeton summarizes the study, which was reported in The Register:

They created a fake robocar, with a human driver hidden in the back. The test subjects then were told they could push the autopilot button and use the car. And they did, immediately picking up their newspapers to read as they would in a taxi (which is what they really were in.)

Not only that, when they were told the robot could not figure out the situation and needed human assist, they gave it, and then went right back to autopilot.

Templeton is a vocal proponent of what he calls "robocars" and says the study supports his belief that humans would be happy to trust computer-controlled vehicles. "So trust of a robocar is already at a higher level than we might expect," he concludes. But this raises a deeper question: should we invest so much trust in robots, especially robots to which we have entrusted our lives?

A commenter on Templeton's blog dissents, arguing that the subjects were only so trusting because they knew they were in a controlled situation with extensive oversight, one for which VW would be legally liable: "We don't trust robocars, we trust the lawyers."