Can We Trust Driverless Vehicles?

By: Ara Trembly | September 15, 2014

Ara Trembly is founder of The Tech Consultant and The Rogue Guru Blog. He can be reached at [email protected].

Several years ago, I read a story about a man who purchased a brand-new motor home. One day, while driving along, he decided he needed a cup of coffee, so he set the motor home on cruise control and walked back to make said coffee.

Needless to say, the vehicle ran off the road and crashed. The story turned out to be an urban legend, but it underscores the point that leaving technology in sole control of a moving vehicle may not be a good idea.

That brings us to the subject of the proposed driverless car, a topic on which I have opined previously.

A recent article in the Wall Street Journal notes that “Between now and 2016, an increasing number of car makers will offer ‘traffic jam assist’ systems that take over braking, steering and acceleration for vehicles inching along in low-speed traffic. It is a far cry from Google Inc.’s vision for a car that can drive itself in all conditions, but auto makers and suppliers have long taken the view that quantum leaps typically take place one mile at a time.”

At first, this seems like a very appealing concept. I was recently stuck in a monster traffic jam on Interstate 95 in South Carolina, and I certainly could have saved a great deal of effort and aggravation over the hour or so that we crawled along if my car had simply taken over all the stops and starts while I grabbed a nap.

But I also remember that during this mind-numbing event there were several times when people, including children, got out of their cars to walk around on the roadway.

Would my “traffic jam assist” recognize that potential hazard? And would the software alert me when the road was clear again? One wonders.

Auto industry executives, the Journal says, intend to offer systems that can robotically pilot a car at speeds up to 40 miles per hour within the next five years or so.

“Meanwhile, federal safety regulators say they are still conducting research on the safety and potential benefits of autonomous technology.”

Well done, regulators. Any technology that substitutes itself for the alertness and judgment needed from a human driver is risky by definition.

Ask yourself how many times your own computer slows down or simply quits working, necessitating a reboot or some other fix. Most of us have come to accept these glitches as a fact of life, but when you are motoring down the road at 40 mph (and I’m sure speeds will climb as time goes on), there is no time for a reboot.

As I noted in my previous writings on this subject, accidents involving the inevitable failure (even if it is only occasional) of such technology could be a nightmare for insurers who need to assign risk and pay claims.

Certainly, technology that automatically brakes before my car can smash into anything is a potential lifesaver. The real danger is from technology that allows or encourages human drivers to stop paying attention, because the human brain understands things about risk that a computer chip may not.

Common sense and true concern for the safety of drivers demand that we strike a balance between technology that reduces risk and gadgets that actually increase the danger by removing responsibility and accountability.
