Intel Passenger Trust

Intel has said that passenger trust is key to our autonomous future: even passengers riding in a car with perfect self-driving technology need to trust it. The company also believes this is a solvable problem.

In a study conducted by Intel, passenger trust ranked highest among the possible problems that could negatively influence the rollout of autonomous technologies. The study showed that, even if self-driving technology were perfect, people still might not trust it. Consider the aeroplane: accident rates in aviation are significantly lower than for road vehicles, yet many people still fear flying. The same could prove true of self-driving cars.

An Intel spokesperson said: “We are really close to perfecting the technology for self-driving cars. But our driverless future won’t go anywhere if people don’t trust it. It’s one thing for our autonomous test cars to take us for a drive with a safety driver behind the wheel. But soon there won’t be anyone in that seat.”

The promise of autonomous vehicles is tantalising. Experts predict that we can save millions of lives and open up a new type of mobility just by removing humans from the driver's seat. But the difference between theory and practice comes down to one important factor: people are scared of robot cars.

Intel found that 75% of Americans were scared of taking a ride in a self-driving car. Intel continued: “The good news is this is a solvable problem. At Intel, we believe we can overcome consumer apprehension by creating an interactive experience between car and rider that is informative, helpful, and comfortable – in a word: trustworthy.”

The main way to gain passenger trust, Intel found, was simply to let people take a ride in a self-driving car with a great user experience built in from the beginning: requesting a vehicle, starting the trip, making changes to the trip, handling errors and emergencies, and finally pulling over and exiting the vehicle. If nothing went wrong at any of these five trust interactions, passengers showed a significant boost in confidence.
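The all-or-nothing nature of those five trust interactions can be sketched as a tiny model; the stage names and the `trust_boosted` helper below are illustrative assumptions, not anything from Intel's study:

```python
from enum import Enum

class TripStage(Enum):
    """The five trust-interaction stages described above (names are illustrative)."""
    REQUEST_VEHICLE = 1
    START_TRIP = 2
    CHANGE_TRIP = 3
    HANDLE_ERRORS = 4
    PULL_OVER_AND_EXIT = 5

def trust_boosted(stage_outcomes: dict) -> bool:
    """Confidence rises only if every single stage completed without incident."""
    return all(stage_outcomes.get(stage, False) for stage in TripStage)

# A trip that goes smoothly at all five stages boosts confidence:
smooth_trip = {stage: True for stage in TripStage}
print(trust_boosted(smooth_trip))  # True
```

The point the model makes explicit is that trust is conjunctive: a problem at any one stage, not just on average, undermines the passenger's confidence.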

“People feel they need to completely understand how the technology works, and its full capabilities, before they will trust the system.”


Their research identified seven areas of tension in self-driving technologies that they believe need further exploration. The first is human versus machine judgement. Passengers became agitated when they saw someone crossing the road ahead of the vehicle. Where a human driver might acknowledge having seen the pedestrian, the autonomous system simply slows down, leaving the passenger nervous because they can't be certain the vehicle has spotted the obstacle ahead.

Another issue was that people felt they needed to completely understand how the technology worked, and its full capabilities, before they would trust the system. This mirrors how people with a fear of flying can ease their concerns by learning how an aeroplane works. At the same time, seeing and experiencing the vehicle sense and respond to what was happening around it – proof that it works – helped them gain confidence.

Interestingly, one tension point Intel found was that once people became comfortable with the autonomous system, they were bothered by the constant alerts and warnings, which they found irritating. You can't win either way!

Intel will continue to explore trust as a core element of vehicle system architecture and design.