RISK TO EXIST

Thinking Cities
By Thinking Cities October 20, 2017 11:51

It’s time to face up to the risks as well as the opportunities of automation, says the European Transport Safety Council’s Antonio Avenoso

Regular users of computers running Microsoft Windows over the years will be familiar with something technically inclined people like to call the ‘Blue Screen of Death’. This is the error message that renders your computer unusable until it is restarted or, in more serious cases, the operating system is reinstalled. In my experience it usually occurs on the day of a critical meeting or on the eve of an important project milestone.

We have become used to computers, mobile phones and other gadgets that can be unreliable in daily use. These situations are frustrating, but generally speaking, nobody dies as a result.

Technology companies and carmakers are now hoping that even more sophisticated combinations of hardware, software and mechanical systems will render human drivers obsolete, increase road safety and reduce congestion. They are racing to put computers in charge of cars, vans and lorries.

But for now, much of the talk about autonomous vehicles is hype, with little scientific evidence to back up the claims. The available data often comes from the industry itself and is rarely subject to independent verification.

Such hype has helped push the market value of Tesla, a 15-year-old company that sold 76,000 vehicles last year, above that of Ford, which sold 6.65 million.

Tesla’s website says that all new vehicles leaving its factory “have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver”. Further down the same web page, the company says its “full self-driving capability” will have “what we believe will be a probability of safety at least twice as good as the average human driver”.

Tesla provides no further information on its website to verify these claims. One snippet of publicly available data is a regulatory filing to the California Department of Motor Vehicles for the year 2016. Tesla tested four vehicles on Californian public roads with engineers behind the wheel at all times. The filing lists the number of ‘disengagements’, i.e. occasions when the human driver had to take over control of the vehicle. In 550 logged miles of driving, there were 182 disengagements.
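To put those two figures in context, a short illustrative calculation (taking the filing's numbers at face value, and purely a sketch rather than anything from the filing itself) shows what the intervention rate implies:

# Illustrative only: the rate implied by the figures cited above
# (550 logged miles and 182 disengagements in the 2016 filing).
logged_miles = 550
disengagements = 182

rate_per_mile = disengagements / logged_miles            # about 0.33 disengagements per mile
miles_per_disengagement = logged_miles / disengagements  # about 3.0 miles between interventions

print(f"{rate_per_mile:.2f} disengagements per logged mile")
print(f"one disengagement roughly every {miles_per_disengagement:.1f} miles")

In other words, the human engineers had to intervene roughly once every three miles of testing.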

Many of these events took place on roads classified as ‘suburban’. And indeed, the prospect of autonomous or semi-autonomous vehicles driving on streets where they will interact with pedestrians and cyclists raises many of the most pressing concerns surrounding automated driving.

How will pedestrians and cyclists react to driverless cars?

A recent research paper by SWOV, the Dutch Institute for Road Safety Research [1], highlights a number of issues that have so far received little attention from the research community. One fundamental challenge is how to ensure that automated vehicles can reliably predict the behavioural intentions of pedestrians and cyclists. Today, a whole range of complex factors shapes how people behave as pedestrians and cyclists, including formal rules and regulations, informal rules and non-verbal communication. Even if behaviour could be accurately predicted in current urban environments, it is not clear how vulnerable road users will change their behaviour when faced with driverless cars.

According to SWOV’s research, the few studies that have examined how pedestrians and cyclists interact with automated vehicles generally found that they were fairly cautious and not necessarily confident of the vehicles’ ‘skills’. Furthermore, pedestrians and cyclists were found to appreciate messages and/or signals from the car indicating whether it has detected them and what it intends to do. However, exactly which messages should be conveyed, and how they should be communicated, are not yet settled and require further study.

These types of questions, as well as others concerning related areas such as training, infrastructure and regulatory changes, will need to be answered before automated vehicles are offered for sale, not as an afterthought.

Who writes the rules?

At present, in the absence of regulatory clarity, car manufacturers are writing their own rules. In an interview for the US magazine and website Car and Driver, a Mercedes-Benz executive in charge of automated systems said a Mercedes automated vehicle’s ‘first priority’ would be to save its occupants.

Whether one agrees or disagrees with this ethical choice, it simply shouldn’t be up to carmakers to decide.

A fatal crash last summer in Florida is another case in point. A Tesla owner, reportedly driving his vehicle in semi-automated ‘Autopilot’ mode, crashed into the underside of a large truck that was crossing the highway in front of him.

Several aspects of that case were disturbing. First, the car was exceeding the speed limit at the time; automated systems should not be able to break laws that are in place for safety reasons. Second, the automated system could be activated even though the car was not on a road suitable for its use. And Tesla’s system was apparently unable to cope with a large white truck crossing a highway on a sunny day in Florida, which must be a rather common occurrence. As with the theoretical ‘occupant first’ rule set by Mercedes, Tesla is currently deciding for itself what rules apply in every conceivable situation.

The risk is of a kind of lawless Wild West for the early years of automated cars, not unlike the early years of motoring itself – before speed limits, traffic lights and driving tests started to set the rules of the road. This could be a disaster, not least for the nascent industry.

If independent regulation and step-by-step approval of automated systems are not in place soon, a number of high-profile deaths caused by automated vehicles could so horrify and appal the public that the vehicles are withdrawn from use. Rebuilding trust could then be a huge challenge.

Even if deaths do eventually fall as computers gradually remove human error and recklessness from driving, a small number of so-called ‘false positives’, cases where the vehicle itself makes an error and causes a fatal collision, could be devastating for the industry.

The fear of automotive killing machines would be felt in much the same way as the fear of terrorism: as something to be stopped at any cost.

A step-by-step approach to increased automation

What’s needed is a step-by-step approach, starting with approvals for systems that have been proven to work in specific scenarios such as motorways without cross junctions or roadworks.

In Europe, it should be national governments, together with the European Union, that set the rules, oversee testing, and independently investigate collisions. The current regulatory environment is not set up for any of these tasks in the vastly more complex world of automated cars. It’s time Europe woke up to the risks as well as the opportunities of automation.

FYI

Antonio Avenoso is Executive Director of ETSC, a Brussels-based independent, non-profit organisation dedicated to reducing the number of transport-related deaths and injuries in Europe.

information@etsc.eu

www.etsc.eu
