Look Ma – no hands
What if we could have the advantages of cars with none of the downsides? That’s the promise held out by advocates of computer-controlled, self-driving vehicles. As they power ahead on solving the technology challenges, Michael Cameron looks at the legal implications.
It was supposed to be a nice day out. On August 17, 1896, Mrs Bridget Driscoll and her 16-year-old daughter were taking a trip to Crystal Palace in south-east London to attend a fete for the Catholic temperance organisation League of the Cross. But the day had a tragic end. A motoring exhibition was also taking place that day, and 45-year-old Driscoll was struck and killed by one of the newfangled horseless carriages. The imported Roger-Benz motor car, on a demonstration ride for two passengers, was travelling at a stately 6.5km/h, but it was too fast for the bewildered woman, who had the dubious honour of becoming the first pedestrian in the UK – and possibly the world – killed by the new technology that was about to sweep the planet.
Far from responding with a regulatory clampdown, lawmakers passed the Locomotives on Highways Act – which actually relaxed speed limits – less than three months after the tragedy. The accident was reported widely, but Victorians were famously easy-going about health and safety.
In any case, the arrival of the life-changing automobile was keenly anticipated. In August 1899, the New Zealand Herald celebrated the “passing of the horse and the coming of the new era of motor cars”, which it confidently predicted would “completely change for the better the conditions of city life” and rhapsodised about “spinning along a country road, up-hill and down-hill, in one of these cars, at the rate of twenty miles per hour” with “no fatigue, and little or no danger”.
We have largely forgotten the social ills that plagued our cities until the automobile consigned them to history. Horse manure and urine caused dire sanitation problems; horses’ ironclad hooves damaged street surfaces; agricultural production had to emphasise fodder crops such as oats and hay.
Urban planners were at a loss: restricting their use would be a cure worse than the disease, because horses were essential for the functioning of the modern city. But cars cleaned up the mess, eventually providing cheap transport to people who could not afford to buy and maintain horses.
Our relationship with cars has many parallels to the Victorians’ dependence on horses: we bemoan their negative effects, such as road deaths and injuries, air pollution, urban sprawl, oil wars, obesity and climate change, and we try to reduce our car use by riding bicycles or taking public transport. But so far we have failed to curb the harms of cars to any meaningful extent. The only solutions seem to involve drastic restrictions, which economic and political realities make very difficult.
But what if we could have the advantages of cars with none of the downsides? That is the promise held out by advocates of computer-controlled, self-driving cars, who are every bit as enthusiastic as their 19th-century counterparts were for the new technology of their day.
Because computers don’t get drunk, distracted or make mistakes, they will drastically reduce – even eliminate – road deaths. Because they are such careful drivers, more people will feel safe enough to ride bikes, and our children will walk and ride to school again. Because they will facilitate efficient app-based ride sharing, they will take vehicles off the roads and ease congestion and emissions. Because they can be networked, the seething traffic of our cities can be rationally co-ordinated, further easing congestion and emissions. Because they can drive much closer together, they will increase the capacity of our existing roads. Because they can drive off and park themselves when not needed, they will release vast tracts of inner-city real estate currently monopolised for parking, so car parks will become real parks.
Self-driving cars will boost economic growth, revolutionise mobility for everyone, particularly the disabled, and make car ownership a choice rather than the necessity it is for many.
DAYS OF FUTURE PAST
That’s the utopian vision. But there is cause for scepticism. For starters, we have been promised self-driving cars before. In 1964, General Motors was promoting the Firebird IV, a concept car that “anticipates the day when the family will drive to the superhighway, turn over the car’s controls to an automatic, programmed guidance system and travel in comfort and absolute safety at more than twice the speed possible on today’s expressways”. The technology was ingenious and impressive and it worked on test runs. But it came to nothing. It relied on expensive buried cables, which cost up to $200,000 a mile. It would never be economical to convert existing highways.
In 1997, the US Department of Transportation funded a consortium of companies and universities to develop and demonstrate a workable prototype of an automated highway system. The demonstrations went flawlessly, as fleets of Buicks drove autonomously in tight formations using a system of magnets embedded in the roads. Articles at the time breathlessly reported that “highways of the future may feature relaxed drivers talking on the phone, faxing documents or reading a novel” and that the demonstrations “illustrate that the vision of an automated highway system that improves traffic safety and highway efficiency can be made a reality”. But again, it never became a reality: installing the magnets was too expensive.
We might be forgiven for thinking that the new hype about self-driving cars is just another false dawn, but there is good reason to think it will be different this time. Previous initiatives failed because they required
massive investments in public infrastructure, but a confluence of new technologies, including GPS, cheap and effective sensors, high-definition cameras and artificial intelligence (AI), is making it possible for self-driving cars to function without the need for any new infrastructure.
Already, this has resulted in production cars with useful semi-autonomous modes, such as Autopilot on Tesla models, which allows the car to drive itself on a highway, staying in its lane, maintaining a safe following distance and changing lanes safely on demand.
BULLISH OUTLOOK
All these systems still require human supervision and are not fully autonomous by any stretch. But they are improving all the time and leading companies are bullish that they will soon crack full autonomy. Ford predicts it will offer fully autonomous ride-sharing vehicles on public roads by 2021. And
Tesla plans to have a car drive itself from Los Angeles to New York early next year. Whether progress will be this swift remains to be seen. Even if it is, there are legitimate questions about whether it will be so uniformly positive.
Reductions in congestion, emissions and energy use may be temporary. Just as building new roads is said to simply result in more people using them, any gains made from autonomous ride-sharing may quickly be eroded by people taking more trips. We could end up back where we started, or worse off.
What about urban communities that rely on foot traffic for their unique character? If everyone has super cheap point-to-point transport on demand, might our cities become featureless urban deserts populated by hyper-efficient people-moving boxes? And what will happen to the more than 40,000 New Zealanders who make their living as drivers?
The decisions we make will be guided by whether we think concerns about self-driving cars are outweighed by their benefits.
Of greatest alarm for most people is being killed by a self-driving car. We have become used to computers dominating our lives, but so far they have largely been kept out of the physical realm. Never mind that they are supposed to be safer than human drivers: there is something terrifying for many about placing their lives so directly in the hands of machines. The blue screen of death of personal-computer infamy may have been annoying, but at least it was only a metaphor; no one has died from one. And the death last year of Tesla driver Joshua Brown while using autopilot doesn’t really count, because he ignored repeated automated warnings. But somewhere, sometime a self-driving car will make a mistake and someone will die.
NEW TERRITORY
Our laws are silent on driverless vehicles; whether they are even legal is a grey area. Some level of supervised autonomy seems to have already been accepted as legal, as demonstrated by the presence on our roads of cars such as the Tesla Model S with its autopilot function. There is an arguable case that the future use of fully autonomous
vehicles without a supervising driver is also legal under current laws, so it is quite possible that we could see them causing deaths on our roads before too long.
So we have to make some decisions. Are we happy to muddle along with the status quo? Do we want to ban them? Or explicitly legalise them? If the latter, how do we ensure safety? Should this be by way of approving specific models? Or do we want to encourage faster uptake by being more permissive?
The decisions we make will be guided in part by whether we think concerns about self-driving cars are outweighed by their benefits. If we decide they are beneficial, there is good reason to be in the leading pack of jurisdictions grappling with their arrival. They may create more lucrative jobs in the countries involved in their development, and surely it is better to encourage domestic and overseas companies to invest in the industry rather than simply be a passive consumer of its products. Beyond the business case, the arguments are more compelling. More than 300 people die on our roads every year. If driverless cars really are safer than human-driven ones, any delay will cost lives.
The first deployments of autonomous vehicles are likely to work in restricted circumstances – what the experts call an operational-design domain. Anders Lie is a specialist with the Swedish Transport Administration, working with Volvo on a large-scale self-driving trial (he’s the man responsible for that annoying but effective sound your car makes if you don’t buckle up). He is picking that the first commercial applications will be highway-driving modes that work without driver supervision, or ride-sharing fleets of self-driving taxis or mini-shuttles, possibly operating at first in restricted lanes such as tramways.
Other possibilities include convoys of autonomous long-range freight trucks or small unmanned delivery pods. All of these applications will have one thing in common: “Highly automated driving systems will depend heavily on the preparation of detailed 3D maps, and some will be unable to operate outside the mapped areas except on prescribed routes,” says Jim Sayer, director of the University of Michigan Transportation Research Institute. “Such systems will also depend on significant testing carried out in the unique environments of the areas in which they will be deployed – taking into consideration local or regional differences in traffic-control devices, roadway design, weather conditions and driving norms.”
Google and Uber are among companies that have invested heavily in mapping and testing in California, Arizona and Pennsylvania, which are likely to be among the first places where autonomous vehicles are deployed. There has been some mapping of New Zealand roads by satellite-navigation providers such as TomTom, but none of the companies developing autonomous vehicles has come here to map yet. And, with the exception of the ground-breaking autonomous shuttle bus trials being done by HMI Technologies (including one at Christchurch Airport), testing has been limited to a few demonstration runs.
THE LAW’S DELAYS
If we want to catch up with these developments, we need to think about how much our laws encourage – or discourage – this kind of work. We need to decide who will be responsible when a self-driving car causes an accident, and whether it will be an offence if a driverless car does not exchange details after one. Speed limits as currently worded apply to a “driver”, so a self-driving car cannot be said to be speeding.
The first and biggest issue is that we have not made explicit legal provision authorising cars to operate on public roads without human supervision. Companies might be reluctant to invest in New Zealand if they have no certainty that their business model will be legal.
“Our vision is that self-driving vehicles will be best operated within a shared-ride network,” says Justin Kintz, senior director of policy and communications for Uber, “and therefore it is imperative that any place that wants to benefit from self-driving first establishes progressive rideshare laws, before tackling self-driving regulations.
“Clarity in self-driving law is going to be incredibly important for developers of the technology, because the investment required for early deployment is huge, and certainty of standing provides confidence for businesses.”
Kintz points to the approach of Arizona, which has attracted a great deal of investment and testing, in part because of its straightforward rideshare rules, but also because of an executive order in 2015 spelling out the circumstances under which testing of self-driving vehicles could occur.
At first glance, it looks clear that the law could simply require cars to be submitted for approval before they are allowed on public roads. A company might describe what it thinks its car can do and explain why it is safe. A government-approval agency would then scrutinise the proposal, test the vehicle and approve or decline it.
This is probably what will happen under our current laws. For most vehicles, entry to New Zealand is dependent on the model having approval in Europe, where there is a very prescriptive process known as “type-approval”. But there are complications in applying type-approval to driverless vehicle systems.
To begin with, it is unlikely that governments will have, or be able to acquire, the expertise to properly assess these vehicles: that expertise exists largely inside the companies developing them, and it is hard to get information out of those firms.
“Companies are reluctant to share their data with regulators without some assurance that information that they consider to be trade secrets will not become part of the public record and disclosed to their competitors,” says Brian Soublet, deputy director and chief counsel of the California Department of Motor Vehicles. The implication is that any jurisdiction insisting on such disclosure risks being ignored as an investment destination.
It is simplistic to say that regulation always hampers innovation; sometimes it can stimulate it. When ozone-depleting chlorofluorocarbons (CFCs) were banned, replacements were quickly found. But there is evidence that heavily prescriptive type-approval regulation can stifle creativity. Requiring manufacturers to use certain materials, for example, can discourage the
search for better ones. But in the absence of regulation, we must rely on assurances from manufacturers, and this raises many questions, not least how the safety of driverless vehicles can be measured and demonstrated.
“Current known methods for assessing the safety of systems are not workable for automated driving systems because of their extreme complexity,” says Steven Shladover, a research engineer and manager at the Partners for Advanced Transportation Technology Programme at the University of California, Berkeley, and a veteran of the US national automated highway system demonstration of the 90s. “Vehicles would need to drive billions of miles to achieve a statistically meaningful demonstration that they are significantly safer than average drivers.”
And whose safety are we talking about? Is it acceptable to prioritise the safety of people inside the car over others? Does that matter if the car is still safer overall? If it is 10 times safer for occupants but only five times safer for others, do we insist on equality, or are we just grateful for any improvement? If we do insist upon equality, will we be discouraging the uptake of the vehicles with their lifesaving advantages?
A survey published in Science last year showed that although most people approved of autonomous cars being programmed for equality, they wouldn’t personally want one that didn’t prioritise their safety. And is there actually anything new about these dilemmas? Or have manufacturers always had to balance the safety of different groups? As Bart Simpson retorted when his sister expressed concern about the propensity of SUVs to be involved in fatal accidents, “Fatal to the people in the other car, let’s roll!”
THE TEACHABLE CAR
Demands by the public for manufacturers to disclose exactly what a car’s ethics are and what it will do in various hypothetical situations (swerve into the pole or plough straight into the pedestrian?) can be difficult for manufacturers to answer. This is not necessarily because they are reluctant to tell you; they may not even know themselves. The reason these vehicles are now getting so good is that many are, to a large extent, no longer formally programmed.
The problem of writing code to specify different courses of action for all of the practically infinite scenarios a car could encounter once seemed insurmountable but it is now being conquered through the use of machine learning. Instead of detailed programming, the self-driving programs spend thousands of hours practising to learn what works best. Designers can’t necessarily predict what their car will do, but they can say that, whatever it does, it will be safer than what a human would have done.
Many will find the notion that these cars are not predictable more alarming than anything else. Not knowing why computers are coming to the conclusions they reach can be disturbing. There have been cases in the US where machine-learning programs used to assess the risk of criminal reoffending turned out, after an investigation last year by the ProPublica website, to be racially biased. And with the Volkswagen emissions scandal fresh in people’s memories, trust in manufacturers’ assurances about their systems cannot be taken for granted.
If these assurances are not enough for us, we will need to find other safeguards. Putting vehicles through driving tests or demonstrations is an approach favoured by Dave Verma, director of autonomous vehicles for Auckland-based HMI Technologies. Verma says “the tests would need to be carefully designed to involve a level of unpredictability to prevent manufacturers from teaching to the test”. Some manufacturers would argue that such tests are superfluous. What can a few hours with a testing officer tell us that hundreds of thousands of hours of real-world testing and simulations cannot? “Not much,” says Verma, “but they have the great advantage of providing us with level-playing-field results that anyone can understand and that seem less reliant on having to trust the assurances of manufacturers. There has to be a benchmark that the public can trust as the minimum expected performance.”
If concerns arise, manufacturers risk expensive product recalls and the US Federal Government is already well equipped for those. “The National Highway Traffic Safety Administration has the authority to identify safety defects, allowing it to recall vehicles that pose an unreasonable risk to safety even when there is no applicable federal standard,” says the administration’s federal automated vehicles policy. There may be potential for New Zealand to make greater use of these kinds of mechanisms.
Self-driving cars have many enthusiasts, kindred spirits of that writer in the New Zealand Herald more than a century ago. The benefits can vastly exceed the costs if they are managed well, and if we can avoid counterproductive regulation. But sooner or later self-driving cars will face their Mrs Driscoll moment: a self-driving car somewhere in the world will make a mistake and kill an innocent person.
When this happens, the reaction is likely to be a lot more vigorous than that of the Victorians to the first automobile death. The aim should be to avoid overreacting and not lose sight of the bigger picture. Every death is a tragedy. But at the moment, our small country is suffering tragedies of death and injury every day that don’t even make the papers. The larger tragedy would be if a knee-jerk reaction jeopardised our opportunity to seize the future.
Michael Cameron, the Law Foundation’s 2016 international research fellow, talked to experts from Silicon Valley to Singapore about the potential of driverless vehicles in New Zealand. His report, to be published in November, will make recommendations for law reform.