by Tracy Zafian, Research Fellow
In March 2018, an Uber Volvo operating in automated driving mode, with a safety driver at the wheel, struck and killed a pedestrian in Arizona. It was the first pedestrian fatality involving an autonomous vehicle (AV). There have been other crashes involving autonomous test vehicles since, as well as additional fatalities involving lower-level automated vehicles. Also in March 2018, a driver in California was killed when his Tesla, in Autopilot mode, crashed into a concrete highway lane divider and caught fire. Last month, Teslas in Autopilot mode crashed into a parked firetruck in Utah and into a parked police car in California; injuries in both instances were minor.
All of these crashes occurred with a driver at the wheel making decisions about when to engage and disengage the automated driving assist system. As reported in a USA Today article following the Utah firetruck crash, Tesla issued a statement saying, “When using Autopilot, drivers are continuously reminded of their responsibility to keep their hands on the wheel and maintain control of the vehicle at all times,” and, “Autopilot is designed for use on highways that have a center divider and clear lane markings.” These conditions are not always met when crashes occur; for example, the driver in the Utah crash admitted to being distracted by their phone before the crash. According to the National Transportation Safety Board’s preliminary findings from its investigation of the Uber pedestrian crash, no warning was given to the safety driver before the crash. Current AV technologies, such as Tesla’s Autopilot, are referred to by car manufacturers as “driver assistance systems,” but it is not clear that all drivers understand their limitations, including the need to monitor the driving environment and stay involved in the driving process.
The Society of Automotive Engineers (SAE) has developed a classification system for autonomous vehicles. The classification includes six levels (Level 0 through Level 5), with Level 5 being fully autonomous and Level 1 containing some automated features such as adaptive cruise control and parking assist. Most current automated driver assistance systems are Level 1 or 2, meaning that drivers still need to be actively involved.
Driver assistance systems and autonomous vehicles hold great promise for improving safety and mobility, but AV technologies are still relatively new, and numerous challenges remain. A number of universities and researchers in Massachusetts are exploring this topic. In April, a commentary by MIT AgeLab researchers, “People must retain control of autonomous vehicles,” was published in Nature. In their remarks, Dr. Ashley Nunes, Dr. Bryan Reimer, and Dr. Joseph Coughlin, AgeLab Director and UMass Transportation Center (UMTC) Research Affiliate, focused on two areas – safety and liability – that need urgent attention as policies and regulations are developed for autonomous and semi-autonomous vehicles. They write that, in their view, “some form of human intervention will always be required. Driverless cars should be treated much like aircraft, in which the involvement of people is required despite such systems being highly automated. Current testing of autonomous vehicles abides by this principle. Safety drivers are present, even though developers and regulators talk of full automation.” The researchers’ piece ends with key points for policymakers preparing AV legislation to consider:
- Driverless does not, and should not, mean without a human operator
- More information should be shared with operators/drivers about how well different autonomous and driver assist systems are working, including their reliability and limitations
- Operators should need to demonstrate that they understand the autonomous and driver assist systems in their vehicles and should be tested on their understanding and competence at periodic intervals
- Remote monitoring networks should be established, and shift time guidelines considered, for workers monitoring AVs
In May, a forum was held at Harvard University’s T.H. Chan School of Public Health on “Self-Driving Cars: Pros and Cons for the Public’s Health.” (A recording and a transcript of the session are available online.) Dr. Jay Winsten, Associate Dean for Health Communication at Harvard, said there is hope right now around the potential for autonomous and highly automated vehicles to reduce traffic deaths. He also addressed the hype: “I think both the media and some of the manufacturers and developers have been going a little too far in setting public expectations for what to expect, especially in the short-term and in the medium-term.” The panelists noted that initially most vehicles will be highly automated (SAE Level 2), not autonomous (Levels 3-5), and that deployment of autonomous vehicles is likely to occur first in long-distance, highway-based commercial transport and, in urban areas, for shuttles and other short-distance trips. Challenges remain, including the current reliability of AV technologies, drivers’ understanding of those technologies – particularly the need to stay alert while behind the wheel – and the safety of vulnerable road users. There are also concerns regarding regulation. The federal government, through the National Highway Traffic Safety Administration, has developed some guidelines regarding autonomous vehicles and automated driving systems. However, there are currently no federal regulations in place for autonomous vehicles, so regulation is primarily set at the state level.
As described in an earlier Innovative Outlook article, Governor Baker and Massachusetts state officials have largely taken the approach that it is better not to regulate AVs through legislation, as the technologies are still evolving and legislation can be difficult to modify once passed. Panelist Deborah Hersman of the National Safety Council shared those concerns, saying “We’ve got to find out how to do this differently” so that any regulations keep up with changing technology. Hersman also urged more transparency and data sharing regarding specific AV technologies and how well they perform, noting that NTSB investigations after a crash can be hampered by a lack of access to such data.
In June 2018, MassDOT entered into a Memorandum of Understanding (MOU) with several municipalities to help facilitate and expand autonomous vehicle testing on roadways in Massachusetts. As described in a MassDOT blog article, “Following the signing of this MOU, MassDOT and the participating communities will finalize a universal application for companies to use when seeking to test autonomous vehicles and the participating municipalities will identify locations and roadways suitable for autonomous vehicle testing. ‘This agreement will allow companies to responsibly develop and test autonomous vehicle technology in Massachusetts, while ensuring there are uniform safety guidelines in place,’ said Governor Baker [at the MOU signing]. ‘The MOU builds on the existing autonomous vehicle testing framework while simplifying the process for municipalities to work with innovative companies that are seeking to advance transportation, create jobs in our nation-leading innovation economy, and improve our quality of life in the Commonwealth.’ …Said Lieutenant Governor Karyn Polito, ‘By creating a standardized process and working collectively with local officials, we can generate economic growth and support our communities as they play a role in the future of innovation and motor vehicle automation.’” Fourteen communities signed the MOU initially: Arlington, Boston, Braintree, Brookline, Cambridge, Chelsea, Medford, Melrose, Newton, Revere, Somerville, Weymouth, Winthrop, and Worcester. The Massachusetts Department of Conservation and Recreation also joined the MOU, allowing Commonwealth-owned parkways to be available for autonomous vehicle testing.