Fuel for thought: Can a driverless car get arrested?

What happens when problems caused by autonomous vehicles are not the result of errors, but the result of purposeful attacks?

Fleets of robotaxis hitting the brakes, citing the need to “rebuild public trust”, is a story that has been brewing for a while.

It seemed fairly inconsequential at first, or at least not the start of a big security story: a video shared on the social networking site Reddit showed a group of robotaxis in Austin, Texas converging on a central thoroughfare and stopping en masse, creating an ad hoc traffic jam, a scene that is becoming all too frequent as the platform’s popularity rises. A quick search turned up this article discussing the event, which is by no means unique. Driverless, or autonomous, vehicle fleets are currently operating in San Francisco and Las Vegas, with pilot programs in about a dozen more cities across the United States, from Seattle to Miami. And in case you are wondering, this is not a uniquely American issue: driverless vehicles are also being developed and tested throughout Europe and Asia.

Right now, the problems caused by autonomous vehicles, such as traffic jams, driving into wet concrete, and blocking emergency service vehicles, are real ones. They are also the result of non-malicious errors on the part of driverless car companies. But what happens when these problems are not the result of errors, but of purposeful attacks?

If there is one thing we have learned over decades of computer security, it is that any successful technology will draw entrepreneurs seeking to make money – both legally and illegally. For cybercriminals, the lure of autonomous vehicles must appear particularly shiny. Aside from the better-known criminal activities that occur entirely in the cyberdomain, such as account theft targeting consumers and ransomware targeting businesses, having vehicles at play in the physical world offers some interesting opportunities as well:

  • Extorting customers over their travel history. Been somewhere shady you’d rather not share? This is the automotive equivalent of revenge porn.
  • Remote takeover of vehicles, aka drivesomware
    • Stopping some (or all) autonomous vehicles in their tracks could become a new model for ransomware-style extortion.
    • Threatening to wipe vehicles’ local storage or overwrite their firmware so they can no longer operate would impose extensive costs on the fleet owner, who would not only have to recover each vehicle, but also restore each one’s firmware and software while, hopefully, patching the vulnerabilities that allowed the attack in the first place.
    • Vehicle theft (whole vehicles or stripped parts) – stop at the (chop) shop on the way home and lighten the car’s load of saleable parts, an on-the-go automotive diet.
    • Kidnapping the passengers – even the threat of not letting them out until they pay will work for some: after all, they have a digital payment method in their pocket or purse, setting up a great ransom opportunity. Think they should pay more? Whisk them off to a remote location straight out of a bad TV plot, complete with ropes and dim lights, before they can call the police. For that matter, extort the fleet operator not to kidnap their passengers, a 21st-century twist on the old protection racket.
    • Sending vehicles to a specific location to cause a traffic jam. Think of it as TJaaS – Traffic Jam as a Service; think DDoS with cars.
      • Target busy intersections or motorways at rush hour. On roadways already clogged with conventionally driven vehicles, an attacker could create even larger jams to slow traffic further and then disperse the vehicles; who would know what had really happened?
      • Airports, train stations, or bus terminals jammed with traffic can act as a vehicular barrier for bad actors seeking to keep law enforcement away while they engage in dirty deeds. A traffic jam caused by autonomous vehicles could even block police from getting to a bank being robbed.
    • Blocking of emergency services – a variation of SWATting where you keep law enforcement away, for a price of course.
    • Cover for other organized criminal activities, e.g., flash mob thefts by criminal gangs; use of vehicles for moving illegal goods. How would the car know it’s making a drug deal using “left luggage?”
  • Disabling safety features / causing crashes. Crashes involving autonomous vehicles are big news anyway, so a bad actor who shorts the company’s stock and then deploys malware to its vehicles could engineer a difficult-to-detect, “insider trading”-style sell-off.

It should be noted that robotaxis are not the only vehicles that could be used for such attacks. An ever-increasing number of private vehicles on the road have self-driving capabilities and anti-theft/remote-lockout functions that could be triggered.

In case all of this sounds… well, fantastical, for lack of a better term… we would like to point out that runaway vehicles are no longer fiction, but fact: in October 2023, the driver of an electric vehicle in Scotland lost all control of it and had to crash it into a police van in order to stop. While not a fully autonomous vehicle, it did have a sophisticated driver assistance system that seemed to have failed, leaving the driver unable to slow down or shut off the vehicle. While this does not appear to have been the result of any malicious activity, it definitely shows how reliant vehicles are becoming on their computing systems.

Another possible concern involves automated commercial trucks. An autonomous truck carrying valuable cargo could be stopped in, or diverted to, a place of the criminals’ choosing and have its cargo stolen before police arrive. Trucks could also be used to block transit hubs, such as docks where cargo is offloaded from ships.

Moreover, they could be used as battering rams to gain entrance to restricted areas protected by gates, bollards, or other obstacles. This harkens back to the hastily contrived, steel-clad armored vehicles of The A-Team, except run by computer programmers with evil intent.

Autonomous vehicles also seem wide open to increasingly available GPS jamming and spoofing techniques, which can be localized to misdirect vehicles and “retrain” them to do an attacker’s bidding. A botnet of cars oozing along at the behest of its herders would make for a powerful video, sure to go viral regardless of the technical details.

To be fair, any new technology, especially during its nascent rise into the popular zeitgeist, rattles the imagination and is guaranteed to present hurdles. But rising fame also attracts technozealots who may be able to help bolster the digital defenses, so that herds of robotaxis don’t become the subject of B-movie plots, minus the expensive actors.

Autonomous vehicles that can drive on the same roads as traditional human-operated cars represent one of the biggest changes to automotive technology in the past several decades. Some basic precautions learned from over a century of transportation engineering should not be forgotten:

  • Autonomous vehicles owned by individuals or businesses should have controls that can be operated by a human in an emergency. As good as AI for driving becomes, it may never be able to anticipate and respond to every situation that a human driver can. Providing steering, acceleration, and braking mechanisms that can disengage the AI “autopilot” could mean the difference between saving lives and “merely” being in an accident. Machines are good at navigating known patterns, but humans can manage wildcard events that couldn’t reasonably be covered in automated training sets. A kid in a ghost costume darting out to scare you? You’d know what to do, but your car might not.
  • For vehicles meant to operate as taxi or shuttle services, an emergency braking system should be accessible to passengers, not unlike the emergency pull cords or buttons used in passenger rail and subway cars. Although such a system would have to work differently than its rail counterparts, the desired outcome is the same: bring the self-driving car safely to a stop in a way that doesn’t endanger its passengers, other vehicles around it, or nearby pedestrians.
  • Whether it is a human taking full control of an autonomous vehicle or just pulling the emergency brake, these actions should automatically notify both fleet operations and emergency services when activated, just as General Motors’ OnStar, Subaru’s STARLINK, and other AACN (advanced automatic collision notification) services do today; a rough sketch of what such a notification might look like follows this list.
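
To make that last point a bit more concrete, here is a minimal, purely illustrative sketch in Python of what an automatic notification might look like. Everything in it is an assumption made for illustration: the event fields, the handle_emergency_stop function, and the fleet-operations/emergency-services destinations are hypothetical, not any manufacturer’s or AACN provider’s actual API.

```python
# Hypothetical sketch: what an autonomous vehicle's emergency-stop handler
# might report to fleet operations and emergency services. All names and
# fields are illustrative assumptions, not any vendor's real interface.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Callable
import json


@dataclass
class EmergencyStopEvent:
    vehicle_id: str
    trigger: str           # e.g., "passenger_brake" or "manual_takeover"
    latitude: float
    longitude: float
    speed_kph: float
    occupants: int
    timestamp: str


def handle_emergency_stop(event: EmergencyStopEvent,
                          send: Callable[[str, str], None]) -> None:
    """Assume the vehicle is already braking; notify both parties."""
    payload = json.dumps(asdict(event))
    # Fleet operations gets the full telemetry record.
    send("fleet_operations", payload)
    # Emergency services gets the same alert, mirroring how AACN-style
    # services forward crash data to responders today.
    send("emergency_services", payload)


if __name__ == "__main__":
    event = EmergencyStopEvent(
        vehicle_id="AV-1234",
        trigger="passenger_brake",
        latitude=30.2672, longitude=-97.7431,
        speed_kph=0.0, occupants=2,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Stand-in transport: print instead of calling a real dispatch system.
    handle_emergency_stop(event, lambda dest, msg: print(f"[{dest}] {msg}"))
```

In a real vehicle, the send callable would be replaced by whatever dispatch channels the fleet operator and local emergency services actually use; the point is simply that the notification happens automatically the moment the override is triggered, rather than being left to a shaken passenger.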

Autonomous vehicles have the potential to create a safer future for everyone on the road. However, safety has to be the primary concern for autonomous vehicle manufacturers and fleet operators (which are sometimes, but not always, the same thing) alike. That can only happen if these vehicles are engineered in a way that puts safety first.
