News reports were everywhere that an autonomous taxi from a company called Cruise was driving through San Francisco without its headlights on. The local police tried to stop the vehicle and were a little flustered to find there was no driver. Then the car drove through the intersection and pulled over again, further baffling the officers.
The company says the headlights being off was the result of human error, and that the car had stopped at a traffic light and then moved to a safe stop. This raises the question of how people, including police officers, are supposed to interact with robotic vehicles.
To Cruise’s credit, they have a video informing law enforcement and others how to approach one of their vehicles (see the second video below). You have to wonder how many patrol officers have actually seen it. We doubt we could get away with saying, “We mentioned our automatic defense system in our YouTube video.”
Frankly, in an emergency we’re not sure we’d want to dig through a list of autonomous vehicle companies to find the right number to call. At the very least, you would expect the number to be prominently displayed on the vehicle. Why the lights didn’t come on automatically is an entirely different question.
We can’t imagine that autonomous vehicles will go unregulated if they catch on. Just as firefighters have access to Knox boxes so they can let themselves into buildings, we’re pretty sure a fail-safe code that stops a vehicle and unlocks the doors, regardless of brand, is probably a good idea. Sure, a hacker could use it for bad purposes, but Knox boxes can be broken into too. You just have to make sure the protection around the stop code is robust.
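To make that last point concrete, here is a minimal, purely hypothetical sketch of what “robust” could mean in practice: rather than a static code that anyone could copy and replay (the digital equivalent of a lost Knox box key), the vehicle issues a one-time challenge and only honors a stop command that is authenticated against that challenge. Every name, message format, and key-handling detail below is invented for illustration; it is not how Cruise or anyone else actually does this.

```python
import hmac
import hashlib
import os

# Stand-in shared secret. In a real scheme, keys would be provisioned to
# authorized agencies and rotated; hard-coding or generating one like this
# is only to keep the example self-contained.
RESPONDER_KEY = os.urandom(32)


def vehicle_issue_challenge() -> str:
    """Vehicle side: generate a fresh random challenge (nonce) for this stop."""
    return os.urandom(16).hex()


def responder_sign_stop(key: bytes, challenge: str) -> str:
    """Responder side: prove knowledge of the key for exactly this challenge."""
    message = f"EMERGENCY_STOP|{challenge}"
    mac = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    return f"{mac}|{message}"


def vehicle_verify_stop(key: bytes, challenge: str, packet: str) -> bool:
    """Vehicle side: stop and unlock only if the MAC is valid and the packet
    is bound to the challenge we just issued (so old packets can't be replayed)."""
    mac, _, message = packet.partition("|")
    expected = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected) and message == f"EMERGENCY_STOP|{challenge}"


if __name__ == "__main__":
    challenge = vehicle_issue_challenge()
    packet = responder_sign_stop(RESPONDER_KEY, challenge)
    print("stop accepted:", vehicle_verify_stop(RESPONDER_KEY, challenge, packet))

    # Replaying the same packet against a new challenge fails:
    print("replay accepted:", vehicle_verify_stop(RESPONDER_KEY, vehicle_issue_challenge(), packet))
```

The point of the challenge-response step is that intercepting one stop command doesn’t let an attacker stop the next vehicle; that, plus sane key management, is the kind of robustness any shared emergency-stop standard would need.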
What do you think? What happens when a robot car gets pulled over? What happens if a taxi passenger has a heart attack? We’ve talked about the issues surrounding self-driving car anomalies before. Some questions don’t have easy answers.
This post, “Does Your Programmer Know How Fast You Were Going?”, was originally published at https://hackaday.com/2022/04/13/does-your-programmer-know-how-fast-you-were-going/.