The next time you’re in the driver’s seat, Siri or Alexa tells you:
“Sorry, you’re not allowed to drive. This vehicle is temporarily out of service. Please try again later.”
There is no override or “lost password” feature to circumvent the lockdown. It doesn’t matter where you are going or how urgently you need to get there. An AI-powered system has determined that you are unfit to drive.
Another dystopian fantasy? Not likely. Congress and the President enacted Public Law No. 117-58 (November 15, 2021), which directs federal regulators to require that passenger cars be equipped with “advanced drunk driving and impaired driving prevention technology.” And who could object to that idea? After all, AI systems will save lives.
Because Congress approved continued funding over strong opposition in November 2023, “advanced drunk driving and impaired driving prevention technology” could be installed in cars as early as 2026. The AI-powered system must “(i) passively monitor the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired,” and “(ii) prevent or limit motor vehicle operation if an impairment is detected.”
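Stripped to its logic, the statute describes a two-step gate: continuously score the driver, then block the car when the score crosses a line. The sketch below is a hypothetical Python rendering of that logic; the class, function, score, and 0.5 threshold are all invented for illustration and come from no real law, agency, or vendor.

```python
from dataclasses import dataclass

@dataclass
class DriverSignal:
    """Hypothetical output of step (i), the passive monitoring stage."""
    impairment_score: float  # 0.0 = clearly unimpaired .. 1.0 = severely impaired

def vehicle_may_operate(signal: DriverSignal, threshold: float = 0.5) -> bool:
    """Step (ii): prevent or limit operation if an impairment is detected.
    The 0.5 cutoff is an arbitrary illustration, not a regulatory value."""
    return signal.impairment_score < threshold

print(vehicle_may_operate(DriverSignal(0.2)))  # True: the car starts
print(vehicle_may_operate(DriverSignal(0.8)))  # False: "temporarily out of service"
```

Note that everything interesting hides inside `impairment_score` and `threshold`: who computes them, from what data, and where the line is drawn.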
The Technology Is Here
How do driver recognition systems work? CorrActions has developed a “software-only motion-based driver monitoring product that uses involuntary, uncontrollable muscle movements to monitor brain activity.” Its inputs come from the steering wheel or the driver’s smartphone.
CorrActions says the system is “capable of detecting a wide range of cognitive conditions,” including “fatigue, inattention, anxiety, alcohol/drugs, and more.” A driver’s interaction with a smartphone app reportedly yields enough information to determine blood alcohol content (BAC) with 90% accuracy, without falsely reporting a BAC as high when it is not.
The Solution Causes Problems
Let’s think this through. Federal agencies would use AI to decide whether any of America’s more than 330 million people may drive a car. AI designers build the driver approval systems. The drunk driving problem is solved. Federal agencies and AI bask in heroic glory.
However, big-picture issues and day-to-day trade-offs must be faced. First, consider the everyday, real-world problems a driver authorization system (DAS) poses. You’ve had a couple of beers, and now your car won’t start at all, while:
- Your wife suddenly goes into labor.
- Your child is sick, injured, or bleeding.
- A tornado has been spotted tearing through your town.
- You need to move your car away from an approaching wildfire.
- You need to move your car out of a garage where the water heater is leaking.
Beer or no beer, no system works perfectly forever. A battery can drain, wires can fray, or the DAS itself can fail, rendering the vehicle unusable. The system doesn’t know why you need to drive, and it doesn’t care.
In every case, the DAS cannot assess the urgency of the situation. It cannot weigh the risks against the benefits of your driving. It doesn’t know or care that your destination lies along a rarely used road or that the doctor’s office is just two miles away. It is a federally mandated robot operating under uniform national rules.
Claims, Lawsuits, Facts, Opinions, Truth, Justice
A DAS is expected to record and store data about each driver’s “cognitive state.” Naturally, police investigators and insurance companies will want that data when deciding who was “at fault” in an accident. Will courts and other authorities accept DAS records of a driver’s “impairment,” “fatigue,” “anxiety,” or “inattention” as unquestionable fact?
Not if fairness and justice still matter. Any device that supplies evidence in litigation must be evaluated for validity and reliability. Using DAS reports about mental states will require hard evidence of the machine’s accuracy, whether it was recently tested and calibrated, and whether harmful internal or external factors influenced the report. Is the DAS susceptible to electrical or radio interference, jamming, data corruption, hacking, or software update errors?
Dig deeper. When the DAS reports “impairment,” “anxiety,” or “inattention,” are those simple yes-or-no answers? Or does the report register shades of gray, degrees of severity? Suppose the report rates an emotional state on a scale of 0 to 9. Where does that scale come from, and who decides it? At anxiety = 5 and inattention = 6, must you stop driving and pull off the road?
Nothing guarantees that a DAS is “correct” in any objective way. Just as a polygraph machine is neither a “lie detector” nor a “truth detector,” a DAS is limited by its designer’s choices about (1) what data is relevant, (2) how the results are calculated, and (3) how the results are displayed. Its output is a computer’s conclusion about highly subjective human physiological and psychological states. Equipping every vehicle with a DAS will either deliver absolute control by AI or generate many lawsuits demanding both truth and fair human outcomes in disputes.
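A toy example makes the point about designer choices (1) through (3) concrete. In the sketch below, identical raw readings yield opposite verdicts depending only on where the designer places the cutoff; every number is invented for illustration and describes no real DAS product.

```python
# Four hypothetical normalized sensor readings from the same drive.
readings = [0.42, 0.47, 0.51, 0.44]

def verdict(readings: list[float], cutoff: int) -> tuple[int, bool]:
    """Three designer choices in one function: how to aggregate the data,
    how to map it onto a 0-9 severity scale, and where to draw the line."""
    score = sum(readings) / len(readings)   # choice 1: aggregation method
    severity = round(score * 9)             # choice 2: an arbitrary 0-9 scale
    return severity, severity >= cutoff     # choice 3: the cutoff itself

print(verdict(readings, cutoff=5))  # (4, False): a lenient designer lets you drive
print(verdict(readings, cutoff=4))  # (4, True): a stricter designer locks the car
```

Nothing in the data changed between the two calls; only a design decision did.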
DAS 2.0: Dangerous Servant, Eager Master
Let’s look again at the big picture: how governments make decisions and impose power on their people. The federal DAS mandate normalizes the idea that government agencies using AI can and should “solve” health, safety, and financial problems. Agencies and their bureaucrats can “just follow orders” from AI systems treated as super-experts. No more “human error”: AI is deemed superintelligent, and no citizen can rationally object to its decisions. Governance by experts wielding computers becomes the model of government.
The DAS initiative itself invites an expansion of government power. Today, DAS is described as a “passive monitoring” system that interacts only with the car’s driver. Once DAS is mandatory and widely deployed, there will be no shortage of “great ideas” for using it to restrict or prevent driving by people who owe taxes, stand accused of violations, are labeled troublemakers, or whose “carbon footprint” exceeds some number.
Evolving technology could connect DAS to centralized control systems designed to advance bureaucratic policies. Just as Alexa can already listen to people’s conversations and word choices, a DAS 2.0 could detect the language and ideas being discussed in the privacy of the car.
Government mandates for surveillance and control technology in private vehicles invite top-down control of how individuals move, think, speak, and act. Lord Acton might have quipped: “Power corrupts, and AI power corrupts exponentially.”