Good Morning, Ms. Fuentes, Mayor, Councilmembers,
Ms. Fuentes, please insert this into the public record. I apologize for the short notice.
I oppose reauthorizing the Flock ALPR readers and other similar automated monitoring technology. These machines use AI, which has no conscience, and if people put their faith in it entirely, as many have been told to do, the resulting errors can be catastrophic.
The name of these cameras is a bit strange: Flock. Are we the sheep they are trying to herd?
San Diego has over 500 Flock-brand ALPRs at intersections throughout the city. They read every vehicle that passes, 24/7/365, and keep a record of the previous 30 days. SDPD is short over 200 officers. SDPD says this tech helps it solve crimes; however, I would rather the city take the $4 million it has spent over the last two years on ALPR tech and use it to hire the officers it needs. Per Google, San Diego paid $10 million this past year in police overtime due to staffing shortages.
At least eight cities have cancelled or paused their contracts with Flock over various issues, and I hope San Diego will do the same. This tech is too invasive; it gathers the patterns of innocent people's lives whenever they pass through an intersection with an ALPR. And you will still need humans to interpret the data and testify in court anyway, a duplication of service (extra cost, in other words).
In other cities, errors have been made and innocent people have been frighteningly pulled from their cars at gunpoint. These incidents were caused either by human error in entering a plate number onto a "hot list" or by the ALPR misreading part of the plate. And Flock keeps its errors involving San Diego residents to itself.
Flock has reportedly allowed Customs and Border Protection to access cities' data without the cities' knowledge, and the company is rolling out more invasive tech: its Nova program and automated drone systems. It is creating a self-serve surveillance dragnet open to government agencies, a free-for-all tracking system.
And speaking of AI-based systems, please be very careful with them. They have potential for good, but loads of potential for harm to society.
Short History of Artificial Intelligence – The concept goes back to about 250 BC when, although it was not AI or anywhere near it, the Greek Ctesibius built a self-regulating water clock, and Ismail al-Jazari invented over 100 automated devices around 1200. Among them was a musical automaton with a programmable drum machine: pegs (cams) bumped into little levers that operated the percussion, and the drummer could be made to play different rhythms and drum patterns by moving the pegs around. Al-Jazari is considered the father of robotics.
AI had its first public, practical use in a chess-playing machine built by Spain's Leonardo Torres Quevedo and shown in Paris in 1914, not with Alan Turing in 1950 as we'd like to think. AI uses old, very old, algorithms.
AI Safeguards – What a joke. AI STILL has no conscience, and the underlying algorithms are riddled with errors. It pretends to have a human face, but do you really want even one error, at the risk of messing up the lives of 3 million County residents? Maybe your pet stakeholders and State residents too. Would you like the AG at your throats because the AI you chose threw an innocent person in jail, or because someone died from a mistaken denial of care?
From Psychology Today: Human complacency fuels AI deception. Apollo Research showed GPT-4 executing an illegal insider-trading plan and then lying to investigators about it. AI company Anthropic, along with Redwood Research, recently demonstrated that advanced AI models can produce apparently reliable answers while secretly planning to do the opposite once oversight weakens. And the busier the decision context (like a County office), the more tempting it is to click accept and move on.
From ABC News: AI phone scam calls can now use clips of your voice as short as 3 seconds to create convincing fakes for robocalls and impostor scams. The example they give: tech experts created an AI-generated voice of ABC7 reporter Jason Knowles, asking, "Can you send me $500? It's Jason, I've been in a car accident here on my vacation in Mexico. I'm okay for the most part, but I am at some hospital and they say I need to pay money for x-rays and treatment. My credit card isn't working and I don't have enough cash." But he never actually said those words.
Apple introduced its generative-AI tool in the US in June, boasting about the feature's ability to summarize specific content. It erroneously summarized a New York Times story, claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested. In fact, the International Criminal Court had issued a warrant, but there was no arrest. It also falsely summarized a BBC report, claiming that Luigi Mangione, the suspect in the killing of the UnitedHealthcare chief executive, had shot himself.
So AI lies too. Do you really want to promote this sort of thing? Should the County aid these schemes as a side effect of AI when there are more reliable but slower non-AI algorithms?
Deception is a social art as much as a technical feat. AI systems master it by predicting which stories we are willing to believe.
Two people died after being denied coverage by UnitedHealth. The plaintiffs argue that the health insurance company should have known how inaccurate its AI was, and that the insurer breached its contract by using it.
Their grievances are corroborated by an investigation from STAT News into UnitedHealth's internal practices at its subsidiary NaviHealth, which found that the company forced employees to adhere unwaveringly to the AI algorithm's questionable projections of how long patients could stay in extended care.
AI has had schoolkids arrested and jailed over crude jokes. Misidentification is routine. AI is well known for deepfakes; I have seen the same woman changed by AI from white to Asian to Black.
I myself asked the AI Q&A tool my HOA's parking company uses a simple question (not one on the preset list they provide) and got a totally bogus non-answer.
And I see ads for deepfake generators on Google.
Do we really need AI playing havoc with our lives? Its mistakes are frequent enough to create a whole new level of liability, especially since you just reduced the actual reserves.
How much reliance do we want to put in a system that is not only flawed but can do serious societal harm without so much as the conscience to fix itself?
Who will control it when it starts lying, murdering, or cheating?
What you want is something that deters the bad actors without the incorrect coding, logic errors, and the like: actual police.
And you can get the same results, slower but a lot safer, by using the old algorithms. That 1914 chess machine never hurt anyone.
And WAYMO, another AI play, is COMING TO SD.
Its robotaxis have done dances around stopped school buses (illegal maneuvers, with the buses' red lights flashing) 19+ times since August 2025, and they have trapped people in cars while blocking traffic or trains. Vishay Nihalani, Waymo's director of product management operations, says on camera: "I don't think people should expect perfection. What's really important, though, is that we're learning from all the scenarios that we encounter."
Now, regulators at the National Highway Traffic Safety Administration (NHTSA) are investigating Waymo over complaints that its robotaxis have illegally passed school buses at least 19 times, according to Reuters.
WAYMO NEEDS TO BE BANNED until it works, is programmed correctly, and does not take months to fix a relatively simple rules issue like the school buses.
–Anonymous