What happened
In the town of Česká Lípa (Czech Republic), Google Search's AI Overview feature began stating that the schedule of on‑duty dentists is coordinated by the municipal police. Relying on the AI answer, people did in fact start calling 156, the municipal police line, which has nothing to do with medical services. The incident was reported by the Česká Lípa police.
"We received several calls asking for the working hours of the on‑duty dentist. The police do not provide medical services, but must respond to such requests, which distracts from our primary duties."
— Česká Lípa Police
Why it matters
At first glance this is a curiosity. In reality it is a stress test for systems that people trust to resolve everyday problems. A false AI recommendation places an additional burden on a service that should be handling crimes, traffic accidents and other emergency calls. When an algorithm replaces official sources of information, the risk of delayed response to critical situations grows.
Wider context
This incident is not isolated: AI errors and side effects are already being felt in education, business and public services. One major accounting body recently suspended remote exams over AI‑assisted cheating, while media report that tech leaders have made hundreds of billions from the AI boom. The technology's impact is now geographically broad and carries real social consequences.
Demands and expectations
Česká Lípa police have already filed a complaint with Google but have received no response so far. Technology platforms must provide mechanisms for quickly correcting algorithmic errors and clearly label answers that need verification against official sources. Citizens, for their part, should treat AI instructions critically and confirm key information through official contacts.
What this means for us (in Ukraine)
In a context of informational and physical vulnerability, where every delay can have consequences, the question of service reliability is not abstract. AI errors can divert emergency services, create chaos in logistics, or amplify misinformation. Our task is to demand accountability from global platforms and to invest in local, secure communication channels.
Summary
The incident in Česká Lípa is a simple case with a clear conclusion: technologies are changing everyday life, but without oversight they can undermine the work of important institutions. The question is not whether AI is needed, but who is held accountable, and how, when it makes mistakes. Are platforms ready to act as quickly as people respond to their information?