When AI Becomes “Intelligence”: A Cautionary Tale for OSINT and Policing
- sarah5977
- Jan 15
- 4 min read

Recent events involving West Midlands Police and the attempted restriction of Maccabi Tel Aviv supporters should concern anyone working in intelligence, OSINT, policing, or risk analysis.
At the centre of the controversy was a recommendation to ban travelling supporters based, in part, on a fabricated historical incident - a football match that never took place.
That piece of analysis was later traced to an AI tool, reportedly Copilot, and was incorporated into a formal risk assessment without verification. The issue here is not simply that AI got something wrong. The issue is that the AI output was taken at face value and treated as intelligence - and that distinction matters, because AI output is not intelligence.
Intelligence is the product of a process - the intelligence lifecycle - not a single output. Large language models (LLMs) generate plausible text, not validated facts. They do not “know” whether something occurred; they predict what sounds correct based on patterns. Hallucinations - confidently presented falsehoods - are not edge cases. They are a known and persistent characteristic of generative AI. We have seen examples such as Google’s AI recommending glue on pizza or eating a rock to aid digestion: clearly ridiculous to a human reader, but not to the model.
When an AI system produces a claim, it should be treated as a hypothesis, not as evidence.
We need to employ critical thinking. In this instance, a fictional event crossed an invisible line: from generated text into an operational intelligence product. This is not a failing of your tools - it is a tradecraft failure.
The Intelligence Lifecycle Was Bypassed
Experienced practitioners understand that intelligence is not a single step.
It follows a lifecycle:
1. Direction and tasking
2. Collection
3. Processing
4. Analysis
5. Dissemination
6. Feedback and review
AI can assist with collection and processing. It can help surface material, summarise data, or identify potential lines of inquiry. What it cannot do, without human discipline, is validate, assess reliability, or weigh evidential confidence.
Analysis is where we ask the ‘So What?’ of what we have found. In this case, the lifecycle didn’t just collapse - it was bypassed entirely. A claim generated by an AI system was not corroborated, challenged, or cross-checked against primary sources before being disseminated upward and outward. Once that happens, errors become institutionalised.
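To make the discipline concrete, here is a minimal, purely illustrative sketch in Python - not any force’s real tooling, and all class names and thresholds are assumptions - of how an AI-generated claim could be held as a hypothesis and only count as corroborated once independent, non-AI sources support it.

```python
# Illustrative sketch only: an AI-generated claim is a hypothesis, not evidence.
# It should not pass into a product until independent, non-AI sources back it.
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str                # e.g. "match records", "LLM-generated draft"
    is_generative_ai: bool   # generative output carries no provenance

@dataclass
class Claim:
    statement: str
    sources: list[Source] = field(default_factory=list)

    def corroborated(self, minimum_independent_sources: int = 2) -> bool:
        # Generative AI output does not count towards corroboration:
        # it cannot be interrogated as a source.
        independent = [s for s in self.sources if not s.is_generative_ai]
        return len(independent) >= minimum_independent_sources

claim = Claim(
    statement="Disorder occurred at a previous fixture",
    sources=[Source("LLM-generated draft", is_generative_ai=True)],
)

if not claim.corroborated():
    print("Hypothesis only - do not disseminate without primary-source checks")
```

The point of the sketch is not the code itself but the gate it enforces: nothing generated by a model is promoted until a human has checked it against primary sources.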
Single-Source Dependence Remains Dangerous - Even When the Source Is AI
Every intelligence discipline teaches the same lesson early: single-source intelligence is fragile. An AI model is not a source in the traditional sense. It has no provenance, no accountability and no motive structure, and it offers no way to interrogate confidence or intent.
Treating AI output as an independent source - and worse, as a sufficient one - violates basic analytic principles. The fact that the source is technology does not make it neutral, objective, or reliable. It simply obscures the weakness behind a veneer of authority. Any assessment built on it rests on a shaky foundation at best.
Confirmation Bias Did the Rest
Perhaps the most uncomfortable lesson is cognitive rather than technical. When organisations already believe a scenario is “high risk”, they are more susceptible to accepting information that confirms that belief and less likely to challenge it. This is a classic instance of confirmation bias. AI systems are particularly dangerous in this context because they can rapidly generate content that appears to support an existing narrative.
Without structured challenge, red-teaming, or analytical scepticism, AI becomes an accelerant for confirmation bias rather than a corrective.
This Was Predictable - and Preventable
None of this is novel. AI hallucination risk is well documented. Intelligence failures caused by poor sourcing, weak validation, and pressure-driven analysis are older still. The intelligence case for the 2003 invasion of Iraq is a classic example.
What makes this case notable is not that AI was used, but that governance, methodology, and professional judgement failed to keep pace with the tooling. Methodology beats buttonology every single time.
Operational decisions that affect public safety, civil liberties, and community trust demand a clear evidential trail. When senior leaders cannot confidently explain where a critical data point came from, trust erodes rapidly - both internally and externally.
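What an evidential trail could look like in practice is sketched below, again as a hedged illustration in Python rather than a description of any existing system; the structure and field names are assumptions. Each data point carries its origin, the analyst accountable for including it, and whether it has been verified against a primary source.

```python
# Illustrative sketch only: a simple provenance record so that every data point
# in a risk assessment can be traced and challenged before dissemination.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceRecord:
    data_point: str      # the claim or figure used in the assessment
    origin: str          # e.g. "force intelligence log", "LLM-generated draft"
    added_by: str        # analyst accountable for including it
    added_at: datetime
    verified: bool       # checked against a primary source?

trail = [
    ProvenanceRecord(
        data_point="Prior crowd trouble at an away fixture",
        origin="LLM-generated draft",
        added_by="analyst_01",
        added_at=datetime.now(timezone.utc),
        verified=False,
    ),
]

# Before anything goes upward or outward, unverified items are flagged.
for record in (r for r in trail if not r.verified):
    print(f"Challenge before dissemination: {record.data_point} ({record.origin})")
```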
The Real Lesson for OSINT and Intelligence Professionals
Whether we like it or not, AI is here to stay. When it is used properly, it can enhance speed, coverage, and analytical support. Used improperly, it creates a dangerous illusion of certainty.
What to remember:
1. AI outputs are inputs, not conclusions.
2. Verify and verify again.
3. Multiple independent sources remain essential – never rely on a single source.
4. The intelligence lifecycle is not optional.
5. Human judgement, accountability, and scepticism cannot be automated.
6. Technology does not replace methodology. It exposes the cost of abandoning it.
For those of us responsible for intelligence standards - whether in policing, security, corporate risk, or OSINT training - this incident should be treated as a warning, not an anomaly. If we allow generated text to masquerade as intelligence, we should not be surprised when the consequences follow.
Don't just train your team – give them the power to excel.
Equip your team with specialised OSINT consultancy and training from Seiber.
