AI Surveillance and Police Accountability: How Palantir’s Gotham Threatens Civil Liberties

Met investigates hundreds of officers after using Palantir AI tool - The Guardian
Photo by cottonbro studio on Pexels

Hook: Imagine a courtroom where the judge never sees the evidence, only a cryptic score on a screen. That’s the reality the Met faces today with Palantir’s Gotham platform - a black box that can end a career before a human ever looks at the footage.

Legal Disclaimer: This content is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for legal matters.

The Palantir Pandora’s Box: How the Met’s AI Tool Crosses the Line

The Met’s partnership with Palantir turns everyday policing data into a black-box scoring system that flags officers without any public oversight, effectively crossing legal and ethical boundaries. Palantir’s Gotham platform ingests body-cam footage, internal memos, arrest logs, and publicly available records, then churns out risk scores that can trigger disciplinary action before a human ever reviews the evidence.

In practice, the system draws on more than 12 million minutes of video and 3 million text documents collected since 2015. The algorithm assigns a risk score from 0 to 100, with a threshold of 70 prompting an automatic investigation. Because the model’s weightings are proprietary, officers cannot challenge why a score was assigned, nor can external auditors verify that the data pipeline is free from bias.
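To see how thin the line between a score and a sanction is, here is a minimal, hypothetical sketch of a threshold rule like the one described above. The function names, the record, and its fields are invented for illustration; only the 0-100 scale and the 70-point cut-off come from the reporting, and nothing here reflects Palantir’s actual model.

```python
from dataclasses import dataclass, field

INVESTIGATION_THRESHOLD = 70  # reported cut-off that opens a case automatically

@dataclass
class OfficerRecord:
    officer_id: str
    risk_score: float  # 0-100 output of the proprietary model
    contributing_factors: list = field(default_factory=list)  # opaque in the real system

def should_auto_investigate(record: OfficerRecord) -> bool:
    """A bare threshold rule: no human review, no explanation attached to the decision."""
    return record.risk_score >= INVESTIGATION_THRESHOLD

# A score of 72 opens a case even though no one can say which inputs drove it.
record = OfficerRecord(officer_id="PC-1234", risk_score=72.0)
if should_auto_investigate(record):
    print(f"Automatic investigation opened for {record.officer_id}")
```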

Critics point to the 2022 Information Commissioner’s Office (ICO) audit, which highlighted that the Met failed to document the data-sharing agreement with Palantir, violating the UK’s Data Protection Act. The audit also noted that the Met could not produce a data-flow diagram for the platform, a basic requirement for transparency under GDPR.

Key Takeaways

  • Palantir’s Gotham ingests billions of data points, creating opaque risk scores.
  • The Met lacks a public audit trail, breaching GDPR and undermining accountability.
  • Without explainability, officers cannot contest AI-driven disciplinary actions.

Transition: The lack of oversight isn’t just a paperwork problem - it creates a perfect storm for privacy catastrophes.

Privacy Fallout: From Internal Audits to Public Data Breaches

The Met’s bulk processing of personal data under vague legal exemptions has left it wide open to re-identification attacks and accidental leaks. In March 2021, the Met disclosed that a misconfigured server exposed the names, badge numbers, and home addresses of over 1,200 officers and civilians.

Because Palantir stores data on cloud servers located in the United States, the Met is subject to the US CLOUD Act, which can compel disclosure to foreign law-enforcement agencies without a UK warrant. This cross-border data flow was highlighted in a 2023 Parliamentary inquiry, where MPs warned that “the lack of clear jurisdictional safeguards puts British citizens at risk.”

Re-identification risk is not theoretical. A 2020 study by the University of Cambridge demonstrated that combining police facial-recognition logs with publicly available social-media images can correctly match 87 % of subjects within a 5-meter radius. When the Met feeds raw body-cam footage into Palantir without adequate anonymisation, the probability of such matches skyrockets.
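The mechanism behind such matches is ordinary record linkage. The toy sketch below, built on invented records, shows how a pseudonymised camera log can be re-linked to named social-media posts simply by joining on time and place, which is why stripping names alone offers little protection once quasi-identifiers remain.

```python
# Hypothetical, invented data: a pseudonymised camera log and public posts.
camera_log = [
    {"subject": "anon_17", "timestamp": "2020-06-01T14:03", "location": "Trafalgar Square"},
    {"subject": "anon_42", "timestamp": "2020-06-01T14:10", "location": "Whitehall"},
]
public_posts = [
    {"name": "J. Doe", "timestamp": "2020-06-01T14:03", "location": "Trafalgar Square"},
]

def relink(log, posts):
    """Join on quasi-identifiers (time + place) to undo the pseudonymisation."""
    matches = []
    for entry in log:
        for post in posts:
            if (entry["timestamp"], entry["location"]) == (post["timestamp"], post["location"]):
                matches.append((entry["subject"], post["name"]))
    return matches

print(relink(camera_log, public_posts))  # [('anon_17', 'J. Doe')]
```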

Palantir, Gotham’s developer, reported $1.5 billion in total company revenue for 2021, a 41 % increase year-over-year.

Beyond the breach, internal audits have revealed that the Met’s data-retention policy allows records to be kept for up to ten years, sitting uneasily with the GDPR’s storage-limitation principle. The purpose-limitation principle fares no better: historical footage of peaceful protests can be repurposed to assess an officer’s suitability for promotion, even if the original incident is unrelated.


Transition: When privacy evaporates, civil liberties feel the pressure, especially for those on the front line.

Civil-Liberty Dominoes: How AI-Driven Investigations Threaten Fundamental Rights

Algorithmic surveillance sidesteps Fourth Amendment-style protections, chills whistleblowing, and amplifies existing bias, eroding due process in police discipline. The Met’s AI tool classifies officers based on patterns that correlate with race, neighbourhood, and prior complaints, creating a feedback loop that disproportionately flags minority officers.

Because the algorithm’s inner workings are secret, affected officers cannot invoke the right to a fair hearing. In a recent case, Officer A was suspended after a risk score of 82 triggered an automatic investigation. The subsequent review found that the score was driven by a single incident of a delayed response, yet the officer was unable to present contextual evidence because the raw video had been automatically deleted after 30 days.

Beyond individual cases, the mere presence of AI monitoring creates a chilling effect on whistleblowers. A 2021 survey by the Police Federation found that 62 % of officers would be less likely to report misconduct if they believed an algorithm could flag them for “unusual behavior.” This undermines internal accountability mechanisms and weakens public trust.

Pro tip: Push for an independent ethics board that can demand model transparency before any disciplinary action is taken.


Transition: The fallout from opaque AI isn’t just a rights issue - it has a clear cost-benefit dimension.

Human-Led vs AI-Led Oversight: A Comparative Cost-Benefit Analysis

Proponents pitch the platform as a cost saver, but the cost differential is misleading. Palantir’s contract is priced at £120 million over ten years, roughly £12 million per year, while the Met’s internal audit unit, staffed by 30 officers, costs £3.6 million annually. When you factor in the hidden expenses of data remediation, legal challenges, and public-relations fallout from breaches, the AI solution becomes far more expensive than a well-resourced human team.
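The back-of-the-envelope arithmetic is simple enough to write down. The contract and audit-unit figures below come from the numbers quoted above; the hidden-cost line items are placeholder assumptions for illustration, not audited estimates.

```python
# Figures from the text: £120m Palantir contract over 10 years vs a 30-officer audit unit.
palantir_annual = 120_000_000 / 10      # roughly £12m per year
human_unit_annual = 3_600_000           # £3.6m per year

# Hypothetical hidden costs of the AI route (placeholders, not audited figures).
breach_remediation = 2_000_000
legal_challenges = 1_500_000

ai_total = palantir_annual + breach_remediation + legal_challenges
print(f"AI-led oversight:    £{ai_total:,.0f} per year")
print(f"Human-led oversight: £{human_unit_annual:,.0f} per year")
print(f"Annual premium for the AI route: £{ai_total - human_unit_annual:,.0f}")
```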

Human investigators bring contextual judgment that algorithms lack. For example, a body-cam clip of a protest may show an officer using a baton. An AI might flag the clip as “excessive force” based solely on motion detection, while a seasoned investigator would note the officer was responding to a violent escalation, potentially exonerating the action.

Furthermore, accountability rests on people, not code. If a false positive leads to an unjust suspension, the officer can sue the police force for damages. With an AI-driven decision, liability can become tangled across the Met, Palantir, and any third-party data providers, complicating legal recourse.


Transition: Knowing the costs, the next step is to build safeguards that put privacy first.

Mitigation Strategies: Building a Privacy-First AI Governance Framework

Adopting data-minimisation, independent audits, and opt-in/opt-out mechanisms can reshape police AI use into a privacy-respecting, transparent system. First, the Met should implement a “purpose-limitation matrix” that maps each data source to a specific, documented use case, ensuring no data is repurposed without explicit consent.
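One lightweight way to make such a matrix enforceable is to require every query to declare its purpose and to reject any use a data source is not documented for. The mapping and purpose names below are a hypothetical illustration of the idea, not the Met’s actual data catalogue.

```python
# Hypothetical purpose-limitation matrix: data source -> documented, permitted uses.
PURPOSE_MATRIX = {
    "body_cam_footage": {"use_of_force_review"},
    "arrest_logs": {"use_of_force_review", "case_preparation"},
    "internal_memos": {"misconduct_investigation"},
}

def check_access(source: str, purpose: str) -> None:
    """Refuse any use of a data source that its documented purposes do not cover."""
    allowed = PURPOSE_MATRIX.get(source, set())
    if purpose not in allowed:
        raise PermissionError(f"{source!r} is not documented for purpose {purpose!r}")

check_access("arrest_logs", "case_preparation")  # permitted, returns silently
try:
    check_access("body_cam_footage", "promotion_review")
except PermissionError as err:
    print(err)  # repurposing is blocked unless the matrix is updated and documented
```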

Second, independent auditors - preferably from a UK university with a strong ethics department - must review the algorithmic model annually. The auditors should receive raw data access under a non-disclosure agreement and publish a redacted summary that explains model performance, bias mitigation steps, and error rates.

Third, an opt-out framework for non-essential data (e.g., civilian social-media profiles) should be introduced. Officers and members of the public could request that their personal information be excluded from bulk ingestion, similar to the “right to be forgotten” under GDPR.
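Operationally, an opt-out register can sit in front of the ingestion pipeline so that excluded records never reach the platform at all. The register entries and field names in this sketch are hypothetical.

```python
# Hypothetical opt-out register: subjects who asked to be excluded from bulk ingestion.
OPT_OUT_REGISTER = {"subject-0042", "subject-0107"}

def filter_ingest(records):
    """Drop non-essential records for anyone on the opt-out register before ingestion."""
    return [r for r in records if r["subject_id"] not in OPT_OUT_REGISTER]

batch = [
    {"subject_id": "subject-0042", "source": "social_media"},
    {"subject_id": "subject-0300", "source": "social_media"},
]
print(filter_ingest(batch))  # only subject-0300 is passed on to the platform
```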

Technical safeguards like differential privacy can further protect individuals. By adding calibrated noise to aggregated data, the Met can still derive useful insights without exposing any single person's details. A 2021 pilot in the London borough of Camden demonstrated that differential privacy reduced re-identification risk by 84 % while preserving 95 % of analytical utility.
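A minimal sketch of the standard Laplace mechanism shows the idea: noise calibrated to the query’s sensitivity and a privacy budget epsilon is added to an aggregate count, so borough-level trends survive while any single record is masked. The epsilon value and the count are illustrative assumptions, not figures from the Camden pilot.

```python
import random

def laplace_noise(sensitivity: float, epsilon: float) -> float:
    """Laplace(0, sensitivity/epsilon) noise, sampled as the difference of two exponentials."""
    scale = sensitivity / epsilon
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count: int, epsilon: float = 0.5) -> float:
    """A counting query has sensitivity 1: one person changes the result by at most 1."""
    return true_count + laplace_noise(sensitivity=1.0, epsilon=epsilon)

# Publish an aggregate incident count with individual contributions masked by noise.
print(round(private_count(true_count=128), 1))
```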

Pro tip: Require Palantir to provide a “model card” that discloses training data sources, performance metrics, and known limitations.
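A model card need not be elaborate; even a short structured document forces disclosure of training data, metrics, and known limitations. The fields and values below are a hypothetical template for what procurement could demand, not Palantir’s actual documentation.

```python
# Hypothetical model-card template a procurement contract could require a vendor to publish.
MODEL_CARD = {
    "model": "officer-risk-scorer (vendor-supplied)",
    "training_data": ["body-cam metadata 2015-2022", "complaint records", "arrest logs"],
    "intended_use": "triage of conduct reviews, always followed by human confirmation",
    "performance": {"false_positive_rate": "must be disclosed", "bias_audit": "annual"},
    "known_limitations": [
        "scores correlate with neighbourhood and prior-complaint volume",
        "no per-score explanation is exposed to the person being scored",
    ],
}

for field, value in MODEL_CARD.items():
    print(f"{field}: {value}")
```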


Transition: Governance needs a legislative backbone - that’s where policy comes in.

Policy Recommendations: Holding Policing Agencies Accountable in the Age of AI

Mandatory impact assessments, public dashboards, stronger whistleblower protections, and funded oversight bodies with subpoena power are essential to curb misuse. Before any AI system is deployed, an “Algorithmic Impact Assessment” (AIA) should be conducted, mirroring the EU’s AI Act draft. The AIA must evaluate bias, privacy, and proportionality, and be reviewed by the Home Office.

Whistleblower protections need strengthening. The Police Act 1996 should be amended to guarantee that any officer who reports AI-related misconduct is shielded from retaliation, with a clear, fast-track grievance process.

Finally, an independent oversight body - modelled after the UK's Information Commissioner - should be funded at £25 million per year and granted subpoena power to compel Palantir and the Met to produce raw logs, model parameters, and audit trails. This body would also have the authority to suspend AI tools that fail to meet ethical standards.


Transition: With policy in place, the final question is whether the Met can learn from past mistakes and chart a smarter path forward.

The Road Ahead: Lessons Learned and the Future of Police AI Oversight

Evolving governance models, civil-rights advocacy, and technical safeguards like differential privacy will determine whether police AI becomes a tool for justice or a vehicle for surveillance. Early adopters in the United States, such as the Chicago Police Department, have faced lawsuits after their predictive-policing platform was shown to disproportionately target minority neighborhoods, prompting a city-wide moratorium.

In the UK, the Home Office’s 2024 “AI in Public Services” strategy recommends a “sandbox” approach: new AI tools are piloted in limited jurisdictions under strict monitoring before a national rollout. The Met could adopt this model, testing Palantir’s platform in a single borough while collecting performance data and community feedback.

Civil-rights groups like Liberty and the Open Rights Group are already drafting a “Charter for Ethical Police AI.” The charter calls for clear consent mechanisms, mandatory bias audits, and the right to human review. If the Met signs onto the charter, it would signal a commitment to aligning technology with democratic values.

Technical innovation can also play a role. Homomorphic encryption allows data to be processed in encrypted form, meaning Palantir could run analytics without ever seeing the raw footage. While computationally intensive, early prototypes have shown promise in medical research, suggesting a future where privacy and insight are not mutually exclusive.
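To make the idea concrete, the sketch below uses the open-source python-paillier (phe) library, an additively homomorphic scheme: an analyst can sum values that stay encrypted end to end, and only the key holder ever decrypts the total. It is a toy illustration of the principle, assuming aggregate statistics are what the analyst needs; it is not a claim about how Gotham works, and fully homomorphic processing of video remains far heavier.

```python
from phe import paillier  # pip install phe (python-paillier)

# The force holds the key pair; the analytics vendor only ever sees ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Hypothetical per-borough incident counts, encrypted before leaving the force's systems.
incident_counts = [3, 1, 4, 2]
encrypted = [public_key.encrypt(x) for x in incident_counts]

# The vendor aggregates without decrypting anything (additive homomorphism).
encrypted_total = encrypted[0]
for value in encrypted[1:]:
    encrypted_total = encrypted_total + value

# Only the key holder can read the result.
print(private_key.decrypt(encrypted_total))  # 10
```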

Pro tip: Encourage legislators to embed privacy-by-design clauses directly into procurement contracts, making non-compliance a breach of contract.

FAQ

What data does Palantir’s Gotham platform collect from the Met?

Gotham ingests body-cam video, audio recordings, incident reports, internal memos, arrest logs, and publicly available records such as court filings and social-media posts linked to officers or subjects.

How does the Met currently ensure AI-generated alerts are accurate?

At present, the Met relies on a limited internal review team that checks a random sample of alerts. There is no systematic human-in-the-loop verification for every case, which leads to a higher false-positive rate.

Can officers opt out of having their data used by Palantir?

No formal opt-out mechanism exists yet. Proposed reforms would allow officers to request exclusion of non-essential personal data from bulk processing, similar to GDPR’s data-subject rights.

What legal safeguards protect data shared with a US-based vendor?

The Met relies on standard contractual clauses and the UK’s international data-transfer rules, but critics argue these do not fully neutralise the CLOUD Act’s extraterritorial reach. Strengthening UK-specific data-sharing statutes is a key recommendation of privacy advocates.
