Controversial AI software is now helping to identify corrupt and work-shy officers.
The Metropolitan Police used Palantir's AI to identify corruption, misconduct, and breaches of work-from-home rules.
Palantir Technologies, long known for its secrecy and controversy, has taken on a notable (and somewhat dystopian) new role in London. According to a report in The Guardian, a one-week trial of the company's software helped uncover potential misconduct within the Metropolitan Police. The findings ranged from manipulation of duty rosters and breaches of hybrid-working policies to serious allegations of fraud, sexual assault, rape, misconduct in public office, and misuse of police systems. Three officers were arrested, and two others were served gross misconduct notices.
How prevalent is routine misconduct within the force?
One of the most striking findings from the pilot was how mundane many of the issues were. Ninety-eight officers are under review for allegedly manipulating the shift-rostering system for personal or financial gain, and around 500 others have received prevention notices. A further 42 senior leaders are being examined for serious breaches of office-attendance guidelines, while twelve officers face gross misconduct investigations for failing to disclose their Freemason membership.
Is Palantir addressing misconduct or creating new concerns?
Palantir's engagements tend to follow a familiar trajectory: highly effective, highly profitable, and deeply contentious. Critics argue that the company's work with U.S. Immigration and Customs Enforcement and various military organizations shows how readily its data-processing tools can become surveillance systems.
The Metropolitan Police's use of Palantir comes at a time of heightened regulatory scrutiny of AI companies over data handling, public trust, and potential harm. OpenAI, for example, is under investigation over concerns related to ChatGPT, underscoring how quickly AI tools can shift from useful applications to accountability flashpoints when oversight is ambiguous.
Palantir CEO Alex Karp responded in February, asserting that the company's systems incorporate safeguards against governmental overreach, even as its revenue from U.S. government contracts surged 66% year-on-year to $570 million in the fourth quarter of 2025.
Why does Palantir continue to secure contracts despite criticism?
Law enforcement isn't the only UK sector turning to Palantir Technologies. The government recently signed a £330 million deal with the firm to build a Federated Data Platform for the NHS, connecting health data across the system so hospitals and care teams can manage information more efficiently. The deal has drawn significant backlash, though officials insist Palantir is barred from selling NHS data or using it to train AI models. The Financial Conduct Authority also uses Palantir software to combat financial crime. For the Metropolitan Police, the attraction is clear: the software appears to detect corrupt officers and rule violations far faster than older systems.