AI Surveillance: Palantir, Pegasus, and the Boundaries of Legality

Palantir analyses data for intelligence agencies and law enforcement worldwide. Clearview AI has scraped over 60 billion facial images from the internet — without consent. Pegasus spyware infiltrates journalists’ and activists’ smartphones via zero-click exploits. And predictive policing algorithms disproportionately send patrols into minority neighbourhoods. AI-powered surveillance is not a future scenario — it is reality. The question is no longer whether it is happening, but where the boundaries should be drawn.

Palantir: The operating system of surveillance

Palantir Technologies was founded in 2003 with seed funding from the CIA. The company operates three platforms: Gotham for military and intelligence services, Foundry for government agencies and enterprises, and AIP, which since 2023 has integrated large language models into military operations, including the control of drone reconnaissance via a chat interface.

The scale of the contracts is remarkable: 10 billion dollars for the US Army alone (2025), over 200 million for US Immigration and Customs Enforcement (ICE), and over 900 million pounds for the British public sector — spanning health data, defence, and border control. In Germany, three federal states already use Palantir Gotham for police data analysis.

Switzerland rejected Palantir. A risk analysis by the Swiss military from December 2024 concluded that “data held by Palantir could be accessed by the American government and intelligence services” and that data leaks “cannot be technically prevented.”

Clearview AI: 60 billion faces without consent

Clearview AI systematically scraped the internet for facial images — social media, news sites, video platforms — building a database of over 60 billion biometric profiles. The technology is primarily sold to law enforcement agencies.

The response from European data protection authorities was unequivocal: France, Italy, Greece, the Netherlands, and the United Kingdom imposed fines totalling over 100 million euros. Clearview AI has ignored all orders, paid no fines, and deleted no European data. The Dutch authority is now considering personal liability for company executives. In the US, Clearview agreed in a 2024 settlement to cede 23 percent of the company — approximately 52 million dollars — because a cash payment would have driven the company into bankruptcy.

Pegasus: State-sponsored spyware targeting journalists

The Israeli NSO Group develops Pegasus, a spyware that infiltrates smartphones without any user interaction (zero-click). Once installed, the attacker gains full access to messages, emails, the camera, microphone, and location data. A leaked target list contained 50,000 phone numbers, including at least 180 journalists worldwide.

In May 2025, a US court ordered the NSO Group to pay 167 million dollars in damages to Meta. In October 2025, a permanent injunction followed, barring NSO from accessing WhatsApp.

Swiss connections exist: Credit Suisse helped finance the NSO buyout. Switzerland was among the countries with the most Pegasus infrastructure servers. A Catalan former parliamentarian living in Geneva was allegedly surveilled with Pegasus. The Federal Office of Police (fedpol) refused to disclose information about possible “GovWare” procurement.

Predictive policing: Algorithms with built-in bias

Software such as PredPol (now Geolitica) analyses historical crime data and generates “hotspot maps” for police patrols. The system has been deployed in over 250 police departments. However, studies have shown that the algorithms systematically direct police into minority neighbourhoods: more patrols lead to more recorded offences, which in turn reinforce the algorithm — a self-perpetuating feedback loop.
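This feedback loop can be illustrated with a toy simulation (a hypothetical model for illustration, not PredPol’s actual algorithm): two districts have identical true offence rates, but one starts with more recorded incidents, and patrols follow the record.

```python
import random

def simulate_patrols(true_rates, steps=1000, seed=0):
    """Toy model of the predictive-policing feedback loop.

    Two districts share the SAME true offence rate, but district 0
    starts with more recorded incidents (historical over-policing).
    Each step, the patrol visits a district with probability
    proportional to its recorded incidents; an offence can only be
    recorded where the patrol actually is.
    """
    rng = random.Random(seed)
    recorded = [10, 1]  # biased starting record, not different crime levels
    for _ in range(steps):
        share_0 = recorded[0] / sum(recorded)
        district = 0 if rng.random() < share_0 else 1
        if rng.random() < true_rates[district]:  # offence observed on patrol
            recorded[district] += 1
    return recorded

# Identical true rates in both districts, yet the record stays skewed:
print(simulate_patrols([0.3, 0.3]))
```

Even though both districts are statistically identical, the initial imbalance in the data sends most patrols to district 0, where they record most of the offences, which justifies still more patrols there. The bias is in the data, not the crime.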

Several cities have drawn the consequences: Santa Cruz became the first US city to ban predictive policing in 2020. Los Angeles discontinued the programme, stating it was “unable to measure its effectiveness in reducing crime.” In January 2024, US senators called on the Department of Justice to end funding.

Hikvision: Chinese cameras everywhere

Hikvision, 42 percent owned by a Chinese state-owned enterprise, is the world’s largest manufacturer of surveillance cameras. The technology is deeply embedded in the surveillance programmes targeting Uyghurs in Xinjiang — cameras were identified by serial numbers that led to the arrest of specific individuals.

The response has been global: the US placed Hikvision on the Entity List and banned new installations. The UK ordered the removal of Hikvision cameras from government buildings. Canada prohibited their operation in June 2025. Lithuania banned their use in approximately 400 government agencies. The European Parliament removed all Hikvision cameras from its buildings.

Employee surveillance: The rise of bossware

Some 74 percent of US employers use online tracking tools to monitor their employees: screen recordings, keystroke logging, email monitoring, productivity scoring. The market for such software is projected to grow from 587 million dollars (2024) to 1.4 billion by 2031. The consequences are tangible: 42 percent of monitored employees plan to resign within a year.

In Switzerland, behavioural monitoring in the workplace is prohibited (Art. 26 of Ordinance 3 to the Employment Act, ArGV 3). Employers may review work performance and IT usage, but permanent and covert surveillance is unlawful. The FDPIC provides specific guidelines on the matter.

The legal framework: What is permitted and what is not

EU AI Act: Prohibitions since February 2025

Since 2 February 2025, the following AI practices have been prohibited in the EU:

  • Social scoring — rating individuals based on social behaviour
  • Untargeted scraping of facial images (as practised by Clearview AI)
  • Emotion recognition in the workplace and educational institutions
  • Biometric categorisation based on protected characteristics (ethnicity, religion, sexual orientation)
  • Real-time facial recognition in public spaces by law enforcement (with narrowly defined exceptions)

Violations can result in fines of up to 35 million euros or 7 percent of global annual turnover, whichever is higher. However, practice reveals gaps: in March 2025, Hungary passed a law permitting police facial recognition for all types of offences, a clear violation of the AI Act. Enforcement mechanisms have yet to be established.
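Because the ceiling is whichever of the two figures is higher, the 7 percent rule dominates for any company with more than 500 million euros in turnover. A one-line sketch of the cap:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice violations under the EU AI Act:
    35 million euros or 7 % of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(max_fine_eur(2_000_000_000))  # 7 % applies: 140 million euros
print(max_fine_eur(100_000_000))   # flat cap applies: 35 million euros
```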

Swiss law: A sectoral approach

Since February 2025, Switzerland has pursued a sector-specific approach — AI rules are integrated into existing legislation rather than a standalone AI law. The BÜPF permits twelve months of data retention and the deployment of state trojans. The Intelligence Service Act allows “cable reconnaissance” — the surveillance of all internet traffic flowing abroad. A draft for specific AI legislation is expected by the end of 2026.

The new Data Protection Act (nDSG, in force since September 2023) distinguishes between standard and “high-risk profiling,” requires transparency in automated decisions, and grants individuals the right to human review. Enforcement and penalties, however, are significantly lower than in the EU.

Operating on the edge of legality

  • Clearview AI operates illegally in the EU but ignores all orders and fines — enforcement against a US company has proven virtually impossible
  • Palantir operates in a grey area: legally commissioned, yet the aggregation of vast quantities of government data raises fundamental privacy concerns. In 2023, the Federal Constitutional Court ruled that German data-mining laws violate the fundamental right to informational self-determination
  • NSO Group sells to governments that demonstrably use the software against journalists and dissidents — a US court has already established liability
  • Hungary passes laws that openly violate the EU AI Act
  • Employers using “bossware” in Switzerland deploy tools that operate at the boundary of prohibited behavioural monitoring

What this means for businesses

Technologies developed for state surveillance inevitably find their way into the private sector. Facial recognition for access control, AI-driven employee monitoring, customer behaviour analysis — the line between useful technology and surveillance is blurred. For Swiss companies, this creates concrete areas for action:

  • Data protection compliance — Any use of AI systems processing personal data must comply with the nDSG and, where applicable, the GDPR
  • Vendor due diligence — Hikvision cameras in your building? Do you know where the data is going?
  • Employee monitoring — Monitoring tools must respect the limits of the ArG. Behavioural surveillance is prohibited
  • AI ethics — The EU AI Act also applies to Swiss companies doing business in the EU

Technology is evolving faster than regulation. Anyone deploying AI systems today should not only ask whether it is legal — but also whether it deserves the trust of employees, customers, and partners.

How Zerberos can help

Surveillance technologies create new attack surfaces — and new compliance requirements. Our services help you stay on top of both:

  • Risk Assessment — Analysis of your deployed surveillance and AI systems for data protection and security risks
  • Penetration Testing — Security testing of your camera, access control, and monitoring systems
  • IT Security Consulting — Advisory on data protection compliance and AI governance
  • Contact us for a no-obligation assessment
