Yes — AI systems are collecting, analysing and making decisions about you right now. My assessment after years of working in security and privacy: the reality is more targeted and more consequential in specific areas than the "AI is watching everything" narrative suggests, and less science-fiction in others. Some of this is legal, transparent, and something you agreed to. Some of it is invisible. The honest picture sits between those two extremes — neither total surveillance nor nothing to worry about. Here's exactly where AI surveillance is real, where it's overstated, and the practical steps that actually reduce your exposure in 2026.
What You’ll Learn
The six main categories of AI surveillance affecting most people
What data is actually collected and what AI does with it
Your legal rights in the UK, EU, and US
Practical steps to reduce AI tracking without going off-grid
AI surveillance intersects with the broader digital footprint your online accounts create. Check what personal data is already exposed with the Email Breach Checker and the Dark Web Exposure Scanner.
Facial Recognition — Where It’s Used
Facial recognition is the most visible AI surveillance technology and the most regulated. My practical guide to where it’s actually deployed versus where the concern is overstated.
FACIAL RECOGNITION — REAL DEPLOYMENT IN 2026
# Where it IS deployed (UK/EU/US)
UK police: live facial recognition at specific events (confirmed deployments 2022–2025)
Airports: automated border control uses face matching against passport database
Retail: some retailers use it for loss prevention (controversial, legally contested)
Your phone: Face ID / Android face unlock (local device processing — not sent to cloud)
Social media: Meta shut down its facial-recognition tagging system in 2021 (EU restrictions apply to any similar feature)
# Where it is NOT widely deployed (despite fears)
Most public spaces in UK/EU: GDPR creates high bar for lawful use
General retail surveillance at scale: ICO has found most deployments unlawful
# EU AI Act impact (2025+)
Real-time biometric surveillance in public spaces: prohibited for most uses
US: no federal law — state laws vary widely (Illinois BIPA most restrictive)
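A way to build intuition for how these systems work: border gates and Face ID compare compact numeric embeddings of faces, not raw photos, and declare a match when two embeddings are similar enough. A minimal sketch, using toy 4-dimensional vectors and an invented threshold (real systems use hundreds of dimensions and carefully tuned thresholds):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, enrolled, threshold=0.8):
    """Declare a match when similarity clears a tuned threshold.
    Real systems tune this to balance false accepts vs false rejects."""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy 4-dimensional "embeddings" (real models output 128-512 dimensions)
passport_embedding = [0.1, 0.9, 0.3, 0.2]
camera_embedding   = [0.12, 0.88, 0.31, 0.19]  # same person, slight variation
stranger           = [0.9, 0.1, 0.8, 0.7]

print(is_match(camera_embedding, passport_embedding))  # similar vectors: True
print(is_match(stranger, passport_embedding))          # dissimilar: False
```

The threshold is the policy lever: lower it and you get more false matches, raise it and you get more false rejections. Public debates about police facial recognition accuracy are largely debates about where this trade-off is set.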
Employer AI Monitoring
Workplace AI surveillance expanded significantly during the remote work period and has not retreated. My assessment of what employers are legitimately doing versus what crosses legal lines in most jurisdictions.
EMPLOYER AI MONITORING — WHAT’S HAPPENING
# Common employer AI monitoring tools
Productivity analytics: keystroke logging, app usage time, document activity
Communication analysis: email sentiment analysis, meeting analytics (Teams/Zoom)
Video monitoring: periodic screenshots, webcam checks during remote work
AI-scored performance: automated productivity scores from activity data
# What employers are legally required to do (UK/EU)
Inform employees: GDPR requires disclosure of monitoring activities
Lawful basis: legitimate interest or contractual necessity — must be documented
Proportionality: monitoring must be proportionate to the stated purpose
# What you can do
Ask HR: request information about what monitoring software is installed on work devices
Separate devices: never use work devices for personal activity
GDPR Subject Access Request: request a copy of personal data your employer holds
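To make the "AI-scored performance" point concrete, here is a hypothetical sketch of how an activity-based productivity score might be computed. The categories and weights are invented for illustration; no specific vendor's model is implied:

```python
# Hypothetical "productivity score" from foreground-window samples.
# Category labels and weights are invented for illustration only.
WEIGHTS = {"editor": 1.0, "email": 0.6, "browser": 0.3, "idle": 0.0}

def productivity_score(activity_log):
    """activity_log: list of (category, minutes) samples.
    Returns a 0-100 score: weighted active minutes over total minutes."""
    total = sum(minutes for _, minutes in activity_log)
    if total == 0:
        return 0.0
    weighted = sum(WEIGHTS.get(cat, 0.0) * minutes
                   for cat, minutes in activity_log)
    return round(100 * weighted / total, 1)

day = [("editor", 240), ("email", 60), ("browser", 90), ("idle", 90)]
print(productivity_score(day))  # 63.1
```

The sketch also shows why such scores deserve scepticism: the weights encode someone's opinion of what "productive" looks like, and 90 minutes in a browser could be research or distraction.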
Social Media AI Tracking
Social media platforms use AI extensively to build profiles, predict behaviour, and target advertising. In my experience, this is the category where people most severely underestimate the scale of data collection — the advertising profile that Meta or Google holds on you is far more detailed than most people expect.
SOCIAL MEDIA AI SURVEILLANCE
# What social media AI collects
Explicit data: what you post, like, share, search for
Behavioural: how long you pause on content, scroll patterns, click paths
Inferred: interests, political views, health conditions, financial situation — inferred from behaviour
Cross-site: tracking pixels follow you across websites even when not on the platform
# See your own data
Facebook: Settings → Your Facebook information → Download your information
Google: myaccount.google.com → Data & Privacy → Download your data
Both include: your ad interest profile — often surprisingly accurate and personal
# Reduce cross-site tracking
Browser: Firefox + uBlock Origin blocks most tracking pixels
Android: Settings → Privacy → Ads → opt out of personalised ads
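The cross-site tracking mentioned above relies on tracking pixels: tiny invisible images whose URL carries identifiers, so fetching the image tells the platform which page you visited (the browser also sends the platform's cookie with the request). A conceptual sketch with invented parameter names, not any platform's real API:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical pixel URL; "tracker.example.com" and the parameter
# names are illustrative, not a real tracking endpoint.
def build_pixel_url(user_id, page_url):
    """A tracking pixel is a 1x1 image; the identifying data rides
    in its URL, and the request itself reveals the page you were on."""
    params = urlencode({"uid": user_id, "page": page_url})
    return f"https://tracker.example.com/pixel.gif?{params}"

url = build_pixel_url("u-12345", "https://news.example.org/article")
qs = parse_qs(urlparse(url).query)
print(qs["uid"][0], qs["page"][0])
```

Blocking extensions like uBlock Origin work by refusing to fetch known pixel URLs in the first place, so the identifying request is never made.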
Smart Devices and AI Assistants
SMART DEVICES — WHAT THEY COLLECT
# Smart speakers (Amazon Alexa, Google Home, Apple Siri)
Triggered recordings: sent to Amazon/Google/Apple servers for processing
Human review: all three have used human reviewers for quality — opt-out controls were added after the 2019 backlash
False triggers: devices sometimes activate without wake word and record ambient audio
Delete recordings: Amazon Alexa app → History · Google: myactivity.google.com
# Smart TVs
ACR (Automatic Content Recognition): TVs identify what you’re watching and report to manufacturer
Opt out: Smart TV settings → Privacy → disable ACR/Viewing data
# Practical steps
Smart speakers: mute when not using · delete voice history regularly
TV: disable ACR in settings — it’s in different places on different brands
Phone: review microphone permissions — which apps have access?
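False triggers happen because wake-word detection is approximate pattern matching under a confidence threshold. The sketch below uses string similarity as a stand-in for the acoustic models real devices run locally; the threshold is illustrative:

```python
from difflib import SequenceMatcher

def wake_word_confidence(heard, wake_word="alexa"):
    """Stand-in for an acoustic model: string similarity between what
    was 'heard' and the wake word. Real detectors score audio features,
    not text, but the thresholding logic is the same idea."""
    return SequenceMatcher(None, heard.lower(), wake_word).ratio()

THRESHOLD = 0.7  # illustrative trigger threshold

for phrase in ["alexa", "alexis", "a letter"]:
    score = wake_word_confidence(phrase)
    print(phrase, round(score, 2), score >= THRESHOLD)
```

"alexis" scores above the illustrative threshold while "a letter" does not, which mirrors how similar-sounding phrases cause real devices to activate and start recording.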
Your Legal Rights
YOUR RIGHTS — UK, EU, US
# UK and EU — GDPR rights
Right to access: request a copy of all personal data a company holds about you
Right to erasure: request deletion (“right to be forgotten”) in certain circumstances
Right to object: object to processing based on legitimate interests
Automated decisions: right to explanation for significant automated decisions about you
# EU AI Act (2024–2026 rollout)
Bans real-time biometric surveillance in public spaces for most uses
Requires transparency when interacting with AI systems
High-risk AI (hiring, credit scoring, law enforcement): strict requirements
# US — fragmented
No federal privacy law equivalent to GDPR
California CCPA: right to know, delete, opt out for California residents
Illinois BIPA: strongest biometric privacy law — class actions against facial recognition
How to Reduce Your Exposure — Practical Steps
AI SURVEILLANCE REDUCTION — PRACTICAL GUIDE
# High-impact changes (30 minutes)
Browser: switch to Firefox + install uBlock Origin — blocks the large majority of third-party trackers
Search: use DuckDuckGo for searches you don’t want profiled
Social media: download and review your ad profile — then limit data sharing in settings
Smart TV: disable ACR / viewing data in TV settings
# Medium-impact changes
Phone: review app permissions — deny microphone/location to apps that don’t need it
Email: use Proton Mail or Tuta (formerly Tutanota) for private communications
VPN: hides browsing activity from your ISP and masks your IP from sites you visit — it does not stop tracking by platforms you're logged into
# GDPR Subject Access Requests (UK/EU)
Submit to: any company you’re concerned about — they must respond within one month
Template: “I request a copy of all personal data you hold about me under Article 15 GDPR”
Free: companies cannot charge for SARs in most cases
What AI Actually Infers About You
Beyond the data you explicitly provide, AI systems infer attributes about you from behavioural patterns. The gap between “what I shared” and “what they know” is where most people are surprised. My explanation of what modern inference models can determine without you ever explicitly stating it.
AI INFERENCE — WHAT CAN BE DETERMINED
# From your social media behaviour alone
Political leanings: what you like, share, follow — documented to be very accurate
Health conditions: search patterns, groups joined, content engagement
Relationship status: who you tag, check-in patterns, content type changes
Personality profile: PNAS research (2015) found ~150 likes let models judge personality better than family; ~300 beat a spouse
# From your location data
Home address: where phone is most nights
Work location: where phone is during work hours
Religion: attendance at places of worship
Medical: visits to clinics, hospitals, specialist facilities
# Why this matters for security
Inferred data is used for: advertising targeting, insurance pricing, credit scoring
Also used for: social engineering and targeted phishing (attackers buy this data)
Action: review location permissions on all apps → deny for those that don’t need it
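Location inference of the kind listed above needs no sophisticated AI. A sketch of how a data broker could infer a home address from timestamped pings using nothing but counting (the coordinates and grid size are illustrative):

```python
from collections import Counter
from datetime import datetime

def infer_home(location_pings):
    """location_pings: list of (iso_timestamp, lat, lon).
    Infers 'home' as the most common coarse location between 22:00
    and 06:00 -- a simplified version of broker-style inference."""
    night_cells = Counter()
    for ts, lat, lon in location_pings:
        hour = datetime.fromisoformat(ts).hour
        if hour >= 22 or hour < 6:
            # Round to a coarse grid cell (2 decimal places ~ 1 km)
            night_cells[(round(lat, 2), round(lon, 2))] += 1
    return night_cells.most_common(1)[0][0] if night_cells else None

pings = [
    ("2026-01-05T23:10:00", 51.5074, -0.1278),  # night
    ("2026-01-06T02:40:00", 51.5071, -0.1280),  # night
    ("2026-01-06T10:00:00", 51.5155, -0.0922),  # daytime at work
    ("2026-01-06T23:30:00", 51.5076, -0.1274),  # night
]
print(infer_home(pings))  # (51.51, -0.13)
```

The same counting trick with working hours instead of night hours yields the work location — which is why denying background location access to apps that don't need it is the single highest-leverage step here.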
AI in Hiring and Credit Decisions
One of the most consequential but least visible AI surveillance contexts is automated decision-making — AI systems that screen CVs, score credit applications, and set insurance premiums. My concern here: these systems make high-stakes decisions about you with limited transparency or appeal mechanisms. Under GDPR you have the right to request human review of significant automated decisions.
AI AUTOMATED DECISIONS — YOUR RIGHTS
# Where AI makes decisions about you
Hiring: CV screening tools reject applications before human review
Credit: loan and credit card applications scored by AI models
Insurance: risk scoring using AI-inferred attributes
Fraud detection: bank transactions flagged and sometimes frozen by AI
# Your rights under GDPR Article 22
Right to not be subject to solely automated decisions with significant effects
Right to request human review of an automated decision
Right to express your view and contest the decision
How to invoke: write to the organisation stating your Article 22 rights
# Practical example
Rejected loan application: write to lender requesting human review under Article 22
CV rejected automatically: request confirmation of automated processing and human review
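To see what a "solely automated decision" looks like in practice, here is a hypothetical credit-scoring sketch with an escalation band for human review. The features, weights, and thresholds are invented and do not represent any lender's real model:

```python
# Hypothetical automated credit decision. Feature names, weights and
# thresholds are invented for illustration, not any lender's real model.
WEIGHTS = {"income_band": 25, "years_at_address": 10, "prior_defaults": -40}

def credit_decision(applicant):
    score = sum(WEIGHTS[k] * applicant.get(k, 0) for k in WEIGHTS)
    if score >= 60:
        return "approve", score
    if score <= 20:
        return "decline", score   # solely automated decline: Article 22 territory
    return "human_review", score  # borderline cases escalated to a person

print(credit_decision({"income_band": 3, "years_at_address": 2,
                       "prior_defaults": 1}))  # ('human_review', 55)
```

Article 22 matters precisely for applicants who land in the automatic "decline" branch: without the right to request human review, no person ever looks at the case.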
AI in Law Enforcement — What the Public Record Shows
Law enforcement use of AI surveillance is the most debated category. My reading of the public record: AI policing tools have expanded significantly but face meaningful legal challenge in the UK and EU where data protection law provides the strongest framework for contesting their use.
AI LAW ENFORCEMENT — DOCUMENTED USE
# UK — documented live facial recognition deployments
Metropolitan Police: live FRT at Notting Hill Carnival, football matches, shopping centres
South Wales Police: early adopter — ICO found initial deployments lacked lawful basis
Legal challenge: Court of Appeal ruled South Wales deployments unlawful (Bridges v South Wales Police, 2020)
Current status: ongoing — new deployments continue with revised legal framework
# Predictive policing tools
Various UK forces: AI tools to predict crime hotspots or individual risk scores
Concern: algorithms trained on historical data may encode existing biases
Your rights: Subject Access Request if you’ve been affected by police AI decisions
# What you can do if you’re concerned
UK: complain to the ICO if you believe biometric data was processed unlawfully
EU: national Data Protection Authority in your member state
Organisation: Big Brother Watch (UK) campaigns on facial recognition use
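The bias concern can be shown with a toy simulation: two areas with identical true crime rates but unequal historical records. If patrols follow recorded crime and patrols generate new records, the initial disparity compounds. All numbers are illustrative:

```python
# Toy feedback-loop simulation. Two areas have the same true crime
# rate, but area_A starts with more historical records. Patrols are
# sent to the current "hotspot", and each patrol adds a new record,
# so the initial disparity compounds. Numbers are illustrative only.
def simulate(records, patrols_per_round=10, rounds=5):
    records = dict(records)
    for _ in range(rounds):
        hotspot = max(records, key=records.get)   # patrol the top area
        records[hotspot] += patrols_per_round     # patrols create records
    return records

start = {"area_A": 60, "area_B": 40}  # equal true crime, unequal history
print(simulate(start))  # {'area_A': 110, 'area_B': 40}
```

Area B's record never changes even though its true crime rate is identical, because nobody is there to record anything. This feedback loop is the core of the academic critique of predictive policing tools trained on historical data.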
AI Surveillance — Key Points
Facial recognition: limited in EU/UK public spaces · airports · police events · NOT most retail
Employer monitoring: legal if disclosed · ask HR what’s installed on work devices
Social media: more detailed than you think · download your ad profile to see what they know
Smart devices: delete voice history · disable ACR on smart TVs
Rights: GDPR Subject Access Request is your most powerful tool · free · one-month response
Your AI Surveillance Reduction Plan
Three actions in the next 30 minutes that I recommend to anyone concerned about AI tracking: install uBlock Origin, download your Facebook/Google ad profile to see what they actually know, and disable ACR on your smart TV. These three changes make the most meaningful difference for most people.
Quick Check
Under GDPR, what right can you use to find out exactly what personal data a company holds about you, including any AI-generated inferences about your interests or behaviour?
Frequently Asked Questions
Is AI surveillance legal?
It depends on the context and jurisdiction. In the UK and EU, AI surveillance is governed by GDPR which requires a lawful basis, transparency, and proportionality. The EU AI Act adds additional restrictions on biometric surveillance in public spaces. In the US, there’s no federal equivalent — state laws vary significantly. Most AI surveillance by private companies (social media tracking, smart device data collection) is legal when disclosed in privacy policies you’ve agreed to.
Are smart speakers always listening?
Smart speakers are designed to only activate and record when they hear the wake word (“Alexa,” “Hey Google,” “Siri”). However, false triggers are documented — the devices occasionally activate without the wake word and send short audio clips to company servers. All three major platforms (Amazon, Google, Apple) have confirmed that human reviewers listen to a subset of recordings for quality improvement. You can delete your voice history: Alexa app → History, myactivity.google.com for Google, Siri settings for Apple.
How do I stop social media AI from profiling me?
Complete elimination isn’t possible while using social media platforms, but meaningful reduction is achievable: disable cross-app tracking on iOS, opt out of personalised ads in each platform’s settings, use a tracker-blocking browser extension like uBlock Origin to prevent tracking pixels following you across other websites, and regularly review and clear your ad interest profile in each platform’s data settings.
What is ACR on smart TVs?
ACR (Automatic Content Recognition) is technology built into most smart TVs that identifies what you’re watching — including streaming services, broadcast TV, DVDs, and connected devices — and reports this data to the TV manufacturer and advertising partners. It works by capturing screenshots and matching them against a database. To disable it: Smart TV settings → Privacy or Data Collection settings → look for “ACR,” “Viewing Data,” “SyncPlus,” or similar (varies by brand). Samsung, LG, Vizio, and Roku all have this feature.
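Mechanically, ACR is fingerprint-and-lookup. The sketch below uses an exact hash as a stand-in for the perceptual fingerprints real systems use (which tolerate compression and scaling); the content database is invented:

```python
import hashlib

# Simplified ACR sketch: fingerprint a frame, look it up in a reference
# database. Real ACR uses perceptual fingerprints robust to compression;
# the exact hash and sample content here are stand-ins for illustration.
CONTENT_DB = {}

def fingerprint(frame_bytes):
    return hashlib.sha256(frame_bytes).hexdigest()[:16]

def register(title, frame_bytes):
    """Manufacturer side: index known content by fingerprint."""
    CONTENT_DB[fingerprint(frame_bytes)] = title

def identify(frame_bytes):
    """TV side: match a captured frame against the reference database."""
    return CONTENT_DB.get(fingerprint(frame_bytes), "unknown")

register("Nature Documentary S1E3", b"frame-pixels-0001")
print(identify(b"frame-pixels-0001"))  # Nature Documentary S1E3
print(identify(b"frame-pixels-9999"))  # unknown
```

Disabling ACR in the TV's settings stops the capture step, so no fingerprints are generated or reported in the first place.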
→ Tool
Dark Web Exposure Scanner — Check Your Data
→ Tool
Email Breach Checker — Free
Further Reading
Dark Web Exposure Scanner — Check if your personal data has been exposed beyond AI surveillance — data breaches that put your credentials and personal information on criminal marketplaces.
Can AI Be Hacked? 10 Vulnerabilities — The AI surveillance systems watching you have their own security vulnerabilities. Understanding how AI systems can be exploited helps frame the full risk picture.
How to Tell If Your Phone Is Hacked — Phone surveillance goes beyond AI platforms — stalkerware and device compromise can enable surveillance independent of the AI tracking covered here.
ICO — Your Right to Access Data — The UK Information Commissioner’s Office guidance on Subject Access Requests. Includes template letters and guidance on what organisations must provide.
Mr Elite
Owner, SecurityElites.com
The most revealing exercise I give to anyone concerned about AI surveillance: submit a GDPR Subject Access Request to the social media platforms you use and download your data. What comes back — the ad interest profile, the inferred attributes, the behavioural data — is almost always more detailed than people expected. My recommendation: do this once to calibrate your actual exposure rather than relying on assumptions. Then make targeted changes based on what you find rather than trying to reduce everything at once. Focused reduction is more effective than trying to go off-grid.
Founder of Securityelites and creator of the SE-ARTCP credential. Working penetration tester focused on AI red team, prompt injection research, and LLM security education.