
Stop: CBP Bought Clearview — Will Border Facial Recognition Save Security or Erode Rights? 

February 14, 2026

By Joe Habscheid

Summary: This post examines the CBP agreement to buy one year of access to Clearview AI for $225,000, and what it means when Border Patrol intelligence units add a face-search system built on billions of scraped images to their toolkit for "tactical targeting" and "strategic counter-network analysis." I will explain the technology, the limits shown by testing, the legal and operational gaps, and a practical checklist of policy and engineering steps that can reduce harm while keeping legitimate security work possible. How should we balance security and civil liberties now that a key enforcement agency has this tool?


Interrupt. Engage. You want quick clarity about risk and next steps. You want facts, not spin. You want policies that work and controls that hold. Do you want to shape how this tool is used, or watch it become routine? Which would you pick?

What the contract buys, and where it sits

CBP's stated purchase is straightforward: one year of access to Clearview AI for $225,000. Clearview says its system indexes over 60 billion images scraped from public websites and converts them into biometric templates for face-search. The deal extends access to Border Patrol's headquarters intel unit (INTEL) and the National Targeting Center, units that collect and analyze data to "disrupt, degrade, and dismantle" networks and individuals flagged as threats.

The contract language ties the tool to "tactical targeting" and "strategic counter-network analysis." Those two phrases matter. "Tactical targeting" suggests day-to-day intelligence workflows. "Strategic counter-network analysis" means mapping relationships over time. Together they signal this is not a one-off investigative aid; the tool is meant to be embedded.

How Clearview works, and why that worries people

Clearview's model relies on scraping publicly available photos at scale and turning them into biometric templates without the consent of the people pictured. That creates two linked problems: one technical, one legal. Technically, images taken for casual use vary in quality, angle, lighting, and context. Legally and ethically, people whose photos were scraped did not agree to this biometric conversion or to law enforcement access.

Civil liberties groups and some lawmakers have been raising alarms for years. Senator Ed Markey recently proposed banning ICE and CBP from using face recognition altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent. Those concerns are not abstract; they ask who is watched and who gets to decide.

What testing shows: strengths and clear limits

The National Institute of Standards and Technology (NIST) tested Clearview and other vendors. The result is simple: face-search systems can perform well on high-quality, visa-style photos but struggle with less controlled images. Photos captured in field conditions—at borders, during encounters, or from public cameras—often produced error rates exceeding 20 percent for many algorithms.

NIST highlighted a trade-off: you cannot lower false matches without increasing misses. Put another way, tuning the system to avoid false alarms raises the chance the system fails to identify the correct person. NIST recommends using these systems to return ranked candidate lists for human review, not to make single-person confirmations. But that does not remove risk: when a search always returns candidates, searches for people not in the database still yield matches, and those are 100 percent wrong.
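The trade-off NIST describes can be made concrete with a small simulation. The score distributions below are illustrative assumptions, not Clearview's actual behavior; the point is only that moving the match threshold trades false matches against misses, never eliminating both.

```python
import random

random.seed(0)

# Simulated similarity scores for two kinds of comparisons:
# "genuine" pairs (the probe's true identity is in the gallery) and
# "impostor" pairs (different people). Distributions are assumptions.
genuine = [random.gauss(0.80, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(10_000)]

def rates(threshold):
    """False-match rate and miss (false-non-match) rate at a threshold."""
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

for t in (0.55, 0.65, 0.75):
    fmr, fnmr = rates(t)
    print(f"threshold={t:.2f}  false matches={fmr:.1%}  misses={fnmr:.1%}")
```

Raising the threshold suppresses false alarms but lets more real targets slip through, which is why NIST recommends ranked candidate lists with human review rather than single-person confirmations.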

Operational risks inside CBP workflows

Embed a flawed tool in routine intelligence, and you magnify error and mission creep. Several operational risks jump out:

• False positives triggering wrong investigations, detentions, or worse. If an analyst acts on a low-quality match, consequences may follow in communities far from ports of entry.

• Unknown scope of searches. The contract does not explicitly say what kinds of photos agents can upload, whether searches may include US citizens, or how long uploaded images and results will be retained.

• Data flow ambiguity. CBP mentions the Traveler Verification System and the Automated Targeting System. Public CBP documents say the Traveler Verification System does not use commercial sources, so Clearview access may instead be attached to systems that already link biometrics, watch lists, and enforcement records. That raises mission creep risk when enforcement data from interior operations mix with border screening tools.

• Contractor access under nondisclosure agreements. NDAs limit transparency and outside scrutiny when contractors handle biometric data.

Legal and ethical gaps

US law does not currently offer a nationwide moratorium on face recognition. That gap lets federal agencies acquire and pilot systems without a clear statutory framework for consent, retention limits, oversight, or redress. The contract with Clearview does not specify retention or whether searches may include people who are US citizens. That absence matters because the public cannot assess risk if rules are not written down and enforced.

Ethically, scraping photos and converting them into biometric templates without consent clashes with basic privacy expectations. Many people post images online for social reasons, not to be swept into law enforcement galleries. The scale—tens of billions of images—means nearly everyone could appear in the dataset.

What this means for communities and enforcement

Two groups have legitimate claims here. First: public safety agencies that need tools to find dangerous people and dismantle harmful networks. Second: everyday people who expect not to be tracked by facial biometrics without cause. These legitimate needs collide when a broadly scraped gallery and a high-powered search engine become routine tools for analysts.

If searches target individuals suspected of serious crimes with probable cause and strict human review, there is a defensible use case. If searches are used as general intelligence infrastructure — checking faces pulled from social media in sweep-like fashion — the harms rise fast. Which path will CBP choose?

Practical, measurable steps CBP and Congress should take

Handing mass biometric capability to analysts without guardrails should not be the default posture. Here is a practical checklist that mixes policy and technical controls to reduce harm while preserving necessary investigations:

1) Public policy statement: CBP should publish a clear policy that defines "tactical targeting" and "strategic counter-network analysis," explains authorized use cases, and specifies whether US persons may be searched. Public clarity builds trust and lets Congress and courts weigh in.

2) Restrict searches to case-based use: limit Clearview searches to investigations with articulable suspicion or probable cause. Routine, untargeted searches must be barred.

3) Human-review requirements and thresholds: require multi-analyst confirmation for any action based on a face-search lead. Log scoring thresholds and require that matches below an agreed confidence score cannot trigger enforcement action.

4) Retention and deletion rules: set short, explicit retention limits for uploaded probe images, intermediate results, and audit logs, except where judicial process requires longer retention.

5) Independent audits and transparency: mandate regular, public audits by independent experts and release redacted logs and error-rate statistics. Contractors should not be exempt via NDAs from external review.

6) Privacy and algorithm impact assessments: require formal privacy impact and algorithmic bias assessments before deployment to production systems, with remediation plans made public.

7) Limit contractor access and require provenance controls: enforce strict role-based access and full provenance for gallery images. Know what images are in a gallery and where they came from.

8) Redress and oversight mechanisms: create a process for people to learn if they were subject to a search and challenge action taken based on a match, while balancing investigative confidentiality.
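Several of these controls are enforceable in software, not just on paper. The sketch below shows how case-based gating (item 2), confidence thresholds with multi-analyst confirmation (item 3), and audit logging (item 5) could be wired together. All names and constants here are hypothetical; real values would have to come from a published CBP policy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical policy constants, not actual CBP parameters.
MIN_CONFIDENCE = 0.90      # matches below this cannot trigger enforcement action
REQUIRED_REVIEWERS = 2     # multi-analyst confirmation

audit_log: list = []       # in practice: an append-only, externally auditable store

@dataclass
class SearchRequest:
    case_id: Optional[str]  # must be tied to an open, case-based investigation
    analyst: str
    probe_image_hash: str

@dataclass
class Match:
    candidate_id: str
    confidence: float
    reviewers: set = field(default_factory=set)

def authorize_search(req: SearchRequest) -> bool:
    """Bar routine, untargeted searches: every search needs a case number."""
    ok = req.case_id is not None
    audit_log.append({
        "time": datetime.now(timezone.utc),
        "analyst": req.analyst,
        "case": req.case_id,
        "authorized": ok,
    })
    return ok

def actionable(match: Match) -> bool:
    """A lead supports action only above threshold AND after multi-analyst review."""
    return (match.confidence >= MIN_CONFIDENCE
            and len(match.reviewers) >= REQUIRED_REVIEWERS)
```

Under these rules, a 0.95-confidence match confirmed by a single analyst is still not actionable, and a caseless search is refused but logged, giving auditors a record of attempted untargeted use.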

Technical steps to reduce false matches

From an engineering angle, practical controls can reduce harm while keeping analysts effective:

• Use conservative matching thresholds in operational settings and require ranked-candidate review instead of automated confirmation.

• Measure performance on the specific image types the agency will use, not only on high-quality mugshots or visa photos. Real-world sampling matters.

• Track demographic performance to detect bias and publish those metrics.

• Require multi-modal corroboration where possible — link face matches with other independent identifiers before action.
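Tracking demographic performance, as the third bullet recommends, is a straightforward computation once an agency runs a field-condition test set. A minimal sketch, using made-up evaluation records: compute the miss rate per demographic group and flag large gaps for review.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, correct_match_found).
# A real assessment would use a field-condition test set, per NIST's advice
# to measure performance on the image types the agency will actually use.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def miss_rate_by_group(records):
    """Per-group false-non-match rate; large gaps between groups flag bias."""
    totals, misses = defaultdict(int), defaultdict(int)
    for group, found in records:
        totals[group] += 1
        misses[group] += (not found)
    return {g: misses[g] / totals[g] for g in totals}

print(miss_rate_by_group(results))
```

Publishing these per-group metrics, as the bullet suggests, is what turns a bias check from an internal formality into something outside experts can verify.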

Questions agencies should answer publicly

I will mirror the hard questions the public is asking: What exactly counts as "tactical targeting"? Will searches include US citizens? What categories of photos can agents upload? How long will probe images and results be kept? Who audits the system? Who is accountable if a false match causes harm?

If you were on a congressional oversight committee, what single requirement would you demand before recontracting? If you work inside CBP, what controls would you want to keep your team reliable and defensible? How should courts or independent bodies verify compliance without endangering investigations?

Political and legal pressure points

There is momentum for legislative action: Senator Markey's bill to bar ICE and CBP from using face recognition directly targets this use. Public pressure and oversight can force agencies into more circumscribed, transparent programs. Social proof here matters: when civil liberties groups, bipartisan lawmakers, and technical experts align, agencies face real reputational and legal consequences.

Congress can act by drafting narrow rules that allow legitimate investigative use while forbidding routine biometric sweeps. Courts can require Fourth Amendment analysis when face-search results lead to searches and seizures. States and cities can set local norms that influence federal practice.

How to hold agencies to account while not crippling legitimate work

Policymaking should avoid black-and-white choices. A complete ban is one policy path; a tightly regulated, transparent use regime is another. Both aim to prevent abuse. My recommendation: require strict case-based use, public audits, and technical safeguards before allowing routine operational access. That keeps tools available for serious investigations but stops them from becoming hidden infrastructure.

Final thoughts and a short action checklist

Security matters. So do rights. Both can be served if agencies face clear rules and enforceable checks. When intelligence units adopt scraped-image face-search, the burden is on CBP and Congress to prove they can use it without causing widespread harm. Will they meet that burden?

Action checklist for stakeholders:

• CBP: publish use policy; limit searches to case-based work; require multi-analyst confirmation; publish error and audit data.

• Congress: require independent audits; codify retention and redress rules; fund NIST-style tests for the exact operational settings used.

• Public and advocates: push for transparency, demand redress pathways, and ask for proof of measurable harm reduction before wider deployment.

No, we should not accept opaque biometric surveillance as a fact of life. We should expect government agencies to prove that any tool they adopt actually improves public safety without disproportionate harms. If security professionals want public trust, they must earn it with transparency, limits, and clear oversight.


#CBP #ClearviewAI #FacialRecognition #Privacy #BorderSecurity #AIethics #Surveillance


Featured Image courtesy of Unsplash and Arthur Mazi (a8CxRWIu8yw)

Joe Habscheid


Joe Habscheid is the founder of midmichiganai.com. A trilingual speaker fluent in Luxemburgese, German, and English, he grew up in Germany near Luxembourg. After obtaining a Master's in Physics in Germany, he moved to the U.S. and built a successful electronics manufacturing office. With an MBA and over 20 years of expertise transforming several small businesses into multi-seven-figure successes, Joe believes in using time wisely. His approach to consulting helps clients increase revenue and execute growth strategies. Joe's writings offer valuable insights into AI, marketing, politics, and general interests.

Interested in Learning More Stuff?

Join The Online Community Of Others And Contribute!