Summary: Mobile Fortify is being rolled out by ICE and CBP as a tool to "determine or verify" identities. That claim does not match how the software works in the field. The app creates candidate matches from biometric databases under conditions that produce frequent errors. It was fast-tracked after DHS pared back privacy reviews, and it feeds large, long-lived databases that sweep up photos of citizens, protesters, and bystanders. The real question is not technical nuance alone — it is whether we accept an expansive, low‑accuracy surveillance tool as a basis for enforcement and civil liberties decisions. What do we do about it?
Here's the blunt point
Mobile Fortify can't actually verify who people are. Can't verify. The app returns possible matches, not proof. No single field scan from a crowded street, taken on a shaky phone camera, can be treated as identification. So why are agents acting like it does? What will we accept as proof before someone's liberty or reputation is put at risk?
How Mobile Fortify is presented versus how it behaves
DHS markets Mobile Fortify as a tool to "determine or verify" identity. The software is built to generate candidate matches: it converts faces into mathematical templates, compares them with similarity scores, and returns every database entry that clears a threshold. That threshold can be loosened for speed or tightened for precision, meaning the system trades accuracy against response time by design. The result is suggestive leads, not confirmations. The human in the loop is then asked to treat a returned photo as if it were proof. Which raises the obvious question: is a "possible match" probable cause?
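To make that trade-off concrete, here is a minimal sketch of threshold-based candidate matching. Everything in it is an illustrative assumption rather than Mobile Fortify's actual design: the cosine-similarity scoring, the 128-dimension templates, the gallery size, and the threshold values.

```python
import numpy as np

def candidates_above_threshold(probe, gallery, threshold):
    """Return (record_id, similarity) pairs that clear the threshold,
    best score first. Cosine similarity stands in for whatever scoring
    the real system uses."""
    results = []
    for record_id, template in gallery.items():
        sim = float(probe @ template /
                    (np.linalg.norm(probe) * np.linalg.norm(template)))
        if sim >= threshold:
            results.append((record_id, sim))
    return sorted(results, key=lambda pair: pair[1], reverse=True)

rng = np.random.default_rng(0)
# Hypothetical gallery of 1,000 face templates (random 128-d vectors).
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
# A noisy field capture of record_42: the true template plus capture noise.
probe = gallery["record_42"] + rng.normal(scale=1.2, size=128)

# Tight threshold: fast to rule out, but the degraded true match may miss.
print(candidates_above_threshold(probe, gallery, threshold=0.80))
# Loose threshold: the true match reappears, possibly alongside
# unrelated look-alikes that also clear the lower bar.
print(candidates_above_threshold(probe, gallery, threshold=0.25))
```

Lowering the threshold is what keeps a system responsive on a phone in the field; it is also exactly what lets unrelated faces onto the candidate list.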
Why field conditions break facial-comparison systems
Face comparisons work best under controlled conditions: cooperative subjects, standard poses, uniform lighting, fixed cameras. Outside ports of entry, none of those controls exist. Head tilt, motion blur, shadows, occlusion, phone autofocus, and simple cropping all change the mathematical template the system compares. Small changes in the input can reshuffle candidate lists. That makes a match more likely to be wrong than right in many common scenarios. NIST testing with DHS and CBP shows accuracy drops sharply when images are taken beyond those controlled settings.
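A toy simulation shows how easily candidate lists reshuffle. The random 128-dimension templates, the look-alike gallery, and the noise scales below are stand-ins I've assumed for real embeddings and capture conditions; the point is only that independent noise on two photos of the same face frequently changes who ranks first.

```python
import numpy as np

rng = np.random.default_rng(7)
true_face = rng.normal(size=128)

# Hypothetical gallery: five DIFFERENT people whose templates happen to
# sit near the subject's in embedding space (look-alikes, relatives).
gallery = {f"person_{i}": true_face + rng.normal(scale=0.5, size=128)
           for i in range(5)}

def top_candidate(probe):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(gallery, key=lambda name: cosine(probe, gallery[name]))

def field_capture():
    # Two photos of the SAME face differ by capture noise: a stand-in
    # for head tilt, blur, shadows, and cropping.
    return true_face + rng.normal(scale=1.0, size=128)

changed = sum(top_candidate(field_capture()) != top_candidate(field_capture())
              for _ in range(1000))
print(f"top-ranked candidate differed in {changed} of 1000 capture pairs")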
Real-world failures — not hypothetical, but reported
Field testimony shows two photos of the same detained woman produced different candidate identities. Agents maneuvered her body to get a "better" shot, coercive handling that reportedly caused her pain. The app returned a "maybe" on one scan and a "possible" on another, and the operator leaned on her language and appearance as additional justification. The app did not clearly display confidence scores, thresholds, or a ranked list of candidates. So a "maybe" became the basis for pressing an enforcement action. That pathway, from noisy algorithm output to real-world consequence, is what people worry about most. Do you accept a tool that produces "maybes" as a stepping stone to detention?
Policy rollback and rapid approval — how the checks were removed
Until early 2025, DHS had a department-wide facial-recognition directive limiting use: no sole reliance on face recognition, opt-outs in non-law-enforcement collections, and bans on wide-scale monitoring. That directive vanished from DHS public materials weeks after a change in leadership. Centralized privacy review was dissolved. CBP and ICE privacy officers assumed the authority to say "no new privacy assessment needed" when the app was fast-tracked. That shift of authority let Mobile Fortify deploy with less oversight than previous programs received. When safeguards are removed, the technology's limits stop being academic and start causing real harms.
Where the data go — retention, watchlists, and unclear rules
Fortify captures images and fingerprints well beyond border checkpoints and stores them in systems tied to the Automated Targeting System (ATS). CBP says records may be retained up to 15 years and could live longer once shared. The app feeds databases used for intelligence and lead generation, such as SAW, which holds "derogatory" markers and is not limited to noncitizens. There are references to a Fortify the Border Hotlist with opaque criteria and no clear removal process. That combination of long retention, intelligence use, and secret lists means people can end up indexed with no transparency and few remedies. If you're thinking this sounds risky for lawful residents and citizens, your concern is valid.
Design choices from the vendor — speed versus certainty
NEC's patents and system descriptions show deliberate decisions: convert images to templates, compare using similarity scores, and tune thresholds to keep systems responsive. The patents explicitly describe stopping searches after a short time window and surfacing the highest-scoring candidate even when no definitive match exists. Lower thresholds reduce latency at the expense of more false positives; higher thresholds do the opposite. In other words, the product is designed to make trade-offs that favor speed and scale in messy field conditions. That's a product decision with policy consequences. Which side of that trade-off do we want government to take when civil liberties are at stake?
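Here is a rough sketch of what a time-boxed, best-so-far search loop looks like, based on the behavior the patents describe. The function name, parameters, and scoring are my assumptions, not NEC's implementation.

```python
import time
import numpy as np

def timeboxed_search(probe, gallery, budget_seconds, match_threshold):
    """Sketch of the patent-described behavior: scan until either a
    score clears the match threshold or the time budget runs out, then
    surface the best candidate seen so far, even if nothing ever
    qualified as a definitive match. Hypothetical, not NEC's code."""
    deadline = time.monotonic() + budget_seconds
    best_id, best_score = None, float("-inf")
    for record_id, template in gallery.items():
        score = float(probe @ template /
                      (np.linalg.norm(probe) * np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = record_id, score
        if best_score >= match_threshold:     # definitive match: stop early
            return best_id, best_score, True
        if time.monotonic() >= deadline:      # budget exhausted: stop anyway
            break
    return best_id, best_score, False         # best guess, NOT a match

rng = np.random.default_rng(1)
gallery = {f"rec_{i}": rng.normal(size=128) for i in range(50_000)}
probe = rng.normal(size=128)  # someone NOT in the gallery at all

# Even an absent person comes back with a named "best" candidate.
print(timeboxed_search(probe, gallery, budget_seconds=0.05,
                       match_threshold=0.9))
```

The third return value is the whole story: everything hinges on whether the interface and the operator preserve that "non-definitive" flag or quietly drop it.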
Who ends up scanned — bystanders, protesters, citizens
Reports show Fortify has been used on protesters, observers, and people later confirmed as US citizens. Agents have told people they were being added to databases without consent. Lawmakers have raised alarms about cataloging people who protest or watch enforcement actions. When a system indexes observers, it chills free expression. And if agents escalate encounters based on accent, perceived ethnicity, or skin color, then scan faces, the technology compounds an already unequal encounter. Does that sound acceptable to you?
Legal and civic pushback
Senator Markey and others introduced the ICE Out of Our Faces Act to block certain biometric surveillance deployments by ICE and CBP. Civil-rights groups like the ACLU, EFF, and EPIC have flagged the app's limits and privacy implications. NIST data on performance in uncontrolled settings supports their technical concerns. This is not a fight between fear and progress; it is a debate about where power, accuracy, and accountability should sit when state force meets private data. What standards should govern the state's use of biometric tools?
Concrete steps officials and advocates can take now
- Require clear, public policies that ban sole reliance on facial-recognition matches for enforcement actions.
- Restore independent, department-wide privacy review for any system that collects biometric data outside ports of entry.
- Mandate transparent retention rules, removal processes, and public reporting of watchlist criteria.
- Require systems to display confidence scores and ranked candidate lists, and to log operator decisions tied to scans (a minimal sketch of such a record follows this list).
- Prohibit use of the technology for cataloging peaceful protesters or bystanders.
- Fund independent audits and field testing under real-world conditions, with results published.
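As a sketch of the fourth item, here is one possible shape for a per-scan audit record. The field names and values are hypothetical, not an existing DHS schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScanAuditRecord:
    """Hypothetical per-scan log entry: what was shown to the operator
    and what the operator did with it."""
    operator_id: str
    scan_time_utc: str
    ranked_candidates: list      # (record_id, similarity), best first
    threshold_in_effect: float
    action_taken: str            # e.g. "no action", "secondary check"
    operator_rationale: str      # free text justifying the action

record = ScanAuditRecord(
    operator_id="agent-0042",
    scan_time_utc=datetime.now(timezone.utc).isoformat(),
    ranked_candidates=[("rec-19", 0.61), ("rec-88", 0.58)],
    threshold_in_effect=0.55,
    action_taken="secondary check",
    operator_rationale="two near-tied candidates; identity not confirmed",
)
print(json.dumps(asdict(record), indent=2))
```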
What citizens and local leaders should ask
Ask your mayor, police chief, state attorney general, or congressional office: What policies govern DHS deployment of Fortify in our city? Are citizens being scanned without consent? Is data from local encounters being fed to federal watchlists? Will we get redress if wrongly indexed? These questions force a paper trail and public answers. If you want to push for oversight, what will you demand?
Why saying "No" matters
No is a power move. No says a nonconclusive algorithm output will not become probable cause. No says we will not accept indefinite retention of biometric records without clear standards and oversight. Saying No can stop unlawful practices and open room for better policy. What will your No look like?
Empathy, accountability, and practical realism
People want secure communities and fair enforcement. Law enforcement wants tools to find dangerous individuals. Those goals are not incompatible. But when a tool provides suggestive outputs and agencies treat suggestive outputs as proof, we create risk for innocent people and for good policing alike. If your instinct is to wish for both safety and safeguards, you are in the mainstream. Your concerns are justified, and they deserve rules that keep both public safety and civil liberties intact.
Questions I’ll leave you with
What level of accuracy should the government need before acting on a biometric lead? How transparent should watchlist criteria and retention policies be? Who audits the system and how often? Asking these questions forces answers. If you want to make progress here, which single demand will you raise first?
#MobileFortify #FaceRecognition #Privacy #DHS #ICE #CBP #Surveillance #CivilLiberties #BiometricPolicy
Featured Image courtesy of Unsplash and Szabo Viktor (8MU2zOaDU4M)