Summary: The Department of Health and Human Services is using Palantir and Credal AI tools to screen grants, grant applications, and job descriptions for compliance with President Trump’s executive orders targeting “gender ideology” and DEI. This work started in March 2025 inside the Administration for Children and Families and has already affected funding decisions, staff work assignments, and the language nonprofits use to qualify for federal support. What does this mean for policy, civil rights, and the future of government use of artificial intelligence? What should grantees and the public demand now?
What the inventory reveals — the plain facts
HHS published an inventory of AI use cases for 2025 that shows two contractors running active systems inside the Administration for Children and Families (ACF). Palantir built a list of “position descriptions that may need to be adjusted for alignment with recent executive orders.” Credal AI—founded by former Palantir employees—built a system that audits “existing grants and new grant applications” and flags files for additional review. Staff at ACF make the final decisions after AI flags material.
Payments are documented in federal filings, but the filings do not describe this DEI- or “gender ideology”-focused work. HHS paid Palantir more than $35 million during the period in question; ACF paid Credal AI roughly $750,000 for its “Tech Enterprise Generative Artificial Intelligence Platform.” Neither Palantir nor HHS publicly announced the systems’ role in screening for DEI or “gender ideology.” Requests for comment went unanswered.
What the executive orders say and how they direct agencies
Two executive orders issued on the first day of the administration set the rules. Executive Order 14151 orders an end to policies, programs, contracts, and grants that reference DEIA, DEI, “equity,” or “environmental justice,” and assigns the Office of Management and Budget and other offices to enforce compliance. Executive Order 14168 requires federal policy to define sex as an “immutable biological classification” limited to male and female, labels “gender ideology” and “gender identity” as disconnected from biological reality, and prohibits federal funds from promoting such ideas. It also directs agencies to “assess grant conditions and grantee preferences and ensure grant funds do not promote gender ideology.”
How the AI tools are being used in practice
According to the inventory, these AI tools operate as first-pass reviewers. Credal’s platform examines application files, produces initial flags and priorities, and forwards that package to program staff for a final decision. Palantir created the list of job descriptions “that may need to be adjusted for alignment” with the orders. The systems do not make final funding or personnel decisions; they generate flags for humans to review. Still, in bureaucratic practice, flags shape priorities and consume scarce staff time.
You should ask: How accurate are the flags? What words trigger them? What is the false-positive rate for ordinary terms like “female,” “inclusion,” or “underrepresented”—terms previously flagged at other agencies? What safeguards exist for due process, appeal, and transparency?
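HHS has not published how the flags are generated, which makes these questions hard to answer from the outside. As a purely hypothetical sketch of why the false-positive question matters, consider the crudest possible approach, plain keyword matching; nothing here reflects the actual Palantir or Credal systems, whose logic is not public.

```python
# Hypothetical illustration only. The real HHS/ACF systems are not public, and this
# does not describe them. It shows why naive keyword matching over-flags ordinary
# scientific and programmatic language.

FLAG_TERMS = {"dei", "equity", "gender identity", "inclusion", "underrepresented", "female"}

def naive_flag(text: str) -> list[str]:
    """Return every watch-list term that appears anywhere in the text, with no context check."""
    lowered = text.lower()
    return [term for term in FLAG_TERMS if term in lowered]

# A clinical grant abstract with no DEI content still trips three terms.
abstract = ("We study cardiovascular outcomes in female patients and refine the "
            "inclusion criteria for an underrepresented rural cohort.")
print(naive_flag(abstract))  # e.g. ['female', 'inclusion', 'underrepresented'] (order may vary)
```

Whether the deployed systems use keyword lists, embeddings, or a large language model, the same question stands: what is the measured false-positive rate on ordinary grant language, and who verified it?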
Why the timing and agencies matter
The ACF runs child welfare and family programs including foster care and adoption systems. Targeting language and funding priorities inside ACF affects vulnerable populations and community providers. The use of AI here is not theoretical: it interfaces with grants that fund services for children, families, and marginalized groups.
The ACF systems mirror a broader federal pattern. Earlier in 2025, the National Science Foundation flagged research that used words linked to DEI; the CDC paused research mentioning LGBT and transgender terms; and SAMHSA removed an LGBTQ youth service line from the national crisis lifeline. These are not isolated administrative changes; together they add up to a systemic shift in what the federal government will fund and study.
Contracting, money, and influence
Palantir’s federal revenue rose sharply in this administration. Reported net payments and obligations topped $1 billion in the first year after the new inauguration, up from about $808 million previously. Within that, Palantir received increased funds from the Army, Air Force, ICE, and HHS. Palantir earned roughly $81 million from ICE in the relevant year, up from $20.4 million the year before. ICE also added $30 million to a contract for tools giving “near real-time visibility” into people and aiding deportation targeting.
Credal’s documented payment from ACF was approximately $750,000. The federal filings for these payments do not list the DEI or “gender ideology” purpose, creating a transparency gap between what is paid for and what the software is being used to do.
Palantir beyond grants: law enforcement and data fusion
Palantir’s products—Gotham, Investigative Case Management, FALCON—are already embedded in immigration and law enforcement workflows. These systems integrate multiple databases, tip lines, and case records to build dossiers and provide region-based targeting tools. The same company supplying tools for enforcement work is now supplying software that flags language in social programs. Ask yourself: what does it mean to have a single contractor with influence across both enforcement and civil-social programs?
Consequences already in motion
Since these executive orders took effect, outcomes have included: nearly $3 billion in NSF and NIH grant funds frozen or terminated; staff reassignments and layoffs across agencies; erasure of mentions of women, Indigenous people, and LGBTQ people on agency pages; and changes in how transgender people are recognized in federal programs. Nonprofits rewrote mission statements: more than 1,000 organizations edited language to avoid jeopardizing funding. Service lines for vulnerable people were removed from crisis resources. These are not abstract impacts; they affect research, health care, education, and direct services.
Legal, ethical, and civil-rights risks
There are three overlapping risk domains. First, civil-rights law: programs that deny recognition or services based on gender identity risk violating federal anti-discrimination statutes and precedent. Second, administrative law and transparency: agencies must follow notice-and-comment rules and document decision rationales. Using opaque AI to drive discretionary enforcement or funding decisions raises questions about adequate record-keeping and meaningful review. Third, technical risk: algorithmic flagging can reproduce bias or overreach when trained or configured without careful validation.
For grantees and the public: what does accountability look like when AI shapes the intake pipeline? What auditing mechanisms exist to test whether flagged material was accurate? Who can challenge a decision that resulted from an AI flag? These are not hypothetical questions; they are operational gaps that need answers.
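Nothing in the HHS inventory says whether any such audit exists. As a sketch of what a minimal one could look like, assume, hypothetically, that an agency keeps a log pairing each AI flag with the human reviewer’s final call; every field name and record below is invented for illustration.

```python
# Sketch of a minimal flag audit under an assumed (hypothetical) review log.
# All field names and records are invented; no such dataset has been published.
from collections import Counter

records = [
    {"program": "child_welfare", "ai_flagged": True,  "human_upheld": False},
    {"program": "child_welfare", "ai_flagged": True,  "human_upheld": True},
    {"program": "head_start",    "ai_flagged": False, "human_upheld": False},
    {"program": "head_start",    "ai_flagged": True,  "human_upheld": False},
]

flagged = [r for r in records if r["ai_flagged"]]
upheld = [r for r in flagged if r["human_upheld"]]

# Precision: of everything the AI flagged, how much did a human reviewer agree with?
precision = len(upheld) / len(flagged) if flagged else float("nan")
print(f"share of AI flags upheld on human review: {precision:.0%}")  # 33% in this toy data

# Burden: which programs absorb the extra review work the flags create?
print(Counter(r["program"] for r in flagged))  # Counter({'child_welfare': 2, 'head_start': 1})
```

Numbers like these, published on a regular schedule, are what an appeal process or an oversight hearing can actually interrogate; without them, “the AI flagged it” is not a reviewable rationale.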
Policy trade-offs and moral arguments — two views
Supporters of the executive orders argue that the federal government should not fund programs that prioritize identity politics or promote contested beliefs about gender. They see AI screening as a practical tool to ensure compliance with a clear political mandate. Opponents see an administrative machinery that suppresses research, silences vulnerable populations, and erodes civil-rights protections. Both sides claim legitimacy. Both sides have real stakes.
I acknowledge both perspectives. Government programs should follow the law and the policy set by elected officials. People who fear exclusion and loss of services deserve weight in the conversation, and so do taxpayers and administrators who want consistent rules. But silence, or opaque automation, serves neither side well. If we value the rule of law, we must insist on transparent rules, clear criteria, and public oversight.
Questions stakeholders should push now
Start with calibrated, public questions that demand concrete answers. For example:
- What exact criteria do the AI systems use to flag grants, applications, and job descriptions?
- Who trained the models, on what data, and what were the validation metrics?
- Can flagged applicants see the reason they were flagged and appeal before funding is denied or modified?
- Why do federal contract descriptions omit the purpose of these audits?
- How will agencies ensure that vulnerable populations are not made invisible through language purges?
Ask these as open questions to agency leaders and contractors. Ask them in public forums, FOIA requests, and congressional oversight hearings. Mirroring helps here: repeat the key phrase back when you ask, for example, “You are flagging grants for ‘DEI’ and ‘gender ideology.’ How do you define those terms?” Repeating the phrase forces a clear definition.
What grantees and nonprofits can do practically
Do not panic. Do prepare. Practical steps:
- Document. Keep copies of submissions and all communications. If flagged material triggers a change request, record the reason and timeline.
- Demand specifics. When notified of a flag, request the exact clause, the software output, and the human reviewer’s rationale.
- Build contingency language. For critical terms, maintain alternative neutral phrasing and document why you used each variant.
- Coordinate. Join coalitions—many nonprofits are already rewriting mission language. A collective response reduces individual risk and creates social proof of the policy’s impact.
- Legal readiness. Consult counsel about administrative appeals and civil-rights protections if funding is denied or strings are attached that limit protected classes.
What agencies and contractors should do if they want legitimacy
Transparency is the cheapest way to build trust. Agencies should publish the criteria, validation studies, and appeal processes for AI-driven flags. Contractors should provide model documentation, error rates, and independent audits. If the goal is compliance with law, state the legal standard and show how the software maps to that standard. If the goal is to reduce waste, show evidence the system reduces false claims without harming protected groups.
No one benefits from opaque, broad sweeps that chill research and services. So ask: What process will the agency adopt to explain an AI-driven change in funding or staffing? If the answer is “we can’t share”—say no to that. No opaque systems; no black-box determination of rights and funds.
Longer-term strategy: policy fixes worth demanding
This is a moment for three durable reforms:
- Mandatory AI documentation and public model cards for any system used to make or influence funding and employment decisions (a sketch of what such a card could contain follows this list).
- A clear appeal and redress pathway for grantees and staff when AI-generated flags lead to adverse actions.
- Independent audits by civil-rights experts to test for disparate impacts on protected classes.
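What would the first reform look like on paper? No documentation for the ACF systems has been published, so the following is only a sketch of the kind of public model card worth demanding; every field and value is a placeholder, not a description of any real system.

```python
# Hypothetical model card for an AI grant-screening system. Every value is a
# placeholder; no such document has been published for the HHS/ACF systems.

model_card = {
    "system_name": "<contractor flagging system>",
    "intended_use": "First-pass flagging of grant files for human compliance review",
    "decision_role": "Advisory only; humans make final funding and personnel decisions",
    "data_and_configuration": "<sources, dates, and any keyword or prompt lists used>",
    "validation": {
        "false_positive_rate": None,   # to be published, with the evaluation set described
        "false_negative_rate": None,
        "disparate_impact_tests": None,
    },
    "appeal_process": "<how a flagged grantee or employee sees and contests the flag>",
    "independent_audit": "<auditor, scope, and publication schedule>",
}
```

Even a card this short, filled in and published, would answer most of the questions listed earlier in this piece.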
These are simple accountability rules: transparency, review, and correction. They align with responsible governance, protect vulnerable people, and preserve legitimate policy enforcement. They are also persuasive to lawmakers who want to show oversight without reflexive politicization.
Closing: the choices we face
You can accept an administration where AI quietly enforces policy with little public record, or you can demand openness and process. Which do you prefer? Will you press for published criteria, model documentation, and appeal procedures? Or will you let contractors and file names quietly decide who gets funded and who is excluded?
There is a practical middle path: comply with lawful direction while building transparent safeguards to protect civil rights and public trust. That requires pressure from Congress, civil-society groups, grantees, and journalists. It requires asking precise questions and refusing vague answers. What will you ask next?
#HHS #Palantir #CredalAI #DEI #GenderPolicy #AIethics #GovernmentAI #GrantTransparency
Featured Image courtesy of Unsplash and the blowup (LQRsMX6PjGw)
