Summary: More than 160 employers in New York state have filed WARN notices for mass layoffs since last March. New York added an explicit checkbox for “technology and automation” — including AI — to those forms almost a year ago. Yet not a single employer has selected that option. This post examines why companies avoid saying AI, what the data does and does not tell us, how workers and policymakers should respond, and which practical steps can produce real accountability and support for displaced employees.
The striking fact: “No company has admitted” — now what?
No company has admitted it. No employer has marked technology as the reason. Repeat that line to yourself: no company has admitted it. What does that tell us about incentives, disclosure rules, and reality? It tells us three things right away: firms have legal and reputational reasons to avoid the label, the mechanics of organizational change mask causal chains, and public statements by firms can differ from formal filings. Those three points explain the gap between national headlines and New York’s WARN data.
What the WARN checkbox was supposed to do
New York added the technology-and-automation option so the state could capture an early signal of structural labor shifts. The form allows employers to pick reasons from 17 choices and, if they pick technology, to specify AI, robotics, or software modernization. The goal was straightforward: get better information so the state can retrain workers and spot trends before they become crises.
Why companies likely avoid checking the AI box
Companies have clear reasons to avoid saying “AI” on formal paperwork. First, reputational risk: public admission could spark backlash from customers, regulators, and employees. Second, legal and political consequences: admitting AI caused layoffs may invite stricter oversight, conditions on tax incentives, or even litigation. Third, management explanations: firms prefer broad causes like “economic conditions” or “restructuring,” which preserve managerial flexibility and narrative control.
Can you imagine an executive saying, on record, “Yes — we replaced people with AI”? Most won’t, so they reach for safer labels like “economic” or “reducing layers.” That raises the question: how do we separate deliberate omission from genuine uncertainty? How do you ask employers in ways that make them answer openly?
Why economists say attribution is hard
Economists warn this is messy. Companies change workflows slowly. A new piece of software can be introduced this year but only become the reason for job redesign three years later. Even when AI helps automate a task, firms often reorganize roles rather than eliminate them immediately. That makes clean, binary attribution — “this layoff = AI” — unlikely.
Erica Groshen at Cornell suggests a different approach: ask for data on the evolution of skills and occupations, not just a blunt checkbox. That matters because whether someone was displaced by AI or by competitive pressure, workers need the same thing: clear information and pathways to retrain into growing roles.
What companies are saying publicly vs. what they file
There’s a mismatch. National analyses show tens of thousands of roles attributed to AI by employers, while New York’s WARN filings show zero employers checking the AI box. Big names are part of the story: Goldman Sachs publicly linked layoffs to AI-enabled productivity gains. Amazon warned that AI’s benefits would reduce jobs. Morgan Stanley reportedly attributed a small share of its cuts to AI. Yet on New York WARN forms, these same firms listed reasons like “economic” or “restructuring.”
Why the gap? Geography and timing: layoffs and automation occur worldwide and over time. A firm may cite AI in a press release about global strategy but still list different proximate causes on a local WARN form. Employers might also fear regulatory attention if they check AI on paperwork tied to state supports and grants.
Are firms hiding something, or are the labels meaningless?
Both possibilities deserve attention. Are firms deliberately avoiding the AI label to dodge scrutiny? Or are WARN labels inadequate for capturing slow, diffuse technological change? Ask yourself: what kind of answer would change policy or worker behavior? A blunt “yes” from employers would trigger immediate political and social responses. A “no” buys time and reduces liability.
What do you want companies to report? More granular skill-level data, or a simple yes/no? New York’s current choice trades detail for simplicity. Groshen argues for richer data streams — skill shifts, task changes, hours altered — because that tells workers what to learn next.
The politics and policy options on the table
Governor Hochul ordered the Department of Labor to start asking about AI. The AFL-CIO supports stronger rules and penalties to force transparency. State lawmaker Harry Bronson proposed two bills: one requiring annual estimates of unfilled roles tied to AI for firms over 100 employees, the other expanding WARN-like disclosure for technology-driven job displacement and adding penalties that could affect eligibility for state grants and tax breaks.
Those are straightforward carrots and sticks: conditional access to public funds in exchange for transparency. That ties into Cialdini’s principle of consistency: once firms commit to truthful reporting, it becomes harder to shift narratives later. Which brings us to a calibrated question for policymakers: how will you balance enforcement with avoiding perverse outcomes that push companies offshore?
Practical steps for workers and unions
Workers need immediate, practical responses. Here are actions with real value:
- Demand clarity in exits: Ask employers, “Was AI a factor in my role being cut?” This is a calibrated, open question that encourages a useful reply. Don’t accept platitudes.
- Collect evidence: save job descriptions, performance reviews, and communications that show changes to task lists. That helps match layoffs to task automation if necessary.
- Call for detailed reemployment plans: unions and worker advocates should press for transition plans that list skills gaps and training options tied to local labor market demand.
- Use WARN follow-ups: the Department of Labor already follows up on filings. Push them to request supplementary data on tasks and technology when firms check “other” or “economic.”
Practical steps for employers
If you run a company, be honest about trade-offs. Transparency builds trust and reduces downstream legal risk. Commit to these steps:
- Report candidly on WARN forms. If you’re using AI to change roles, say so and describe which tasks are affected.
- Invest in transition plans. Offer reskilling and clear rehire pipelines so you don’t lose valuable institutional knowledge.
- Set reasonable timelines. Slow change helps workers adapt and preserves social license to innovate.
If you’re not willing to be transparent, you should expect political and reputational costs. You can say “No” to deeper disclosure, and that’s a valid choice — but it should come with trade-offs. What are you ready to accept?
How regulators can design useful reporting
A checkbox is a blunt instrument. Better options include:
- Task-level reporting: require firms to map which tasks were automated and which roles shifted.
- Skill-evolution data: ask for how many employees had duties reduced, increased, or changed in skill mix because of new technology.
- Time-lag windows: require firms to explain when a technology was introduced and when job effects were realized, to capture gradual transitions.
- Conditional public funds: tie transparency to eligibility for grants and tax breaks to force compliance.
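To make the list above concrete, here is a minimal sketch of what a task-level disclosure record could look like. This is purely illustrative: the class and field names (`TechnologyDisclosure`, `TaskChange`, and so on) are my invention, not New York DOL’s actual schema, and the example captures just the three data ideas named above: task-level mapping, affected headcount, and a time-lag window between deployment and realized job effects.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical record types; names are illustrative, not an official schema.

@dataclass
class TaskChange:
    task: str              # e.g. "invoice reconciliation"
    automated: bool        # was this task handed off to software/AI?
    roles_affected: int    # headcount whose duties changed

@dataclass
class TechnologyDisclosure:
    employer: str
    technology: str             # e.g. "LLM-based document processing"
    introduced: date            # when the technology was deployed
    effects_realized: date      # when job effects materialized
    task_changes: List[TaskChange] = field(default_factory=list)

    def lag_days(self) -> int:
        """Time-lag window: days between deployment and realized job effects."""
        return (self.effects_realized - self.introduced).days

    def automated_headcount(self) -> int:
        """Total roles affected by tasks that were automated."""
        return sum(t.roles_affected for t in self.task_changes if t.automated)
```

A record like this stays at the task-and-skill level, which is the point made below about respecting proprietary concerns: regulators learn which duties shifted and when, without firms exposing trade secrets.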
Those measures increase employer accountability while providing actionable data for workforce development. They also respect firms’ need for proprietary protection by focusing on tasks and skills, not trade secrets.
The human side: empathy, fear, and practical reassurance
People who lose jobs feel anger, confusion, and fear. That’s real. Employers who deploy AI often talk about productivity. Workers see paychecks disappear. Both views are valid. We should reassure workers with practical support, and we should demand from firms concrete plans for displaced employees.
Ask yourself: what would make you feel safer if your role changed because of technology? Better severance? Training paid by the employer? Priority hiring lists? Those are the kinds of commitments that win consent and reduce conflict. They also align with Blair Warren’s prescription: encourage dreams (people can move to better jobs), justify failures (firms can be upfront about mistakes), allay fears (offer real transition supports), confirm suspicions (acknowledge the role of tech where it exists), and empathize with struggles.
Two calibrated questions for employers and policymakers
How will you measure whether AI replaced tasks or simply changed them? How will you make that measurement useful to the person who lost the job? These are the open-ended questions that force concrete answers. They move a discussion away from slogans and toward measurable, traceable outcomes.
What the data so far actually means
The raw WARN outcome — zero firms checking AI — is a powerful signal, but not a final answer. It signals the need for better questions, not denial. Nationally, many firms acknowledge AI’s role in job cuts. Locally, firms may choose different labels for legal and political reasons. Both facts can coexist. That should make us ask: what reporting system gives workers timely, useful information?
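For readers who want to probe the filings themselves, tallying stated reasons is a one-liner once the data is in hand. The records below are invented placeholders (WARN data is public, but these firm names and reasons are made up); the sketch simply mirrors the article’s core finding, that the technology/automation reason never appears.

```python
from collections import Counter

# Invented sample records standing in for real WARN filings.
filings = [
    {"employer": "Firm A", "reason": "economic"},
    {"employer": "Firm B", "reason": "restructuring"},
    {"employer": "Firm C", "reason": "economic"},
    {"employer": "Firm D", "reason": "plant closure"},
]

reason_counts = Counter(f["reason"] for f in filings)
ai_related = reason_counts.get("technology/automation", 0)

print(reason_counts.most_common())
print(f"Filings citing technology/automation: {ai_related}")
```

The interesting analysis starts where this toy ends: comparing these stated reasons against the same firms’ public statements is exactly the mismatch described above.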
A short playbook: what to do next — for workers, unions, employers, and lawmakers
Workers and unions: ask hard, specific questions. Demand task-level explanations and transition commitments. Employers: answer honestly or accept accountability. Lawmakers: adopt reporting that trades simplicity for usefulness — task and skill metrics, timelines, and conditional public funding.
Final thoughts — clear trade-offs, honest answers
The blunt fact remains: no company has selected the technology checkbox in New York WARN filings. That gap forces a choice. We can accept a simple but weak form that produces poor signals, or we can build a reporting system whose data is actionable for workforce development. Which will you support? Which will your company follow? Which will your representatives write into law?
If you want public agencies to act, push them to ask the right questions. If you’re an employer, consider this: transparency buys trust, and trust buys smoother change. If you’re a worker, collect information and ask precise questions. Saying “No” to sweeping claims is acceptable — but don’t say “No” to asking for detailed, evidence-based answers.
#AI #FutureOfWork #WARN #LaborPolicy #Upskilling #JobSecurity #WorkforceData #NewYorkLabor
Featured Image courtesy of Unsplash and Nellie Adamyan (ejEgCEXo2Ng)
