
OpenAI Rehires Zoph & Metz — Did Thinking Machines Lose Trust, or Is the ‘Unethical’ Claim Unproven? 

 January 16, 2026

By Joe Habscheid

Summary: Two high-profile researchers from Thinking Machines Lab — Barret Zoph and Luke Metz — are leaving the startup to rejoin OpenAI. The exit is messy: one narrative says Zoph was fired for alleged “unethical conduct” around sharing confidential information; another says he told Thinking Machines CEO Mira Murati he was considering leaving and was then dismissed. OpenAI’s applications chief, Fidji Simo, announced the hires and said OpenAI does not share the same concerns about Zoph. The moves shift talent, attention, and momentum back to OpenAI while raising big questions about trust, governance, and the talent market in the race to build advanced AI.


What happened — the facts laid out

Barret Zoph and Luke Metz, cofounders of Thinking Machines Lab, are returning to OpenAI along with another staffer, Sam Schoenholz. Fidji Simo, OpenAI’s CEO of applications, announced the hires in a memo. According to Simo, Barret will report to her, and Luke and Sam will work under Barret. The memo also said the hiring timeline moved faster than expected, so some role details are still being finalized.

Public reporting adds friction. Tech reporter Kylie Robison posted on X that Zoph was fired for “unethical conduct,” alleging he shared confidential information with competitors. WIRED could not verify that claim and Zoph did not immediately respond to requests. Thinking Machines’ CEO Mira Murati confirmed Zoph’s departure in a post and named Soumith Chintala as the startup’s new chief technology officer. Another cofounder, Andrew Tulloch, left in November to join Meta. OpenAI had recently lost a research VP, Jerry Tworek, so these returns are a counter-move by OpenAI to replenish talent.

Two narratives, both plausible — and neither fully proven

One version is simple: Zoph was fired for alleged misconduct. The other version — given by Simo’s memo — is that Zoph told Murati he was thinking of leaving and was dismissed. Both narratives exist side by side. Both matter. Both raise questions.

Which is true? We don’t have a definitive public record. WIRED could not verify the “unethical conduct” allegation. OpenAI’s memo says it does not share the same concerns Murati expressed. That phrase, “does not share the same concerns,” highlights a gap in judgment between the two organizations. That gap matters for investors, employees, and customers. What does it mean for trust, and for the standards teams use when moving talent across companies?

Why this is damaging for Thinking Machines Lab

Thinking Machines Lab was built on the credibility of founders who left OpenAI to start something new. Losing two cofounders and an early team member so soon is a blow to morale and to technical continuity. The startup had a reported valuation near $12 billion and was in talks to raise more than $4 billion at a $50 billion valuation. Those valuations depend on people as much as on technology. When founders leave, investors reprice risk. When senior engineers leave, product roadmaps and timelines slip.

The product Tinker — a developer tool for customizing models with private datasets — depends on deep system knowledge from teams that tuned post-training pipelines and safety controls. Barret led OpenAI’s post-training teams before leaving in late 2024 to cofound Thinking Machines. Luke contributed to ChatGPT and the o1 reasoning model. Losing their expertise shifts both technical risk and reputational risk back onto Thinking Machines.

Why OpenAI benefits — and why readers should pause before cheering

OpenAI gains experience and capacity. Re-hiring talent familiar with its codebase, systems, and safety practices reduces onboarding friction. OpenAI also had a recent gap in leadership with the departure of Jerry Tworek from research; bringing back senior researchers is a fast way to fill that gap.

No one should assume this is only a win for OpenAI. Re-absorbing talent can create internal friction, and the optics of hiring people who just left another company raise questions about incentives and the movement of proprietary knowledge. OpenAI’s memo says it does not share the concerns about Zoph that Thinking Machines expressed. Yet an outside observer has to ask: if one organization flags an issue, why is the other comfortable moving forward? What checks and reconciliations happened before the hires?

What this says about the broader AI talent market

The episode confirms a truth investors and founders already see: top AI talent is scarce and fluid. Several startups in the sector are led by former OpenAI researchers. That creates two effects: strong investor appetite to back spinoffs, and continuous movement of personnel between startups and incumbents. Social proof is clear — top researchers move together, investors follow, valuations follow.

But mobility creates friction in governance and IP protection. Startups must design tighter internal controls without suffocating creativity. Investors must accept that people may pivot when large incumbents offer familiar systems and resources. That tension is part of the market now: high reward, high churn.

How to read the ethical-allegation angle

Allegations of sharing confidential information are serious. They ought to be investigated and resolved with facts. At the same time, public accusations without verifiable proof can destroy careers and damage startups. Balance is required. The word “alleged” matters here: this is alleged unethical conduct, not proven misconduct.

Ask: what evidence would settle the matter? How transparent were the investigatory steps? What standards do startups set for off-boarding and competitive contact? These questions should guide founders and boards when they design policies for confidentiality, non-compete terms, and post-employment contact. They are not comfortable questions; they are necessary ones.

Lessons for founders, boards, and investors

First: governance matters. Clear, enforceable policies for handling confidential information and disputes are not bureaucratic indulgences. They are risk controls. Second: retain top talent by aligning meaningful incentives with clear expectations. Equity plus purpose plus transparent governance reduces the temptation to leave the team at a critical moment.

Third: investors must price talent risk. If a startup’s value depends on a few individuals, that concentration is a real risk. Make it explicit. Terms, vesting, and board oversight should reflect the human concentration in technical startups. Fourth: communication matters. If one company publicly raises concerns about a former employee, the receiving company should state the facts they relied on to hire. Transparency builds trust with the market.

Immediate effects on customers and partners

For customers using Tinker and for developers who planned integrations, this is a signal to ask direct questions. Who owns the roadmap now? How will service levels change? What guarantees exist for model safety and data privacy? Customers should ask those questions and expect clear answers. That is their leverage.

Open-ended question: what would reassure you as a developer or a partner — public audits, transition plans, contractual guarantees? The answer will vary, but asking it is the right next step.

What to watch next

Watch for these signals over the coming weeks: any public statement from Barret Zoph or Luke Metz clarifying the reasons for the move; any legal action or formal investigation into the alleged conduct; investor reactions at Thinking Machines and at any new funding round; and how the product roadmap of Tinker shifts under Soumith Chintala as CTO.

Also watch internal hiring patterns. If more senior staff leave Thinking Machines for incumbents, that suggests deeper churn. If the company stabilizes and delivers on product commitments, that suggests resilience. Which way it tilts will tell you more than speculation.

Final thoughts — a practical lens

This is both a cautionary case and a demonstration of the marketplace at work. Talent moves where resources and missions align. Investors chase talent. Startups must protect their IP and build incentives that keep people committed. No one wins if accusations are used as a public weapon without due process. Nor does any company win if it ignores real breaches of trust.

So ask directly: what processes are in place to protect users, data, and the intellectual work behind models? What would it take to rebuild confidence if it is broken? These are practical questions founders and executives need to answer now.


Your turn: what does this reshuffle tell you about where the AI industry is headed — more consolidation around incumbents, or maturing startups that can withstand churn? What would convince you to place your trust, data, or investment in a new startup rather than an incumbent? Share one specific proof point you would need to see.

#AI #OpenAI #ThinkingMachines #BarretZoph #LukeMetz #MiraMurati #Tinker #AIStartups #AIResearch #TechTalent


Featured Image courtesy of Unsplash and Nellie Adamyan (ejEgCEXo2Ng)

Joe Habscheid


Joe Habscheid is the founder of midmichiganai.com. A trilingual speaker fluent in Luxembourgish, German, and English, he grew up in Germany near Luxembourg. After obtaining a Master's in Physics in Germany, he moved to the U.S. and built a successful electronics manufacturing office. With an MBA and over 20 years of expertise transforming several small businesses into multi-seven-figure successes, Joe believes in using time wisely. His approach to consulting helps clients increase revenue and execute growth strategies. Joe's writings offer valuable insights into AI, marketing, politics, and general interests.

Interested in Learning More?

Join The Online Community Of Others And Contribute!
