Interrupt: you expect geopolitics to break research ties. Engage: the data says otherwise. Read that again: the data says otherwise. The phrase Jeffrey Ding used—“inextricably enmeshed”—is worth repeating. Inextricably enmeshed. That’s the practical reality inside conferences, labs, and code repos. If we ignore that fact, we build policies and strategies on fantasy, not on the flows shaping technology today.
Key findings from the Wired analysis
Wired analyzed 5,290 NeurIPS papers and found 141 coauthored by US and Chinese institutions, about 3 percent. For context, NeurIPS is one of the field's flagship venues, where frontier methods often appear first. The same rough pattern held in 2024: 134 of 4,497 papers involved both countries. When we look at specific technical building blocks, we see clear cross-pollination: the transformer architecture appears in hundreds of papers with Chinese authors; Meta's Llama shows up in 106 collaborative papers; Alibaba's Qwen is cited in 63 papers with US-affiliated authors.
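A quick arithmetic check of those shares, using the figures above (Wired did not publish its scripts; the layout and labels here are mine):

```python
# Quick check of the collaboration shares reported above.
# Figures are from the Wired analysis; the labels are mine.
counts = {
    "latest NeurIPS analyzed": (141, 5290),  # (US-China coauthored, total)
    "NeurIPS 2024": (134, 4497),
}

for label, (joint, total) in counts.items():
    print(f"{label}: {joint}/{total} = {joint / total:.1%}")
# latest NeurIPS analyzed: 141/5290 = 2.7%
# NeurIPS 2024: 134/4497 = 3.0%
```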
Those numbers don’t capture every connection. Talent moves. Authors list affiliations that don’t reveal long-term relationships. Still, the paper-level signal is direct: practical collaboration persists despite political friction.
Why collaboration survives competition
Science and engineering run on three simple mechanics: shared language (code), shared incentives (publish, cite, reuse), and shared infrastructure (preprint servers, GitHub, open-source libraries). Researchers want to build on what works. Companies want to adopt high-performing techniques fast. Those motivations push in one direction—toward sharing and reuse—even while national governments push in the opposite direction.
Remember that many researchers trained in the US or Europe then returned to China, or vice versa. Professional networks born in graduate labs persist through careers. When a well-tested trick is published, it’s used internationally the next day. Open-source releases and model checkpoints travel across borders at internet speed. That’s the engine behind the numbers Wired found.
How technical ideas cross the Pacific
Look at the transformer: invented at Google, adapted everywhere. Wired found transformers in 292 papers with Chinese authors. Llama, Meta's openly released model family, appears in 106 collaborative papers. Qwen, Alibaba's model family, appears in 63 papers that include US organizations. Mirror that phrase: Llama in 106 collaborative papers. Qwen in 63 collaborative papers. The repetition shows how model families become shared tools: raw materials for new work, not trophies to hoard.
What does that sharing mean for firms? It compresses development cycles. A technique validated in one lab becomes a standard building block for the next project elsewhere. That lowers barriers to entry for teams with smart ideas but less compute. It also spreads both benefits and vulnerabilities: optimization tricks travel, and so do attack surfaces and ethical blind spots.
Policy friction and practical trade-offs
Policymakers face a hard question: how to protect national security and economic advantage without throwing away the practical gains from cooperation. Say “No” to simple answers. No, severing ties will not stop knowledge flow; it will force it into darker, less verifiable channels. No, unilateral bans do not produce clean outcomes. They produce fragmentation, compliance headaches, and higher costs for industry and researchers.
Ask instead: how do we limit specific risks—export of sensitive models, misuse of dual-use tools—while keeping spaces for verification, standards, and joint safety research? That’s a calibrated question, and it invites negotiation. How would we set guardrails that make work transparent rather than obscure? What mechanisms ensure red-team efforts, reproducibility, and shared benchmarks across borders?
Business strategy when research ties persist
For firms the logic is simple: exploit the learning where it’s useful, protect what matters, and invest in resilience. Use shared advances—architectures, optimization recipes, evaluation metrics—to shorten development cycles. Keep R&D pipelines flexible so teams can swap in proven components. At the same time, safeguard proprietary data, production weights, and sensitive applications.
Ask yourselves: where is collaboration an advantage and where is it a risk? Where can cooperative research increase credibility and safety? Where must we draw a line for intellectual property and national security? Those questions force teams to state commitments and then act consistently with them, and that consistency is what persuades partners and regulators.
What the methodology tells us about AI tools and research
Wired’s analysis used an OpenAI coding model (Codex) to parse thousands of papers and identify institutional affiliations. That choice is telling. It demonstrates how code-writing models can automate labor that used to take weeks. The project mixed manual script writing with model-driven iterations; Codex wrote, modified, and executed scripts, but humans verified results because models make nontrivial mistakes.
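Wired has not released those scripts, so what follows is only a minimal sketch of the kind of affiliation-tagging pass such a pipeline might run; the keyword lists, data shape, and function names are my assumptions, not Wired's method:

```python
# Illustrative sketch of paper-level affiliation tagging (not Wired's code).
# A real pipeline would use a curated institution database and entity
# resolution, not naive substring matching.
US_HINTS = ["Stanford", "MIT", "Berkeley", "Google", "Meta", "OpenAI"]
CN_HINTS = ["Tsinghua", "Peking University", "Alibaba", "Tencent", "Baidu"]

def country_flags(affiliations: list[str]) -> tuple[bool, bool]:
    """Return (has_us, has_cn) for one paper's affiliation strings."""
    joined = " ".join(affiliations)
    return (any(h in joined for h in US_HINTS),
            any(h in joined for h in CN_HINTS))

def count_joint_papers(papers: list[dict]) -> int:
    """Count papers listing at least one US and one Chinese institution."""
    joint = 0
    for paper in papers:
        has_us, has_cn = country_flags(paper.get("affiliations", []))
        if has_us and has_cn:
            joint += 1
    return joint

# Toy input shaped like the metadata a conference exports.
sample = [
    {"title": "A", "affiliations": ["Stanford University", "Tsinghua University"]},
    {"title": "B", "affiliations": ["MIT"]},
]
print(count_joint_papers(sample))  # -> 1
```

Even a sketch this small shows why the humans mattered: affiliation strings are messy and ambiguous in exactly the ways a model can mislabel with confidence.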
This part of the story has two lessons. One, automation scales analysis, enabling new kinds of meta-research. Two, automation needs human oversight. If you treat an LLM as a black box, you’ll end up with errors that look plausible. How do we use models to accelerate work while keeping human checks? That’s a design and governance problem worth solving now.
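One lightweight way to keep that human check in place, sketched under my own assumptions about the data shape (records carrying both a model label and, after review, a human label):

```python
import random

def draw_audit_sample(records: list[dict], k: int = 50, seed: int = 0) -> list[dict]:
    """Pull a reproducible random sample of model-labeled records for manual review."""
    rng = random.Random(seed)  # fixed seed so the audit can be rerun
    return rng.sample(records, min(k, len(records)))

def overturn_rate(audited: list[dict]) -> float:
    """Share of audited records where the reviewer disagreed with the model."""
    if not audited:
        return 0.0
    flipped = sum(1 for r in audited if r["human_label"] != r["model_label"])
    return flipped / len(audited)
```

If the overturn rate comes back high, the automated labels go back for another pass instead of into the headline numbers.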
Ethics, safety, and the shared-interest argument
If US and Chinese ecosystems are “inextricably enmeshed,” that creates a joint stake in safety. Shared problems—misinformation, model misuse, labor-market disruption—cross borders. That gives pragmatic grounds for cooperation on standards, incident reporting, and verification protocols. Saying “No” to cooperation in safety research would be counterproductive; the hazards don’t respect national lines.
At the same time, be honest about incentives. Firms compete for market share and talent. Governments compete for strategic advantage. Those conflicts matter. We should design institutions that align incentives: cooperative safety research, transparent evaluation platforms, and limited-access exchanges for sensitive tests. Who funds these platforms? Who audits them? Those are negotiation points we must work through.
Questions that push the conversation forward
How should research conferences balance openness and control? How much of model and data sharing can be governed by technical controls—watermarking, provenance—rather than blunt policy bans? What incentives will get firms to participate in cross-border safety work when they also race commercially? If you were setting national policy, how would you measure the cost of decoupling?
Those are calibrated questions meant to move discussion from slogans to trade-offs. They invite practical answers. They also invite disagreement—and that’s useful. When both sides name their constraints and priorities, negotiation becomes possible. Will you accept that trade-off? What would you say No to? …
Takeaways for leaders and practitioners
First, facts beat narratives. The data shows continuing collaboration. Second, shared tools make accidental cooperation likely; you cannot un-invent common architectures. Third, policy should be specific and enforceable, not theatrical. Finally, safety and verification provide natural avenues for cooperation that align with national interests as well as public welfare.
If you run a lab: document provenance, prioritize reproducibility, and invest in safety checks. If you run a company: map where shared research helps product velocity and where it becomes a security risk. If you’re a policymaker: ask calibrated questions that force trade-offs onto the table rather than demand symbolic gestures.
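"Document provenance" can start very small: publish a cryptographic fingerprint with every artifact you release so anyone downstream can verify what they received. A minimal sketch (the file name is a placeholder; real systems add signatures and manifests on top):

```python
import hashlib
from pathlib import Path

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of an artifact, computed in chunks to handle large checkpoint files."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Publish the hex digest alongside the release; recipients recompute and compare.
# print(fingerprint("model-weights.safetensors"))  # placeholder file name
```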
#AIResearch #USChina #NeurIPS #ModelGovernance #SafetyByDesign #ResearchFlows
Featured Image courtesy of Unsplash and Markus Winkler (c_ksDvwnu8o)