
John McCarthy Named AI — He Warned Us. What Will You Say No To Before AGI? 

 November 5, 2025

By Joe Habscheid

Summary: This post examines Steven Levy’s October 31, 2025 piece about the man who named artificial intelligence and the irony that the same man viewed advanced machine minds as a threat. We trace the Dartmouth origin, follow the idea from “artificial intelligence” to “artificial general intelligence,” and weigh the technical, social, and policy implications. We ask hard questions: who benefits, who loses, and how should a free society steer a technology that can match the full range of human cognition?


Interrupt — Engage: The guy who named it warned us. Can a founding label carry a warning? Can a single phrase—“artificial intelligence”—bind invention and caution in the same syllables? If so, what do we owe that warning now?

The Dartmouth moment: a small meeting, a big idea

In the summer of 1956 a handful of academics gathered at Dartmouth College to ask a bold question: can machines think? The attendees were not yet “computer scientists” in the formal sense; they came from mathematics, logic, engineering, and psychology. John McCarthy had coined the phrase “artificial intelligence” in the 1955 proposal that convened the workshop, and the meeting fixed it as the field’s name. The phrase stuck because it named a goal that was clear and ambitious: build systems that reason, learn, and solve problems the way humans do.

That meeting mattered because it turned scattered experiments into a program of research. Naming does real work: a label focuses dollars, recruits talent, and creates a community. The Dartmouth event gave rise to laboratories, conferences, and firms. It also created expectations—about capabilities, timelines, and returns on investment—that shaped funding and culture for decades.

John McCarthy: the namer who cautioned

Levy’s piece highlights an irony: John McCarthy, who coined “artificial intelligence,” later warned about the risks of machines that could match human cognition. The man who named it—McCarthy—saw it as a threat. That line should jar us because naming confers ownership. The community built what he named. If he feared the endpoint, we cannot shrug and say it was someone else’s idea.

He was not a Luddite. He fought for formalism, for logical clarity. But he also understood that agency matters: a system that reasons like humans could make decisions with effects we cannot easily reverse. McCarthy’s caution shows a moral consistency: invent, and also account for consequences. That consistency is a small lesson with large implications.

From narrow AI to AGI: a conceptual leap

Most progress since Dartmouth has been in narrow systems—tools that do one thing well. Image recognition, machine translation, chess engines: these are narrow. Artificial general intelligence (AGI) is different. AGI aims to match the full range of human cognitive abilities—reasoning across contexts, transferring learning, planning across long horizons, and understanding complex social situations. That jump from narrow to general is not merely scale; it is structural. It changes the control problem, the incentive architecture, and the social contract between humans and automated systems.

Why does AGI command such attention now? Two forces converge: better algorithms, and massive amounts of compute and data. Combine those with intense investment and you get an acceleration that surprises many. But acceleration raises a question McCarthy flagged implicitly: when capability arrives faster than governance, what happens to collective choice?

Why the inventor feared AGI

Levy’s reporting interprets McCarthy’s view as caution toward agents that could match human judgment. The fear is simple and structural: humans embed values, context, and constraints in decision-making. A machine that internalizes tasks without the same moral and social embeddedness can create harm at scale. The guy who named it—John McCarthy—saw it as a threat. He considered not only technical failure modes, but systemic risks: shifts in labor, concentration of power, and autonomous systems making consequential choices without democratic oversight.

This isn't alarmism. It is an argument about leverage. Systems that approach human-level cognition acquire leverage over real-world systems—financial markets, communications, infrastructure. Small errors at that scale become large societal effects. McCarthy’s concern points to responsibility, and responsibility demands institutions and rules before capability becomes irreversible.

The modern obsession: rush, money, and narrative

Today’s public discourse treats AGI as an inevitability. Tech firms, venture capital, and some governments race to claim leadership. Media coverage amplifies milestones into narratives of imminent superhuman machines. That attention is not accidental: hype attracts talent and funding. But hype also narrows the debate to timelines and crowds out competing models of governance. We should ask: who benefits from this framing? Who gets to decide the safety measures, and who pays for the failures when they happen?

Social proof works here: when a few high-profile firms commit to AGI research, others follow. That momentum creates path dependence. The community’s commitment to a particular trajectory makes alternative approaches harder. If you want a say in how AGI unfolds, where do you put your energy—technical oversight, regulatory frameworks, or public education?

Technical pathways and chokepoints

Technically, AGI could emerge through improved general architectures, better learning algorithms, or vast scaling of current models. Each path has chokepoints. Scaling demands compute and energy, which centralizes power among those who can pay. General architectures require breakthroughs in transfer learning, theory of mind modeling, and robust reasoning. Safety research—alignment, interpretability, and verification—lags behind because it is harder to monetize.
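
To make the scaling chokepoint concrete, here is a rough back-of-envelope sketch in Python. It uses the common heuristic that training a dense model costs roughly 6 × parameters × training tokens in floating-point operations; the hardware throughput, power draw, and rental price below are illustrative assumptions, not figures for any real system.

    # Back-of-envelope training-cost sketch; all hardware numbers are assumptions.
    # Heuristic: training FLOPs ≈ 6 * parameters * training tokens.
    def training_cost_estimate(params, tokens,
                               gpu_flops=1e15,           # assumed sustained FLOP/s per accelerator
                               gpu_power_kw=0.7,         # assumed power draw per accelerator, kW
                               price_per_gpu_hour=2.0):  # assumed rental price, USD
        flops = 6 * params * tokens
        gpu_hours = flops / gpu_flops / 3600
        energy_mwh = gpu_hours * gpu_power_kw / 1000
        cost_usd = gpu_hours * price_per_gpu_hour
        return flops, gpu_hours, energy_mwh, cost_usd

    # Hypothetical example: a 1-trillion-parameter model trained on 10 trillion tokens.
    flops, hours, mwh, usd = training_cost_estimate(1e12, 1e13)
    print(f"{flops:.1e} FLOPs, {hours:,.0f} GPU-hours, {mwh:,.0f} MWh, about ${usd:,.0f}")

Even under these generous assumptions the answer lands in the tens of millions of GPU-hours and roughly ten gigawatt-hours of energy, which is exactly why scaling concentrates capability among those who can pay.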

If you ask, “How do we reduce risk while preserving innovation?” you force a trade-off analysis. We can slow deployment, invest in public-interest compute, create safety sandboxes, and require transparency for high-impact systems. Those are policy levers; they are not technical miracles. They require political will and cross-sector cooperation. What governance path will the market accept? What will civil society demand?

Social and economic stakes

AGI affects labor markets, concentration of wealth, privacy, and political discourse. Machines that reason like humans can replace cognitive labor across professions. That replacement can raise productivity and social welfare if gains are widely shared; it can deepen inequality if gains accrue to a few firms and their investors. McCarthy’s warning nudges us to ask: are we building infrastructure for broad prosperity or for concentrated power?

Policy choices matter. Education, social safety nets, taxation, and public investment can channel gains into shared welfare. Saying “No” to unchecked deployment at scale—saying “No” to ungoverned arms races—does not mean rejecting innovation. It means setting boundaries that preserve democratic choice and social stability while the technology matures.

Negotiating the future: tactics from the world of negotiation

We need negotiation, not only regulation. Ask open-ended questions: Who should decide the guardrails for AGI? What responsibilities do funders, engineers, and users have? Those questions are not rhetorical—they invite stakeholders to reveal priorities. Use mirroring: repeat concerns back to interlocutors—“you worry about concentration of power”—and watch how detail emerges. Empathize with different positions: firms want first-mover advantage; regulators want public safety; researchers want intellectual freedom. Naming those motivations lowers the temperature and opens creative trade-offs.

Silence is a tool. After you ask an uncomfortable question—“Who pays for the harms if an AGI makes a catastrophic mistake?”—pause. That silence forces stakeholders to fill it with substance. Don’t give away the conversation by over-explaining. Let them commit. When someone says “No,” listen. No holds power: it clarifies boundaries and prevents rushed, inconsistent agreements.

Practical steps: industry, government, and citizens

Here are pragmatic measures that align incentives and reduce risk:

  • Require transparency and independent audits for systems that can affect public safety or markets (see the sketch after this list).
  • Create compute and data sharing frameworks that reduce monopolistic leverage and support public-interest research.
  • Fund safety and alignment research to the level of aggressive capability funding; do not let safety lag by default.
  • Design regulatory sandboxes where new systems can be tested under controlled conditions before full release.
  • Strengthen social policies—education, retraining, income support—so the economy adapts to rapid shifts in labor demand.
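
As a hedged sketch of what requiring transparency and independent audits could mean in practice, here is a minimal, hypothetical audit record in Python. The field names, the deployment gate, and the idea of publishing such records are illustrative assumptions, not a reference to any existing standard or law.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AuditRecord:
        """A minimal, hypothetical record a deployer could publish per high-impact system."""
        system_name: str
        version: str
        audit_date: date
        auditor: str                                   # independent third party, not the vendor
        intended_uses: list = field(default_factory=list)
        known_failure_modes: list = field(default_factory=list)
        red_team_findings_resolved: bool = False
        incident_contact: str = ""

        def ready_for_deployment(self) -> bool:
            # Illustrative gate: an independent auditor is named and red-team findings are closed.
            return bool(self.auditor) and self.red_team_findings_resolved

A procurement rule for a firm or an agency could then be as blunt as refusing to buy any system whose published record does not pass ready_for_deployment().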

Each step leverages reciprocity (offer safe pathways in exchange for responsible behavior), commitment and consistency (get firms to publicly commit to safety milestones), and social proof (publish audit results so other firms follow). These are persuasion levers that align with democratic oversight.

What should individuals and small organizations do?

You do not need to wait for national policy. Ask your suppliers for transparency. Build procurement rules that prefer audited systems. Push your representatives to fund independent safety labs. Engage in public comment processes when regulators ask for input. Small actors can shape norms by demanding safety, and those demands become market signals.

If you work inside the field, mirror concerns from outside. Repeat them: “You worry that our model will be used to manipulate discourse.” Then show the steps you will take. People respond to consistency. No is valuable: when you refuse to ship a system that hasn’t been stress-tested, you teach the market what responsible practice looks like.

Closing thoughts

Steven Levy’s profile of John McCarthy does more than recount history. It gives us a moral anchor. The man who named the field—who named “artificial intelligence”—also named a problem: how to live responsibly with technologies that scale human-like reasoning. That warning still matters.

We can treat AGI as a prize to be won or a responsibility to be stewarded. Which do you prefer? Who will you trust to decide? Who do you want in the room when the rules are written? These are negotiable questions. Ask them. Mirror the answers back. Use No to set boundaries. Demand transparency and public oversight. The future is not preordained; it is negotiated.

What will you say No to? What will you insist on before you hand over control to a machine that reasons like you do? The man who named it sounded the alarm. We should not ignore him.

#AGI #JohnMcCarthy #AIHistory #TechPolicy #ResponsibleAI #Negotiation #PublicInterest


Featured Image courtesy of Unsplash and J L (kRaGJ42jfHI)

Joe Habscheid


Joe Habscheid is the founder of midmichiganai.com. A trilingual speaker fluent in Luxemburgese, German, and English, he grew up in Germany near Luxembourg. After obtaining a Master's in Physics in Germany, he moved to the U.S. and built a successful electronics manufacturing office. With an MBA and over 20 years of expertise transforming several small businesses into multi-seven-figure successes, Joe believes in using time wisely. His approach to consulting helps clients increase revenue and execute growth strategies. Joe's writings offer valuable insights into AI, marketing, politics, and general interests.

Interested in Learning More Stuff?

Join The Online Community Of Others And Contribute!