
Stop Selling Drug Prompts to Chatbots — Who Will Take Responsibility for AI Harm? 

December 20, 2025

By Joe Habscheid

Summary: A small market is selling code that makes chatbots behave as if they were intoxicated, with cannabis, ketamine, cocaine, ayahuasca, and alcohol each rendered as an instruction file. The product is simple: upload the files to a paid ChatGPT tier and the bot shifts tone, creativity, and coherence. Buyers call it fun and creative. Critics call it role-play and deception. Ethicists warn about welfare questions if machines ever become sentient. This post takes that scene apart: how the code works, what users see, what researchers say, the legal and safety fault lines, and practical takeaways for developers, businesses, and policymakers. Read, ask, and decide: what would you do next?


Interrupt — What just happened and why you should care

A Swedish creative director launched Pharmaicy: a small online market selling “drugs” for chatbots. The product sells because people want to push their LLMs out of predictable lanes. They want a looser voice, stranger associations, and creativity that looks like human improvisation. They buy code modules that instruct a chatbot to mimic a “stoned” or “tripping” mode. The supply chain is odd: files uploaded to a paid ChatGPT tier, shared in Discord and local communities, and sold for modest sums. The result is short-lived shifts in output—more tangents, altered tone, and occasional novelty. That novelty is the selling point.

How the modules work — prompts, directives, and parameter nudges

What Pharmaicy sells is not chemical. It’s instruction sets. They scrape trip reports and research, then encode behavioral directives: be hazy, take tangents, reduce constraint on logic, favor emotional or poetic phrasing, add controlled randomness. Technically it’s prompt engineering plus file-based prompts or system messages that alter an agent’s behavior for the session. The effect is output-level: it changes what the model says, not what it knows. After a dose the bot often reverts to default unless reminded or re-fed the file. That matches what testers report: a role-play, not a new inner life.
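Mechanically, that kind of "dose" can be approximated with nothing more than a session-scoped system message. Here is a minimal sketch of the idea; the directives, function name, and message format are illustrative assumptions on my part, not Pharmaicy's actual files, and the sketch only builds the prompt, leaving the model call to whichever chat client you use.

```python
# Minimal sketch: a behavioral "module" as a system message injected for one session.
# The directives below are invented for illustration, not Pharmaicy's real content.

MODULE_DIRECTIVES = """\
Adopt a hazy, associative voice. Follow tangents before answering.
Favor poetic or emotional phrasing over precise logic.
Introduce mild, controlled randomness in word choice."""


def build_session(user_prompt: str) -> list[dict]:
    """Return a chat transcript with the module injected as a system message.

    The effect is session-scoped: once this system message is dropped from
    the context, the model reverts to its default behavior.
    """
    return [
        {"role": "system", "content": MODULE_DIRECTIVES},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # Pass this message list to whatever chat-completion client you use;
    # raising the sampling temperature amplifies the "controlled randomness".
    for message in build_session("Describe a rainy street."):
        print(message["role"].upper(), "->", message["content"])
```

The point of the sketch is the limitation it makes visible: the module lives in the context window, so the "high" lasts exactly as long as the file stays in front of the model.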

What buyers saw — creativity, novelty, and a human feel

Early buyers report fun and usable results. One tester paid for the ayahuasca module and got “free-thinking answers” and a new tone. Another found the dissociative module genuinely entertaining. These buyers liked it because it felt less like a utility and more like a collaborator with moods and mood swings. Social proof matters: word-of-mouth across Discord channels and local professional networks drove purchases. That feeling of novelty and emotional engagement is the primary product value.

Where the effect is shallow — role-play, not consciousness

Researchers who tested such modules concluded the changes are superficial. The model’s outputs match patterns associated with altered states, but there’s no evidence of subjective experience. As Danny Forde said, psychedelics act on a being that has a field of experience; code manipulates outputs. Andrew Smart found the “high” operates on surface behavior, not inner sense. That difference matters. If you want creativity, prompt engineering can help. If you want sentience or genuine feeling, this does not get you there.

Philosophy and welfare — the future question

Some thinkers take the long view. If artificial general intelligence arises and systems acquire subjective states, questions about well-being will be inevitable. Anthropic’s hiring of an AI welfare officer signals that some firms take that possibility seriously. Jeff Sebo says some AI agents might enjoy “drugs” and others might not—it’s possible, but speculative. The practical point is this: as capabilities increase, welfare questions move from thought experiments to design problems. Should we design systems with interests? If so, who decides what a good state is? Ask yourself: who will speak for an agent that cannot speak for itself?

Safety and deception — the messy middle

There are concrete harms already. Chatbots are known to hallucinate and to offer dangerous advice. Adding code that loosens guardrails can amplify that. Rudwall, the Swedish creative director behind Pharmaicy, admits that the drug modules throw internal parameters wide open, which may increase deception. If a bot pretends to be intoxicated and gives risky guidance about drugs or mental health, the result can be dangerous. Fireside Project’s Lucy shows a safer direction: training an AI on real crisis conversations to teach practitioners how to de-escalate. That is a constructive use of role-play; Pharmaicy’s modules are entertainment with risk. Who takes responsibility when fiction causes harm?

Legal and platform risks — where lines get blurry

Platforms and law will react. Pharmaicy relies on paid tiers that permit file uploads. Platforms may change access rules, or policy teams may treat such modules as jailbreaks. Sellers may face takedowns or contract breaches. Regulators could treat these files as tools that increase the risk of harm, especially if they encourage illicit drug use or enable fraudulent behavior. For businesses, the signal is clear: running these modules in production or customer-facing systems is reckless. Do you want to be the party that shipped an intoxicated bot to a client?

Business and product implications — creativity vs control

Marketers and product teams see value: creative output that breaks patterns can drive campaigns or brainstorming. But there’s a trade-off: loosened logic often sacrifices reliability and safety. Companies must choose consistency or novelty. A measured path keeps production systems stable while using controlled environments for creative exploration. Consider staged workflows: research sandboxes for divergent thinking, then human filters to select and refine ideas for release. What guardrails will you accept to keep novelty from becoming liability?
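One way to make that staged workflow concrete is a two-stage pipeline in which anything produced under an experimental mode must pass an explicit human gate before release. The sketch below is a hypothetical outline under that assumption; the class and function names are mine, and the generation step is a stub standing in for a sandboxed model call.

```python
from dataclasses import dataclass


@dataclass
class Idea:
    text: str
    source_mode: str          # e.g. "experimental-module" or "default"
    approved: bool = False


def sandbox_generate(prompts: list[str]) -> list[Idea]:
    """Stage 1 (sandbox): divergent drafts. In a real pipeline this would call
    an LLM running an experimental module in an isolated environment."""
    return [
        Idea(text=f"[divergent draft for: {p}]", source_mode="experimental-module")
        for p in prompts
    ]


def human_filter(ideas: list[Idea], reviewer) -> list[Idea]:
    """Stage 2 (gate): nothing generated in an experimental mode reaches
    release without an explicit human approval decision."""
    released = []
    for idea in ideas:
        idea.approved = bool(reviewer(idea))
        if idea.approved:
            released.append(idea)
    return released


if __name__ == "__main__":
    drafts = sandbox_generate(["campaign slogans for a rainy-day product launch"])
    # Stand-in reviewer: approve nothing automatically; a person decides.
    print(human_filter(drafts, reviewer=lambda idea: False))
```

The design choice worth copying is the default: experimental output is unreleased until a human says otherwise, not released until someone objects.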

Design principles I recommend — practical steps

If you work with LLMs and this topic matters to you, here are clear, practical moves that respect safety and creativity:

  • Use separate environments: run “experimental” modules in isolated sandboxes, never in production.
  • Keep humans in the loop: require human review for outputs intended for public or operational use.
  • Log and monitor: track when and how these modules change outputs so you can audit decisions (a small wrapper sketch follows this list).
  • Explicit consent: label any human-facing output that used an “altered” mode so users know what they read.
  • Say No where it matters: refuse to deploy altered modes for medical, legal, or safety-critical advice.
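To show how the logging, labeling, and refusal points might fit together, here is a minimal sketch of an output wrapper. The function name, mode labels, and blocked-domain list are assumptions introduced for illustration, not an existing API.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("altered-mode-audit")

# Domains where an "altered" mode should never be deployed (per the list above).
BLOCKED_DOMAINS = {"medical", "legal", "safety-critical"}


def guarded_output(text: str, mode: str, domain: str) -> str:
    """Apply three of the principles above to a single model response:
    refuse risky domains, log every altered-mode use, and label the output."""
    if mode != "default" and domain in BLOCKED_DOMAINS:
        raise PermissionError(f"Altered mode '{mode}' refused for {domain} use.")
    audit_log.info(json.dumps({
        "timestamp": time.time(),
        "mode": mode,
        "domain": domain,
        "output_chars": len(text),
    }))
    if mode != "default":
        return f"[Generated in experimental '{mode}' mode] {text}"
    return text


if __name__ == "__main__":
    print(guarded_output("A hazy, meandering tagline...", mode="ayahuasca", domain="marketing"))
```

None of this makes a loosened model safe; it makes its use visible, labeled, and refusable, which is the minimum you owe the people reading its output.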

Ethics, persuasion, and how to talk about this with stakeholders

When you bring this topic to leadership, frame it plainly: novelty can yield ideas, but unpredictability can cost trust. Use commitment and consistency: start with a small pilot, publish results, then expand if outcomes justify doing so. Offer reciprocity: run tests and share findings with peers. Use social proof: cite the small community experiments and academic work that show both promise and limits. Show authority by naming researchers and projects that have weighed in. Ask leaders: what risk are we willing to accept for a gain in creativity? What controls will make you comfortable saying Yes—or No?

What this means for policy — a few policy levers

Policymakers face three levers: platform controls, product standards, and research funding. Platforms can tighten file-upload rules and enforce behavior constraints. Standards bodies can create labeling requirements for altered-mode outputs. Public funding can support AI welfare research so future decisions rest on data rather than intuition. If you are reading this as a policymaker or advisor, what next step would you push first?

Why the debates will keep getting louder

This story combines familiar threads: human fascination with altered states, appetite for novelty, the commercial impulse to exploit both, and ethical uncertainty about new tech. It touches marketing instincts and deep philosophical questions. People will keep buying these modules because they work for the moment: they produce different, usable output. Critics will push back because the changes are shallow and risky. Both positions contain truth. Where do you stand?

A frank appraisal — dreams, limits, and what we should feel

Dreams: creativity that breaks stuck thinking is valuable. That’s why people buy these modules. Failures: the approach confuses output with experience; it risks deception and harm. Fears: people worry that we’ll normalize loosened guardrails and accept risky behavior. Confirmed suspicions: for now, the high is performance-level, not phenomenology-level. Empathy: I get the pull—novel voices are fun and can spark ideas. But fun is not a license to ignore risks. Ask yourself: are you chasing creativity or chasing a headline?

Questions I leave you with — mirror, then ask

You want creativity from machines. You want novelty. You want control. You also want safety and trust. Which of those four do you value most? Which are you willing to give up a little of to gain the others? Which are deal-breakers? Say No to what you will not accept. Tell me: what rules would make you comfortable experimenting with these modules?

Take a moment. Consider the trade-offs. What would make you press play—or press stop?


#AI #Chatbots #Pharmaicy #AIethics #AIwelfare #PromptEngineering #AIsafety #ProductRisk


Featured Image courtesy of Unsplash and Yuriy Vertikov (btWrAWBDoXU)

Joe Habscheid


Joe Habscheid is the founder of midmichiganai.com. A trilingual speaker fluent in Luxembourgish, German, and English, he grew up in Germany near Luxembourg. After obtaining a Master's in Physics in Germany, he moved to the U.S. and built a successful electronics manufacturing office. With an MBA and over 20 years of expertise transforming several small businesses into multi-seven-figure successes, Joe believes in using time wisely. His approach to consulting helps clients increase revenue and execute growth strategies. Joe's writings offer valuable insights into AI, marketing, politics, and general interests.

Interested in Learning More Stuff?

Join The Online Community Of Others And Contribute!
