Summary: This post explores the misconception of trying to extract a “narrative” or “story” from technical messages, specifically JSON error responses like one indicating an insufficient account balance. It explains why these types of texts are not stories, cannot be turned into stories without distortion, and what this tells us about clear communication, user interface design, and the misunderstood role of AI in creative tasks.
Technical Message ≠ Storytelling
When someone sees a chunk of structured machine response—like a JSON error notification—there’s often a reflexive push to “make it human.” That instinct may come from a reasonable place: frustration with robotic language, preference for storytelling, or just habit built from marketing and UX advice to “tell better stories.” But not every message is a story. More importantly, not every message should be one.
Take this input, for example: “Unfortunately, the provided text does not contain a story that can be extracted and rewritten. The text appears to be a JSON error response with information about an insufficient account balance. There is no main story or narrative present in this text.” That’s not an opinion. It’s a technical boundary. A structured machine response is function-first, not meaning-first. It contains no conflict, no character, no movement, no message. Storytelling needs at least one of those to begin.
The Structure of a JSON Error
A JSON error response is a data structure—just a container made of keys and values. It serves one purpose: to report, efficiently, that something went wrong. Typical elements might include:
- error_code: A machine-readable identifier, like “ERR_INSUFFICIENT_FUNDS”
- message: A short human-readable string, e.g., “Account has insufficient balance.”
- timestamp: When the error occurred
- request_id: A unique identifier to trace the event
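In code terms, that is the whole artifact. Here is a minimal sketch of the shape in TypeScript; the field names mirror the list above, and the values are invented for illustration rather than taken from any particular API:

```typescript
// A minimal sketch of the error shape described above.
// Field names mirror the list; values are invented for illustration.
interface ApiError {
  error_code: string;  // machine-readable identifier
  message: string;     // short human-readable summary
  timestamp: string;   // when the error occurred (ISO 8601)
  request_id: string;  // unique identifier to trace the event
}

const example: ApiError = {
  error_code: "ERR_INSUFFICIENT_FUNDS",
  message: "Account has insufficient balance.",
  timestamp: "2024-05-01T12:34:56Z",
  request_id: "req_8f2c1a",
};
```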
This is not backstory. This is not plot. It is strictly logistics wrapped in code. Trying to create a “rewritten story” from that is like writing a screenplay based on a calendar notification. There’s no emotional movement, no values in tension. Just status.
Why This Matters for Communication
Effective communication is never about fluffing things up just because humans like drama. It’s about matching form and function. If your job involves user interfaces, help documentation, or support chatbots, this is where the difference becomes dangerous. If you try to make every technical message “more human” without understanding what the user actually needs in the moment, you risk confusing them—or worse, misleading them.
Clarifying, not “storytelling,” is what’s most useful here. Let’s say the user receives the message:
```json
{
  "error": "INSUFFICIENT_BALANCE",
  "message": "Your account does not have enough funds to complete the transaction."
}
```
There is no need to reframe this into a fairytale about an account bravely trying to buy coffee and failing. The real value is in making the response informative and practical. Can it tell the user what to do next? Yes? Then it’s effective. Does it communicate that the failure was not a bug but a business rule? Now we’re talking clarity—not creativity for its own sake.
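In practice, that kind of clarity often comes down to something as unglamorous as a lookup table: one entry per machine code, each carrying a plain explanation and a concrete next step. The sketch below assumes hypothetical codes and copy, not any specific product:

```typescript
// Sketch: map machine codes to plain, actionable copy.
// The codes and wording are hypothetical examples.
const userFacingCopy: Record<string, { explanation: string; nextStep: string }> = {
  INSUFFICIENT_BALANCE: {
    explanation: "Your account does not have enough funds for this transaction.",
    nextStep: "Add funds or choose a different payment method, then try again.",
  },
  CARD_EXPIRED: {
    explanation: "The card on file has expired.",
    nextStep: "Update your card details in Billing settings.",
  },
};

function renderError(code: string): string {
  const copy = userFacingCopy[code];
  if (!copy) {
    // Fallback keeps the tone honest without guessing at the cause.
    return "Something went wrong. Please try again or contact support.";
  }
  return `${copy.explanation} ${copy.nextStep}`;
}
```

No narrative arc required, just the fact and the fix.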
What AI Can’t—and Shouldn’t—Do
This request also reveals a deeper misunderstanding about the use of artificial intelligence in communication. Tools like AI can format, summarize, compare, or repurpose data. But they cannot—and should not—make up a story where there is none. That’s not a limitation of capability. That’s a design decision. Just because AI can generate fictional content doesn’t mean it should hallucinate purpose from operational data.
When a user asks AI to turn a series of error codes into “a story,” they may secretly be trying to externalize frustration. “Help me understand this technical wall I just hit.” Fair request—but the solution is educational, not fictional. The right response is not “Once upon a time your wallet felt empty,” but rather: “You attempted X, but Y prevented it due to rule Z. Here’s how to fix it.” Structured, simple, and supportive. That’s how AI communicates effectively without distortion.
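If you want that shape enforced rather than improvised, one way is to keep the sentence pattern fixed and let only the structured facts vary. A rough sketch, with hypothetical fields rather than a real schema:

```typescript
// Sketch of the "you attempted X, Y prevented it, here is the fix" pattern.
// The fields are illustrative, not a real schema.
interface ErrorExplanation {
  attempted: string;  // what the user was trying to do
  blockedBy: string;  // the business rule that stopped it
  remedy: string;     // the concrete way forward
}

function explain(e: ErrorExplanation): string {
  return `You attempted ${e.attempted}, but ${e.blockedBy} prevented it. ${e.remedy}`;
}

// Example output: "You attempted a $40 transfer, but the minimum-balance rule
// on your account prevented it. Top up your balance or lower the amount, then retry."
explain({
  attempted: "a $40 transfer",
  blockedBy: "the minimum-balance rule on your account",
  remedy: "Top up your balance or lower the amount, then retry.",
});
```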
Engineering Trust from Precision
Here’s the deeper point for anyone working in SaaS, fintech, automation, or UX: clarity builds trust faster than charm ever will. Your users don’t want clever. They want reliable. Your error handling, your warnings, even your explanations of what went wrong—that’s all marketing too. Softening the blow shouldn’t come by way of invention. It should come from empathy, validation, and clear options.
So what’s a better way forward when your system says “no”? Build better on-ramps. Teach your AI to recognize where the user was headed and offer frictionless recovery paths. Inform them. Confirm their suspicion that systems can be awkward and impersonal—but follow that up by being the human help, not dressing up machine logic in fiction.
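Concretely, a “frictionless recovery path” can be as simple as pairing each failure code with the actions that get the user moving again. The routes and labels below are hypothetical, a sketch of the idea rather than a prescribed design:

```typescript
// Sketch: pair each failure code with concrete recovery actions.
// Route paths and labels are hypothetical.
interface RecoveryAction {
  label: string;  // what the button or link says
  route: string;  // where it takes the user
}

const recoveryPaths: Record<string, RecoveryAction[]> = {
  INSUFFICIENT_BALANCE: [
    { label: "Add funds", route: "/wallet/top-up" },
    { label: "Use a different payment method", route: "/checkout/payment" },
  ],
  PAYMENT_METHOD_EXPIRED: [
    { label: "Update card", route: "/settings/billing" },
  ],
};

function recoveryFor(code: string): RecoveryAction[] {
  // Fall back to a generic support path when no specific recovery exists.
  return recoveryPaths[code] ?? [{ label: "Contact support", route: "/support" }];
}
```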
Creating Real Usefulness Out of Error
Let’s flip it: what can you do with a message like “insufficient funds” if you still want to make it useful in marketing or product documentation? You illuminate what it means. Most users don’t just want to know their card was rejected—they want to know whether it’s because of a billing error, a usage limit, or a problem with their funding source.
The better question here is, “What would someone in this moment want to know next?” That’s where logic meets empathy. What have others done in the same situation? Social proof. What can they do about it now? Reciprocity. Don’t dress the error up—wrap it with value.
You don’t need to fictionalize operational hiccups. You need to humanize the frustration and give users a path forward that respects their time and intelligence.
This isn’t about limiting creativity—it’s about applying it wisely. Structured data, logic errors, and backend messages aren’t narrative seeds—they’re signals. Our job isn’t to turn them into theater. It’s to build a better response system that speaks plainly, acts helpfully, and connects logically.
#UXDesign #ErrorHandling #TechWriting #AIandHumans #StructuredData #ProductCommunication #UserEmpathy #SmartSystems #FintechClarity
Featured Image courtesy of Unsplash and Joshua Hoehne (MzuFhco8PgA)
