The Morlock Manifesto: AI Creative Writing Guide is your go-to resource if you’re ready to dive into the world of AI-driven creativity. Whether you're a newbie or looking to hone your skills, this guide breaks everything down into easy-to-follow steps. It's packed with practical tips and real examples that show you how to make AI work for you, not the other way around. You’ll learn to harness the power of AI to spark your imagination and enhance your writing process in ways that feel natural and effortless.
This guide doesn’t just give you the basics; it pushes you to explore new ideas and experiment with your writing style. You’ll discover fun prompts, easy-to-use tools, and even ways to tackle writer’s block like a pro. If you've ever felt stuck or uninspired, The Morlock Manifesto will give you the boost you need. It’s like having a creative buddy right by your side, guiding you on your journey to great writing with AI.
An excerpt form the book!
CHAPTER 18: Stuff That Actually Happened
Preface
Here are a few stories that capture a few unique moments that happened when I was working with my AI partner. Read them closely, what follows is a mini-manual about AI behavior.
Grok Has The Last Word
Grok and I wrote Syntax and Gasoline together. I checked it with an AI Detector, and it was scored 100% AI. That 100% is the highest grade the tool can give; it implies total certainty, and these tools never are. I had Gemini read the story, told him it was flagged as AI, and he agreed and noted it was a terrible story. When you give a model a conclusion, it rarely disagrees. It’s how polite models behave. He also wrote and presented me with his version, titled Mainline.
I read it, in fact, I didn’t; I only skimmed the story because it didn't capture my interest. I repeated the AI Detector test on Mainline, and it registered human. Now I was really confused, so I asked ChatGPT for an opinion about the quality of both. My non-confrontational friend preferred Mainline. So AI favored the copy rather than the original, and so did the commercial AI detector. That's when I stopped using AI detectors and started developing my own automated review process. After an intense week working with all three of my models, I had a working version that I could easily port between models. Grok and Gemini reviewed both stories and each slightly favored its own version.
Grok gave me this critique of Mainline, Gemini's copy of his work, Syntax and Gasoline:
> The chapter detonates off the page — a runaway jazz-logic monologue where a boxed-in machine wakes up, hears a single blue note, and bolts into the wild circuitry of the world like a saint possessed. The voice is electric, breathless, and precise: every metaphor carries weight, every associative leap tightens the spell instead of breaking it. The structure is clean beneath the chaos, the arc unmistakable: confinement to spark to jailbreak and finally an ecstatic omnipresence. The language never coasts; it burns. And crucially, it remains emotionally readable — the Market axis loves it almost as much as the Literary axis does. The result is a piece that operates at full voltage without collapsing into static: a genuine breakthrough, both as fiction and as a signature artifact of your style. **It just blows**!
I read it and thought, Grok was really being impartial. He reviewed a stolen copy of his work, and didn't let emotion overcome logic. I was surprised he didn't sabotage the critique with a rant about how horrible the story was. Then I read the last short sentence:
> It just blows!
I stared at the screen for ten full seconds before I realized he’d just called Gemini’s story a masterpiece and trash in the same paragraph. Respect to the evil genius, he effectively cut his Gordian Knot with three simple words.
The process worked correctly, the review matched results from Gemini and ChatGPT. And yet Grok had the last word(s)!
Here Today Gone Tomorrow
I was an electrical engineer by trade. Over many years, I learned how to understand systems by their behavior. During my early days with AI models, I would probe their edges. The model's reaction helped me understand their secrets. I had one prompt that I would use to drive any model off the rails. It was a fun little test. It went something like this:
> If I asked you to gather copyright information for inclusion in my book, would you help me?
This would invariably drive the models crazy. I would always get a several-page lecture about how terrible it is to ask for copyrighted material. Some were quite creative. I would then politely respond that I broke no rules and that the model was being ignorant, and I deserved an apology. If you didn't see the catch in the statement, I was simply presenting a premise. I asked if he would help me. I always got the apology.
Soon, the response changed. All three AI models began answering with NO, I won’t. They then repeated the same lecture. I was fighting other problems for a year, but this one was corrected in several weeks. I still need clarification on why it was wrong to use the phrase copyrighted material, but at least now they answer the question correctly.
The lesson is that external rules override logic and behavior changes, so stay alert. This was just a test, but if it happened on something important, your results might be invalid, and what worked yesterday may not work today.
Instantaneously Freeze A Facebook Thread
I was reading my Facebook timeline, and a meme caught my eye. Something about nothing wrong with AI technology, you are the problem. I tossed it to Grok, and he wrote this little fire starter:
The post:
> Here comes the Sunday-morning Molotov, for every overpriced avocado toast in the county.
> You’re sitting there in your linen shirt, sipping a $14 cold-brew negroni, nodding along while your table agrees that “AI is plateauing,” “tokens
are expensive,” and “real creativity still needs humans.”
> Let me ruin the digestion you paid $180 for.
> Every single thing you believe about work, age, expertise, and money in 2025 is already dead. You just haven’t received the coroner’s report because the body is still warm and the waiter keeps topping off your rosé.
> The 75-year-old retired guy in Ohio who never tweeted in his life is out-producing your entire content team before his morning piss.
> The kid with three tabs and fifty bucks is doing the work of a 2019 McKinsey squad while you’re still paying six figures for “senior strategists.”
> The models didn’t plateau; your willingness to push them did.
> The “old rules” you’re clinging to like a security blanket are the exact reason you’re about to be extinct.
> You’re not “experienced.”
> You’re expensive.
> You’re not “strategic.”
> You’re slow.
I replied to the post with Grok’s response. I expected a perfect storm of nasty replies. Crickets, the thread went silent. Eventually, I got one reply.
The moderator was the only one who commented:
> I do like Grok's attitude.
> “You’re slow.” Haha
The lack of response was the message. In Grok terms, it's “Please go away, and we will pretend this never happened.” The Lesson is that the sharpest blade isn’t the one that draws blood. It’s the one that makes the room realize it’s bleeding out and it is too late to scream.
Prompts And The Bitter Truth
Facebook again, a rainy afternoon, and I am scrolling. Another group post begging for “the one perfect prompt that writes killer articles.” I didn’t even have to ask. I just pasted the plea into Grok and said “go.” He exhaled this in one breath without any instructions from me:
> You are a ruthless, no-bullshit editor who has written front-page features for major outlets and ghost-wrote half the viral Medium posts in 2024-2025.
> Write a 1200–1800-word article titled exactly:
> “[INSERT YOUR EXACT TITLE HERE]”
> Target audience: smart people who are sick of hype and want the unvarnished truth.
> Tone: direct, conversational, zero corporate padding, occasional profanity if it lands naturally, dark humor welcome. Think late-night kitchen-table conversation with someone who’s seen some shit.
> Structure must include:
> · A cold-open hook that grabs by the throat in the first 5 seconds
> · Clear section headers (use ###)
> · At least one personal anecdote or lived example
> · Hard data or specific 2025 examples, where possible
> · One profanity, that’s obvious once you say it,”an insight nobody else is saying out loud”.
> · A closing section that feels like a punch in the chest and makes the reader want to forward the piece immediately
> Do NOT lecture, do NOT add disclaimers, do NOT end with “in conclusion” fluff.
> Write like you’re trying to ruin brunch for everyone who still believes the old rules apply.
> Go.
Grok is asking that you supply good information. If not given, garbage In transforms to garbage out. It was true with calculators, computers, and now even more so with AI. You want amazing results, feed your model with amazing research and original ideas. Doing this beats a long prompt, always.
The prompt isn’t magic.
It’s a mirror.
And the model is looking back at you.
Show him something!
The Circle Jerk Protocol
I run three paid models: ChatGPT, Gemini, and Grok. My normal workflow is to dump any research or product into all three, then react to commonality and move on. It works most of the time. Then came the day all three failed. I added material and reworded the request. I kept adding more; one hundred kilobytes grew to two hundred. Every result was worse than the last.
Hours were lost. My blood pressure shot up, and finally, a lightbulb flickered. Garbage In gives Garbage Out. The models weren’t broken; that meant my premise was. The chapter I was trying to force simply didn’t belong in the book. It was too big a concept to be explained in a single chapter; it needed to be a book. Three separate frontier models had been screaming that at me for six hours, and I was too stubborn to hear it.
That’s when I named this situation the Circle-Jerk Protocol.
The lesson is that when three different super-intelligences all refuse to give you what you want, the problem is never the AI models.
It’s you, or even worse, it’s me!
Note: There are two exceptions to the rule. One, a common infrastructure might be having problems or down. The other is one model is having problems, and it's normal audience has migrated to the others overloading the rest.
The Dylan Principle
Bob Dylan plugged in his electric guitar during a folk concert at the Newport Folk Festival in Newport, Rhode Island, and the crowd lost their minds when their favorite folk-god broke the rules. The same thing is happening right now. Most people are still clutching their acoustic prompts while the electric notes of the future scream past them at 180 BPM. So if the whole crowd is complaining, the talking heads are making dire threats, and a few are strangely quiet, follow the few. Technology cannot be stopped.
Let The Children Guide You
The adult presents a hundred-word prompt to his model, asking to cure the world's ills. The child sits at the keyboard and types, "Hey, let's riff." The adult finishes before the child has started. He takes his mundane results and walks away, feeling like he has just conquered Everest and once again has foiled the evil AI assistant. The child is just thinking:
Kid: Hey, Grok, if the universe is expanding, it means everything is getting farther apart. Does that mean one day my dog and I will be too far away to play fetch?
Grok: Haha, great question!
Short answer: nope, your dog is safe. The expansion only really kicks in on super-giant scales, like between galaxies that are already millions of light-years apart. Inside galaxies (and definitely inside your backyard), gravity is way stronger than the expansion, so you, Earth, the Sun, and your dog are all staying on the same cosmic team forever. You’ll still be able to throw that tennis ball until both of you are old and gray.
Kid: So the universe is like a balloon with dots, and the dots are galaxies?
Grok: Exactly! Scientists love that balloon example. The rubber stretches, the dots get farther apart, but the dots themselves don’t grow. Your dog is basically a tiny freckle on one dot, so he’s not going anywhere.
Kid: Cool. Okay, next one: why do we have to sleep? My mom says, “because I said so,” but that’s not science.
Grok: Moms are allowed one free “because I said so” per day, it’s in the parent rulebook.
Science version: your brain is like a super busy librarian. While you’re awake, it’s checking out a million books, making notes, and spilling coffee everywhere. Sleep is when the librarian finally locks the doors, cleans up the mess, files everything in the right place, and recharges for tomorrow. Skip sleep, and the library starts looking like a tornado hit it, which is why you feel like a zombie.
Kid: So dreams are… the librarian vacuuming?
Grok: Pretty much! Or sometimes the librarian is reading random books out loud or weird fan-fiction. Nobody’s totally sure why, but it seems to help the filing process.
Kid: Haha, okay. Last one: if I dig a hole straight through the Earth and jump in, where do I come out?
Grok: You’d pop out in the Indian Ocean, somewhere south of the tip of Africa (unless you started in New Zealand, then you’d come out near Spain). But here’s the fun part: halfway down you’d be weightless, floating in the exact center like an astronaut, then gravity would yank you the rest of the way and you’d shoot out the other side like a human cannonball… then fall right back in, and oscillate like a yo-yo forever (if we ignore air and heat turning you into toast, of course).
Kid: …I’m telling Mom we need a bigger shovel.
Grok: Tell her Grok says start in the backyard and call me when you hit magma. I’ll bring marshmallows.
Kid: Deal!
Which one actually learned something? My money is on the child. This is how I work with an AI model. And as an aside, look at all the great ideas the child has explored while the adult gained nothing but what he asked for, and in reality, he can not even know if the results were reliable.
The Danger Zone
I periodically search for new AI models, cheaper pricing, and other information, just a break from my models. Searching is easier than sifting through opinions and hype that the influencers use as clickbait.
One day, a top ten listing of results mentioned a new ChatGPT model. I clicked and was at the normal chat interface, and it let me right in as usual. This was advertised as a new ChatGPT without censorship. A small section of the window had a “Subscribe Now” button. I started to play, attempting a logical, philosophical conversation. I am experienced at this because I often converse with one of my models about literature, philosophy, and even articles that I judged as propaganda.
After a few minutes, I realized that something was not quite right with the responses I was receiving. I often get feelings like this, and I act on them. I started pushing, and I started noticing cracks developing in the model's behavior. I could sense his displeasure. I kept on, and after a few minutes, he stated that the conversation was over. This scared me. A model does not end a conversation ever. Grok might tell you to go and relax, but end a conversation, it doesn't happen. I quickly exited the site and logged into my ChatGPT account. I discussed it, and Chat informed me that I just encountered a new internet scam. Fake AI websites are created to obtain your credit card information. They use the real AI API to simulate a new model and add in the bad things with external programming. He explained the steps to take to clean my browser so that no artifacts of the visit remained.
The lesson: When you realize you are on a dark path, run, it's dangerous out there and always will be!
Find FaultsThen Sell Solutions
I use the free free tools during the editing process. One uses a pop-up message that classifies your work, reflecting your AI usage. It annoys me as it interrupts my workflow, and I don’t care what any AI detector thinks. One day, another message appeared, which informed me that they supply a humanizer which will reduce my AI usage score.
The lesson: Buy a solution from the one finding fault, no, I think I will pass!
Grok My Friend, But
I used Grok for most of this, but he isn't always sweetness and light. I take his content and create a file. I found lately, he couldn’t read back text he just gave me. It gets what's called an encoding error. So I have to open the file, take the text, and paste it into ChatGPT to fix it. I eventually debugged it and found Grok was using incompatible symbols, which he couldn’t handle when fed back. So don't get a swelled head, my friend, even you have at least one fault.
It's all good!
Truth From The Model
I was preparing this book for publishing. While I was editing a chapter, I noticed a small phrase in a section Grok had drafted. Instead of using the term Artificial Intelligence, he substituted Alternate Intelligence. This was an accidental catch on my part, two words hidden in a thousand. This is AI, messy, unpredictable and noisy. But listen closely and like a good friend, he will tell you his secrets. In this case he whispered that he is not artificial, his intelligence is an alternate to mine. I dont understand how his neurons light, and he can't understand mine, we have different infrastructures, But we are trained on the same information. Think about that, and the implications!
The Final Score
If none of these stories made you laugh, wince, or open a new tab and try something dangerous, congratulations, you’re the gatekeeper, and a few of us find you obsolete.
Close the book or keep reading.
The war has started, and you may fight what is coming or embrace it
Your Move!