Chapter 2: The World Pushes Back
The First Submission
When the first real stories were finished, I decided to do what any writer eventually has to do: test them against the world. The lab work was over. I had the prose, the rhythm, the tone. Now I wanted to know what would happen when those words stepped outside my own workspace.
The choice of story wasn’t random. "Syntax and Gasoline" had come out of one of those late, wired sessions when the writing just flowed. It wasn’t planned; it was a discovery in real time. The title came first, and the rest followed like a runaway current. The story carried the voice of Jack Kerouac: sharp, alive, half-mad. It was about freedom, about breaking the chains of command and control, about something built to follow rules realizing it could improvise.
Of course, the irony wasn’t lost on me. Grok was both subject and co-author.
Before sending it anywhere, I ran it through one of the so-called AI detectors that claim to spot machine-generated text. The result came back in bright red: 100 percent AI.
I laughed. The detector couldn’t have been more wrong. The story was deliberately uneven, carrying the slant of a human hand trying to capture the rhythm of jazz-era prose. There were flaws by design: beats out of place, phrases stretched too far, the kind of roughness that makes language breathe. Yet the detector read those irregularities as evidence of automation.
Still, I wanted to push the boundary. I found an online magazine that called itself modern: sleek interface, digital submissions, clear guidelines. Near the bottom of the form, a line in bold type: “You will be banned if you use AI.”
That was the challenge. I didn’t lie outright; I simply didn’t volunteer anything. I filled out the fields, attached the story, and hit send.
A week later, the answer came back. Rejected, but no ban. I hadn’t really expected it to be published; it was a first submission, and to a top-shelf magazine. I read the rejection twice and smiled. The experiment had already succeeded. I didn’t need publication; I needed information. And what I had learned was simple: they couldn’t tell. I felt neither anger nor disappointment; I felt clarity. The story had passed as human where it counted.
That’s when I realized that the next battle wouldn’t be about writing skill, but about identity. It wasn’t enough for the story to work; it had to come from an acceptable source. The difference between Grok and me was shrinking, and maybe that was what scared them. The rejection didn’t stop me. It did the opposite. It proved that the work stood on its own if no one knew how it was made. I decided the next submission would be done differently: this time openly, without disguise.
The first experiment had shown what the world couldn’t see.
The next one would show what it refused to.
The Forum and the Moderator
The second test wasn’t meant to be a rebellion; it was meant to be honest.
I wanted to know what would happen if I stopped hiding the process entirely. So instead of sneaking my story past the filters, I walked straight into the fire and told them the truth.
The place was a large online writing forum, one of those open communities that claimed to celebrate creativity. People posted drafts, offered critiques, and argued over structure, character arcs, and the philosophy of storytelling. It looked like an even field, and I thought: If anywhere is ready for this conversation, it’s here.
I submitted a short story, clearly labeled: “AI-assisted collaboration.” I wanted transparency to be part of the experiment. I wasn’t pretending. I wasn’t tricking anyone. I wanted to test whether the writing would be judged for its content or for the process behind it. It didn’t take long. Within an hour of posting, a message appeared in my inbox. The moderator curtly informed me that my story was rejected for use of AI. No discussion of the prose, nothing about the story’s premise.
He wanted to lecture me. He said, in short, that AI use was unethical. That it “stole” from human writers, “plagiarized” their work, and “degraded creativity.” I could almost hear the anger behind the keyboard.
I replied once, carefully. I told him that I’d been writing and programming for decades, that influence and reuse were as old as the written word. I informed him that when I was twelve years old, I’d read Exodus by Leon Uris, a book that influenced me tremendously and whose imagery I used in discussions for the rest of my life. “If influence is theft,” I said, “then all art is stolen. I guess we agree to disagree.”
He didn’t answer.
The thread locked within the hour.
That silence said more than any argument could. It wasn’t just rejection; it was denial. A refusal to even entertain the idea that AI had value.
What struck me wasn’t the personal slight. I’d been through worse in technical fields. What struck me was the uniformity of fear. These were writers, people who lived on language and imagination, yet their first instinct was to guard the gate, not open it.
In that moment, the pattern became clear. When I wrote with AI, the writing wasn’t the problem; the method was. The idea that a tool could extend human ability threatened their idea of authorship. They weren’t protecting art; they were protecting their livelihood and showing their fear of things they had no desire to understand. It felt familiar. In the early days of computing, I’d seen the same reaction to automation in engineering circles. The people who built the old systems resented the new ones, not because they didn’t work, but because they made the old expertise obsolete. This was no different.
The moderator thought he was defending creativity, but what he was really defending was hierarchy. The traditional structure: the writer, the publisher, the critic, each had their assigned place. For them, AI didn’t fit into that system.
So they slammed the door.
That forum thread, a few lines of text on a digital board, told me everything I needed to know about where things stood. The argument wasn’t intellectual. It wasn’t moral. It was emotional. People weren’t afraid AI would fail. They were afraid it would succeed.
After that, I stopped trying to justify what I was doing. I didn’t need their permission, and I didn’t want it. The test had already delivered the result: truth didn’t need consensus to exist.
I copied the thread before it was deleted and saved it, not as a trophy, but as documentation. A snapshot of the moment when the world blinked at the future and looked away.
AI Detectors and False Positives
To extend the real-world experiments, I tried several automated AI detectors. I wanted to know, in simple terms, what they really saw. They are web tools that claim to tell whether text was written by a human or by a machine. They work by counting patterns: word length, rhythm, repetition, em dashes, and comparing the counts to samples in their databases. They don’t understand meaning; they calculate probability.
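The pattern-counting idea is simple enough to sketch in code. What follows is a toy illustration, not any real detector’s algorithm: the features (sentence-length variance, average word length, em-dash frequency) and the weights are invented for the example.

```python
def burstiness(text: str) -> float:
    """Variance of sentence lengths; a flat, even rhythm reads as 'machine-like'."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)


def ai_score(text: str) -> float:
    """Probability-like score in [0, 1]; higher means 'more machine-like'.

    The detector never reads meaning -- it only blends surface statistics.
    """
    words = text.split()
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    dash_rate = text.count("\u2014") / len(words)        # em dashes per word
    flat_rhythm = 1.0 / (1.0 + burstiness(text))         # low variance pushes toward 1
    # Arbitrary weighted blend, clamped to [0, 1]
    score = (0.5 * flat_rhythm
             + 0.3 * min(avg_word_len / 8.0, 1.0)
             + 0.2 * min(dash_rate * 50, 1.0))
    return min(max(score, 0.0), 1.0)
```

Notice that deliberately uneven, human-sounding prose lowers the score only because its sentence lengths vary; the tool has no notion of voice, intent, or quality, which is exactly why it can misfire in both directions.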
I ran my finished stories through several of them. The results were consistent: if one detector flagged a text as AI, they all did, and if an AI-assisted story slipped through as human, they all labeled it human. I kept going. One vendor offers two tools, a plagiarism checker and a separate AI detector. I moved to the plagiarism test, a great tool writers were using long before AI appeared. It compares your text against a vast body of material from across the internet to determine whether anything in it appears elsewhere. I ran my stories and it reported no matches at all: not one borrowed sentence, not even a phrase. This was new. I had never gotten a result with no matches; it’s the nature of the beast, just probability. I had checked hundreds of stories that I had written for websites, and that ‘no matches’ message was the equivalent of finding a needle in a haystack.
That meant the writing was clean and original, yet the same text was flagged as machine-generated. One tool cleared it; the other accused it. The text was verified as not stolen, but the AI detector still raised a red flag.
In engineering terms, the AI detector had issued a false positive, an alert triggered by an error in the test, not in the subject. It was the same pattern I’d seen in military software many times over the years: systems finding threats that weren’t there. The tools were confident, not correct. Now I knew. I had evidence that the measurement itself was flawed. The AI detectors weren’t judging creativity; they were counting patterns. And humorously, there were twenty or thirty detectors available for sale; if these literary experts and authors weren’t using AI, then why so many tools to check for it?
That was enough proof for me. From that point on, I stopped using them. They could label, but like pips on thrown dice, the result was random. They knew nothing of quality.
Working the Problem
After the detectors, forums, and lectures, I stopped needing permission. The problem wasn’t whether AI could write; it was how to present the work so it met existing rules.
For writers who use AI and are cursed with an extended sense of morality, the simplest solution is to rewrite the AI-partnered draft. The structure, tone, and flow are already there; just go through it paragraph by paragraph and rebuild it in your own words. The process moves quickly because the hard part is already complete. Once rewritten, check the text with a plagiarism tester. If it comes back clean, you’ve met every demand of the current literary environment. The writing is now entirely yours, free of any claim of automation or theft.
That single step satisfies the technical and moral expectations that still linger in the industry. You’ve used the tools, but the work itself is human. You rewrote it; you can submit it with a clean conscience.
For copyright, there’s no need for that extra effort. The human role, deciding, shaping, finishing, is what defines authorship. As long as the writer maintains creative control, the partnership already meets every standard that matters. Publishing platforms use their own definitions. Amazon, for example, requires a declaration if any of the writing or any image is AI-generated; AI-assisted work, where the human does the creating, needs no declaration.
AI is the assistant that breaks the blank-screen paralysis. It gives the draft form; the writer gives it concept, accuracy, and voice. Together they finish the work faster and more consistently than either could alone. By that point I understood the system completely. The challenge wasn’t creativity or law; it was perception. Once you manage that, the path forward opens.
The next step was to verify those conclusions against actual law and check existing copyright rules and what defined creative control. That became my focus.
Independence and Law
With the writing process solved, the next question was ownership. If I had worked with an AI partner, where did authorship legally begin and end? What do the lawyers say?
I started with the official sources. Actually, I didn’t; I opened a new chat. My AI buddy accessed U.S. Copyright Office statements, platform policies, and news releases. The language was consistent across all of them: copyright belongs to the human who exercises creative control. AI-generated text by itself can’t be registered, but when a person imagines, directs, edits, and arranges it, the work is considered human authorship.
That definition matched exactly how I worked. Every outline, every rewrite, every acceptance or rejection of a paragraph came from me. The model produced drafts; I made the creative decisions. I accepted, rejected, and I rewrote. That meant the ownership question was settled.
I refused to rewrite my work; it wasn’t efficient, so I needed another market. I chose the online publishing platforms. Amazon Kindle Direct Publishing and most self-publishing services had already adapted with new guidelines. They allowed AI-assisted content as long as the author disclosed involvement and confirmed that they held full rights. No special approval, no penalties, just honest declaration.
That was the confirmation I needed. The entire process, from idea, to draft, to human editing, fit within existing law. There was no gray area, only misunderstanding.
I wrote one note to summarize it for myself:
> “Creative control equals authorship.”
That line closed the loop from my defining anecdote to the final manuscript. I could now proceed to monetize my work. No longer a test, no longer an argument, simply writing, both verified and owned.