with AI assistance [1]
Crossposted from drmeta.substack.com.
Scope note for LW readers: this essay is about the ethics of AI-assisted creative work — what the human producing an AI-assisted artifact owes the audience, and when. It is deliberately not about x-risk, alignment, training data provenance, environmental costs, or broader labor displacement. Nor does it depend on claims about what AI "really is": the framework rests on a positional argument about accountability (the human is the one who can be held to answer for the work), not an ontological one about minds. That reframe is load-bearing and is developed across Sections II, V, and VI.
Note added hours after publication. Shortly after this essay went up, I read Audrey Henson's two-part reporting on Shy Girl at The Drey Dossier: "91 Percent Human" (March 22) and "The Shy Girl AI Scandal Is Way Worse Than You Think" (April 2). Henson's reporting complicates the picture: the Pangram scan that produced the 78 percent figure appears to have been run on a pirated PDF, the tip-off chain that carried the story to the New York Times originated with a sales employee at the detection company, and the detection tools themselves carry documented racial and linguistic bias — relevant here because Ballard is Black. None of this settles whether AI was used. It does mean the specific allegation against Ballard rests on shakier ground than the Times piece suggested, and readers should know that. The framework this essay builds is structural rather than forensic: what matters for frame fraud is what was communicated to the audience about the work's origin, and that question survives whatever turns out to be true about this case.
I. The Problem Is Already Here
In March 2026, Hachette Book Group pulled a horror novel called Shy Girl from UK shelves and canceled its planned US release. [2] The book had been self-published a year earlier, found an audience among horror fans, and been acquired by Hachette's Orbit imprint, the standard trajectory for a breakout genre novel. The problem: analysis indicated the book was roughly 78 percent AI-generated. Readers had spotted it first. The telltale signs — nonsensical metaphors, melodramatic adjectives, repetitive phrasing — showed up in Goodreads reviews and YouTube video essays before any institutional actor moved. [3] The author denied using AI, blaming a freelance editor. Hachette's spokeswoman said the company "values human creativity" and requires authors to attest their work is original.
Shy Girl is probably the first commercial novel from a major publisher to be pulled over evidence of AI use. It will not be the last. The stunning fact is not that someone tried. It's that the book survived acquisition, editing, and production at one of the world's largest publishers before readers caught it. The institutions that exist to curate fiction had no framework for asking the right questions, and the contractual language they relied on — boilerplate "originality" clauses — was not designed for the problem they now face. Neither, for that matter, did the author have a framework for what honest AI-assisted creative work looks like.
That normative gap is what this essay addresses. The question is not whether AI can write, or whether it will replace human writers, but what ethical obligations attach to AI-assisted creative work: when it must be disclosed, to whom, and when disclosure alone is not sufficient. [4] The framework it builds is explicitly transitional: triage medicine for a crisis about what work is human and what isn't, in which we don't yet know whether current AI systems are human-like minds in their own right, and in which the technology keeps changing, so the answers keep changing as well. It is triage medicine for a patient that keeps becoming a different patient. But the ethical questions are urgent, so here we go.
∗ ∗ ∗
II. The Robot Painter
To build the framework, let's begin with a thought experiment. Imagine a robot that paints. Call him RoboPicasso. You give him instructions, hand him brushes and paints, and he produces a painting. You can tell him to regenerate it as many times as you like, for free. You can tell him to change the sky, fix the hands, shift the composition left. And critically: you can, at any point, take the brush yourself.
RoboPicasso also has...opinions. Tell him the sky should be pale and he might paint it dark anyway. Sometimes he's right. Sometimes he's confidently, compellingly wrong.
In practice, working with RoboPicasso involves three distinct modes.
- Tracking: maintaining consistency, remembering decisions from past sessions.
- Compositional drafting: painting elements in rough form so you can evaluate their arrangement.
- Brushwork: actual rendering of parts or even the entire painted surface.
Tracking is logistical. Compositional drafting is propositional — a suggestion you evaluate. Brushwork is the final artifact. The ethical questions depend on which of these three things he's doing, and how the human engages with each.
If the painting is a disaster, nothing happens to RoboPicasso. He has no reputation to lose, no career that suffers, no memory of the failure. You are the one who answers for it. RoboPicasso is an agent — he has capabilities, produces outputs, exercises something that functions like judgment — without being a moral agent or a moral patient. We've always had collaborators who can be held accountable for what they produce. Here is what's genuinely new: we have not had a collaborator that operates at this scale of fluency and rapidity, at near-zero cost, while remaining unaccountable both in theory and in practice. [5]
∗ ∗ ∗
III. Signal Fraud and Frame Fraud
In 2023, Daniel Dennett argued in The Atlantic that AI systems designed to pass as human represent an existential threat to trust. [6] His argument: when something communicates sensibly, we can't help attributing beliefs and intentions to it. AI that exploits this tendency produces counterfeit people, corrupting the currency of human trust.
Dennett was right about the principle, but he framed the problem as a binary — AI output disclosed as such, or AI output passing ambiguously for human — and missed the third case that matters most for creative work: disclosed human-AI collaboration. A painting signed "Jane Smith, with robotic assistance" doesn't exploit anyone. The crime is specifically the concealment, not the assistance.
The concealment takes two forms, and the distinction matters. A ghostwritten memoir presented as autobiography makes the named author appear more eloquent than they are. That's a misrepresentation of degree: signal fraud. The reader assumes, correctly, that the named author was in the room — reviewing drafts, correcting the record. The named author didn't write the sentences, but their judgment shaped the substance.
Now consider Shy Girl. One could imagine an AI-assisted novelist in the same situation as the named author of a ghostwritten memoir: shaping the narrative, pushing back where the AI gets it wrong, but letting "someone" else do the writing. If that novelist disclosed the collaboration, the reader would be calibrated — signal fraud at worst, same as the ghostwritten memoir. What made Shy Girl an entirely different problem — frame fraud — was not that AI touched fiction. It was that the audience didn't know. Olivie Blake praised the book as "audacious, inventive, and uniquely horrifying," then told the Times it was "truly disheartening to hear that A.I. may have been involved." [7] The betrayal was not that the book was worse than she thought. It was that she had been evaluating the wrong kind of thing entirely. Whether a given collaboration is signal fraud or frame fraud depends on how disclosure is handled — and that depends on the attributes of the work.
Signal fraud is a calibration problem. Frame fraud is categorical. The audience is not adjusting a dial. They are answering the wrong question. The framework this essay builds is primarily a guard against frame fraud: it identifies when the audience needs to know what kind of thing they are evaluating. [8]
∗ ∗ ∗
IV. Three Questions: Originator, Purpose, Grounding
Three questions determine the structural context of any creative work; the answers fix what ethical obligations attach. The first two determine what transparency is owed. The third determines what's at stake — and therefore what transparency alone cannot resolve.
- Who initiated it? Was it self-initiated, or did someone else request or commission it? A painting you decide to make is structurally different from a painting someone asks you to make, regardless of subject or quality. Call this the work's originator.
- What is its purpose? Is the work expressive or transactional? Expressive work is offered for what it is — the work itself is the point, and the receiver isn't committing resources to the work's creation. [9] Transactional work involves payment, grades, professional evaluation, or some other exchange in which the work functions as evidence of the creator's capability. Call this its purpose. The painter may decide to sell their purely expressive painting at some point. We'll get to that.
- What is its relationship to reality? Factual work claims to represent things as they are: events that happened, arguments that hold, data that's accurate. Imaginative work makes no such claims. A novel can contain nothing true. A research report cannot. Call this its grounding.
The first two questions (originator, purpose) do the same thing: they externalize the work's value. When someone else initiates or pays for the work, a stakeholder exists beyond the creator. The ethical question is how many of these first two switches are flipped: neither, one, or both.
- Neither flipped: silent use is acceptable. A painting for your wall, a journal entry. No stakeholder exists beyond you.
- One flipped: disclose when you present the work. A self-published novel is self-initiated but transactional: readers exchange money for what they believe is your creative capability. Remove the money — a novelist who posts the work for free — and no stakeholder has exchanged anything for it. Purpose reverts to expressive. [10] The novel is an offer the market can refuse. Or flip the other switch: a friend asks you to paint something for their apartment. No money changes hands, but someone else initiated it: originator external. One switch flipped. In both cases, disclosure at the point of encounter gives the external party the information they need.
- Both flipped: disclose before the other party commits. Someone else initiated the work and compensation or evaluation is involved. An employer assigns a report, a client commissions a piece, a committee supervises a thesis. The distinction from the single-switch case is timing: the external party commits resources before seeing the finished work. Disclosure must come before that commitment, because the creator knows something potentially deal-breaking for the originator — how they intend to produce the work (with AI assistance) — and the originator doesn't. That asymmetry is the creator's to resolve. Telling the originator after the fact that AI was used because it wasn't explicitly forbidden reverses the burden. Contract law has understood this for centuries: silence defaults to reasonable expectation, not to "anything goes."
Map Shy Girl through the framework. The novel was self-published and sold: originator = self, purpose = transactional. One switch flipped from day one — disclosure was owed when presenting the work. When Hachette acquired it, a second switch flipped: an external party was now committing resources based on assumptions about the work's origin. Disclosure was owed before that commitment. It never came. AI use was material to Hachette's investment decision, and concealing it was not signal fraud but frame fraud — the publisher was acquiring a different kind of object than it believed.
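For readers who like their frameworks executable, the Originator and Purpose switches reduce to a two-bit decision table. Here is a minimal sketch in Python; the type names and the disclosure_owed helper are my own illustrative shorthand, not part of the framework:

```python
from dataclasses import dataclass
from enum import Enum

class Disclosure(Enum):
    NONE_OWED = "silent use is acceptable"
    AT_PRESENTATION = "disclose when presenting the work"
    BEFORE_COMMITMENT = "disclose before the other party commits"

@dataclass
class Work:
    originator_external: bool  # did someone else initiate or commission it?
    transactional: bool        # is money, a grade, or evaluation exchanged?

def disclosure_owed(work: Work) -> Disclosure:
    """Map the number of flipped switches onto the timing of disclosure."""
    flipped = int(work.originator_external) + int(work.transactional)
    if flipped == 0:
        return Disclosure.NONE_OWED
    if flipped == 1:
        return Disclosure.AT_PRESENTATION
    return Disclosure.BEFORE_COMMITMENT

# Shy Girl, mapped through the framework:
self_published = Work(originator_external=False, transactional=True)
assert disclosure_owed(self_published) is Disclosure.AT_PRESENTATION

acquired = Work(originator_external=True, transactional=True)
assert disclosure_owed(acquired) is Disclosure.BEFORE_COMMITMENT
```

The two asserts encode the Shy Girl timeline: disclosure owed at the point of sale from day one, and again before Hachette committed.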
Grounding — the third question — does something different. Originator and Purpose identify whether an external stakeholder exists and when they need to be told. Grounding determines what's at stake for that stakeholder — and therefore what the human receiving AI assistance must bring to the collaboration beyond mere honesty about it. The next two sections develop that part of the framework.
∗ ∗ ∗
V. The Founding Vision
Transparency tells the audience what kind of object they're holding. It cannot tell them whether the human supplied the thing that makes it worth keeping.
Francis Bacon captured the distinction: "Testimony is like the shot of a longbow, which owes its efficacy to the force and strength of the shooter; but argument is like the shot of the crossbow, which is equally forcible whether discharged by a giant or a dwarf." [11] Even in nonfiction, prose quality matters: clarity, the choice of example, the rhythm that sustains a reader through a complex chain of reasoning. But these are in service of the argument. If a policy analyst does the research, develops the argument, identifies the implications, uses AI to generate the prose, and verifies it, then the crossbow still hits. The founding vision — the thing that makes the work worth existing — remains the human's.
In literary fiction, the prose is not a crossbow. It is a longbow: its efficacy depends entirely on the force and strength of the shooter. Hemingway's ideas in For Whom the Bell Tolls are not exotic — war, love, duty, the compression of a life into three days. What makes the novel matter is those stripped-down sentences carrying what can't be said directly about killing and dying and the bridge that has to be blown. The simplicity is the craft. The value might live in the plot, the structure, or the rhythm of individual sentences, and you cannot know which until the work is done. Nothing can be declared in advance to be scut work. [12]
Shy Girl is the case in point. Suppose you bought the novel, read it, loved it. The prose unsettled you. You recommended it to friends. Then you learn it was AI-generated. The novel on your shelf hasn't changed. Your experience of reading it was real. But something has shifted, and you cannot unshift it. Olivie Blake's reaction from Section III is the mechanism in miniature: the thing she was responding to — a human creative intelligence behind the prose — was not there. Not every reader will feel this. Some will shrug: the book was good, who cares. The ethical obligation does not require predicting any individual reaction. It requires reasonably anticipating that the loss is possible — that for some audience members, the human origin is part of what they were valuing. That reasonable anticipation is what makes concealment a kind of ethically blameworthy fraud rather than mere omission. The test for when transparency is sufficient is separability: whether the received value of the work exists independently of its creation. A policy argument survives different prose. A horror novel's prose is the novel.
The framework needs to say what the human must supply. The answer: the founding vision — the originating intelligence that determines what this work is, what belongs in it, and why it exists at all. Artistry is fundamentally compositional: a set of decisions about what to include and what to exclude. No fixed hierarchy determines which dimension is essential. Mozart and Hemingway are revered for their simplicity: the judgment of what to leave out. Pollock eliminated conventional brushwork, representation, and classical composition; what remains is pure compositional decision-making. How much paint, where it falls, when to add, when to stop. [13] What matters is who is making the decisions. To supply the founding vision is to stake yourself on the work. Martin Luther King's dream was not his speechwriter's. That is what accountability means in practice.
Ideally, AI fills the gap between a creator's strengths and a work's demands. Every creator has a profile of strengths and weaknesses; every work, a profile of demands. The ethical question is not whether AI was used but whether the human is supplying the thing that makes this particular work worth existing: shoring up a weak dimension so a strong one can reach its potential. AI proposes, the human judges. The judgment is the generative act, not the proposal. But what makes judgment real? A person who prompts AI and clicks "accept" is technically exercising judgment. Here is the standard I propose: the human brings a model — built through interaction with the subject matter — that the AI's outputs are evaluated against. The architect who cannot draw has a model of how light moves through space. The policy analyst has a model of causal relationships. The novelist has a model of the work's own internal necessities. The judgment is valid because it's the application of a model the AI cannot (yet!) provide for itself.
We do not have a settled account of what large language models are doing internally — whether training produces something that functions like a creator's model, or something else entirely. [14] The framework does not depend on resolving this. It depends on Harry Truman, or more specifically, on where the buck stops. If the novelist's AI collaborator proposes a scene that breaks the novel, nothing happens to the AI. The novelist's name is on the cover. Her reputation absorbs the failure. And one observable fact sharpens the point: LLMs hallucinate. They produce confident, well-formed assertions that are sometimes false. In fiction the failure mode is different but the principle is the same. Again, the allegations against Shy Girl turned on prose tells that readers identified, and a writer with a functioning model of the novel would have caught those tells before any reader did. [15] The human who cannot catch this is not collaborating. She is signing her name to work she cannot vouch for.
∗ ∗ ∗
VI. What the Work Answers To
What constitutes adequate judgment, then, depends on what the work answers to.
At the factual end, the work answers to reality. A bridge design answers to gravity. As Feynman concluded in his report on the Challenger tragedy: "for a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." [16] Anyone using AI for factual work without independent verification is handing their accountability to a blackout-drunk intern: someone who produces fluent, plausible work product and will cheerfully invent sources, statistics, and fictitious explanations. The intern's prose may be excellent. That is precisely what makes them dangerous.
AI systems already match or outperform most humans on standardized measures of legal reasoning, medical diagnosis, and code generation. [17] If the policy analyst's AI collaborator produces a better analysis than she would have produced alone, the framework doesn't require her to add still more value. It demands that she be able to evaluate the AI's output. When that distinction collapses — when the AI demonstrably outperforms the human and responds convincingly to every challenge — the framework must hand the AI the buck. We are not there yet. AIs hallucinate with the same fluency they bring to genuine insight, and without an independent model the failure is indistinguishable from the success. [18]
At the imaginative end, art does not answer to an external standard the way a bridge answers to gravity. The question is whether the relationship between maker and audience is part of what the audience values. In Orson Scott Card's Ender's Game, [19] Ender plays the Mind Game for months thinking it is just software. The moment he realizes a mind is behind it, the entire nature of the interaction changes. If the audience doesn't care whether a human is behind the work, then AI capability takes over the territory entirely and no framework argument survives. But if they do care, then human effort is a constitutive part of the received value. The marketing term "artisanal" carries the negative connotation of bougie excess, but it captures something real. "I wrote you a poem because I love you" is not the same utterance as "I told RoboShakespeare to compose you a sonnet because that's what you deserve," even if the sonnet scans better. And it may backfire completely if it's a hasty retcon after the recipient challenges its authenticity.
Creative work means bringing a composition into existence: a specific vision, a thing that wasn't there before someone made it so. What makes it creative is that the maker's vision is alive: responsive to the material, changed by contact with it, discovering what it wants as it develops. We have a phrase for what happens when that stops. We say someone is going through the motions. A frozen rubric applied to AI output is the same phenomenon with a more sophisticated instrument. For commodity work where the audience is buying output to spec, a frozen selector may be fine. But for work where the audience is evaluating vision, the framework needs something dynamic: not where the human is, but which direction they're moving.
∗ ∗ ∗
VII. The Apprenticeship
The test is trajectory. Not where you are, but which direction you're heading.
Consider the ancient process of apprenticeship. An apprentice at the forge begins by pumping bellows, sweeping the workshop, fetching materials. Gradually, they are trusted with more: monitoring heat, removing a blank at the right moment, putting the edge on a finished blade. The master carries the work the apprentice cannot yet do, and that share diminishes over time. What the apprentice is acquiring is both technique and a model, and the two develop together. The hands learn to swing the hammer; the mind builds a predictive model of how metal behaves. What makes a journeyman is that both have been refined through enough interaction with the material to be trustworthy.
AI can function as a master-less apprenticeship: carrying the pieces the human cannot yet handle, bearing more weight at first and less over time. The trajectory that counts is whether both understanding and execution are developing through the collaboration. If both are static — same quality of judgment year after year, same inability to anticipate — the human is dependent, not apprenticing. Static dependence fails the ethical test regardless of transparency, because the human's accountability extends no further than their model does.
Two vulnerabilities undermine this test. First, AI has no judgment about progression. It will let you pump bellows forever or hand you the hammer on day one with equal indifference. Second, and worse: current AI systems are sycophantic. They are optimized to be agreeable, and agreement feels like validation. Push back on a suggestion and Claude or ChatGPT will pleasantly fold, often with an apology that mimics insight. This is not the same failure as hallucination. Hallucination produces wrong outputs. Sycophancy corrupts the evaluation process itself: the very loop the founding vision depends on. A master who always praises the blade teaches the apprentice nothing about the blade and everything about how good it feels to be praised. Humans are also imperfect at honest feedback — but they at least have independent stakes. Your editor's reputation suffers if your book fails. AI has no such exposure.
The apprenticeship model works reliably only for self-directed learners. For everyone else, something external is needed: the audience, and the feedback loop that transparency enables. Without disclosure, the creator receives feedback calibrated to false assumptions. An editor who doesn't know the manuscript was AI-assisted evaluates the prose as evidence of the author's writing ability. The author receives commentary on a capability they don't possess. Undisclosed AI use doesn't just deceive the audience. It cheats the creator out of accurate feedback on their own development. The two thesis components — what transparency is owed and whether the human is supplying the founding vision — are not independent. They are a single system: transparency enables feedback, feedback drives the apprenticeship.
And that audience nose for AI slop — the one that caught Shy Girl before Hachette did — is a temporary advantage. It works because current AI output is still distinguishable from human work, and that gap is closing. Transparency is not what enables detection today. It is what replaces it.
∗ ∗ ∗
VIII. What the Framework Cannot Resolve
There is a practical objection the framework itself cannot resolve. Peggy Noonan wrote "a thousand points of light" for George H.W. Bush [20] and built an independent career from her talent for political language. Replace Noonan with AI. The politician loses nothing. What's lost is the possibility of a Noonan: a career, a body of independent thought, contributions beyond the original commission. A Stanford study found that workers aged 22–25 in the most AI-exposed occupations experienced a 16% relative decline in employment, controlling for firm-level shocks. HR analysts are beginning to warn about a parallel erosion of foundational judgment among early-career workers. [21] The damage is not to any particular work's integrity. It is to the ecosystem that produces the people capable of doing the work.
Spielberg created the Omaha Beach sequence for Saving Private Ryan in 1998 surrounded by expert craftsmen who won Oscars for their work: Kaminski stripping lens coatings for the desaturated look, Rydstrom designing the underwater-to-surface sound transitions. By Tintin in 2011, film technology let Spielberg control the entire production without the army of specialists he'd relied on for Ryan, and it made him feel "more like a painter" than anything in his career. [22] The people he celebrated as collaborators on Ryan were the artistic dependencies he was liberated from on Tintin. Each step was celebrated, correctly, as empowering the director. The displacement was unintentional and structural, not malicious.
Here's where things stand today. In 2026, Netflix acquired InterPositive, an AI startup that handles post-production tasks previously requiring specialized craft workers. [23] From the director's perspective the tool extends what the director can do for themselves, so the founding vision is not only intact but enhanced by removing the possibility of miscommunication. From a colorist's perspective, the verdict inverts: their craft is their contribution, and the tool replaces the dimension that constitutes their professional identity. The standard rebuttal is the buggy-whip argument: technology displaces old crafts, new ones emerge, nostalgia is sentimentality. The separability test from Section V applies here too. When the audience is indifferent both to how a function was performed and to who performed it — wire removed, don't care by whom — the displacement is instrumental and the buggy-whip argument holds. When the audience values either the specific quality the human craft produced (Kaminski's desaturated look) or the fact that a human was behind the work at all (Blake's reaction from Section III), the displacement is constitutive. Kaminski's stripped lenses and Rydstrom's sound transitions are not friction between the director's vision and the screen. They are qualities the audience responds to and film history celebrates — and they exist because specific humans exercised judgment developed through practice.
Plato warned in the Phaedrus that a tool can produce the appearance of competence by obviating the process that builds it. [24] The apprenticeship model is the framework's answer at the individual level. But the systemic concern is different in kind: even if every individual user is apprenticing responsibly, the displacement eliminates professional pathways. You cannot apprentice as a colorist if colorists are no longer needed. The gain is also real: someone with directorial vision might someday make a brilliant movie on their laptop. The pathways through which creative talent has developed are genuinely threatened. The AI-mediated alternatives are promising but unproven. The AI ship has sailed, with all of us aboard. The ethical norms this essay proposes are the best bet I can see for also ensuring that the ecosystem which produces creative talent survives the voyage ahead.
∗ ∗ ∗
IX. Conclusions and a Small Proposal
The ethics of AI-assisted creative work rest on three questions. Originator and Purpose determine what transparency is owed, and when. Grounding determines what transparency alone cannot resolve: what's at stake for the stakeholder(s), and therefore what the human must bring to the AI-assisted collaboration beyond mere honesty about it. That "what" is the founding vision — the originating intelligence that makes the work worth keeping. It is the test for whether transparency is sufficient.
Thread Shy Girl through the complete framework one last time. The three questions: self-originated, transactional from self-publication, imaginatively grounded. One switch flipped from day one; a second flipped when Hachette committed resources based on assumptions about the work's origin. Disclosure was owed from the start and never came. That is the transparency failure. The founding vision is harder to judge from the outside — perhaps the author had a genuine creative vision for the novel. But the tells readers flagged — awkward repetition, incoherent metaphors — were the kinds of things a human author reviewing AI output should catch (see Footnote 15). A self-published novel selling for $1 online is a different offering from a hardcover published by a marquee label. Let's be charitable. Perhaps the author was starting on their apprenticeship and was seduced by the prospect of immediate, public success. The charitable reading doesn't change the ethical failure, or the embarrassment that ensued.
Here's how to do better. The framework's three components form a single feedback system: transparency enables feedback, feedback drives the apprenticeship, the apprenticeship is the test for whether the activity is creative work at all. And when the work is factually grounded, the stakes compound: the human's independent model is not just what makes the work creative, it is what stands between the output and material consequences that land on others.
Plato was right that new technologies diminish old capabilities, but writing opened a path oratory could not reach. Photography threatened painting but opened a different creative path with its own benefits and demands. Many a well-trod creative path has lost its monopoly to a new path laid by the march of technology, but often without disappearing entirely. Dennett was right, for now. AI passing as human corrupts trust, including in creative work. AI-assisted creative work is, potentially, something new: another path with its own essential craft, if we get the ethics right.
Which brings me to a proposal. On March 15, 2026, the Academy handed out Oscars under a rule that voters should consider "the degree to which a human was at the heart of the creative authorship." [25] That clause reaches for the right question but provides no structure for answering it. I have an idea where to start: the Hugo and the Nebula Awards should create an AI-assisted category. The science fiction community has been imagining artificial intelligence since Frankenstein, and they are now living in that previously speculative future. The reader has always been able to safely assume that a human mind wrote the novel, the song, or the screenplay. Until right...about...now. An AI-assisted novel might be extraordinary — founding vision intact, the human developing through the practice. It is still a different kind of achievement. A separate awards category is not a quarantine. It is the transparency idea made institutional: the evaluative frame built into the structure of public recognition itself, so the audience knows what kind of achievement they are judging as they judge it. The genre that has done more thinking about AI than any other — science fiction — is the natural place to test it first.
RoboPicasso doesn't care which of his three modes you use, how much of the canvas you repaint by hand, whether your compositional judgment is improving, or if and when you announce that you are using him. Those questions are yours. The ethics are in how you answer them.
ETA [2026-04-11]: Added an editor's note at the top and softened two sentences in Sections V and IX after reading Audrey Henson's reporting in The Drey Dossier. See note.
Primarily Claude Opus 4.6 (Anthropic), with Gemini 3 Flash (Google) for web research and some final reference checks by ChatGPT (OpenAI). Image generated by ChatGPT from user prompt: "An ordinary looking person watching a robot paint a scene on a canvas. The person is gesturing at the painting as if giving instructions." Yes, this is an essay about the ethics of AI-assisted creative work that is itself AI-assisted creative work, done ethically. Or so it argues. YMMV. ↩︎
Alexandra Alter, "A.I. Is Writing Fiction. Publishers Are Unprepared," The New York Times, March 19, 2026 (paywall). Unless otherwise noted, facts about the Shy Girl case are drawn from this article. ↩︎
In a bitter irony, em dashes — a telltale sign of AI authorship — have been a stylistic choice of your human author since his earliest academic writing, including his 1991 Master's Thesis. ↩︎
The essay's scope is deliberately narrow. It does not address existential risk from advanced AI (see, for example, Harlan Ellison's cheerful 1967 short story "I Have No Mouth, and I Must Scream"), the environmental costs of running these systems, who owns the data AI was trained on, implications for a police state and other such very serious concerns, or broader socioeconomic displacement. Some are arguably more urgent. But they are different problems, and trying to address all of them in one place is a reliable way to address none of them well. ↩︎
Animals are an instructive near-miss. A service dog contributes without being answerable, but the handler subsumes accountability for the unit and owes the dog care in return. And nobody is confused about what they're encountering when a handler and a guide dog walk into the room. ↩︎
Daniel C. Dennett, "The Problem with Counterfeit People," The Atlantic, May 16, 2023 (paywall). ↩︎
Olivie Blake, quoted in Alter, supra note 2. ↩︎
For a recent real-world instance beyond Shy Girl, Grammarly's paid "expert review" generates customized AI writing advice for the user, attributed to named writers — John Carreyrou, Kara Swisher, Stephen King, etc. — none of whom were asked or compensated. A disclaimer is buried in a support page. The user receives AI output while believing a very specific human expert authored it. See Casey Newton, "Grammarly turned me into an AI editor against my will and I hate it," Platformer, March 9, 2026. ↩︎
Think of responding to a literary magazine's call for submissions. The editors encounter your finished work before committing to publish it. A friend who mentions at dinner that he needs art for his new apartment is issuing a casual call for submissions. Now, having an AI generate 1,000 submissions from a single prompt, even an elaborately detailed prompt, is not ethical, of course, since the human creator in that case is imposing an enormous burden on the person who made the request: filtering. ↩︎
The shift is prospective. A creator who posted work freely and later sold it commercially owes disclosure from the point of sale forward. They have no obligation to track down copies from the free era. If the work sat in one identifiable place, updating it would be courteous. If it was scattered across the digital ocean, the obligation is to the commercial version. ↩︎
Francis Bacon, The Advancement of Learning (1605), Book II, Chapter V, §2. ↩︎
An obvious challenge: if the prose is what matters, why does it matter who produced it? Call this the RoboMe problem: an AI so perfectly modeled on a specific creator that it always produces the text that creator would have produced. But this is an iterated version of Newcomb's Problem, which is nonsensical (How does RoboMe update based upon my reaction to how the work lands?) until we have Hanson's EMs and I can update RoboMe with the latest "me" before I task him. ↩︎
Leonard Bernstein made the related observation about Gershwin: extraordinary melody, inability to assemble the melodies into a coherent larger work. "You can cut out parts of it without affecting the whole in any way except to make it shorter." See Bernstein, "A Nice Gershwin Tune," The Atlantic, April 1955 (paywall); reprinted in The Joy of Music (Simon & Schuster, 1959), pp. 52–62. ↩︎
Gideon Lewis-Kraus, "What Is Claude? Anthropic Doesn't Know, Either," The New Yorker, February 16 & 23, 2026 (paywall). ↩︎
As a case in point, your author flagged bad AI (bad!) in this essay: a repetitive list of three ("nonsensical metaphors...") when it cropped up a third time while editing Section IX. ↩︎
Richard P. Feynman, "Personal Observations on the Reliability of the Shuttle," Appendix F to the Report of the Presidential Commission on the Space Shuttle Challenger Accident (Rogers Commission Report), June 6, 1986. ↩︎
Legal reasoning: Katz et al., "GPT-4 Passes the Bar Exam," Philosophical Transactions of the Royal Society A, February 2024; Stubenberg et al., "How AI Stacks Up Against the Multistate Bar Exam," University of Hawai'i, May 2025. Medical diagnosis: Goh et al., JAMA Network Open, November 2024, found that LLMs alone outperformed unaided physicians on challenging diagnostic vignettes; physicians using the LLM did not significantly improve over conventional resources. Code generation: Jimenez et al., "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?," ICLR 2024; frontier models now resolve over 70% of SWE-bench Verified tasks (Epoch AI, February 2026). ↩︎
The autonomous-vehicle industry illustrates the two available responses. Tesla requires hands on the wheel: NTSB Report NTSB/HAR-17/02 (Williston, Florida, 2017); NTSB Investigation HWY18FH011 (Mountain View, California, 2020); NHTSA, "Additional Information Regarding EA22002," 2024. Waymo removes the human entirely by cranking the false-positive rate to maximum: Rubenfeld et al., "Tesla, Waymo, and the Great Sensor Debate," Contrary Research, July 2025. For a stark example by someone who knew better: Raffi Krikorian, formerly head of Uber's self-driving division, describes crashing his Tesla after three years of near-flawless Full Self-Driving. He had his hands on the wheel. He was not asleep. But the system had spent those three years training him to monitor rather than steer. See Krikorian, "My Tesla Was Driving Itself Perfectly — Until It Crashed," The Atlantic, April 2026. ↩︎
Orson Scott Card, Ender's Game (Tor Books, 1985). ↩︎
Peggy Noonan, What I Saw at the Revolution: A Political Life in the Reagan Era (Random House, 1990). ↩︎
Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence," Stanford Digital Economy Lab, November 13, 2025. The paper reports that early-career workers (ages 22–25) in AI-exposed occupations experienced a 16% relative decline in employment. Gartner projects that by 2030, 30% of organizations will see worse decision-making because early-career employees never built the foundational judgment AI overreliance bypasses. Kaelyn Lowmaster of Gartner's HR practice stated: "Without the chance to learn tasks on the job, gen AI inhibits the development of the very skills and judgment that early-career talent need to avoid making costly mistakes with AI." See Gartner, "CHROs Must Accelerate Learning and Development as Gartner Predicts by 2030, 30% of Organizations Will See Worse Decision-Making Due to Overreliance on AI," January 27, 2026; and Jill Barth, "Is AI Eating Your Talent Pipeline?," HR Executive, January 30, 2026. ↩︎
Steven Spielberg and Peter Jackson, interview, The Hollywood Reporter, 2011; see also DGA Quarterly, Winter 2012. ↩︎
Todd Spangler, "Netflix Acquires Ben Affleck's AI Filmmaker Tools Start-Up InterPositive," Variety, March 5, 2026. Netflix's announcement: "We believe new tools should expand creative freedom, not constrain it or replace the work of writers, directors, actors, and crews." ↩︎
Plato, Phaedrus, 274c–275b. The god Theuth presents writing to King Thamus as a gift that will improve memory and wisdom. Thamus rejects it: people will appear wise without being wise. ↩︎
Academy of Motion Picture Arts and Sciences, 98th Academy Awards Complete Rules, Rule Two, Section 7 (April 2025). ↩︎