Since Parmy Olson won last year’s Financial Times and Schroders Business Book of the Year Award with Supremacy, about tech companies’ battle for control of artificial intelligence, she has started using large language models more frequently in her own research. “[They] can be a helpful tool for bouncing ideas around, [exploring] angles, and getting historic references to make comparisons,” she says.
As the 2025 edition of the prize launches, the debate over whether generative AI is a threat or an opportunity for authors is consuming the industry.
“We’re keenly aware these technologies can be used in ways that will dilute the market for human-authored works,” says Umair Kazi, director of advocacy and policy at The Authors Guild, the US professional organisation for writers. “But at the same time they are hugely useful tools.”
Gen AI’s fluent prose might put some writers out of a job. Evidence is also growing that some LLMs have been trained by developers — without authors’ consent — on pirated versions of copyrighted books.
Concern about illegal scraping has united authors against the practice. Mary Rasenberger, a former copyright and media lawyer who is now chief executive of The Authors Guild, says “we have never before had that level of agreement among our membership on any issue”.
The challenge of AI has also brought together publishers and agents. Esmond Harmsworth, president of the literary agency Aevitas, says: “Since the author and the publisher could easily be replaced [by AI] it’s been a more pleasant negotiation and one in which we join forces to try to come up with solutions to this.” Agents are now insisting on clauses in book contracts to control the future training of LLMs on authors’ work or, in some cases, license its use for a fee.
But AI is also an opportunity. The same engines offer automated assistance to authors in brainstorming and researching ideas, or editing and reviewing what they have written.
Olson says she still “can’t see any model being able to generate text that could replace my own writing”. She says the prose of Gen AI is “bland” and that “it will always be soul-destroying to not write in your own voice”.
Using LLMs for research, as Olson does, falls well within guidelines for responsible and effective use of AI, produced for authors last month by Wiley, which publishes academic works, textbooks and general business books.
Josh Jarrett, Wiley’s senior vice-president for AI growth, says the assumption was that writers were “going to use these tools anyway and we need to find the right place on that continuum”, which could start with widespread tools such as spellcheck and Grammarly and, unchecked, run to automated drafting of whole books.
The guidance, assembled after surveying 5,000 authors and researchers, states the technology should be used “as a companion to [the] writing process, not a replacement”. It lays out when authors should disclose AI use — for example, when the tool “altered [their] thinking on key arguments or conclusions”. Wiley allows for the use of AI to prepare “educational content”, such as case studies and practice questions, with oversight and disclosure. Its guide is a “living document”, says Jarrett, which will evolve as the technology develops.
There are signs AI is working its way deeper into the writing process, not least OpenAI co-founder Sam Altman’s announcement on social media platform X last month that an as-yet-unreleased model was “good at creative writing”.
Serious publishers and agents take a hard line against the use of AI to write whole books — but some are experimenting. Wiley tried to produce its manual Generative AI For Dummies using the technology. Jarrett says while it was useful for drafting chapter headings, it “didn’t actually save much time”. Wiley says generative AI could be used to develop new formats, such as a concise edition of a heavyweight textbook.
Executive coach Marshall Goldsmith has an AI avatar that responds to questions by drawing on his prior work, including his bestseller What Got You Here Won’t Get You There. When asked whether the technology was useful for coaching, MarshallGoldsmith.ai responded that the best outcome combines both human and machine: “It’s a both/and proposition, not either/or.”
James Levine, principal at agency Levine Greenberg Rostan, says the biggest emerging threat is in the spoken, rather than written word, as “several publishers are now experimenting with the use of AI to record audiobooks”. On the other hand, Harmsworth points out that the rapid recording of material that might otherwise never be made available in audio form could be a boon for visually impaired readers.
Kevin Anderson, chief executive of book-writing service Kevin Anderson & Associates, believes AI will hit ghostwriters at the lower to middle range of the sector. They are typically paid $25,000-$50,000 for 18 months’ work on a book that will raise a business leader or celebrity’s profile. Anderson points out AI could put together an adequate how-to book that is “generic, comprehensive, well written and well organised” in a weekend.
At the top end of the profession, where his agency recently sealed a ghostwriting deal worth nearly $500,000, Anderson says it is harder for machines to replace humans. “Generating content isn’t necessarily the part that humans do better than AI. It’s figuring out what the content should be and being that interviewer, using their human intuition to . . . figure out how to get [the story] out of the person the right way,” he says.
Even if some authors do not yet use AI, their agents and publishers almost certainly do. Springer Nature this month introduced a tool to fight AI-generated fake research and identify irrelevant references in its book and journal submissions. Levine uses specialist AI “personas” to help critique incoming book proposals on technical subjects, although he only does so with authors’ permission and using models that are not trained on the input.
The technology is advancing fast. “Already we are seeing a large decline in some of the side jobs that authors have done to supplement book income — including copywriting, business writing and some journalism; and now they see AI-generated books competing with their own and in some cases using their text or identities,” warns Rasenberger.
Harmsworth reckons the models have not yet caught up with talented writers. But “the big question that we have all been worried about is how long is that going to last”.
For more on the FT and Schroders Business Book of the Year Award 2025, visit www.ft.com/bookaward and https://businessbook.live.ft.com/