Gutenberg is to blame. His invention of printing ultimately led society to view public discourse, creativity, and news as “content”: a commodity that fills the products we call publications or, more recently, websites. Many journalists today believe that the value of their work lies primarily in the creation of content. You have to keep this attitude in mind if you want to understand the benefits and dangers of artificial intelligence (AI) for journalism.
Because AI produces machines (generative models, large language models like ChatGPT) that can create endless content: texts that sound exactly as if we had written them ourselves, because they were trained on all our words. The models have no understanding of the meaning of those words and no concept of truth. They are simply built to predict, with high probability, what the next word in a sentence should be.
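To make that concrete, here is a minimal sketch in Python, my own illustration rather than anything resembling how ChatGPT is actually built: a toy bigram model that counts which word follows which in a tiny training text and then “writes” by emitting statistically likely successors. Real language models use neural networks trained on vast corpora, but the underlying task is the same.

```python
import random
from collections import defaultdict, Counter

# A toy training text. Real models are trained on billions of words.
corpus = (
    "the court cited the case . "
    "the court dismissed the case . "
    "the lawyer cited the precedent ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Emit words by sampling likely successors. Nothing here checks
    whether the result is true, only whether it resembles the
    training text; that is why fluent output can still be false."""
    words = [start]
    for _ in range(length):
        counter = follows.get(words[-1])
        if not counter:
            break
        choices, weights = zip(*counter.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court cited the precedent . the lawyer ..."
```

The output reads fluently because it recombines sequences that occurred in the training text; whether “the court cited the precedent” is true of any actual court is a question the model cannot even represent.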
A New York lawyer named Steven Schwartz learned the hard way just how fallible ChatGPT is. In a now infamous case, he used the software to search for precedents. His lawsuit involved a runaway serving trolley on an airplane and his client’s allegedly injured knee. ChatGPT dutifully provided him with more than half a dozen case citations.
After the firm filed its brief in court, the opposing side said it could not find the cases. So Schwartz turned to ChatGPT again and asked the bot to show him the complete cases, which ChatGPT promptly did, complete with official-looking legal papers.
The judge called them “legal gibberish” and summoned Schwartz and his colleagues to court. I was one of the journalists present and witnessed the lawyers’ public humiliation. The bot had fabricated the supposed precedents entirely on its own.
“The world now knows the dangers of ChatGPT,” the lawyers’ own lawyer told the judge; the court, he argued, had thereby fulfilled the task of warning the public about these risks. The judge interrupted him: “I didn’t mean to do that.” The problem was not the technology but the lawyers who used it, who ignored warnings about the dubious precedents, who failed to verify them. The lawyers’ attorney said Schwartz “was playing with live ammunition. He wasn’t aware of it because the technology lied to him.”
But ChatGPT didn’t lie, because, as I said, it has no concept of truth. Nor did it “hallucinate,” as its inventors put it. It simply predicted word sequences that sounded right but weren’t.
The case serves as a cautionary tale for media companies that are rushing to let bots write their texts: because it seems cool and trendy, because it saves work and perhaps cuts jobs, and because they want to produce ever more content. In the US, some media companies have come under fire after their AI-generated content proved factually incorrect. America’s largest newspaper chain, Gannett, just stopped using an AI tool that generated embarrassing reports about sporting events; one football game was described, as if aliens had taken part, as a “close encounter of the sporting kind.”
I advise editors and publishers to avoid using language models for news writing and to rely on them only for proven applications, such as converting financial reports into simple news stories. Even then, the texts must be checked before they are published. Fact-free gibberish from the machine could destroy the authority and credibility of media and technology companies alike, and tarnish the reputation of artificial intelligence as a whole.
There are good applications for AI. I benefit from it every day, for example when I use Google Translate, Maps, Assistant and autocomplete. Programs like ChatGPT cannot replace the work of journalists, but they can complement it. I recently tested a new tool called NotebookLM, an app that can summarize a folder of a journalist’s research and structure its contents. AI software could also be put to good use in language teaching, where mastery of the language matters more than facts. I even believe it can help expand reading and writing skills. This could help people who find writing intimidating to communicate and tell their stories more effectively.
For writers like me, however, there is a catch. As journalists, we are storytellers: we have the power to tell other people’s stories and to decide which ones are told, who appears in them, how they begin, and how they end. We believe this enables us to explain the world in a meaningful way. Authors and journalists therefore see AI as competition. By generating prose that is believable at first glance in a matter of seconds, it devalues writing and deprives authors of their status.
In my opinion, this is one of the reasons for the hostile media coverage of new technologies. Journalists have once again worked themselves into a frenzy, the latest in a long series of moral panics over film, television, comics, song lyrics and video games. They warn of the dangers of the internet, social media, our phones and now artificial intelligence, claiming that these technologies will dumb us down, make us addicted, take away our jobs and destroy democracy through a flood of disinformation.
They should calm down. A 2020 study found that in the United States, no age group “engaged with fake news for more than a minute per day on average, and it made up no more than 0.2 percent of their total media consumption.” The problem for democracy is less disinformation than the willingness of some to believe lies that fuel their own fears and hatred. Journalism should report on the roots of fanaticism and extremism rather than blame new technologies.
In my book “The Gutenberg Parenthesis” I examine the beginning of the print age, which we are now gradually leaving behind as we move into the digital age. In the beginning, rumor was considered more trustworthy than print, because anyone could anonymously produce a book or pamphlet, just as anyone can build a website or write a tweet today. In 1470, just 15 years after the publication of the Gutenberg Bible, the Latin scholar Niccolò Perotti made the first known call for censorship of printed works. He was so appalled by the poor translation of an edition of Pliny that he wrote to the Pope asking him to appoint a proofreader to check all texts before printing. When I thought about it, I realized that Perotti’s demand wasn’t really about censorship. Rather, he foresaw the editorial offices and publishing houses that would ensure quality and authority in print for centuries.
Like Perotti in his day, the media and politicians are now demanding that something be done about harmful online content. Governments, like editors and publishers, can no longer cope with the volume of content and posts, which is why they are tasking the platforms with monitoring and censoring everything said online. But that is an impossible undertaking.
As we have seen, journalists must be careful when using AI, but there is also a risk of demonizing the technology. At best, the further development of AI could even present journalism with an opportunity: it could prompt journalists to rethink their role in society and ask themselves how they can improve public discourse. The internet offers them new ways to engage with communities, to build relationships of trust and authority, to listen to people’s needs, to discover voices that have gone unheard for too long and give them a platform, and to expand the work of journalism beyond publishing and into the internet.
The value of journalists lies not only in the content they create; it lies in reporting on and serving communities. Like Niccolò Perotti, we should anticipate the development of new internet services: services that help users manage the volume of content, verify its veracity, assess the authority of news sources, discover diverse voices, promote talent, and recommend content that deserves our time and attention. Perhaps this could be the basis for a new journalism in the age of AI.
Translation: Anuschka Tomat