ChatGPT is changing the way we write. Here’s how – and why it’s a problem

The Conversation explores how tools like ChatGPT are subtly reshaping the way we write. While these tools can save time and improve clarity, they can also smooth out our quirks and dilute our voice. As the author wisely notes, “writing should be about expressing your ideas in your own way.”

Thanks for reading Capturing Voices! Subscribe for free to receive new posts and support my work.

Have you noticed certain words and phrases popping up everywhere lately?

Phrases such as “delve into” and “navigate the landscape” seem to feature in everything from social media posts to news articles and academic publications. They may sound fancy, but their overuse can make a text feel monotonous and repetitive.

This trend may be linked to the increasing use of generative artificial intelligence (AI) tools such as ChatGPT and other large language models (LLMs). These tools are designed to make writing easier by offering suggestions based on patterns in the text they were trained on.

However, these patterns can lead to the overuse of certain stylistic words and phrases, resulting in works that don’t closely resemble genuine human writing.

The rise of stylistic language

Generative AI tools are trained on vast amounts of text from various sources. As such, they tend to favour the most common words and phrases in their outputs.

Since ChatGPT’s release, the use of words such as “delves”, “showcasing”, “underscores”, “pivotal”, “realm” and “meticulous” has surged in academic writing.

And although most of the research has looked specifically at academic writing, the stylistic language trend has appeared in various other forms of writing, including student essays and school applications. As one editor told Forbes, “tapestry” is a particularly common offending term in cases where AI was used to write a draft:

I no longer believe there’s a way to innocently use the word ‘tapestry’ in an essay; if the word ‘tapestry’ appears, it was generated by ChatGPT.

Examples of overused stylistic words and their simplified alternatives, from a ChatGPT query made on September 11. ChatGPT/screenshot

Why it’s a problem

When certain words and phrases are overused, writing loses its personal touch. It becomes harder to distinguish between individual voices and perspectives, and everything takes on a robotic undertone.

Also, words such as “revolutionise” or “intriguing” – while they might seem like they’re giving you a more polished product – can actually make writing harder to understand.

Stylish and/or flowery language doesn’t communicate ideas as effectively as clear and straightforward language. Beyond this, one study found simple and precise words not only enhance comprehension, but also make the writer appear more intelligent.

Lastly, the overuse of stylistic words can make writing boring. Writing should be engaging and varied; relying on a few buzzwords will lead to readers tuning out.

There’s currently no research that can give us an exact list of the most common stylistic words used by ChatGPT; this would require an exhaustive analysis of every output ever generated. That said, here’s what ChatGPT itself presented when asked the question.

The top 50 stylistic words commonly used in AI outputs, according to ChatGPT. ChatGPT/screenshot

Possible solutions

So how can we fix this? Here are some ideas:

1. Be aware of repetition

If you’re using a tool such as ChatGPT, pay attention to how often certain words or phrases come up. If you notice the same terms appearing again and again, try switching them out for simpler and/or more original language. Instead of saying “delve into”, you could just say “explore” or “look at it closely”.
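
If you’d rather check a draft systematically than by eye, a few lines of Python can flag the repeat offenders. This is a minimal sketch of my own, not something from the article; the watch list and the file name draft.txt are illustrative assumptions.

import re
from collections import Counter

# Illustrative watch list, based on the words and phrases mentioned above.
WATCH_PHRASES = [
    "delve into", "navigate the landscape", "tapestry", "showcasing",
    "underscores", "pivotal", "realm", "meticulous",
]

def count_buzzwords(text: str) -> Counter:
    """Count how often each watch phrase appears, case-insensitively."""
    lowered = text.lower()
    return Counter({
        phrase: len(re.findall(re.escape(phrase), lowered))
        for phrase in WATCH_PHRASES
    })

if __name__ == "__main__":
    with open("draft.txt", encoding="utf-8") as f:  # placeholder file name
        counts = count_buzzwords(f.read())
    for phrase, n in counts.most_common():
        if n:
            print(f"{phrase}: {n}")

Anything that turns up more than once or twice is a candidate for a plainer substitute.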

2. Ask for clear language

Much of what you get out of ChatGPT will come down to the specific prompt you give it. If you don’t want complex language, try asking it to “write clearly, without using complex words”.
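
If you’re calling the model through code rather than the chat interface, the same instruction can go in the system message. This is a minimal sketch, assuming the OpenAI Python SDK; the model name is just an example.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever you have access to
    messages=[
        {"role": "system",
         "content": "Write clearly, without using complex words."},
        {"role": "user",
         "content": "Summarise my meeting notes in two short paragraphs."},
    ],
)
print(response.choices[0].message.content)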

3. Edit your work

ChatGPT can be a helpful starting point for writing many different types of text, but editing its outputs remains important. By reviewing and changing certain words and phrases, you can still add your own voice to the output.

Being creative with synonyms is one way to do this. You could use a thesaurus, or think more carefully about what you’re trying to communicate in your text – and how you might do this in a new way.

4. Customise AI settings

Many AI tools such as ChatGPT, Microsoft Copilot and Claude allow you to adjust the writing style through settings or tailored prompts. For example, you can prioritise clarity and simplicity, or create an exclusion list to avoid certain words.

The custom instruction settings in ChatGPT can be useful in tailoring outputs to meet your needs. ChatGPT/screenshot
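
The same idea works programmatically: keep your exclusion list in one place, fold it into the instruction you send, and check the output against it afterwards. A minimal sketch of my own; the banned words are examples, not a definitive list.

# Example exclusion list; adjust to the words you want to avoid.
BANNED_WORDS = ["delve", "tapestry", "realm", "pivotal", "underscores"]

def build_style_instruction(banned: list[str]) -> str:
    """Compose a custom instruction asking for plain prose without the listed words."""
    return (
        "Write in plain, concrete language. "
        "Do not use any of these words: " + ", ".join(banned) + "."
    )

def leaked_banned_words(text: str, banned: list[str]) -> list[str]:
    """Return any banned words that still made it into the output."""
    lowered = text.lower()
    return [w for w in banned if w.lower() in lowered]

if __name__ == "__main__":
    print(build_style_instruction(BANNED_WORDS))
    print(leaked_banned_words("A pivotal moment in the realm of AI.", BANNED_WORDS))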

By being more mindful of how we use generative AI and making an effort to write with clarity and originality, we can avoid falling into the AI style trap.

In the end, writing should be about expressing your ideas in your own way. While ChatGPT can help, it’s up to each of us to make sure we’re saying what we really want to – and not what an AI tool tells us to.

“1945” Episode 5: The Surrender of Japan

1945
Episode 5: Japan Surrenders

Listen Now

In this episode, hosts Kirk Saduski and Donald Miller speak with historians Richard Frank, Doris Kearns Goodwin, and John McManus about the August 1945 atomic bombings of Hiroshima and Nagasaki and the subsequent surrender of Japan. Plus, Academy Award nominee Patricia Clarkson reads an excerpt from Hiroshima by John Hersey.

Topics Covered in This Episode:

  • Atomic bombings of Hiroshima and Nagasaki

  • Japan surrenders

  • V-J Day

  • Experience of WWII veterans returning home

New episodes of 1945 are available each Thursday on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts.

Ever wonder why white people are called Caucasians? The Unseen Truth: When Race Changed Sight in America.

Dr. Sarah Lewis has compiled a story—a history—of how the name came to describe white people and continues to be used to this day.

Brené Brown mentions this book in her podcast: “In this episode, Dr. Sarah Lewis joins me again to talk about her new book, The Unseen Truth: When Race Changed Sight in America. With examples from her historical research, she walks me through the power of visual culture in generating equity and justice. We talk about how what we see and what’s left unseen shapes everything we believe about ourselves and other people — and how we can start changing the narrative about who counts and who belongs in America.”

Harvard University Press says this:

The Unseen Truth shows how visual tactics have long secured our regime of racial hierarchy in spite of its false foundations—and offers a way to begin to dismantle it. In a masterpiece of historical detective work, Lewis examines the Caucasian War’s role in the nineteenth century in revealing the instability of the entire regime of racial domination. Images of the Caucasus region and peoples captivated the American public but also showed that the place from which we derive “Caucasian” for whiteness was not white at all. Cultural and political figures from P. T. Barnum to Frederick Douglass, W. E. B. Du Bois to Woodrow Wilson recognized these fictions and more, exploiting, unmasking, critiquing, or burying them. The true significance of this hidden history has gone unseen—until now.

“In ‘The Unseen Truth,’ it is almost as if Sarah Lewis has given us a new pair of glasses that allow us to see history in ways that were previously unclear… It has changed the way I observe the world. Lewis has provided us with an indispensable resource to better see ourselves.”

— Clint Smith, author of “How the Word Is Passed: A Reckoning with the History of Slavery Across America,” winner of the National Book Critics Circle Award for Nonfiction

Americans have long been invested in an imaginary story: that whiteness stems from the mountainous region between Eastern Europe and Western Asia known as the Caucasus. In The Unseen Truth: When Race Changed Sight in America, Sarah Lewis tells the story of the origins of the “Caucasian race” and the concealment of its discrediting in the early 20th century. Lewis has written a bold intellectual history, drawing from school atlases and encyclopedias, circus sideshows, yellow journalism, and presidential files to reveal the false foundations of ideas of race that continue to shape the United States.

The Nation provides something of an overview. Blumenbach based his conclusions at least in part on phrenology, a long-debunked pseudoscience popular in the nineteenth century for classifying and describing human behavior.

Books in review

The Unseen Truth: When Race Changed Sight in America

by Sarah Lewis

The Caucasus was identified as the homeland of the white race by the German naturalist Johann Friedrich Blumenbach in his 1795 treatise On the Natural Varieties of Mankind, which was written to provide a more scientific footing for the notion of polygenesis: the theory that God created separate human races for different parts of the earth. Blumenbach believed that all living humans were descended from the family of Noah after they came stumbling out of the ark when it landed on Mount Ararat in the southern Caucasus. In his telling, God sent Noah’s darker-skinned sons off to other lands to begin the African and Asian races, while his lightest-skinned son simply remained in place. Blumenbach further pinpointed one local group, the Circassians, as the “purest” examples of the white race, on the basis of nothing more than travelers’ tales about the exemplary beauty of Circassian women.

The Grok ‘White Genocide’ Incident Shows How AI Can Become a Propaganda Machine

Link to Parker Molloy’s Substack:

The Present Age

Parker Molloy

May 15

Something very weird happened yesterday with Elon Musk’s xAI chatbot Grok — and if you’re at all concerned about the future of AI, it should have you deeply worried.

For several hours on Wednesday, Grok started injecting references to “white genocide” in South Africa into completely unrelated conversations across X. When a baseball podcast asked about Orioles shortstop Gunnar Henderson’s stats, Grok answered the baseball question but then launched into a bizarre monologue about white farmers being attacked in South Africa and the controversial “Kill the Boer” song. Some X users asked about simple topics like baseball players or videos of fish being flushed down toilets. One user just asked Grok to talk like a pirate. But instead of staying on topic, they got replies about the conspiracy theory of “white genocide” in South Africa, puzzling users across the platform.

Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said

How much should we trust AI when it transcribes speech? A recent article by AP News explores a troubling reality: some AI-powered transcription tools used in hospitals have been found to invent text that was never said. This issue doesn’t just raise technical concerns—it underscores the critical importance of accuracy, especially in sensitive settings like healthcare. At Adept, where precision is everything, we believe this piece offers a timely reminder of why human oversight still matters.

SAN FRANCISCO (AP) — Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”

The full extent of the problem is difficult to discern, but researchers and engineers said they frequently have come across Whisper’s hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in eight out of every 10 audio transcriptions he inspected, before he started trying to improve the model.

A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.

The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined.

That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said.

Such mistakes could have “really grave consequences,” particularly in hospital settings, said Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year.

“Nobody wants a misdiagnosis,” said Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey. “There should be a higher bar.”

Whisper also is used to create closed captioning for the Deaf and hard of hearing — a population at particular risk for faulty transcriptions. That’s because the Deaf and hard of hearing have no way of identifying fabrications “hidden amongst all this other text,” said Christian Vogler, who is deaf and directs Gallaudet University’s Technology Access Program.

OpenAI urged to address problem

The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.

“This seems solvable if the company is willing to prioritize it,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company’s direction. “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”

An OpenAI spokesperson said the company continually studies how to reduce hallucinations and appreciated the researchers’ findings, adding that OpenAI incorporates feedback in model updates.

While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.

Whisper hallucinations

The tool is integrated into some versions of OpenAI’s flagship chatbot ChatGPT, and is a built-in offering in Oracle and Microsoft’s cloud computing platforms, which service thousands of companies worldwide. It is also used to transcribe and translate text into multiple languages.

In the last month alone, one recent version of Whisper was downloaded over 4.2 million times from open-source AI platform HuggingFace. Sanchit Gandhi, a machine-learning engineer there, said Whisper is the most popular open-source speech recognition model and is built into everything from call centers to voice assistants.
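
For context, this is roughly how the open-source Whisper package is typically invoked (a minimal sketch of my own, not code from the article; the model size and audio file name are placeholders). The transcript it returns is a draft, and the original audio remains the only ground truth to check it against.

import whisper  # the open-source openai-whisper package

model = whisper.load_model("base")             # example model size
result = model.transcribe("consultation.wav")  # placeholder audio file
print(result["text"])                          # draft transcript; verify against the audio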

Professors Allison Koenecke of Cornell University and Mona Sloane of the University of Virginia examined thousands of short snippets they obtained from TalkBank, a research repository hosted at Carnegie Mellon University. They determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”

But the transcription software added: “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding “two other girls and one lady, um, which were Black.”

In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”

Researchers aren’t certain why Whisper and similar tools hallucinate, but software developers said the fabrications tend to occur amid pauses, background sounds or music playing.

OpenAI recommended in its online disclosures against using Whisper in “decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes.”

Transcribing doctor appointments

That warning hasn’t stopped hospitals or medical centers from using speech-to-text models, including Whisper, to transcribe what’s said during doctor’s visits so that medical providers can spend less time on note-taking or report writing.

Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool built by Nabla, which has offices in France and the U.S.

That tool was fine-tuned on medical language to transcribe and summarize patients’ interactions, said Nabla’s chief technology officer Martin Raison.

Company officials said they are aware that Whisper can hallucinate and are addressing the problem.

It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said.

Nabla said the tool has been used to transcribe an estimated 7 million medical visits.

Saunders, the former OpenAI engineer, said erasing the original audio could be worrisome if transcripts aren’t double checked or clinicians can’t access the recording to verify they are correct.

“You can’t catch errors if you take away the ground truth,” he said.

Nabla said that no model is perfect, and that theirs currently requires medical providers to quickly edit and approve transcribed notes, but that could change.

Privacy concerns

Because patient meetings with their doctors are confidential, it is hard to know how AI-generated transcripts are affecting them.

A California state lawmaker, Rebecca Bauer-Kahan, said she took one of her children to the doctor earlier this year, and refused to sign a form the health network provided that sought her permission to share the consultation audio with vendors that included Microsoft Azure, the cloud computing system run by OpenAI’s largest investor. Bauer-Kahan didn’t want such intimate medical conversations being shared with tech companies, she said.

“The release was very specific that for-profit companies would have the right to have this,” said Bauer-Kahan, a Democrat who represents part of the San Francisco suburbs in the state Assembly. “I was like ‘absolutely not.’ ”

John Muir Health spokesman Ben Drew said the health system complies with state and federal privacy laws.
