
Science says you need a human transcriptionist!

It’s complicated . . . From Psychology Today


Thanks for reading Capturing Voices! Subscribe for free to receive new posts and support my work.

Cognition

How the Brain Builds Conversations Across Time

Related brain processes—speaking and listening—use distinct systems.

Posted July 14, 2025 | Reviewed by Devon Frye

Key points

  • The brain builds conversational meaning across multiple timescales, from short phrases to full narratives.

  • While brief segments rely on shared brain regions, longer stretches engage different systems for speaking and listening.

  • These findings explain how people keep track of conversations and shift fluidly between roles.

“Happy talk,

Keep talkin’ happy talk,

Talk about things you’d like to do.”

These lyrics from South Pacific hint at something deeply human: Our lives unfold through talk.

Our conversations give form to our thoughts and tie us to one another. But beneath the surface of every spoken exchange lies a complex neural process, one that shapes how we create and interpret meaning together.

A new study published in Nature Human Behaviour reveals that the brain organizes this exchange by adapting to the timescale of the conversation. At shorter intervals, the brain uses overlapping systems for both speaking and listening. But as the dialogue stretches into full thoughts or stories, speaking and listening begin to rely on distinct processes. This layered structure helps explain how people carry out fluid, responsive conversations.

How the Brain Follows Conversations

To explore the inner mechanics of dialogue, researchers in Japan invited pairs of individuals to engage in unscripted conversation while lying in separate scanners, speaking through headphones and microphones. Their goal was not to study isolated words or scripted exchanges, but the fluid, spontaneous rhythms of how human communication unfolds in daily life.

The researchers segmented each conversation into varying lengths, from fleeting phrases to full narrative arcs. They then examined how the brain responded to these different timescales. During short exchanges, the same neural systems were active whether a person was speaking or listening. It seemed that, in the early moments of a conversation, both parties relied on a shared set of circuits to manage the rapid flow of words. However, as the conversation deepened and the timescale lengthened, the brain began to diverge in its treatment of each role.

Listening, in particular, was more demanding. As stories unfolded into complex ideas, listeners recruited a broader set of brain regions involved in memory retrieval, sustained attention, and social cognition. These included areas like the angular gyrus and posterior cingulate cortex, which help link incoming language to stored knowledge, and the medial prefrontal cortex, which supports imagining other people’s thoughts and intentions.

These networks allowed the listener not only to absorb the speaker’s words but to track their meaning over time, integrate it with prior knowledge, and infer intention. Speaking did not require the same level of integration. It remained more localized, focused on generating language and responding to immediate context. This involved regions like Broca’s area in the left frontal lobe, which helps plan speech, and nearby motor areas responsible for controlling the muscles used in speaking.


In this asymmetry lies a profound insight. To speak is to project thought outward, but to listen is to reconstruct another person’s inner world. It is no surprise, then, that the brain allocates its deepest resources to the act of listening.

Why Speaking and Listening Feel So Different

To uncover how this works, the researchers constructed computational models capable of predicting whether a person was speaking or listening based solely on their brain activity.

Even the smallest acknowledgments, like “right,” “uh-huh,” and “you know,” elicit stable patterns in the brain. These fragments serve a subtle but vital purpose. They signal presence, mark engagement, and keep the rhythm of dialogue intact. In doing so, they reflect the fundamentally social nature of language: We do not speak into a void, but to be heard, understood, and affirmed.

As conversations become emotionally charged or intellectually complex, the gap between speaker and listener widens. The listener, more than the speaker, must navigate shifting layers of meaning. This involves not only cognitive effort, but emotional attunement.

Brain areas like the anterior insula and amygdala become more active during emotionally rich moments, helping the listener register tone and affect. Other regions, such as the temporoparietal junction, help track the speaker’s perspective, allowing the listener to imagine what the speaker might be feeling or intending. To listen well is to hold another person’s experience in mind, to mirror their emotions without losing oneself.

A Brain Designed for Dialogue

Conversation is more than the exchange of words. It is a layered, time-dependent process involving memory, emotion, attention, and the ability to switch between speaker and listener. The brain makes this possible by drawing on flexible systems: some geared for rapid responses, others tuned for extended stretches of meaning.


What emerges is a brain finely shaped for connection. As South Pacific reminds us, “Happy talk, keep talkin’ happy talk.” The complex choreography within the brain allows us not only to speak, but to understand and be understood.

References

Yamashita, M., Kubo, R., & Nishimoto, S. (2025). Conversational content is organized across multiple timescales in the brain. Nature Human Behaviour, 1-13.

About the Author

William A. Haseltine, Ph.D., is known for his pioneering work on cancer, HIV/AIDS, and genomics. He is Chair and President of the global health think tank Access Health International. His recent books include My Lifelong Fight Against Disease.

Online:

Access Health, Facebook, X, LinkedIn


Facts are a slippery thing with the Copilot() function in Excel


OFFICE WATCH

Our 30th year of watching Word, Excel, Outlook and PowerPoint.

20 August 2025 – Vol. 30 No.32


Just like Copilot or its parent ChatGPT, the Copilot() function in Excel is useful for analysis but not so much for getting hard facts. Our testing of the new Copilot() feature shows that no one should simply trust that what AI says is true.

We’ve taken Microsoft’s example and extended it a little to show the real-world pitfalls and tricks of using Copilot() in Excel. It wasn’t hard to find factual errors in Copilot() responses: some big, some small, and some simply baffling!

Some lessons we learned from Copilot()

  • Copilot has a slippery and changing concept of ‘truth’.

  • Carefully word the prompt and context.

  • Carefully check results.

  • Sorting has to be done as part of the Copilot prompt, but isn’t always correct.

  • Filtering to exclude some results, individually or as a group, can be done in the prompt.

  • Copilot has trouble parsing first and last names with a middle initial.

  • Headings for Copilot() lists may or may not appear. Better to be specific.

Airports

Microsoft’s Copilot() example shows how to get a list of airports.

Source: Microsoft

Like most of Microsoft’s carefully chosen examples, the problems arise once you do a little digging.

We added a filter by population and asked for more details: “Airports in cities over half million people, show airport name and code”

As you can see, Copilot() returns a dynamic (spill) array which can include multiple columns.
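For reference, the prompt is entered as the function’s first argument, with the source data passed as context. A plausible form of the formula, based on Microsoft’s published COPILOT() syntax (the exact argument list may vary by build, and A2:B20 is a placeholder for wherever the city and airport table sits):

```
=COPILOT("Airports in cities over half million people, show airport name and code", A2:B20)
```

Because the result is a dynamic array, asking for more columns in the prompt changes the shape of the spilled output.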

However, there are problems:

  • Gold Coast/Tweed Heads has a population of over 700k and its airport should be on the list.

  • Canberra and Newcastle have populations just over 500k and should have been included.

  • The proper name is “Sydney Kingsford Smith Airport”. Changing the prompt to ask for “full airport name” gives a more accurate result.

    • Just one example of how careful wording of AI prompts is important.

Which only confirms what we’ve said about AI for some time:

Always check the facts and be careful about the wording of prompts.

Another factual error

Just another factual error we found in our testing. Asking for “Airports in cities over half million people, show airport name and code” for the UK might seem to return a correct answer, but it doesn’t.

London has 5 or 6 airports (it depends). However you define “London airports,” the list should at least include Gatwick (LGW) and London City (LCY). Luton, Stansted, and especially Southend are also called “London airports” with a certain generosity of spirit.

Copilot makes the same mistake with New York, only listing JFK and not La Guardia (LGA).

But change the prompt to ask for distance from a location and suddenly Gatwick airport appears! LCY, which is even closer to Greenwich, is still missing.

This isn’t pedantic nit-picking; it’s an example of a common problem with current AI systems. We rarely get a ‘factual’ result from Copilot or ChatGPT that doesn’t need some changes.


DWTP—Peace of Mind and a Piece of One’s Mind

From Daily Writing Tips–DailyWritingTips.com;

Word of the Day

Obdurate

adjective | AHB-duh-rut


Obdurate is a formal word that means “resistant to persuasion.” It is usually used to describe someone who is stubborn or not willing to change their opinion or the way they do something.

“Even after numerous attempts to negotiate, the obdurate politician remained steadfast in his opposition to the proposed legislation.”

Today’s Writing Tip

“Peace of Mind” and “A Piece of One’s Mind”

Two idioms that sound similar and are often played with for punning effect are “peace of mind” and “give someone a piece of one’s mind.”

Understanding “Peace of Mind”

peace: freedom from anxiety, disturbance (emotional, mental, or spiritual), or inner conflict; calm, tranquillity.

The expression “peace of mind” belongs to a category of phrases that place the feeling of peace within a specific organ or faculty:

  • “peace of heart”

  • “peace of soul”

  • “peace of conscience”

One might seek peace of mind through prayer or meditation. Self-help books, religions, and various philosophies promise it:

Nine Ways to Find Peace of Mind

The peace of mind Jesus offers is not of this world.

Islam teaches that in order to achieve true peace of mind . . . one must submit.

I . . . found great peace of mind in doing what Hinduism exhorts me to do.

The Idiom “Give Someone a Piece of One’s Mind”

Then there’s the expression “give someone a piece of one’s mind.” It means to chide, tell someone off, tell someone how the cow ate the cabbage, tell someone exactly what you think, in no uncertain terms:

When she saw the lipstick stain on his collar, she gave him a piece of her mind.

The third time the wheel fell off, he gave the mechanic a piece of his mind.

Commercial and Punning Uses of the Expressions

As with so many other common expressions, “peace of mind” is often altered for commercial purposes or efforts at punning.

I understand calling an opinion blog Piece of Mind. I suppose Iron Maiden had a reason for calling an album Piece of Mind. And a bookstore called Piece of Mind makes a kind of sense.

But why you’d name a tobacco brand Piece of Mind escapes me. And to call a program for sufferers of Alzheimer’s disease Piece of Mind strikes me as a bit tasteless:

The Piece of Mind program engages individuals in the early to middle stages of Alzheimer’s through interactive tours and art-making experiences.

Unintended Substitution of “Piece” for “Peace”

Then there is the out-and-out unintended substitution of piece for peace, as in this headline at EzineArticles:

Buying a Personal Safe for Piece of Mind and Security

And in this book review of I, Rhoda Manning, Go Hunting with My Daddy & Other Stories:

Gilchrist’s short stories are indeed therapeutic. They tell real stories about real people searching for love, for happiness, for piece of mind . . . .


Today’s Quiz

Question 1:

What does the idiom “peace of mind” signify?

a) a state of anxiety and disturbance

b) a state of tranquility, free from emotional, mental, or spiritual disturbance

c) the act of telling someone off

d) finding a piece of one’s own mind

Question 2:

What does the idiom “give someone a piece of one’s mind” mean?

a) provide advice or comfort to someone

b) tell someone exactly what you think, in no uncertain terms

c) share a part of your knowledge or wisdom with someone

d) assist someone in achieving peace of mind

Question 3:

Which of the following sentences correctly uses the idiom “peace of mind”?

a) Once she had finished her taxes, she had peace of mind knowing it was all sorted.

b) After arguing with his teacher, he decided to give her peace of mind.

c) The peace of mind was cut into three pieces and distributed among the students.

d) She sat down with peace of her mind and started painting.

Question 4:

Which of the following sentences correctly uses the idiom “give someone a piece of one’s mind”?

a) I’m sorry for giving you a piece of my mind yesterday; I was just really stressed out.

b) The priest gave me a piece of his mind; now I feel so peaceful and calm.

c) He managed to give a piece of his mind to the puzzle.

d) When I go to the mountains, I can finally give a piece of my mind.

Question 5:

Which of the following sentences appropriately applies one of the idioms from the lesson?

a) Despite his obdurate attitude, the piece of mind she received after discussing the issue was unparalleled.

b) In the face of his obdurate refusal to listen, she found a piece of her mind within her patience.

c) The obdurate student received peace of mind after repeatedly disrupting the class.

d) Her reward for her obdurate resistance to giving in to their demands was a peace of mind she had never experienced before.


The correct answers are as follows:

  1. b) a state of tranquility, free from emotional, mental, or spiritual disturbance

  2. b) tell someone exactly what you think, in no uncertain terms

  3. a) Once she had finished her taxes, she had peace of mind knowing it was all sorted. (“Peace of mind” is used correctly here, as the sentence refers to the tranquility experienced after completing a task.)

  4. a) I’m sorry for giving you a piece of my mind yesterday; I was just really stressed out. (“Giving you a piece of one’s mind” is used correctly here to express the act of telling someone off or expressing dissatisfaction or annoyance.)

  5. d) Her reward for her obdurate resistance to giving in to their demands was a peace of mind she had never experienced before. (This sentence accurately employs the idiom “peace of mind,” signifying the state of inner tranquility the woman attains through her obdurate [resolute] resistance to their demands.)

Health, Politics & Government, Race & Gender

Government health datasets were altered without documentation, Lancet study shows: https://journalistsresource.org/home/federal-health-data-modification-lancet/

Researchers examined more than 200 federal datasets and found that nearly half of them were altered between January and March. In most, the term “gender” was replaced with “sex.”

by Naseem S. Miller | August 19, 2025


For months now, researchers and journalists have been documenting the disappearance of federal health data and monitoring changes to government websites. Now, a new analysis finds that some of the existing datasets have also been modified, most of them lacking a notice or log about the change.

Researchers compared more than 200 federal datasets that were available between January and March with their archived versions and found that nearly half were altered. In most cases, the word “gender” was changed to “sex.” Only 15 of the altered datasets included a note about the modification.

“The lack of transparency is a particular concern,” says Janet Freilich, a professor at Boston University School of Law, and co-author of the study, which was published in The Lancet in July.

Alterations were made across multiple federal agencies, including the Department of Veterans Affairs and the Centers for Disease Control and Prevention. The reason for the modifications was not documented in the datasets, but they coincide with a January 20 presidential directive instructing federal agencies to use the term “sex” instead of “gender.”

Federal health datasets have been a major source of information for scientists, and undocumented changes to existing data can undermine confidence in government statistics and distort research.

“There are two levels of harm here,” says Freilich, a patent lawyer by training, who has been following changes to the federal data in recent months. “If you think you’re looking for whatever the column title reflects, but the column — the underlying data — actually reflects something else, then you’re going to get a wrong answer. But the second level of harm is, this really impairs trust in federal data.”

A screenshot of CDC’s Youth Risk Behavior Surveillance System, captured on Aug. 18, 2025.

In March, Freilich co-wrote a paper in The New England Journal of Medicine on the disappearing data, finding that from Jan. 21 to Feb. 11, 2025, the Centers for Disease Control and Prevention had removed 203 databases.

“I’m not expecting this information to come back,” Freilich says. “I just plead for transparency.”

Michelle Kaufman, an associate professor and director of the Gender Equity Unit at the Johns Hopkins Bloomberg School of Public Health, who was not involved in the Lancet study, said that while most people are aware that several federal datasets have been taken down, “this actual doctoring of it takes it to the next level.”

“I’ve been telling my students, ‘You might want to find other data sets that aren’t connected to the U.S. government, because we don’t know the accuracy at this point,’” Kaufman says.

She has also been advising her students to immediately download federal datasets they might need for research.

“You don’t know if it’s going to be there tomorrow,” she says.
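For anyone following that advice, a minimal sketch of what "download it now" can look like in practice: fetch the file and write a small provenance record (source URL, retrieval time, checksum) so later copies can be verified against it. The URL and file names here are placeholders, not real datasets.

```python
import hashlib
import json
import urllib.request
from datetime import datetime, timezone

def sha256_of_bytes(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def archive_dataset(url: str, out_path: str) -> dict:
    """Download a dataset and write a provenance record alongside it."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(out_path, "wb") as f:
        f.write(data)
    record = {
        "source_url": url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "sha256": sha256_of_bytes(data),
    }
    with open(out_path + ".provenance.json", "w") as f:
        json.dump(record, f, indent=2)
    return record
```

The checksum is the key piece: if a dataset is later altered without notice, comparing the current file's digest against the recorded one flags the change immediately.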


The study and its findings

Freilich and her co-author Aaron Kesselheim, a professor of medicine at Harvard Medical School, examined metadata from more than 200 datasets from the Department of Health and Human Services, the Centers for Disease Control and Prevention, and the Department of Veterans Affairs, covering Jan. 20 to March 25, 2025.

Under the OPEN Government Data Act, federal agencies keep lists of information about all their datasets, called their metadata, including a unique ID, title, creation date, description, and content of the dataset (here’s an example). These lists are collected from each agency regularly by Data.gov, which acts as a central hub that brings together datasets from across the federal government and other sources.
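A metadata record of this kind is just structured text, which is what made the study's comparisons possible. A minimal sketch of such a record, with invented values (real records follow the federal open-data metadata schema and carry many more fields):

```python
import json

# An invented metadata record in the style agencies publish in their
# data.json catalogs; identifier, title, and dates are placeholders.
sample = """{
  "identifier": "vha-2021-0001",
  "title": "VA Health Care Use by Sex, 2021",
  "issued": "2022-06-01",
  "description": "Counts of veteran health care visits.",
  "keyword": ["veterans", "health care"]
}"""

record = json.loads(sample)
print(record["title"])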

Some of the datasets researchers examined in the Lancet study include the Behavioral Risk Factor Surveillance System Prevalence Data, Global Tobacco Surveillance System, and U.S. Census Annual Estimates of the Resident Population for Selected Age Groups by Sex for the United States.

Using Microsoft Word’s comparison tool, the authors then manually compared current datasets to the archived versions recorded by the Internet Archive. They focused on word changes, not numerical data. Researchers also didn’t track changes to the wording on government websites.
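The authors did this comparison manually with Word; the same word-level check could in principle be automated. A minimal sketch using Python's standard difflib, with sample strings invented to mirror the kind of change the study documents:

```python
import difflib

def word_changes(old_text: str, new_text: str) -> list[tuple[str, str]]:
    """Return (old_words, new_words) pairs for word runs that were
    replaced between two versions of a dataset description."""
    old_words = old_text.split()
    new_words = new_text.split()
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    changes = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            changes.append((" ".join(old_words[i1:i2]),
                            " ".join(new_words[j1:j2])))
    return changes

# Invented example of an undocumented wording change:
old = "Veteran health care use by gender and age group"
new = "Veteran health care use by sex and age group"
print(word_changes(old, new))  # [('gender', 'sex')]
```

Run over archived and current metadata, a comparison like this surfaces exactly the "gender" to "sex" substitutions the study counted.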

In one example, the authors identified a Department of Veterans Affairs modified dataset about veteran health care use in 2021, in which a column titled “gender” was renamed “sex”. Those words were also changed in the dataset’s title and description. Before March 5, the dataset had not changed since it was published in 2022.

Because many datasets did not have an archived copy, the Lancet study may not be representative of all datasets in federal repositories, the authors note. But in addition to documenting undisclosed changes to some of the existing datasets, the study reveals an increase in the pace of data alterations since January: 4% of changes happened in late January, while 72% occurred in March.

Researchers also found:

  • In 25% of altered datasets, the change from “gender” to “sex” made the data descriptions more consistent, as the word “gender” had been applied to data also labeled as “sex.”

  • In four datasets, “social determinants of health” was changed to “non medical factors.” In one, “socioeconomic status” was changed to “socioeconomic characteristics.” In another existing dataset, the question “Are PTSD clinical trials gender diverse?” was changed to “Do PTSD clinical trials include men and women?”

  • Of the altered datasets, 89 involved changes in classification or categorization, such as changing the column headers. About 25 had modified descriptive text, such as tags and overview paragraphs.

To safeguard data integrity, Freilich and Kesselheim call for stronger transparency measures, independent archiving, and international alternatives.

“Gender” and “sex” in research

Sex and gender capture different information in research.

Sex usually refers to a person’s biological characteristics, whereas gender refers to socially constructed roles and norms, according to a 2023 paper by Kaufman, published in the Bulletin of the World Health Organization.

“So just because you were born as a designated sex category at birth, it does not mean that, psychologically, that’s how you feel, and that’s where the separation of biological sex comes in as separate to the social construction of gender,” Kaufman says.

Gender has been a focus of research, particularly in psychology, since the 1970s. Researchers still conflate the two concepts, which can make it difficult to compare studies. However, overall, gender and sex are not interchangeable in most studies and surveys. Gender captures a wider range of social experiences of people, compared with sex, which only captures male and female.

“Whether you’re talking about intersex people biologically, or nonbinary, third gender, transgender people in terms of identity, it erases that experience because you have to fit people into one of those two categories, male or female,” Kaufman says.

In addition, if a study aims to investigate the social constructions of gender and how roles and norms might have impacted health outcomes, using “sex” would make it difficult to interpret the results.

“Is it about the biology, the hormones, the chemical makeup of the person that led to these health outcomes, or was it their roles as a woman, or expectations as a man, that then led them down a certain path to those health outcomes?” Kaufman says. “By going back to this sort of gender essentialism of sex being a binary and that lining up completely with gender is sort of backtracking a lot of the research that’s been done over the past several decades.”

Where to find archived data

There’s no perfect alternative to the government databases.

“There’s a lot that can be done on the non-governmental side, but the government has such a leg up in the scope of information it can gather and its authority to gather information that others just can’t get access to,” Freilich says.

Some non-governmental organizations do have their own datasets, as we explained in a February 2025 piece.

Since January, several volunteer groups and newsrooms have also been downloading and archiving government datasets and making them available to the public.

We’ve curated some of those resources below.

  • The Data Rescue Project is a collaboration among a group of data organizations and members of the Data Curation Network. The project — a clearinghouse for preserving at-risk public information — has created a Data Rescue Tracker and a Portal to catalogue ongoing public data rescue efforts.

  • Harvard Dataverse: Harvard Dataverse is a large publicly available repository of data from researchers at Harvard University and around the world, covering a range of topics from astronomy to engineering to health and medicine.

  • The Harvard Library Innovation Lab Team has released more than 311,000 datasets harvested in 2024 and 2025 on Source Cooperative.

  • DataLumos is a crowdsourced repository for at-risk US federal government data. DataLumos is hosted by ICPSR, an international consortium of more than 800 academic institutions and research organizations.

  • Public Environmental Data Project: Run by a coalition of volunteers from several organizations, including Boston University and the Harvard Climate and Health CAFE Research Coordinating Center, the project has compiled a large list of federal databases and tools, including the CDC’s Social Vulnerability Index and Environmental Justice Index.

  • The Federal Environmental Web Tracker is monitoring and tracking changes to thousands of pages of federal government websites.

  • STAT News is backing and monitoring CDC data in real time.

  • Run by health policy data analyst Charles Gaba, ACASignups.net has a list of archived versions of cdc.gov web pages.

  • Here are some of the CDC datasets uploaded to the Internet Archive before January 28th, 2025.

  • Archive.org has an “End of Term 2024 Web Crawls” downloadable data collection.

  • The Data Liberation Project, run by MuckRock and Big Local News, has a list of archived datasets.

  • Looking for an alternative to the National Library of Medicine’s PubMed to look for research papers? Try Europe PMC. Germany is also planning a global alternative to PubMed.

  • Data journalist Hannah Recht is tracking changes to the U.S. Census datasets.

  • Dataindex.us is a collaborative effort to monitor changes to federal datasets.

  • The 19th, an independent nonprofit newsroom reporting on gender, politics, and policy, has archived government documents, including the CDC’s maternal mortality data, the CDC’s abortion and contraception data, research studies on teens, and guidelines from the National Academies on how to collect data on gender and sexuality.

  • Investigative Reporters & Editors: The nonprofit journalism organization has downloaded more than 120 data sets from the federal websites, as recently as November. Some of those data sets include Adverse Event Reporting System, Behavioral Risk Factor Surveillance System, Medical Device Reports, Mortality Multiple Cause-of-Death Database, National Electronic Injury Surveillance System (NEISS), National Practitioner Databank, Nuclear Materials Events Database, OSHA Workplace Safety Data, and Social Security Administration Death Master File. IRE members can contact the organization and order the data sets. The organization has been providing data to members since the early 1990s.

About The Author

Naseem S. Miller

Naseem Miller is the senior editor for health at The Journalist’s Resource. She joined JR in 2021 after working as a health reporter in local newspapers and national medical trade publications for two decades. Immediately before joining JR, she was a senior health reporter at the Orlando Sentinel, where she was part of the team that was named a 2016 Pulitzer Prize finalist for its coverage of the Pulse nightclub mass shooting. You can follow her on Bluesky.

Speech Sequencing: The Hidden Architecture Behind Human Fluency

New research reveals how a little-known brain region may help transform thoughts into speech.

William A. Haseltine, Ph.D. | Best Practices in Health

Posted July 28, 2025 | Reviewed by Devon Frye

Key points

  • Speaking fluently involves organizing the precise sequence of sounds required to say words.

  • A brain region called the middle precentral gyrus appears to play a key role in organizing sequences of sound.

  • Disrupting this region causes stuttering, hesitations, or speech errors.

Every day, we speak thousands of words, without rehearsal or hesitation. We order coffee. We soothe a child. We describe a memory, tell a joke, argue, confess, comfort, persuade. To us, speech feels as natural as breathing. Yet from the brain’s perspective, it is anything but simple.

New research published in Nature Human Behaviour suggests that speech fluency rests on an intricate, moment-to-moment system for sequencing sounds in the correct order. This process is so seamless that we rarely notice it, unless something goes wrong. But inside the brain, a specialized region is working tirelessly to prepare each syllable, line them up, and deliver them at just the right time.

This region, the middle precentral gyrus, is a little-known fold of brain tissue tucked in the frontal lobe. It may be the key to why our speech flows like a symphony, instead of crumbling into a clatter of broken notes.

Thought Is Not Enough

To speak is not merely to have a thought. It is to turn that thought into motions: tiny, precise muscular movements of the lips, tongue, vocal cords, jaw, and diaphragm. These parts must dance together, millisecond by millisecond, to produce even a simple word. What comes first? What comes next? How long should each syllable last?

This coordination is what scientists call speech-motor sequencing. This study reveals the middle precentral gyrus, or the mPrCG, to be its architect.

Using recordings from 14 patients undergoing brain monitoring, the researchers asked participants to say short syllable sequences. As people prepared to speak, the researchers saw something surprising: the mPrCG lit up not just during speech, but long before it began. The more complex the sequence, the longer it stayed active, quietly assembling the motor instructions before a single word escaped the lips.

In a sense, the mPrCG was acting like a conductor before the orchestra plays, scanning the musical score and preparing each cue. It was not producing the sound itself. It was preparing the order of operations.

A Glitch in the Machine

But how do we know this region isn’t just reacting to speech, rather than preparing it? To test this, the researchers directly stimulated the mPrCG with gentle electrical currents while participants spoke.

The results were immediate. People who had just spoken fluently a moment before began to pause, stumble, or say syllables in the wrong order. Some dragged out their speech; others inserted unintended gaps.

But when asked to simply repeat “ba-ba-ba,” their speech was perfect. The breakdowns only appeared when the sequence required coordination. It’s like a pianist flawlessly playing a single note but fumbling when asked for a short melody. The hands are fine. The memory is intact. But the choreography is lost.

Interestingly, the mPrCG is located near regions involved in reading and writing. Some patients with damage in this area struggle not only with speaking but also with forming written sentences or reading aloud. This hints at a deeper principle: the brain may use a shared sequencing system for many types of expression, whether spoken, written, or gestured. Whether you're typing a text or delivering a toast, the same basic architecture might help you organize your thoughts into a meaningful sequence.

What this research shows is that fluency is not a given. It is constructed, second by second, by systems that work in silence. When those systems fail or falter, the result isn’t just noise; it’s disconnection.

People with speech disorders often describe knowing exactly what they want to say but being unable to unlock the words. This study suggests a clear reason why: The neural blueprint for speech, assembled in the mPrCG, has been disrupted.

Understanding this system could pave the way for better tools to support people with stuttering, aphasia, or other speech coordination challenges. Even for fluent speakers, it offers a reminder: slowing down and practicing articulation may help reinforce the very sequencing networks that make speech possible.


Rewriting the Map of Speech

For over a century, scientists have looked for the “speech center” in the brain. What this study suggests is that there is no single center. Instead, speech arises from a community of brain regions, each with its own role. Some regions select the words. Others control the lips or vocal cords. But the mPrCG appears to do something uniquely human: sequence our intentions into actions.

In daily life, we rarely notice this machinery. But perhaps we should, because it reminds us of something profound: fluency is not a gift; it is an act of construction. Every sentence we speak is the result of a hidden chain of decisions, prepared and executed with remarkable precision. And when that chain is disrupted, we glimpse the delicate scaffolding beneath our most human act.

What makes our speech powerful is not just vocabulary; it is structure. Without sequencing, there is no fluency. Without fluency, we are left alone with our thoughts, unable to share the stories that make us who we are. Recognizing this hidden complexity can deepen our empathy for those who struggle to speak, and remind us to be patient, whether with others or ourselves, when the words don’t come easily.

References

Liu, J. R., Zhao, L., Hullett, P. W., & Chang, E. F. (2025). Speech sequencing in the human precentral gyrus. Nature Human Behaviour, 1–18.

Funny Literal Illustrations Of English Idioms And Their Meanings.

When we use language, we don’t often notice what some words mean if taken literally. But when you actually pause for a second and think about what some expressions literally mean, you’d be surprised how you didn’t notice it before. The words we use every day are full of joy and wonder, and a lot of fun, if taken out of context and played with.

From The MARVELOUS Language Nerds: https://thelanguagenerds.com/2022/funny-literal-illustrations-of-english-idioms-and-their-meanings/

That’s what Roisin Hahessy did. Roisin loves to play with words and put them in humorous illustrations to show their double meanings. For her latest project, she went after idioms and everyday expressions to illustrate their literal meanings, and the result is a batch of witty illustrations that make you both laugh and wonder at the craziness of the English language.

MORE TO COME! WATCH FOR THEM!
