Obdurate is a formal word that means “resistant to persuasion.” It is usually used to describe someone who is stubborn or not willing to change their opinion or the way they do something.
“Even after numerous attempts to negotiate, the obdurate politician remained steadfast in his opposition to the proposed legislation.”
Today’s Writing Tip
“Peace of Mind” and “A Piece of One’s Mind”
Two idioms that sound similar and are often played with for punning effect are “peace of mind” and “give someone a piece of one’s mind.”
Understanding “Peace of Mind”
peace: freedom from anxiety, disturbance (emotional, mental, or spiritual), or inner conflict; calm, tranquillity.
The expression “peace of mind” belongs to a category of phrases that place the feeling of peace within a specific organ or faculty:
“peace of heart”
“peace of soul”
“peace of conscience”
One might seek peace of mind through prayer or meditation. Self-help books, religions, and various philosophies promise it:
Nine Ways to Find Peace of Mind
The peace of mind Jesus offers is not of this world.
Islam teaches that in order to achieve true peace of mind . . . one must submit.
I . . . found great peace of mind in doing what Hinduism exhorts me to do.
The Idiom “Give Someone a Piece of One’s Mind”
Then there’s the expression “give someone a piece of one’s mind.” It means to chide, tell someone off, tell someone how the cow ate the cabbage, tell someone exactly what you think, in no uncertain terms:
When she saw the lipstick stain on his collar, she gave him a piece of her mind.
The third time the wheel fell off, he gave the mechanic a piece of his mind.
Commercial and Punning Uses of the Expressions
As with so many other common expressions, “peace of mind” is often altered for commercial purposes or efforts at punning.
I understand calling an opinion blog Piece of Mind. I suppose Iron Maiden had a reason for calling an album Piece of Mind. And a bookstore called Piece of Mind makes a kind of sense.
But why you’d name a tobacco brand Piece of Mind escapes me. And to call a program for sufferers of Alzheimer’s disease Piece of Mind strikes me as a bit tasteless:
The Piece of Mind program engages individuals in the early to middle stages of Alzheimer’s through interactive tours and art-making experiences.
Unintended Substitution of “Piece” for “Peace”
Then there is the out-and-out unintended substitution of piece for peace, as in this headline at EzineArticles:
Buying a Personal Safe for Piece of Mind and Security
And in this book review of I, Rhoda Manning, Go Hunting with My Daddy & Other Stories:
Gilchrist’s short stories are indeed therapeutic. They tell real stories about real people searching for love, for happiness, for piece of mind . . . .
Thanks for reading Capturing Voices! Subscribe for free to receive new posts and support my work.
Today’s Quiz
Question 1:
What does the idiom “peace of mind” signify?
a) a state of anxiety and disturbance
b) a state of tranquility, free from emotional, mental, or spiritual disturbance
c) the act of telling someone off
d) finding a piece of one’s own mind
Question 2:
What does the idiom “give someone a piece of one’s mind” mean?
a) provide advice or comfort to someone
b) tell someone exactly what you think, in no uncertain terms
c) share a part of your knowledge or wisdom with someone
d) assist someone in achieving peace of mind
Question 3:
Which of the following sentences correctly uses the idiom “peace of mind”?
a) Once she had finished her taxes, she had peace of mind knowing it was all sorted.
b) After arguing with his teacher, he decided to give her peace of mind.
c) The peace of mind was cut into three pieces and distributed among the students.
d) She sat down with peace of her mind and started painting.
Question 4:
Which of the following sentences correctly uses the idiom “give someone a piece of one’s mind”?
a) I’m sorry for giving you a piece of my mind yesterday; I was just really stressed out.
b) The priest gave me a piece of his mind; now I feel so peaceful and calm.
c) He managed to give a piece of his mind to the puzzle.
d) When I go to the mountains, I can finally give a piece of my mind.
Question 5:
Which of the following sentences appropriately applies one of the idioms from the lesson?
a) Despite his obdurate attitude, the piece of mind she received after discussing the issue was unparalleled.
b) In the face of his obdurate refusal to listen, she found a piece of her mind within her patience.
c) The obdurate student received peace of mind after repeatedly disrupting the class.
d) Her reward for her obdurate resistance to giving in to their demands was a peace of mind she had never experienced before.
The correct answers are as follows:
b) a state of tranquility, free from emotional, mental, or spiritual disturbance
b) tell someone exactly what you think, in no uncertain terms
a) Once she had finished her taxes, she had peace of mind knowing it was all sorted. (“Peace of mind” is used correctly here, as the sentence refers to the tranquility experienced after completing a task.)
a) I’m sorry for giving you a piece of my mind yesterday; I was just really stressed out. (“Giving you a piece of one’s mind” is used correctly here to express the act of telling someone off or expressing dissatisfaction or annoyance.)
d) Her reward for her obdurate resistance to giving in to their demands was a peace of mind she had never experienced before. (This sentence correctly employs the idiom “peace of mind,” signifying the state of inner tranquility she attains through her obdurate [resolute] refusal to give in to their demands.)
Can you tell when a piece of writing was generated by AI? Roger J. Kreuz takes a closer look at the surprisingly tricky task of identifying ChatGPT-authored texts. From telltale quirks like overused em dashes to oddly formal word choices (“delves,” anyone?), he argues that detection remains more interpretive art than hard science. A fascinating read for editors, educators, and anyone thinking critically about authorship in the age of generative AI.
Over the past few years, researchers have been exploring whether it’s even possible to distinguish human writing from artificial intelligence-generated text. But the best strategies to distinguish between the two may come from the chatbots themselves.
Too good to be human?
Several recent studies have highlighted just how difficult it is to determine whether text was generated by a human or a chatbot.
Research participants recruited for a 2021 online study, for example, were unable to distinguish between human- and ChatGPT-generated stories, news articles and recipes.
Language experts fare no better. In a 2023 study, editorial board members for top linguistics journals were unable to determine which article abstracts had been written by humans and which were generated by ChatGPT. And a 2024 study found that 94% of undergraduate exams written by ChatGPT went undetected by graders at a British university.
Clearly, humans aren’t very good at this.
A commonly held belief is that rare or unusual words can serve as “tells” regarding authorship, just as a poker player might somehow give away that they hold a winning hand.
Researchers have, in fact, documented a dramatic increase in relatively uncommon words, such as “delves” or “crucial,” in articles published in scientific journals over the past couple of years. This suggests that unusual terms could serve as tells that generative AI has been used. It also implies that some researchers are actively using bots to write or edit parts of their submissions to academic journals. Whether this practice reflects wrongdoing is up for debate.
In another study, researchers asked people about characteristics they associate with chatbot-generated text. Many participants pointed to the excessive use of em dashes – an elongated dash used to set off text or serve as a break in thought – as one marker of computer-generated output. But even in this study, the participants’ rate of AI detection was only marginally better than chance.
Given such poor performance, why do so many people believe that em dashes are a clear tell for chatbots? Perhaps it’s because this form of punctuation is primarily employed by experienced writers. In other words, people may believe that writing that is “too good” must be artificially generated.
But if people can’t intuitively tell the difference, perhaps there are other methods for determining human versus artificial authorship.
Stylometry to the rescue?
Some answers may be found in the field of stylometry, in which researchers employ statistical methods to detect variations in the writing styles of authors.
I’m a cognitive scientist who authored a book on the history of stylometric techniques. In it, I document how researchers developed methods to establish authorship in contested cases, or to determine who may have written anonymous texts.
One tool for determining authorship was proposed by the Australian scholar John Burrows. He developed Burrows’ Delta, a computerized technique that examines the relative frequency of common words, as opposed to rare ones, that appear in different texts.
It may seem counterintuitive to think that someone’s use of words like “the,” “and” or “to” can determine authorship, but the technique has been impressively effective.
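The idea can be sketched in a few lines of Python. This is a simplified toy version of Burrows' Delta, not Burrows' actual implementation: real stylometric studies use the few hundred most frequent words, proper tokenization, and a large reference corpus, whereas here the vocabulary and texts are invented for illustration. The core computation is the same: z-score each common word's relative frequency across the texts, then take the mean absolute difference between the disputed text and each candidate.

```python
from collections import Counter
import statistics

def rel_freqs(text, vocab):
    """Relative frequency of each vocabulary word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return [counts[w] / total for w in vocab]

def burrows_delta(disputed, candidates, vocab):
    """Simplified Burrows' Delta.

    Z-scores the frequency of each common word across all texts, then
    returns, per candidate author, the mean absolute z-score difference
    from the disputed text. A lower delta means a closer stylistic match.
    """
    table = [rel_freqs(disputed, vocab)] + [
        rel_freqs(t, vocab) for t in candidates.values()
    ]
    # Mean and population std dev of each word's frequency across texts.
    means = [statistics.mean(col) for col in zip(*table)]
    sds = [statistics.pstdev(col) or 1e-9 for col in zip(*table)]

    def zscore(row):
        return [(f - m) / s for f, m, s in zip(row, means, sds)]

    z_disputed = zscore(table[0])
    return {
        name: sum(abs(a - b) for a, b in zip(z_disputed, zscore(row))) / len(vocab)
        for name, row in zip(candidates, table[1:])
    }
```

Feeding it a disputed text plus writing samples from two candidate authors yields a delta score per candidate; the candidate with the smaller score is the closer stylistic match on those common words.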
A stylometric technique called Burrows’ Delta was used to identify LaSalle Corbell Pickett as the author of love letters attributed to her deceased husband, Confederate Gen. George Pickett. Encyclopedia Virginia
Burrows’ Delta, for example, was used to establish that Ruth Plumly Thompson, L. Frank Baum’s successor, was the author of a disputed book in the “Wizard of Oz” series. It was also used to determine that love letters attributed to Confederate Gen. George Pickett were actually the inventions of his widow, LaSalle Corbell Pickett.
A major drawback of Burrows’ Delta and similar techniques is that they require a fairly large amount of text to reliably distinguish between authors. A 2016 study found that at least 1,000 words from each author may be required. A relatively short student essay, therefore, wouldn’t provide enough input for a statistical technique to work its attribution magic.
More recent work has made use of what are known as BERT language models, which are trained on large amounts of human- and chatbot-generated text. The models learn the patterns that are common in each type of writing, and they can be much more discriminating than people: The best ones are between 80% and 98% accurate.
However, these machine-learning models are “black boxes” – that is, we don’t really know which features of texts are responsible for their impressive abilities. Researchers are actively trying to find ways to make sense of them, but for now, it isn’t clear whether the models are detecting specific, reliable signals that humans can look for on their own.
A moving target
Another challenge for identifying bot-generated text is that the models themselves are constantly changing – sometimes in major ways.
Early in 2025, for example, users began to express concerns that ChatGPT had become overly obsequious, with mundane queries deemed “amazing” or “fantastic.” OpenAI addressed the issue by rolling back some changes it had made.
Of course, the writing style of a human author may change over time as well, but it typically does so more gradually.
At some point, I wondered what the bots had to say for themselves. I asked ChatGPT-4o: “How can I tell if some prose was generated by ChatGPT? Does it have any ‘tells,’ such as characteristic word choice or punctuation?”
The bot admitted that distinguishing human from nonhuman prose “can be tricky.” Nevertheless, it did provide me with a 10-item list, replete with examples.
These included the use of hedges – words like “often” and “generally” – as well as redundancy, an overreliance on lists and a “polished, neutral tone.” It also flagged “predictable vocabulary,” which included certain adjectives such as “significant” and “notable,” along with academic terms like “implication” and “complexity.” However, though it noted that these features of chatbot-generated text are common, it concluded that “none are definitive on their own.”
As the U.S. signals a retreat from global climate leadership, other nations are stepping into the vacuum. In this article from The Conversation, political science professor Sarah J. Hummel explores how countries like China are taking the lead in international climate negotiations and investment, reshaping the power dynamics of global climate governance. For those working in or adjacent to international policy, sustainability, or global development, it’s a timely reminder that leadership can shift — and often does — when others step back.
When President Donald Trump announced in early 2025 that he was withdrawing the U.S. from the Paris climate agreement for the second time, it triggered fears that the move would undermine global efforts to slow climate change and diminish America’s global influence.
A big question hung in the air: Who would step into the leadership vacuum?
I study the dynamics of global environmental politics, including through the United Nations climate negotiations. While it’s still too early to fully assess the long-term impact of the United States’ political shift when it comes to global cooperation on climate change, there are signs that a new set of leaders is rising to the occasion.
World responds to another US withdrawal
The U.S. first committed to the Paris Agreement in a joint announcement by President Barack Obama and China’s Xi Jinping in 2015. At the time, the U.S. agreed to reduce its greenhouse gas emissions 26% to 28% below 2005 levels by 2025 and pledged financial support to help developing countries adapt to climate risks and embrace renewable energy.
Some people praised the U.S. engagement, while others criticized the original commitment as too weak. Since then, the U.S. has cut emissions by 17.2% below 2005 levels – missing the goal, in part because its efforts have been stymied along the way.
Just two years after the landmark Paris Agreement, Trump stood in the Rose Garden in 2017 and announced he was withdrawing the U.S. from the treaty, citing concerns that jobs would be lost, that meeting the goals would be an economic burden, and that it wouldn’t be fair because China, the world’s largest emitter today, wasn’t projected to start reducing its emissions for several years.
Globally, leaders from Italy, Germany and France rebutted Trump’s assertion that the Paris Agreement could be renegotiated. Others from Japan, Canada, Australia and New Zealand doubled down on their own support of the global climate accord. In 2021, President Joe Biden brought the U.S. back into the agreement.
Amazon partnered with Dominion Energy to build solar farms, like this one, in Virginia. They power the company’s cloud-computing and other services. Drew Angerer/Getty Images
On July 24, 2025, China and the European Union issued a joint statement vowing to strengthen their climate targets and meet them. They alluded to the U.S., referring to “the fluid and turbulent international situation today” in saying that “the major economies … must step up efforts to address climate change.”
In some respects, this is a strength of the Paris Agreement – it is a legally nonbinding agreement based on what each country decides to commit to. Its flexibility keeps it alive, as the withdrawal of a single member does not trigger immediate sanctions, nor does it render the actions of others obsolete.
The agreement survived the first U.S. withdrawal, and so far, all signs point to it surviving the second one.
Who’s filling the leadership vacuum
From what I’ve seen in international climate meetings and my team’s research, it appears that most countries are moving forward.
One bloc emerging as a powerful voice in negotiations is the Like-Minded Group of Developing Countries – a group of low- and middle-income countries that includes China, India, Bolivia and Venezuela. Driven by economic development concerns, these countries are pressuring the developed world to meet its commitments to both cut emissions and provide financial aid to poorer countries.
Diego Pacheco, a negotiator from Bolivia, spoke on behalf of the Like-Minded Developing Countries group during a climate meeting in Bonn, Germany, in June 2025. IISD/ENB | Kiara Worth
China, motivated by economic and political factors, seems to be happily filling the climate power vacuum created by the U.S. exit.
In 2017, China voiced disappointment over the first U.S. withdrawal. It maintained its climate commitments and pledged to contribute more in climate finance to other developing countries than the U.S. had committed to – US$3.1 billion compared with $3 billion.
China’s interest in South America’s energy resources has been growing for years. In 2019, China’s special representative for climate change, Xie Zhenhua, met with Chile’s then-ministers of energy and environment, Juan Carlos Jobet and Carolina Schmidt, in Chile. Martin Bernetti/AFP via Getty Images
The British government has also ratcheted up its climate commitments as it seeks to become a clean energy superpower. In 2025, it pledged to cut emissions 77% by 2035 compared with 1990 levels. Its new pledge is also more transparent and specific than in the past, with details on how specific sectors, such as power, transportation, construction and agriculture, will cut emissions. And it contains stronger commitments to provide funding to help developing countries grow more sustainably.
In terms of corporate leadership, while many American businesses are being quieter about their efforts, in order to avoid sparking the ire of the Trump administration, most appear to be continuing on a green path – despite the lack of federal support and diminished rules.
USA Today and Statista’s “America’s Climate Leader List” includes about 500 large companies that have reduced their carbon intensity – carbon emissions divided by revenue – by 3% from the previous year. The data shows that the list is growing, up from about 400 in 2023.
What to watch at the 2025 climate talks
The Paris Agreement isn’t going anywhere. Given the agreement’s design, with each country voluntarily setting its own goals, the U.S. never had the power to drive it into obsolescence.
The question is whether developed and developing country leaders alike can navigate two pressing needs – economic growth and ecological sustainability – without compromising their leadership on climate change.
This year’s U.N. climate conference in Brazil, COP30, will show how countries intend to move forward and, importantly, who will lead the way.
Government health datasets were altered without documentation, Lancet study shows: https://journalistsresource.org/home/federal-health-data-modification-lancet/
Researchers examined more than 200 federal datasets and found that nearly half of them were altered between January and March. In most, the term “gender” was replaced with “sex.”
For months now, researchers and journalists have been documenting the disappearance of federal health data and monitoring changes to government websites. Now, a new analysis finds that some of the existing datasets have also been modified, most of them lacking a notice or log about the change.
Researchers compared more than 200 federal datasets that were available between January and March with their archived versions and found that nearly half were altered. In most cases, the word “gender” was changed to “sex.” Only 15 of the altered datasets included a note about the modification.
“The lack of transparency is a particular concern,” says Janet Freilich, a professor at Boston University School of Law and co-author of the study, which was published in The Lancet in July.
Alterations were made across multiple federal agencies, including the Department of Veterans Affairs and the Centers for Disease Control and Prevention. The reason for the modifications was not documented in the datasets, but they coincide with a January 20 presidential directive instructing federal agencies to use the term “sex” instead of “gender.”
Federal health datasets have been a major source of information for scientists, and undocumented changes to existing data can undermine confidence in government statistics and distort research.
“There are two levels of harm here,” says Freilich, a patent lawyer by training, who has been following changes to the federal data in recent months. “If you think you’re looking for whatever the column title reflects, but the column — the underlying data — actually reflects something else, then you’re going to get a wrong answer. But the second level of harm is, this really impairs trust in federal data.”
A screenshot of CDC’s Youth Risk Behavior Surveillance System, captured on Aug. 18, 2025.
In March, Freilich co-wrote a paper in The New England Journal of Medicine on the disappearing data, finding that from Jan. 21 to Feb. 11, 2025, the Centers for Disease Control and Prevention had removed 203 databases.
“I’m not expecting this information to come back,” Freilich says. “I just plead for transparency.”
Michelle Kaufman, an associate professor and director of the Gender Equity Unit at the Johns Hopkins Bloomberg School of Public Health, who was not involved in the Lancet study, said that while most people are aware that several federal datasets have been taken down, “this actual doctoring of it takes it to the next level.”
“I’ve been telling my students, ‘You might want to find other data sets that aren’t connected to the U.S. government, because we don’t know the accuracy at this point,’” Kaufman says.
She has also been advising her students to immediately download federal datasets they might need for research.
“You don’t know if it’s going to be there tomorrow,” she says.
The study and its findings
Freilich and her co-author Aaron Kesselheim, a professor of medicine at Harvard Medical School, examined metadata from more than 200 datasets from the Department of Health and Human Services, the Centers for Disease Control and Prevention, and the Department of Veterans Affairs, covering Jan. 20 to March 25, 2025.
Under the OPEN Government Data Act, federal agencies keep lists of information about all their datasets, called metadata, including a unique ID, title, creation date, description, and content of each dataset. These lists are collected from each agency regularly by Data.gov, which acts as a central hub that brings together datasets from across the federal government and other sources.
Using Microsoft Word’s comparison tool, the authors then manually compared current datasets to the archived versions recorded by the Internet Archive. They focused on word changes, not numerical data. Researchers also didn’t track changes to the wording on government websites.
In one example, the authors identified a modified Department of Veterans Affairs dataset about veteran health care use in 2021, in which a column titled “gender” was renamed “sex.” Those words were also changed in the dataset’s title and description. Before March 5, the dataset had not been changed since its publication in 2022.
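The authors did this comparison by hand with Microsoft Word’s compare tool, but the same word-level check can be sketched programmatically with Python’s standard `difflib`. The strings below are invented, modeled loosely on the dataset title change described above:

```python
import difflib

def word_changes(old_text, new_text):
    """Return (removed, added) word-run pairs where two strings differ.

    A word-level analogue of a document compare: tokenize on whitespace,
    align the two word sequences, and report only the substitutions.
    """
    old_words, new_words = old_text.split(), new_text.split()
    sm = difflib.SequenceMatcher(None, old_words, new_words)
    return [
        (" ".join(old_words[i1:i2]), " ".join(new_words[j1:j2]))
        for op, i1, i2, j1, j2 in sm.get_opcodes()
        if op == "replace"
    ]

# Invented example strings for illustration
old = "Veteran health care use by gender, 2021"
new = "Veteran health care use by sex, 2021"
print(word_changes(old, new))  # [('gender,', 'sex,')]
```

Run across a current metadata record and its Internet Archive snapshot, a function like this would surface exactly the kind of undocumented “gender” → “sex” substitutions the study reports.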
Because many datasets did not have an archived copy, the Lancet study may not be representative of all datasets in federal repositories, the authors note. But in addition to documenting undisclosed changes to some of the existing datasets, the study reveals an increase in the pace of data alterations since January: 4% of changes happened in late January, while 72% occurred in March.
Researchers also found:
In 25% of altered datasets, the change from “gender” to “sex” made the data descriptions more consistent, as the word “gender” had been applied to data also labeled as “sex.”
In four datasets, “social determinants of health” was changed to “non medical factors.” In one, “socioeconomic status” was changed to “socioeconomic characteristics.” In another existing dataset, the question “Are PTSD clinical trials gender diverse?” was changed to “Do PTSD clinical trials include men and women?”
Of the altered datasets, 89 involved changes in classification or categorization, such as changing the column headers. About 25 had modified descriptive text, such as tags and paragraph overview.
To safeguard data integrity, Freilich and Kesselheim call for stronger transparency measures, independent archiving, and international alternatives.
“Gender” and “sex” in research
Sex and gender capture different information in research.
Sex usually refers to a person’s biological characteristics, whereas gender refers to socially constructed roles and norms, according to a 2023 paper by Kaufman, published in the Bulletin of the World Health Organization.
“So just because you were born as a designated sex category at birth, it does not mean that, psychologically, that’s how you feel, and that’s where the separation of biological sex comes in as separate to the social construction of gender,” Kaufman says.
Gender has been a focus of research, particularly in psychology, since the 1970s. Researchers still conflate the two concepts, which can make it difficult to compare studies. However, overall, gender and sex are not interchangeable in most studies and surveys. Gender captures a wider range of social experiences of people, compared with sex, which only captures male and female.
“Whether you’re talking about intersex people biologically, or nonbinary, third gender, transgender people in terms of identity, it erases that experience because you have to fit people into one of those two categories, male or female,” Kaufman says.
In addition, if a study aims to investigate the social constructions of gender and how roles and norms might have impacted health outcomes, using “sex” would make it difficult to interpret the results.
“Is it about the biology, the hormones, the chemical makeup of the person that led to these health outcomes, or was it their roles as a woman, or expectations as a man, that then led them down a certain path to those health outcomes?” Kaufman says. “By going back to this sort of gender essentialism of sex being a binary and that lining up completely with gender is sort of backtracking a lot of the research that’s been done over the past several decades.”
Where to find archived data
There’s no perfect alternative to the government databases.
“There’s a lot that can be done on the non-governmental side, but the government has such a leg up in the scope of information it can gather and its authority to gather information that others just can’t get access to,” Freilich says.
Since January, several volunteer groups and newsrooms have also been downloading and archiving government datasets and making them available to the public.
We’ve curated some of those resources below.
The Data Rescue Project is a collaboration among a group of data organizations and members of the Data Curation Network. The project — a clearinghouse for preserving at-risk public information — has created a Data Rescue Tracker and a Portal to catalogue ongoing public data rescue efforts.
Harvard Dataverse: Harvard Dataverse is a large publicly available repository of data from researchers at Harvard University and around the world, covering a range of topics from astronomy to engineering to health and medicine.
DataLumos is a crowdsourced repository for at-risk US federal government data. DataLumos is hosted by ICPSR, an international consortium of more than 800 academic institutions and research organizations.
Public Environmental Data Project: Run by a coalition of volunteers from several organizations, including Boston University and the Harvard Climate and Health CAFE Research Coordinating Center, the project has compiled a large list of federal databases and tools, including the CDC’s Social Vulnerability Index and Environmental Justice Index.
Dataindex.us is a collaborative effort to monitor changes to federal datasets.
The 19th, an independent nonprofit newsroom reporting on gender, politics, and policy, has archived government documents, including the CDC’s maternal mortality data, the CDC’s abortion and contraception data, research studies on teens, and guidelines from the National Academies on how to collect data on gender and sexuality.
Investigative Reporters & Editors: The nonprofit journalism organization has downloaded more than 120 data sets from the federal websites, as recently as November. Some of those data sets include Adverse Event Reporting System, Behavioral Risk Factor Surveillance System, Medical Device Reports, Mortality Multiple Cause-of-Death Database, National Electronic Injury Surveillance System (NEISS), National Practitioner Databank, Nuclear Materials Events Database, OSHA Workplace Safety Data, and Social Security Administration Death Master File. IRE members can contact the organization and order the data sets. The organization has been providing data to members since the early 1990s.
Naseem Miller is the senior editor for health at The Journalist’s Resource. She joined JR in 2021 after working as a health reporter in local newspapers and national medical trade publications for two decades. Immediately before joining JR, she was a senior health reporter at the Orlando Sentinel, where she was part of the team that was named a 2016 Pulitzer Prize finalist for its coverage of the Pulse nightclub mass shooting. You can follow her on Bluesky.
The Oral History Association invites you to join one of our standing and/or award committees. Committees are the backbone of many of the Association’s achievements and activities. Among the duties is advocating for oral history and the Association, developing education and public programming initiatives, fostering networking opportunities, fundraising for the Association, building on the Association […]
I’m so excited that I’m writing today’s newsletter from my beloved hometown on the Mediterranean coast north of Barcelona.
I truly needed this break, and I’m enjoying every second I spend with my family… even waking up earlier than anyone to still send you these newsletters that are so important to me.
Climate Ages is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Here we go, much love from the place that saw me grow up into my nerdy self!
There was a time when I believed research spoke for itself.
If the science was rigorous, if the paper was peer-reviewed, then surely someone outside the academic circles, somewhere, would recognize its value.
It wasn’t until I started framing my work through the lens of why it mattered that everything changed.
That’s when:
More people cared. Funders listened. Collaborators reached out. And I finally felt aligned with the reason I became a scientist in the first place.
Let’s talk about how this works and how you can apply it to your own work.
Why framing matters more than you think
Imagine you’re working on soil microbe interactions. Important stuff. But say your abstract starts with: “Species-specific microbial consortia mediate nitrogen flux under fluctuating precipitation scenarios.”
Now imagine you reframe it like this:
“Farmers around the world are struggling to grow food in increasingly unpredictable weather. Our team discovered how the right microbes can help soil hold onto nutrients during droughts, boosting resilience for crops and communities.”
The science didn’t change. The framing did.
You went from “technical niche” to “climate resilience and food security.”
That’s not spin. That’s storytelling rooted in purpose.
The 3-Part Shift: From Research → Relevance → Resonance
Here’s the simple framework I use with Climate Ages stories and with my students:
What is the research? State it clearly, without jargon.
Why does it matter to someone specific? Who benefits? What changes? What’s at stake?
How can you tell that story in human language? Use real-world comparisons. Ground it in emotions, not just data.
Example:
“We studied fossil reef isotopes to understand abrupt extinction events.”
“Ancient reefs reveal how fast oceans can change. Understanding past collapses can help protect coral reefs today.”
Real talk: This is also how you unlock funding
Funders don’t fund ideas. They fund outcomes.
If your grant proposal sounds like a technical exercise, it may get lost in the pile. But if you can clearly show:
What’s at stake
Who it helps
Why it matters now
…you’re speaking their language.
That’s how you go from “another proposal” to “an urgent opportunity to fund.”
Try this exercise today
Pick a project you’ve worked on recently. Now rewrite it using this template:
Problem: What issue does your research help solve?
Action: What did you do or discover?
Impact: Why does it matter in the real world?
Here’s another example:
Before: “We assessed groundwater salinization trends in peri-urban aquifers under increased anthropogenic stressors.”
After: “Millions rely on groundwater to drink, farm, and live. Our study shows how urban sprawl is quietly salting our water, and what can be done to protect it.”
Which one do you think a journalist, funder, or policymaker is more likely to engage with?
Exactly.
Science with purpose isn’t fluff. It’s strategy.
When you frame your research through the lens of purpose:
You clarify your message. You build trust with non-scientists. You create ripple effects beyond citations.
This is what we do at Climate Ages’ Outreach Lab every day: help scientists like you connect the dots between curiosity, credibility, and change.
Because the world doesn’t need more research papers that never get translated to a lay audience.
It needs more scientists brave enough to say: “This matters. And here’s why.”
Your turn: Today’s exercise
Pick one of your current or past projects. Rewrite it using the Problem → Action → Impact method. Then post it on LinkedIn, share it in your newsletter, or pitch it to a journalist.
Or just send it to me: I’d love to see how you reframe your science through purpose.
Bridge your science with the world. It’s ready to listen.
P.S. One last note: I’m opening the first Outreach Lab cohort in mid-September.
It’s a program designed to help you build a profitable and scalable science newsletter that attracts collaborations, brings funding, and increases your impact as a scientist.