Metatheory of Change Thinking and Accompanying Change Differently

As part of his Metatheory of Change, Dr. Stern talks about self-interruption. It's very difficult to find information about self-interruption, but in transcribing speech it's something we encounter every day: in research interviews, in oral histories, in patient narratives.

When a parent is asked about receiving the initial diagnosis of their child’s cancer, they stutter and stumble, no matter how long they’ve been dealing with the news. Those stutters and stumbles speak volumes.

Thanks for reading Capturing Voices! Subscribe for free to receive new posts and support my work.

Here’s what he has to say:

When people interrupt their inner processes (self-interruption), a motive is required for this.

Why does someone push back their tears? Why does someone hide a tenderly emerging insecurity? Why does someone pretend they are not angry?

These are everyday situations which we are used to. Because of the interruption, the experience remains more diffuse, and the inner process terminates more quickly. Why do people use such self-interruptions?

When internal reasons are at play, then, as a rule, it is the expectation that the consequences of the interruptions will be less grave than the consequences of permitting them. Thus, fears are involved which often play in the background and are not questioned. The simple question "Why do you not allow yourself to cry?" will often cause a degree of alienation. However, if something new is really supposed to happen, then it needs an interruption of the self-interruption of the self-perception. From a metatheory viewpoint this is a core function of counselling. Only then can either the need, and the fear associated with it, come into focus, or the ways and means by which the client works himself up into an unfruitful inner dialogue become visible.

One of the most favoured ways in which people interrupt their self-perception is through speaking. However, when speaking serves to reduce experiencing ('talking something away'), then it is important that the counsellor prevents the speaking. Otherwise highly fruitful chances in the present moment are lost and the possible intensity in the counselling sequence is destroyed (<a href="https://metatheorie-der-veraenderung.info/wpmtags/daniel-stern/">Daniel Stern</a>).

I’m going to continue to explore self-interruption because I think it is a critical part of self-expression.

Annual Meeting Author Signing Interest Call

Calling All Authors! The Annual Meeting will feature an Author Signing on Thursday, October 16, from 10:00 to 10:30 a.m. This special event gives selected authors the opportunity to connect with readers and showcase their work. Each participating author will be assigned table-top space to display and sell books. Please note that authors are responsible […]

More from Chicago Manual of Style’s July Q&A

Q. “. . . go to high school in Washington[, D.C.].” Is the final period necessary? Delete it? (Yes, it’s the last of a longer quotation.)

A. In your version, where you’re using brackets to supply not just the abbreviation but the comma that would normally go with it, you don’t need that final period; we can assume that your bracketed interpolation includes all sentence punctuation, including any final period. And that’s what we might expect if you were supplying the end of a sentence that’s missing or illegible in the source. In other words, your brackets restore the end of a sentence that would normally be punctuated like this:

“. . . go to high school in Washington, D.C.”

But if you’re simply clarifying for readers that the text is referring to the district rather than the state, don’t add that comma. Instead, put “D.C.” in brackets and add the sentence-ending period:

“. . . go to high school in Washington [D.C.].”

That extra period is needed for the same reason you’d add a period to the end of a sentence like this one (from CMOS 6.13):

His chilly demeanor gave him an affinity for the noble gases (helium, neon, etc.).

But there would be no periods in an initialism like DC in current Chicago style, so you’d normally write this:

“. . . go to high school in Washington [DC].”

See also CMOS 6.110 (which has a similar set of examples but without periods) and 12.70–74 (on editorial interpolations and clarifications).

This one has been driving me crazy!

Abbreviations

Q. Is there any chance that “am” and “pm” will become acceptable as correct forms of “a.m.” and “p.m.”?

A. There are six ways to write the abbreviations for ante meridiem (before noon) and post meridiem (after noon):

All caps with periods: 10 A.M., 10 P.M.
All caps without periods: 10 AM, 10 PM
Small caps with periods: 10 A.M., 10 P.M.
Small caps without periods: 10 AM, 10 PM
Lowercase with periods: 10 a.m., 10 p.m.
Lowercase without periods: 10 am, 10 pm

Each of these—including “am” and “pm”—is a legitimate choice. For nearly a century, Chicago’s preferred form was the third: small capital letters with periods. This preference, however, applied only to published documents (among other factors, small capitals weren’t an option on typewriters).

When we changed our preference to “a.m.” and “p.m.” (in 2003, with the publication of CMOS 15), the growth of computers in writing and publishing played a role: small caps require extra steps to apply, and they don’t always translate well across applications (when they’re even available). We could have flipped a coin and settled on all-caps “AM” and “PM” (but not “A.M.” and “P.M.”; Chicago style now omits periods in abbreviations that include two or more capital letters). When we instead chose lowercase “a.m.” and “p.m.,” we liked the fact that they’re unambiguous (“AM” and “PM” both have a number of other meanings), and we hoped the periods would help readers recognize in any context that these are abbreviations, not words.

But if you don’t like the periods, don’t fret: Merriam-Webster labels “am” and “pm” as British variants, so you’re hardly alone in your preference. If you’re being published, however, be prepared to defer to your publisher’s house style, whatever that may be.

Tech industry tried reducing AI’s pervasive bias. Now Trump wants to end its ‘woke AI’ efforts

Trump-aligned lawmakers are pushing AI companies to abandon "woke AI" efforts such as fairness, safety, and bias mitigation, branding them as ideological overreach. But here's the danger: that erases vital protections for racial, gender, and LGBTQ+ communities.

When fairness becomes "woke," what happens to marginalized voices? Human-led, bias-aware language services are crucial when tech is being politically optimized.

By MATT O’BRIEN

Updated 7:00 AM GMT-3, April 27, 2025

CAMBRIDGE, Mass. (AP) — After retreating from their workplace diversity, equity and inclusion programs, tech companies could now face a second reckoning over their DEI work in AI products.

In the White House and the Republican-led Congress, “woke AI” has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to “advance equity” in AI development and curb the production of “harmful and biased outputs” are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies last month by the House Judiciary Committee.

And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and “responsible AI” in its appeal for collaboration with outside researchers. It is instead instructing scientists to focus on “reducing ideological bias” in a way that will “enable human flourishing and economic competitiveness,” according to a copy of the document obtained by The Associated Press.

In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work.

But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive.

Back then, the tech industry already knew it had a problem with the branch of AI that trains machines to “see” and understand images. Computer vision held great commercial promise but echoed the historical biases found in earlier camera technologies that portrayed Black and brown people in an unflattering light.

“Black people or darker skinned people would come in the picture and we’d look ridiculous sometimes,” said Monk, a scholar of colorism, a form of discrimination based on people’s skin tones and other features.

Google adopted a color scale invented by Monk that improved how its AI image tools portray the diversity of human skin tones, replacing a decades-old standard originally designed for doctors treating white dermatology patients.

“Consumers definitely had a huge positive response to the changes,” he said.

Now Monk wonders whether such efforts will continue in the future. While he doesn’t believe that his Monk Skin Tone Scale is threatened because it’s already baked into dozens of products at Google and elsewhere — including camera phones, video games, AI image generators — he and other researchers worry that the new mood is chilling future initiatives and funding to make technology work better for everyone.

“Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune,” Monk said. “But could future funding for those kinds of projects be lowered? Absolutely, when the political mood shifts and when there’s a lot of pressure to get to market very quickly.”

Trump has cut hundreds of science, technology and health funding grants touching on DEI themes, but his influence on commercial development of chatbots and other AI products is more indirect. In investigating AI companies, Republican Rep. Jim Jordan, chair of the judiciary committee, said he wants to find out whether former President Joe Biden’s administration “coerced or colluded with” them to censor lawful speech.

Michael Kratsios, director of the White House’s Office of Science and Technology Policy, said at a Texas event this month that Biden’s AI policies were “promoting social divisions and redistribution in the name of equity.”

The Trump administration declined to make Kratsios available for an interview but quoted several examples of what he meant. One was a line from a Biden-era AI research strategy that said: “Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities.”

Even before Biden took office, a growing body of research and personal anecdotes was attracting attention to the harms of AI bias.

One study showed self-driving car technology has a hard time detecting darker-skinned pedestrians, putting them in greater danger of getting run over. Another study asking popular AI text-to-image generators to make a picture of a surgeon found they produced a white man about 98% of the time, far higher than the real proportions even in a heavily male-dominated field.

Face-matching software for unlocking phones misidentified Asian faces. Police in U.S. cities wrongfully arrested Black men based on false face recognition matches. And a decade ago, Google’s own photos app sorted a picture of two Black people into a category labeled as “gorillas.”

Even government scientists in the first Trump administration concluded in 2019 that facial recognition technology was performing unevenly based on race, gender or age.

Biden’s election propelled some tech companies to accelerate their focus on AI fairness. The 2022 arrival of OpenAI’s ChatGPT added new priorities, sparking a commercial boom in new AI applications for composing documents and generating images, pressuring companies like Google to ease their caution and catch up.

Then came Google’s Gemini AI chatbot — and a flawed product rollout last year that would make it the symbol of “woke AI” that conservatives hoped to unravel. Left to their own devices, AI tools that generate images from a written prompt are prone to perpetuating the stereotypes accumulated from all the visual data they were trained on.

Google’s was no different, and when asked to depict people in various professions, it was more likely to favor lighter-skinned faces and men, and, when women were chosen, younger women, according to the company’s own public research.

Google tried to place technical guardrails to reduce those disparities before rolling out Gemini’s AI image generator just over a year ago. It ended up overcompensating for the bias, placing people of color and women in inaccurate historical settings, such as answering a request for American founding fathers with images of men in 18th century attire who appeared to be Black, Asian and Native American. Google quickly apologized and temporarily pulled the plug on the feature, but the outrage became a rallying cry taken up by the political right.

With Google CEO Sundar Pichai sitting nearby, Vice President JD Vance used an AI summit in Paris in February to decry the advancement of “downright ahistorical social agendas through AI,” naming the moment when Google’s AI image generator was “trying to tell us that George Washington was Black, or that America’s doughboys in World War I were, in fact, women.”

“We have to remember the lessons from that ridiculous moment,” Vance declared at the gathering. “And what we take from it is that the Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens’ right to free speech.”

A former Biden science adviser who attended that speech, Alondra Nelson, said the Trump administration’s new focus on AI’s “ideological bias” is in some ways a recognition of years of work to address algorithmic bias that can affect housing, mortgages, health care and other aspects of people’s lives.

“Fundamentally, to say that AI systems are ideologically biased is to say that you identify, recognize and are concerned about the problem of algorithmic bias, which is the problem that many of us have been worried about for a long time,” said Nelson, the former acting director of the White House’s Office of Science and Technology Policy who co-authored a set of principles to protect civil rights and civil liberties in AI applications.

But Nelson doesn’t see much room for collaboration amid the denigration of equitable AI initiatives.

“I think in this political space, unfortunately, that is quite unlikely,” she said. “Problems that have been differently named — algorithmic discrimination or algorithmic bias on the one hand, and ideological bias on the other — will regrettably be seen as two different problems.”

The Meaning of American Independence

AV: You know, I think the 4th of July has always been complex to different parts of our society.

And maybe it was an easy question to answer a few years ago, a decade ago. It was an easy question to answer that we are singular, unique, the shining city on a hill. It’s become a lot more challenging. It’s become a lot more challenging because I think we’re discovering things about ourselves,

maybe aspects that elements of our society, minority groups in particular, kind of took for granted and embraced America in a beautiful way, maybe, you know, in a special way as critical patriots, understanding that we were imperfect, that we were created to strive to be a more perfect union.

And I think all of us could see that now, which is actually a pretty interesting and amazing opportunity to learn about ourselves and how much is left undone. And I’m kind of reflecting on that as I’m thinking about this really powerful question. 249, you know, with a big round number, 250, right around the corner. How far we’ve come, how much of this more perfect union we’ve created, and how much more work is left to do. And clearly, there is a lot of work to do.

Why It Matters
The Meaning of American Independence Day
Thank you for joining me for a Substack Live with Mike Madrid! Why It Matters is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber…

Listen now

Call for conference volunteers

The Oral History Association is seeking enthusiastic volunteers to help make the 2025 Annual Meeting a success! From welcoming attendees at registration, to supporting sessions as a room runner, to guiding participants on local tours, volunteers play a vital role in creating a smooth and welcoming experience. In return, volunteers receive free meeting registration (with […]