Scientists alarmed as Rubin Observatory changes biography of astronomer Vera Rubin amid Trump’s push to end DEI efforts

The article by Sharmila Kuthunur highlights a subtle change that might have gone unnoticed but underscores how easily history can be rewritten—especially when it’s written in code rather than in stone.

The recent alteration of Vera Rubin’s biography on the Rubin Observatory’s website serves as a stark reminder of the broader need to capture and preserve oral histories before they can be rewritten or erased. Oral histories provide an unfiltered record of lived experiences. When governments or institutions attempt to reshape narratives for political purposes, these firsthand accounts become even more valuable as a safeguard against revisionism. By transcribing and archiving oral histories, we ensure that diverse voices and untold stories remain protected, accessible, and truthful for future generations.

Astronomers are expressing disappointment and alarm after the federally funded Rubin Observatory altered the biography of renowned astronomer Vera Rubin, for whom the facility is named, on its website. The amended version curtails her legacy of championing women in science and removes all mentions of the observatory’s efforts to reduce barriers for women and other historically underrepresented groups in the field.

“No executive order, no political edict is going to undermine or end our efforts to make the scientific workforce look more like our people,” astronomer John Barentine told Space.com. “If anything, it is giving us more encouragement to continue to do this work, because it is the morally, philosophically and politically right thing to do.”

The edits, first reported by ProPublica on Jan. 30, came as federal agencies across the government scrambled to revamp their websites to comply with an executive order issued by President Donald Trump that ends funding for diversity, equity, and inclusion (DEI) efforts and removes all mentions of them from public-facing websites.

On Jan. 27, a portion of Rubin’s bio titled “She advocated for women in science” was removed entirely before being republished later that day in a diluted form, ProPublica reported. As of publication of this story Tuesday (Feb. 11), the altered bio still excludes a paragraph that originally read: “Science is still a male-dominated field, but Rubin Observatory is working to increase participation from women and other people who have historically been excluded from science. Rubin Observatory welcomes everyone who wants to contribute to science, and takes steps to lower or eliminate barriers that exclude those with less privilege.”

One sentence in the final paragraph, which originally read, “Vera Rubin offers an excellent example of what can happen when more minds participate in science,” was changed to replace “more” with “many,” altering the meaning from emphasizing the need for diverse perspectives to simply highlighting a high number of people.

“This is the story of what happened in her life,” Yvette Cendes, a radio astronomer at the University of Oregon, told Space.com. “She was a huge champion for women in science in particular because she faced things that were discriminatory for women — diminishing those stories is pretty disturbing, frankly.”

Other pages on the observatory’s website, including the jobs and staff bio pages, have also been modified to erase mentions of diversity and inclusion efforts. The observatory; its funder, the National Science Foundation; and the White House did not respond to Space.com’s request for comment on Feb. 3.

Rubin earned worldwide recognition for changing the way we think of the universe by showing that galaxies are mostly composed of dark matter, the mysterious, invisible substance that makes up much of the cosmos. Her research provided crucial evidence for dark matter’s existence through observations of stars in our neighboring galaxy Andromeda, where she found that stars orbited at roughly the same speed regardless of their distance from the galactic center — an indication of “missing” mass, which she proposed could be explained by dark matter. Her findings shifted scientific consensus toward accepting dark matter as a fundamental component of the universe, opening new realms in astronomy and physics.
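To see why a flat rotation curve implies missing mass, here is a minimal numerical sketch (my illustration, with made-up numbers rather than Rubin’s actual Andromeda measurements): if a galaxy’s visible mass were concentrated near its center, orbital speeds should fall off with distance, whereas a flat curve implies mass that keeps growing with radius.

```python
# Illustrative sketch, not Rubin's data: contrast the Keplerian fall-off
# expected if all of a galaxy's mass sat at its center with the flat
# rotation curve Rubin actually observed.
import numpy as np

G = 4.30e-6          # gravitational constant, kpc * (km/s)^2 / solar mass
M_visible = 1.0e11   # hypothetical central (visible) mass, solar masses

radii = np.linspace(2.0, 30.0, 8)   # distances from the galactic center, kpc

# Keplerian prediction: v falls off as 1/sqrt(r) outside the central mass.
v_keplerian = np.sqrt(G * M_visible / radii)

# What flat rotation curves look like: roughly constant speed at all radii.
v_flat = np.full_like(radii, 230.0)  # km/s, illustrative value

# Enclosed mass implied by the flat curve: M(r) = v^2 * r / G. It grows
# linearly with radius, far outstripping the visible mass, which is the
# "missing mass" Rubin's work attributed to dark matter.
M_implied = v_flat**2 * radii / G

for r, vk, m in zip(radii, v_keplerian, M_implied):
    print(f"r = {r:5.1f} kpc | Keplerian v = {vk:6.1f} km/s | "
          f"flat-curve enclosed mass = {m:.2e} M_sun")
```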

Beyond her scientific achievements, Rubin also paved the way for women in science. Perhaps most notably, in 1964, she battled to gain access to observe at the famed Palomar Observatory in California, becoming the first woman officially allowed to use its telescopes. Colleagues recall that when Rubin noticed the only restroom at the observatory was labeled “MEN,” she cut out a tiny paper skirt and taped it to the image of a man on the door. “She turned around and said, ‘Now you have a ladies’ room’ and then she got to work — that was Vera Rubin,” reads a 2021 statement from former Carnegie Science President Eric Isaacs.

Throughout her career, she championed women in the field. As one example, “she frequently would see the list of speakers [at a conference],” former colleague Neta Bahcall of Princeton University told Astronomy.com, “and if there were very few or no women speakers, she would contact [the organizers] and tell them they have a problem and need to fix it.”

“But what if she hadn’t been that fierce? What if she hadn’t been the personality that we have all come to know — the unstoppable warrior?” Isaacs said in the Carnegie Science statement. “And here’s the question that really haunts me, which is how many Vera Rubins have we lost to these kinds of obstacles?”

As similar barriers threaten to resurface amid the Trump administration’s ongoing efforts to dismantle initiatives aimed at improving diversity in science, the astronomy community appears steadfast in its refusal to let decades of progress be reversed.

“Astronomy is not going to let Vera’s contributions be forgotten,” said Barentine. Various groups are actively archiving content that has already been removed from federal websites, as well as content that could still be erased.
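As one concrete example of what such archiving can look like (an assumed workflow, not any particular group’s actual tooling), a public page can be captured and checked using the Internet Archive’s Wayback Machine endpoints:

```python
# Assumed workflow: ask the Wayback Machine to snapshot a page via its
# public Save Page Now endpoint, then look up the latest archived copy.
import requests

def archive_page(url: str) -> int:
    """Request a Wayback Machine capture of `url`; return the HTTP status."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    return resp.status_code

def latest_snapshot(url: str) -> str | None:
    """Return the URL of the most recent archived copy of `url`, if any."""
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=30)
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

if __name__ == "__main__":
    page = "https://rubinobservatory.org/"  # example target, not an endorsement
    print("save request returned HTTP", archive_page(page))
    print("latest snapshot:", latest_snapshot(page))
```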

“The idea that they can somehow obliterate these sources is dead wrong — scientists in general and astronomers in particular are not going to take these threats lying down,” he said. “But we have a long road ahead and I expect there’ll be times when that road will be very difficult to walk.”

He declined to disclose the specifics of these efforts, but noted that “the forces aligned against this should be aware that it’s happening, and they won’t be able to stop it.”

Even at NASA, offices associated with DEI initiatives were shut down during Trump’s first few days in office. A recently established, high-profile program called Here to Observe (H2O), which paired undergraduate students from underrepresented groups with scientists running NASA missions, has been grounded. 404 Media, an independent, journalist-founded news outlet, reported that NASA employees were told to “drop everything” and “scrub mentions” of a list of words from public-facing sites, including “Indigenous People,” “Equity,” “Accessibility,” “Environmental Justice,” as well as “Anything specifically targeting women (women in leadership, etc.).” NASA has since removed “inclusion” as one of its core values.

The flurry of changes triggered by the directive has led to the erasure of articles the agency published in years past featuring NASA astronomers from underrepresented communities. Some of those pages now display schedules of past SpaceX launches instead of the original prose, though the original titles appear to remain. Agency employees have also been instructed to remove their pronouns from all work communications and to follow a pre-designed signature block adopted by the agency, NPR reported.

Astrobiologist Michaela Musilova, who served as the Director of the HI-SEAS space research station in Hawaii, told Space.com that her efforts to encourage more women, people of color and LGBTQ+ scientists to join her simulated missions to the moon and Mars resulted in more applicants from these communities.

“Representation matters — some of them told me that they only applied because they saw that others like them were successful in this sector too,” she said. During those simulated missions, “the more diverse a crew was, the more successful a mission ended up being — the team got along better, was able to problem solve more efficiently and they were also more productive with their research projects.”

The impacts of the ongoing changes, which have prompted many talented and experienced people to leave the space agency, “will likely be long-term and they could cause many interesting projects to not get pursued or finished,” she said.

On May 17, 1996 — nearly 50 years after her own graduation in 1948 — Rubin addressed the graduating class at the University of California, Berkeley, saying: “I hope that you will fight injustice and discrimination in all its guises. I hope you will value diversity among your friends, among your colleagues, and, unlike some of your regents, among the student body population.”

“I hope that when you are in charge, you will do better than my generation has.”

Voice of Witness: “Oral History in Practice” Webinar Series (4/23, 5/7, 5/21)

What does oral history look like in practice? What goes into community-rooted storytelling projects and what are the outcomes? Voice of Witness is hosting a series of intimate conversations with practitioners who have developed and activated dynamic oral history projects. We’ll explore the connections between storytelling and community building, liberation, ethics, civic engagement, public art, […]

2025 Annual Meeting – Call for Posters

Deadline: May 16, 2025. Up to four people may present as part of a poster submission at the 2025 Annual Meeting. A poster is primarily a visual representation of a topic or project. An effective poster presentation highlights, with a visual display, the main points or components of a project. Text and images should be […]

Oral Historian at William & Mary

Location: Williamsburg, VA. Are you a trained oral historian looking for a welcoming community in which to learn and contribute your talents? The Special Collections Research Center, within the Earl Gregg Swem Library at William & Mary, is seeking a person to lead the oral history program at William & Mary Libraries. In this role, […]

AI ‘brain decoder’ can read a person’s thoughts with just a quick brain scan and almost no training

Scientists have made new improvements to a “brain decoder” that uses artificial intelligence (AI) to convert thoughts into text.

Their new converter algorithm can quickly train an existing decoder on another person’s brain, the team reported in a new study. The findings could one day support people with aphasia, a brain disorder that affects a person’s ability to communicate, the scientists said.

Thanks for reading Capturing Voices! Subscribe for free to receive new posts and support my work.

A brain decoder uses machine learning to translate a person’s thoughts into text, based on their brain’s responses to stories they’ve listened to. However, past iterations of the decoder required participants to listen to stories inside an MRI machine for many hours, and these decoders worked only for the individuals they were trained on.

“People with aphasia oftentimes have some trouble understanding language as well as producing language,” said study co-author Alexander Huth, a computational neuroscientist at the University of Texas at Austin (UT Austin). “So if that’s the case, then we might not be able to build models for their brain at all by watching how their brain responds to stories they listen to.”

In the new research, published Feb. 6 in the journal Current Biology, Huth and co-author Jerry Tang, a graduate student at UT Austin, investigated how they might overcome this limitation. “In this study, we were asking, can we do things differently?” Huth said. “Can we essentially transfer a decoder that we built for one person’s brain to another person’s brain?”

The researchers first trained the brain decoder on a few reference participants the long way — by collecting functional MRI data while the participants listened to 10 hours of radio stories.

Then, they trained two converter algorithms on the reference participants and on a different set of “goal” participants: one using data collected while the goal participants spent 70 minutes listening to radio stories, and the other using data collected while they spent 70 minutes watching silent Pixar short films unrelated to the radio stories.

Using a technique called functional alignment, the team mapped out how the reference and goal participants’ brains responded to the same audio or film stories. They used that information to train the decoder to work with the goal participants’ brains, without needing to collect multiple hours of training data.
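A rough sketch of that idea (my simplification with simulated data, not the study’s code): treat the converter as a regularized linear map, fit on time-aligned responses to the shared stimuli, from the goal participant’s voxel space into the reference participant’s voxel space, where the already-trained decoder operates.

```python
# Simplified illustration of cross-subject functional alignment, with
# simulated data standing in for real fMRI (not the study's actual code).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 500     # fMRI volumes recorded during the shared stimulus
n_goal_voxels = 800    # voxel count for the "goal" participant (made up)
n_ref_voxels = 1000    # voxel count for the reference participant (made up)

# Stand-in responses of both brains to the same radio story or silent film.
X_goal = rng.standard_normal((n_timepoints, n_goal_voxels))
true_map = 0.05 * rng.standard_normal((n_goal_voxels, n_ref_voxels))
Y_ref = X_goal @ true_map + 0.1 * rng.standard_normal((n_timepoints, n_ref_voxels))

# The "converter": a linear map from goal-voxel space into reference-voxel
# space, fit on the time-aligned shared-stimulus data.
converter = Ridge(alpha=10.0)
converter.fit(X_goal, Y_ref)

# At decode time, new goal-participant activity is projected into the
# reference space, where the decoder trained on the reference brain runs.
X_goal_new = rng.standard_normal((10, n_goal_voxels))
X_in_ref_space = converter.predict(X_goal_new)
print(X_in_ref_space.shape)  # (10, 1000): ready for the reference decoder
```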

Next, the team tested the decoders using a short story that none of the participants had heard before. Although the decoder’s predictions were slightly more accurate for the original reference participants than for the ones who used the converters, the words it predicted from each participant’s brain scans were still semantically related to those used in the test story.

For example, a section of the test story included someone discussing a job they didn’t enjoy, saying “I’m a waitress at an ice cream parlor. So, um, that’s not … I don’t know where I want to be but I know it’s not that.” The decoder using the converter algorithm trained on film data predicted: “I was at a job I thought was boring. I had to take orders and I did not like them so I worked on them every day.” Not an exact match — the decoder doesn’t read out the exact sounds people heard, Huth said — but the ideas are related.
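To see what “semantically related but not word-for-word” can mean in practice, here is a hypothetical check of my own (not the study’s evaluation method) using off-the-shelf sentence embeddings:

```python
# Hypothetical illustration: score how close the decoded sentence is to the
# actual one via sentence embeddings (not the study's evaluation code).
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

actual = ("I'm a waitress at an ice cream parlor. I don't know where "
          "I want to be but I know it's not that.")
decoded = ("I was at a job I thought was boring. I had to take orders "
           "and I did not like them so I worked on them every day.")
unrelated = "The spacecraft entered orbit after a six-year cruise."

vecs = model.encode([actual, decoded, unrelated])
print("actual vs decoded:  ", cosine_similarity([vecs[0]], [vecs[1]])[0, 0])
print("actual vs unrelated:", cosine_similarity([vecs[0]], [vecs[2]])[0, 0])
# The decoded sentence should score noticeably higher than the unrelated one.
```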

“The really surprising and cool thing was that we can do this even not using language data,” Huth told Live Science. “So we can have data that we collect just while somebody’s watching silent videos, and then we can use that to build this language decoder for their brain.”

Using the video-based converters to transfer existing decoders to people with aphasia may help them express their thoughts, the researchers said. It also reveals some overlap between the ways humans represent ideas from language and from visual narratives in the brain.

“This study suggests that there’s some semantic representation which does not care from which modality it comes,” Yukiyasu Kamitani, a computational neuroscientist at Kyoto University who was not involved in the study, told Live Science. In other words, it helps reveal how the brain represents certain concepts in the same way, even when they’re presented in different formats.

The team’s next steps are to test the converter on participants with aphasia and “build an interface that would help them generate language that they want to generate,” Huth said.
