An escalating campaign, led by Rep. Jim Jordan (R-Ohio) and other Republicans, has cast a pall over programs that study political disinformation and the quality of medical information online.
Facing litigation, Stanford University officials are discussing how they can continue tracking election-related misinformation through the Election Integrity Partnership (EIP), a prominent consortium that flagged social media conspiracies about voting in 2020 and 2022, several participants told The Washington Post. The coalition of disinformation researchers may shrink and also may stop communicating with X and Facebook about their findings.
The National Institutes of Health froze a $150 million program intended to advance the communication of medical information, citing regulatory and legal threats. Physicians told The Post that they had planned to use the grants to fund projects on noncontroversial topics such as nutritional guidelines and not just politically charged issues such as vaccinations that have been the focus of the conservative allegations.
NIH officials sent a memo in July to some employees, warning them not to flag misleading social media posts to tech companies and to limit their communication with the public to answering medical questions.
“If the question relates in any way to misinformation or disinformation, please do not respond,” read the guidance email, sent in July after a Louisiana judge blocked many federal agencies from communicating with social media companies. NIH declined to comment on whether the guidance was lifted in light of a September appeals court ruling, which significantly narrowed the initial court order.
“In the name of protecting free speech, the scientific community is not allowed to speak,” said Dean Schillinger, a health communication scientist who planned to apply to the NIH program to collaborate with a Tagalog-language newspaper to share accurate health information with Filipinos. “Science is being halted in its tracks.”
Academics and government scientists say the campaign also is successfully throttling the years-long effort to study online falsehoods, which grew after Russian attempts to interfere in the 2016 election caught both social media sites and politicians unaware.
Interviews with more than two dozen professors, government officials, physicians, nonprofit employees and research funders, many of whom spoke on the condition of anonymity to discuss their internal deliberations freely, describe an escalating campaign emerging as online propaganda is rising.
Social media platforms have pulled back on moderating content even as evidence mounts that Russia and China have intensified covert influence campaigns; next week, the disinformation watchdog NewsGuard will release a study that found 12 major media accounts from Russia, China and Iran saw the number of likes and reposts on X nearly double after Elon Musk removed labels calling them government-affiliated. Advances in generative artificial intelligence have opened the door to potential widespread voter manipulation. Meanwhile, public health officials are grappling with medical misinformation as the United States heads into the fall and winter virus season.
Conservatives have long complained that social media platforms stifle their views, but the efforts to limit moderation have intensified in the past year.
The most high-profile effort, a lawsuit known as Missouri v. Biden, is now before the Supreme Court, where the Biden administration seeks to have the high court block a ruling from the U.S. Court of Appeals for the 5th Circuit that found the White House, FBI and top federal health officials likely violated the First Amendment by improperly influencing tech companies’ decisions to remove or suppress posts on the coronavirus and elections. That ruling was narrower than a district court’s finding that also barred government officials from working with academic groups, including the Stanford Internet Observatory. But the Biden Justice Department argues the injunction still contradicts certain First Amendment principles, including that the president is entitled to use his bully pulpit to persuade American companies “to act in ways that the President believes would advance the public interest.”
“The university is deeply concerned about ongoing efforts to chill freedom of inquiry and undermine legitimate and much needed academic research in the areas of misinformation — both at Stanford and across academia,” Stanford Assistant Vice President Dee Mostofi told The Post. “Stanford believes strongly in academic freedom and the right of the faculty to choose the research they wish to pursue. The Stanford Internet Observatory is continuing its critical research on the important problem of misinformation.”
Jordan has issued subpoenas and demands for researchers’ communications with the government and social media platforms as part of a larger congressional probe into the Biden administration’s alleged collusion with Big Tech.
“This effort is clearly intended to deter researchers from pursuing these studies and penalize them for their findings,” Jen Jones, the program director for the Center for Science and Democracy at the Union of Concerned Scientists, an environmental group that promotes scientific research, said in a statement.
Disinformation scholars, many of whom tracked both covid-19 and 2020 election-rigging conspiracies, also have faced an onslaught of public records requests and lawsuits from conservative sympathizers echoing Jordan’s probe. Billionaire Elon Musk’s X has sued a nonprofit advocacy group, the Center for Countering Digital Hate, accusing it of improperly accessing large amounts of data through someone else’s license — a practice that researchers say is common. Trump adviser Stephen Miller’s America First Legal Foundation is representing the founder of the conspiracy-spreading website, the Gateway Pundit, in a May lawsuit alleging researchers at Stanford, the University of Washington and other organizations conspired with the government to restrict speech. The case is ongoing.
Nadgey Louis-Charles, a spokeswoman for the House Judiciary Committee that Jordan chairs, said the Jordan-led investigation is focused on “the federal government’s involvement in speech censorship, and the investigation’s purpose is to inform legislative solutions for how to protect free speech.”
“The Committee sends letters only to entities with a connection to the federal government in the context of moderating speech online,” she said. “No entity receives a letter from the Committee without a written explanation of the entity’s connection to the federal government.”
Missouri Attorney General Andrew Bailey (R) in a statement said the federal government “silenced” information because “it didn’t fit their narrative.”
“Missouri v. Biden is the most important First Amendment case in a generation, which is why we’re taking it to the nation’s highest court,” he said.
“As a pro-democracy organization, America First Legal is committed to defeating the censorship-industrial complex that is crushing freedom and promoting dangerous conspiracy theories about Americans who dare to question government dogma,” Miller said in a statement.
‘A serious threat to the integrity of science’
In September 2022, an NIH council greenlit a $150 million program to fund research on how to best communicate health issues to the public. Administrators had planned the initiative for months, convening a strategy workshop with top tech and advertising executives, academics, faith leaders and physicians.
“We know there’s a lot of inaccurate health information out there,” said Bill Klein, the associate director of the National Cancer Institute’s Behavioral Research Program at a meeting approving the program. He showed a slide of headlines about how online misinformation hampered the response to the covid-19 pandemic, as well as other public health issues, including gun violence and HIV treatment.
The program was intended to address topics vulnerable to online rumors, including nutrition, tobacco, mental health and cancer screenings such as mammograms, according to three people who attended a planning workshop.
Yet in early summer 2023, NIH officials contacted some researchers with the news that the grant program had been canceled. NIH appended a cryptic notice to its website in June, saying the program was on “pause” so that the agency could “reconsider its scope and aims” amid a heated regulatory environment.
Schillinger and Richard Baron, the CEO of the American Board of Internal Medicine, warned that the decision posed “a serious threat to the integrity of science and to its successful translation” in a July article in JAMA. In an interview with The Post, Baron noted that there are limited sources of funding for health misinformation research.
NIH declined requests for an interview about the decision to halt the program, but spokesperson Renate Myles confirmed in an email that the Missouri v. Biden lawsuit played into the decision. Myles said a number of other lawsuits played a role but declined to name them.
Myles said the litigation was just one factor and that budgetary projections and consideration of ongoing work also contributed to the decision. She said that an initial approval of a concept does not guarantee it will be funded and that NIH currently funds health communication research. The agency does not officially release numbers about funding in the area, but she said a working group estimated that NIH spent $760 million over five years.
“NIH recognizes the critical importance of health communications science in building trust in public health information and continues to fund this important area of research,” she said.
NIH and other public health agencies have also sought to limit their employees’ communications with social media platforms amid the litigation, according to internal agency emails viewed by The Post that were sent in July after a Louisiana judge blocked many federal agencies from communicating with social media companies.
In one instance, an NIH communications official told some employees not to flag misleading social media posts to tech companies — even if they impersonated government health officials or encouraged self-harm, according to a July email viewed by The Post. The employees were told they could not respond to questions about a disease area or clinical trial if it did “relate in any way to misinformation or disinformation.”
The Election Integrity Partnership may also curtail its scope following lawsuits questioning the validity of its work, including the Missouri v. Biden case.
Led by the Stanford Internet Observatory and the University of Washington’s Center for an Informed Public, the coalition of researchers was formed in the middle of the 2020 presidential campaign to alert tech companies in real time about viral election-related conspiracies on their platforms. The posts, for example, falsely claimed Dominion Voting Systems’ software switched votes in favor of President Biden, an allegation that also was at the center of a defamation case that Fox News settled for $787 million.
In March 2021, the group released a nearly 300-page report documenting how false election fraud claims rippled across the internet, coalescing into the #StopTheSteal movement that fomented the Jan. 6 attack at the U.S. Capitol. In its final report, the coalition noted that Meta, X (formerly Twitter), TikTok and YouTube labeled, removed or suppressed just over a third of the posts the researchers flagged.
But by 2022, the partnership was engulfed in controversy. Right-wing media outlets, advocacy groups and influencers such as the Foundation for Freedom Online, Just the News and far-right provocateur Jack Posobiec argued that the Election Integrity Partnership was part of a coalition with government and industry working to censor Americans’ speech online. (Posobiec didn’t respond to a request for comment, but after this story was published online he posted the request on X with the comment: “Every one of these programs will be penniless and powerless by the time I am done.”)
Jordan has sent several legal demands to see the coalition’s internal communications with the government and social media platforms and has hauled its members into Congress to testify about their work.
Louis-Charles, the Judiciary Committee spokeswoman, said in a statement that the universities involved with EIP “played a unique role in the censorship industrial complex given their extensive, direct contacts with federal government agencies.”
The probe prompted members of the Election Integrity Partnership to reevaluate their participation in the coalition altogether. Stanford Internet Observatory founder Alex Stamos, whose group helps lead the coalition, told Jordan’s staff earlier this year that he would have to talk with Stanford’s leadership about the university’s continued involvement, according to a partial transcript filed in court.
“Since this investigation has cost the university now approaching seven [figure] legal fees, it’s been pretty successful I think in discouraging us from making it worthwhile for us to do a study in 2024,” Stamos said.
Kate Starbird, co-founder of the University of Washington Center for an Informed Public, declined to elaborate on specific plans to monitor the upcoming presidential race but said her group aims to put together a “similar coalition … to rapidly address harmful false rumors about the 2024 election.”
She added, “It’s clear to me that researchers and their institutions won’t be deterred by conspiracy theorists and those seeking to smear and silence this line of research for entirely political reasons.”
Another participant in the Election Integrity Partnership, who spoke on the condition of anonymity, said the group was “looking at ways to do our work completely in the open” to avoid allegations that direct communications with the platforms are a part of a censorship apparatus.
The researchers have been encouraged by the recent ruling in the Court of Appeals for the 5th Circuit in the Missouri v. Biden litigation, which struck down a July 4 injunction that barred government officials from collaborating or coordinating with the Election Integrity Partnership, the Stanford Internet Observatory and other similar groups.
‘Naughty & Nice List’
In recent weeks, Jordan has sent a new round of record requests to at least two recipients of grants from the National Science Foundation’s Convergence Accelerator program, according to three people familiar with the matter.
The program, one of many run by the independent agency to promote research, awards funding to groups creating tools or techniques to mitigate misinformation, such as software for journalists to identify misinformation trending online.
George Washington University professor Jonathan Turley and the conservative advocacy group The Foundation for Freedom Online wrote separate reports portraying the program as an effort by the Biden administration to censor or blacklist American citizens online. Afterward, Jordan requested grant recipients’ communications with the White House, technology companies and government agencies, according to two of the people.
Turley said in a statement that “free speech is a core value of higher education” and that he is concerned that universities are using partnerships with the government to silence some users.
“If universities are supporting efforts to regulate or censor speech, there should be both clarity and transparency on this relationship. In past years, academics have demanded such transparency in other areas of partnership with the government, including military research,” Turley said. “Free speech values should be of equal concern to every institution of higher learning.”
Some NSF grant recipients who have not received requests from Jordan’s committee say they are facing a barrage of online threats over their work, which has prompted some to buy services that make it harder to find their addresses, such as DeleteMe.
Hacks/Hackers, a nonprofit coalition of journalists and technologists, received an NSF grant to develop tools to help people share accurate information about controversial topics, such as vaccine efficacy. The group has faced political scrutiny from Sen. Joni Ernst (R-Iowa), who tweeted they had received $5 million from President Biden to create “a naughty & nice list to police the content posted by family & friends” with her usual slogan “MakeEmSqueal.”
Connie Moon Sehat, a researcher-at-large for the group, said she and other researchers have faced online attacks, including threats to reveal personal information and veiled death threats. She said members of her team are at times under high stress and hold ongoing conversations about how to elevate accurate information on social media as some platforms become increasingly toxic.
“We are double- and triple-checking what we write, above what we used to, to try to communicate our good intentions — in the face of efforts that willfully misconstrue our work and desire to serve the public,” Sehat said. “And I worry more broadly that we researchers may self-censor our inquiry, or that some will drop out altogether, to stay safe.”
As Jordan’s probe expands, some university lawyers have urged academics to hold on to their records and be prepared to receive subpoenas from the committee, according to two people familiar with the matter.
The probe has sparked a wave of fear among university academics, prompting several to take a lower profile to avoid the scrutiny. Laura Edelson, an assistant professor of computer science at Northeastern University, recently left her role as chief technologist at the Justice Department’s antitrust division. She said she limited her job search to private universities, which are not subject to public records laws.
“I knew that because of the way our field is being attacked that the cost of the work I do is a lot higher at a public institution,” she said. “I just didn’t want to pay that cost, and that’s why I only applied to private universities.”
The left-leaning nonprofit Center for Democracy and Technology argued in a Thursday report that the disinformation field is facing a dual threat: Social media platforms have become less responsive to concerns from researchers about misinformation while the political and regulatory backlash against the scholarship has eroded the relationships between academics, nonprofits and industry.
“The more efforts to recast counter-election-disinformation as censorship succeed, the more difficult it will become for governments and others to work with researchers in the field,” wrote the nonprofit, which receives some of its funding from tech corporations, including Google and Meta.
The scrutiny has caught the academic community by surprise, as non-faculty staff and researchers debate how to protect themselves from new legal threats. When Dannagal Young, a professor of communication and political science at the University of Delaware, alerted university lawyers that she’d been asked to talk with Democratic congressional staffers about potentially testifying before Jordan’s subcommittee, she found the preparation she received lacking.
While the lawyers were eager to help, according to Young, in their initial response they spent more time prepping her on how to discuss President Biden’s relationship to the school than they did on what kinds of questions she might be asked on Capitol Hill.
“I don’t think university lawyers are prepared to navigate that kind of politically motivated space,” she said. The University of Delaware didn’t respond to a request for comment.
Many academics, independent scholars and philanthropic funders are discussing how to collectively defend the disinformation research field. One proposal would create a group to gather donations into a central fund to pay for crisis communications and — most critically — legal support if one of them gets sued or subpoenaed in a private case or by Congress. The money could also fund cybersecurity counseling to ward off hackers and stalkers and perhaps physical security as well.
“There is this growing sense that there need to be resources to allow for freedom of thought and academic independence,” said one longtime philanthropy grant maker who spoke on the condition of anonymity to discuss internal matters.
University academics are also mulling ways to rebrand their work to attract less controversy. One leader in a university disinformation research center said scholars have discussed using more generic terms to describe their work, such as “information integrity” or “civic participation online.” Those terms “have less of a bite to them,” said the person, who spoke on the condition of anonymity to describe the private discussions. Similar conversations are occurring within public health agencies, another person said.
“This whole area of research has become radioactive,” the person said.