1 Apr 2026

Navigating the social media minefield

Researchers are becoming increasingly wary of unsupported online criticism and disinformation

By Timothy Spence

Scientists and researchers seeking to announce promising findings, exchange ideas or contribute to policy discussions outside of academic circles may find social media an expedient tool.

But increasingly, these professionals are discovering that online platforms can be a double-edged sword: while sharing their work and expertise is crucial for visibility and meeting funder expectations, it also exposes them to baseless criticism and even accusations of spreading false information.

"It is getting harder and harder for scientists, and researchers in general, to convey their findings without facing attacks on their legitimacy and the accuracy of their research," says Valère Ndior, a professor of Public Law at the University of Western Brittany in Brest, France, who has spent more than a decade studying misinformation on social media. Researchers who attempt to challenge deceptive online information may themselves be accused of political bias or a lack of objectivity in their research, he explains, adding that some scientists disengage because they feel overwhelmed and unable to sustain meaningful dialogue. "It is quite sad to see in some instances that researchers end up leaving social media altogether because they feel like the pressure is getting too intense or feel threatened," he says.

Ndior, a past president of the Francophone Network of International Law, is also involved in European efforts to make science and research findings more accessible to the public. As an ambassador for the Science Comes to Town (SCTT) project, he has participated in recent discussions on public perceptions of science and online misinformation. The European Commission-funded initiative connects Brest and two other cities — Kiel in Germany and Split in Croatia — and aims to strengthen ties between communities, universities and research institutions to promote science awareness and cooperation across Europe.

Ndior draws a clear distinction between critical thinking and thoughtless criticism. Yet he sees online discussions being increasingly shaped by suspicious behaviour and coordinated information campaigns — including "raids" aimed at dominating discussions and "astroturfing," where small groups pose as larger movements to sway debates. Tracking these patterns has become harder because some platforms have restricted data or charge for it, despite provisions of the European Union's Digital Services Act aimed at ensuring researchers' access. According to Ndior, "Platforms became more reluctant from the moment access to data became a legal requirement rather than a voluntary initiative."

Tipping points

Unfounded claims and scepticism about science existed long before Internet chat groups. But online hostility has risen noticeably around contentious topics such as the coronavirus pandemic and climate change, intensified by the weakening of gatekeepers who moderate content.

For Ndior, dubious claims about pandemic countermeasures and vaccine safety highlighted the struggle between "balancing freedom of expression and the protection of public health and order." Efforts to remove social media content deemed false or risky for public health triggered accusations of censorship and the silencing of legitimate discussion, and in some cases, legal challenges.

Concerns about attacks on the credibility of scientists and researchers have generated a flurry of academic papers in recent years and prompted professional communities to seek ways of countering online attacks. Such unease extends even to the generation that has grown up with digital media. Ndior points out that his students, some of whom research artificial intelligence regulation, hate speech, online surveillance and other controversial topics, hesitate to share their work or viewpoints online. "Many of them don't want to expose themselves too much because they don't feel ready to be faced with the backlash of other people criticising their research," he explains.

Safer harbours for engagement

Legal challenges against false or defamatory posts are not necessarily a solution — especially given the trans-jurisdictional nature of the Internet. Lawsuits also take time and money, and if they fail, can be counterproductive. Instead, Ndior advises scientists and researchers to be selective in how and where they communicate. Professional online platforms and moderated discussion spaces offer a better environment for engagement. He also highlights the value of more traditional formats, such as radio interviews and in-person discussions at schools, libraries and science fairs, which encourage dialogue with the public. "Personal contact makes a big difference," he says, while acknowledging that these approaches lack the global reach of social media.

Ndior’s interest in digital law and online misinformation developed unexpectedly. As a doctoral student in international law in the early days of social media, he began analysing posts from governments and heads of state to determine if they could be considered official and thus used as evidence in litigation. His work attracted media attention and led to conference and speaking invitations. "This was not part of my plan," he says, "but this is what happens as a researcher — your interest is sparked by a certain event or phenomenon, and it suddenly becomes your focus."

Ndior believes SCTT provides "a double benefit" for researchers, enabling them to connect with peers and interact with the public. "There is a wide community of people who are not in academia who are critical thinkers looking for opportunities to ask questions, to get information and contribute their ideas," he says. "I really value that kind of feedback."

Science|Business, which connects policymakers, academics and industry through its news reporting, analysis and events, is an SCTT partner.