Three Pravda Roundtable Takeaways: Disinformation & What Really Matters
Disinformation has entered a new phase – and we are all still two steps behind, working out how to respond. On the heels of the first Pravda Ecosystem Report, we convened a roundtable to outline how experts from different corners of the disinformation field see the next steps ahead of us. The event was a joint initiative of the Center for Information, Democracy and Citizenship (CIDC) at AUBG and Sensika Technologies. Bringing together journalists, researchers, and tech and civil society leaders, it spotlighted one of the most active disinformation networks in Europe: the Russian-linked Pravda Network.
Pravda report co-authors Georgi Angelov and Dr. Jacob Udo-Udo Jacob, investigative journalist Christo Grozev – one of Putin’s main targets – and Bulgaria’s former defence minister Dr. Velizar Shalamanov were among the experts who put their heads together to debate the way forward.
What began as an investigation into false narratives quickly expanded into a wider discussion of how disinformation functions in 2025 – how it is seeded, amplified, and adapted using both psychological tactics and technology. Here are the key takeaways from the event:
Divide and Conquer: The Key Strategy Behind Russian Disinformation
At the heart of disinformation operations lies a simple truth: facts alone no longer win the information war. Emotional impact drives engagement, and engagement determines reach. Disinformation thrives not because it’s convincing, but because it captures attention and preys on fear, frustration, and division.
The Pravda Network exemplifies this shift. According to journalist Christo Grozev, if there is one trait that unifies the choice of disinformation topics across the range of countries Pravda targets, it is divisiveness. The Russian propaganda machine has judged this erosion of trust and social cohesion to be its most effective attack on democracy.

Another key concept discussed at the event was reflexive control – a strategy lifted from Soviet military doctrine. Rather than reacting to situations, reflexive control shapes how people interpret them in advance. It works by feeding an audience carefully designed information that guides them toward a pre-calculated conclusion – one that feels like their own. And recommendation algorithms excel at exactly that.
AI: A Double-Edged Force
Artificial intelligence was a central theme of the roundtable—and for good reason. AI now plays a critical role on both sides of the disinformation battle.
On one side, it enables faster, wider, and more convincing manipulation. Generative models can create fake images, distorted narratives, or misleading “evidence” that spreads online before anyone can respond. There is also a growing threat of AI model poisoning, in which systems are deliberately trained on false or biased data to skew their outputs – making trusted tools untrustworthy.
But AI also offers new ways to fight back. The Disinformation Observatory’s real-time dashboard, developed with Sensika, was showcased as a powerful example. It monitors narrative trends across languages and geographies, detects coordinated campaigns, and gives researchers a clearer picture of how false information moves and mutates.
This kind of visibility is critical. As Pravda report co-author Dr. Jacob noted, disinformation often doesn’t invent crises – it hijacks existing ones. When the next controversy breaks, being able to map in real time how narratives are weaponized is a major advantage.
The High Cost of Silence
Yet, technology alone cannot fill the growing trust gap. When institutions fail to clearly explain their actions, decisions, or policies, they create an information vacuum. That silence is quickly filled by speculation, misinformation, and targeted propaganda.
As Bulgaria International Journalism Fellowship Editorial Director Dean Starkman emphasized, disinformation isn’t just about spreading lies—it thrives where truth is missing. The real contest isn’t between AI tools or troll farms; it’s between consistent public communication and the uncertainty that takes hold when it’s absent.
Communities that don’t hear from their governments, public bodies, or trusted media are more likely to believe what they hear elsewhere—especially when that message taps into fear or resentment.
The Path Forward
The battle against disinformation isn’t a short-term fight – it’s an ongoing effort to protect public understanding in a world where information can be distorted as easily as it is shared.
Discussions like the one we had at the Disinformation Observatory roundtable make one thing clear: this is no longer about chasing the latest falsehood. It’s about building lasting infrastructure for truth—networks that connect analysts, journalists, educators, technologists, and decision-makers in a shared effort to defend reality.
Disinformation will keep evolving, but so can we—by strengthening the ties between knowledge and trust, and ensuring that clarity, not confusion, has the louder voice.