If you work in pharmacovigilance or drug safety, you know how much structure goes into post-market monitoring and adverse event reporting. Clinical trials, non-interventional studies, and patient surveys are all essential, but even with these systems in place, critical signals can be missed.
Even when spontaneous reporting systems are in place, most adverse drug reactions never get flagged. One review found that 94% of ADRs go unreported, including many serious cases.
Another study of FDA data showed that for high-risk drugs, only 20–33% of expected serious events were ever recorded. This is both a serious brand risk and a missed opportunity to strengthen patient safety.
Patients are already generating pharmacovigilance data online. Are you equipped to act?
At the same time, patients are sharing side effects and real-world experiences online: on Reddit, in Facebook groups, on health forums, and even on TikTok. These posts may be informal, but they’re increasingly influential. And they often surface safety concerns long before they reach formal reporting systems.
More pharmacovigilance teams are starting to treat social platforms as a meaningful source of early safety insight.
This is the shift: social media is becoming a critical layer of real-world data in pharmacovigilance. As the real-world stories below show, when harnessed correctly, it helps teams detect signals earlier, respond faster, and protect patients more effectively. But detecting credible, reportable safety signals in that noise isn’t easy.
To address this, leading safety teams are outsourcing to fully managed pharmacovigilance services that specialize in social signal detection — backed by the structure, systems, and regulatory experience to make social data usable.
Why social data is hard to use well in pharmacovigilance
Social data is messy, informal, and rarely structured for compliance workflows. It’s full of incomplete posts, slang, sarcasm, and missing context. Most pharmacovigilance teams don’t have the systems or the bandwidth to reliably detect and act on what matters.
Detecting credible adverse events from social posts takes more than keyword matching. It requires:
- Repeatable, auditable methods for evaluating and documenting potential ADRs
- Language models trained on how patients actually talk, not just textbook terms
- Systems that get smarter over time with human feedback and pharmacovigilance safety context
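To make the gap between simple keyword matching and the layered approach above concrete, here is a minimal, hypothetical sketch in Python: a cheap colloquial-keyword screen followed by a zero-shot language-model classifier that judges whether a post describes a personal adverse experience. The keyword list, model choice, and labels are illustrative assumptions only, not a description of any production pharmacovigilance pipeline.

```python
# Illustrative two-stage screen for possible adverse-event mentions in social posts.
# All keywords, labels, and the model choice are assumptions for demonstration only;
# this is not a production pharmacovigilance pipeline.
from transformers import pipeline

# Stage 1: cheap screen on colloquial symptom language (not MedDRA terms)
# to cut volume before running a heavier model.
COLLOQUIAL_TERMS = {
    "exhausted", "wiped out", "skin looks yellow", "dizzy",
    "chest pain", "couldn't sleep", "threw up",
}

def keyword_screen(post: str) -> bool:
    text = post.lower()
    return any(term in text for term in COLLOQUIAL_TERMS)

# Stage 2: zero-shot classifier to judge whether the post describes a personal
# adverse experience rather than news sharing, jokes, or general discussion.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
CANDIDATE_LABELS = [
    "personal report of a drug side effect",
    "general discussion or news",
    "joke or sarcasm",
]

def score_post(post: str) -> dict:
    result = classifier(post, candidate_labels=CANDIDATE_LABELS)
    return dict(zip(result["labels"], result["scores"]))

if __name__ == "__main__":
    posts = [
        "Started the new statin last month and now I'm exhausted and my skin looks yellow.",
        "Lol my cousin says these meds make you dizzy, classic internet rumor.",
    ]
    for post in posts:
        if keyword_screen(post):
            # Anything flagged here would go to human triage and documented
            # follow-up, never straight into a case report.
            print(f"{post!r} -> {score_post(post)}")
```

The third requirement, systems that improve over time, would in practice mean capturing the human triage decisions from that last step and feeding them back into model updates or rule changes, with every step documented so the process stays auditable.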
Most internal teams lack the tools, time, or linguistic models to operationalize social data at scale. Yet the potential is real: a 2021 review in JMIR Public Health and Surveillance found that social media can uncover safety issues months or even years before regulatory action, but only if teams have the structure, tools, and expertise to act on those signals.
What good social signal detection looks like in real-world ADR practice
When done well, social signal detection surfaces problems earlier, adds context to existing safety data, and even reshapes response strategies. Here are a few illustrative examples:
Unreported liver symptoms prompt label update
On popular online discussion boards and in health forums, patients flagged fatigue, jaundice, and other liver-related side effects while taking cholesterol-lowering medications: issues that weren’t emphasized during clinical trials. Shortly after the drugs’ release, online discussions about statin side effects, including posts describing effects on liver function, helped trigger further clinical investigation and, ultimately, a formal label update from the FDA.
Off-label sleep aid trends shape prescriber education
Online discourse revealed a growing off-label use of antidepressants like trazodone and mirtazapine as sleep aids, a trend that was particularly pronounced among teens and anxious patients. Social media posts — ranging from discussions about SSRIs for kids to crowdsourced lists of insomnia meds and detailed user accounts of anxiety-driven sleep aid choices — pointed to a broader pattern of use. One manufacturer used those insights to refine the educational materials provided to patients prescribed its drugs.
Social media discourse surfaces rare vaccine side effects early
During the COVID-19 vaccine rollout, informal reports of myocarditis in young men began circulating on mainstream social media platforms well before formal pharmacovigilance systems picked them up. This online discourse included threads highlighting the growing incidence of heart-related deaths and viral videos featuring the hashtag #myocarditis.
Together, this online discourse helped drive earlier scrutiny of potential vaccine side effects, a signal later validated by clinical studies, including a ScienceDirect Review and a 2021 PubMed study.
What regulators expect now: Social media signal detection
While neither the FDA nor the EMA mandates social media monitoring as a standalone requirement, both agencies are clear: if you find an adverse event online, you’re responsible for reporting it just as you would a report from any other source.
The European Medicines Agency’s Good Pharmacovigilance Practices (GVP) Module VI explicitly acknowledges digital and publicly accessible data sources as valid inputs, as long as your social listening and monitoring methods are systematic, well-documented, and verifiable.
Similarly, the ICH E2D guidance reinforces this expectation, noting that companies should assess all available safety data regardless of its origin.
In short: If social media can surface safety signals and your team has access to those signals, regulators expect you to take action.
Why pharmacovigilance teams are outsourcing social media monitoring
Social signal detection isn’t just a data problem; it’s a compliance, resource, and reputational risk challenge.
The platforms are noisy. The language is informal. And the stakes are high: pharmacovigilance teams can’t afford to miss a credible signal, or to mishandle one. That’s why more organizations are working with industry leaders in social listening and online risk intelligence like Resolver. Our fully managed service monitors the right platforms, filters out the noise, and delivers structured reports aligned with regulatory expectations.
Pharmacovigilance teams rely on us for:
- Validated NLP and ML tools built to flag adverse events in real-world language, not just clinical terms
- Clear, auditable workflows that meet EMA and FDA documentation and regulatory standards
- Human expert analysis and triage that separates credible AEs from irrelevant posts
- Compliant, audit-ready reports your team can use for further assessment or submission
Social media isn’t just background chatter. It’s where a patient mentions blurry vision that never made it into a trial summary. Or flags chest pain a few days after starting a new med. It’s fragmented, yes. But it’s real — and often shows up long before formal channels catch on.
For teams ready to act on it, the payoff is faster detection, better protection, and smarter decisions. Resolver helps you make sense of it so you can protect patients, stay compliant, and act with confidence.
Learn more about Resolver’s pharmacovigilance solutions
Darren Burrell leads strategy for Resolver’s Corporate Intelligence division, where he helps clients in pharma and other industries detect risks in online spaces. With a background in risk intelligence, digital innovation, and compliance strategy, he specializes in turning unstructured data into actionable insights that align with regulations.