As clinical trials grow more complex, so does the volume of data that requires medical and operational review. Phase III trials now generate an average of 3.6 million data points—three times the volume collected a decade ago. At the same time, the pharmaceutical industry invests $200 billion annually in research and development to bring new drugs to market, but the number of drug approvals remains constrained.
The challenge is no longer simply collecting data; it’s reviewing, interpreting and overseeing that data. Using artificial intelligence (AI) for clinical data review and oversight can accelerate workflows while maintaining regulatory rigor.
“The growth in clinical trial data has been exponential, but the approvals haven't kept pace with the [research and development] spend,” says Simone Sharma, lead clinical product manager at Revvity Signals. “Traditional manual review can't handle this volume and also simultaneously maintain the rigor that's needed in this kind of highly regulated industry.”
In regulated environments, AI-enabled clinical oversight—including AI-assisted listing generation, signal detection and documented review workflows—can accelerate medical and operational review while preserving traceability. However, successful AI adoption hinges on trust, governance and accountability.
Evolving Demands of Clinical Oversight
Modern clinical oversight requires continuous monitoring of safety, data quality and protocol adherence across increasingly complex, multi-site global trials. AI-powered decision support systems can streamline clinical workflows, assist in diagnostics and enable personalized treatment, but they won’t be accepted or implemented without robust governance frameworks.
AI-enabled solutions support automated listing generation and proactive signal detection. These features make it easier to identify potential protocol deviations and data anomalies, accelerate medical and data review workflows, and enable pattern recognition at scale across millions of data points.
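To illustrate what automated listing generation with anomaly flagging might look like in practice, the minimal sketch below flags out-of-range lab values and surfaces them at the top of a review listing so reviewers see likely signals first. The `LabRecord` fields, normal ranges and sample data are invented for illustration and do not reflect any specific product's implementation.

```python
from dataclasses import dataclass

@dataclass
class LabRecord:
    subject_id: str
    site_id: str
    test: str
    value: float
    low: float   # normal-range lower bound (illustrative)
    high: float  # normal-range upper bound (illustrative)

def generate_review_listing(records):
    """Pair each record with an anomaly flag and sort flagged rows first.

    This mimics an AI-assisted listing that accelerates review by
    surfacing potential data anomalies ahead of unremarkable rows.
    """
    def is_anomaly(r):
        return r.value < r.low or r.value > r.high

    flagged = [(r, is_anomaly(r)) for r in records]
    # Flagged rows first (False sorts before True, so negate the flag),
    # then by subject ID for a stable, reviewable order
    flagged.sort(key=lambda t: (not t[1], t[0].subject_id))
    return flagged

records = [
    LabRecord("S-001", "Site-A", "ALT", 28.0, 7.0, 56.0),
    LabRecord("S-002", "Site-B", "ALT", 181.0, 7.0, 56.0),  # out of range
    LabRecord("S-003", "Site-A", "ALT", 40.0, 7.0, 56.0),
]
listing = generate_review_listing(records)
```

In a real system the anomaly check would be a trained model rather than a fixed range test, but the workflow shape is the same: generate the listing, rank by signal, and hand the result to a human reviewer.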
“AI-assisted listing generation addresses quite a major bottleneck in the traditional manual review of the workflow,” Sharma says. “Using AI to take what we consider more of the low-level work from clinical programmers so they can concentrate on high-value work helps the medical monitors and data managers access the data that they want without having to wait so long.”
Accelerating this process can shorten trial timelines while improving the consistency and defensibility of review decisions.
The Trust Factor
AI can sift through large volumes of data and identify patterns at scale, but trustworthiness is a significant concern.
Rather than replacing clinical expertise, AI strengthens it, as clinical judgment, contextual interpretation and regulatory responsibility remain human-led. This “human in command” approach provides clear authority and accountability.
“Every kind of AI-assisted output [that] informs decisions about what's going to be reviewed or what's not going to be reviewed still needs that human on top of it to make the decision,” says Sharma. “Your teams need to monitor AI performance over time to show that AI is behaving predictably [and] consistently…and humans should be able to intervene anytime…to override whatever AI is doing.”
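The “human in command” principle can be made concrete in code: AI proposals never take effect on their own, and a reviewer's decision always overrides the AI's call. The function name and decision labels below are hypothetical, a minimal sketch of the gating pattern rather than any vendor's actual API.

```python
def apply_ai_triage(proposals, human_decisions):
    """Human-in-command gate over AI triage proposals.

    An AI proposal only takes effect once a reviewer explicitly
    confirms or overrides it; unreviewed items stay pending rather
    than being silently auto-accepted. Every outcome is logged for
    traceability.
    """
    final = {}
    audit_trail = []
    for item_id, ai_call in proposals.items():
        human_call = human_decisions.get(item_id)
        if human_call is None:
            final[item_id] = "pending_review"  # no silent auto-acceptance
        else:
            final[item_id] = human_call       # human decision always wins
        audit_trail.append((item_id, ai_call, final[item_id]))
    return final, audit_trail

# Hypothetical usage: the AI suggests skipping one query; the reviewer
# overrides it, and the second item waits for a human decision.
proposals = {"query-1": "skip_review", "query-2": "needs_review"}
human_decisions = {"query-1": "needs_review"}
final, audit = apply_ai_triage(proposals, human_decisions)
```

The audit trail is the point: recording the AI's proposal alongside the final human decision is what lets teams monitor AI behavior over time, as Sharma describes.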
Efforts to boost the trustworthiness of AI in clinical research, including the Center for Drug Evaluation and Research (CDER) AI Council and U.S. Food and Drug Administration (FDA) initiatives to use agentic AI for more complex workflows, stress the need for built-in guardrails, including human oversight, to ensure reliable outcomes.
Compliance Is Key
The International Council for Harmonisation (ICH) issued its final version of the Guideline for Good Clinical Practice E6(R3) in January 2025 to modernize how clinical trials should be conceived, conducted and overseen. While the guideline does not prescribe specific technologies, it reinforces risk-based quality management, sponsor accountability and continuous oversight—creating a framework where governed AI can support compliance.
National regulations and industry-focused initiatives have produced oversight programs that maximize AI's benefits while mitigating its risks. Risk-proportionate oversight, traceable review activity and documentation aligned with regulatory expectations are all critical for data trustworthiness.
“AI helps identify [the] highest-risk data points and processes, which allows experts to review where it matters most [compared to before] when it used to be 100 percent source data verification,” Sharma says. “Now, with risk-based [quality] monitoring and machine learning algorithms getting better and better, we have the ability to only focus on data points or sites that have been historically problematic.”
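The risk-based focus Sharma describes, ranking sites by their history of issues rather than verifying 100 percent of source data, can be sketched as follows. The scoring formula, site names and counts are illustrative assumptions, not a real monitoring algorithm.

```python
def prioritize_sites(site_history, review_budget):
    """Rank sites by historical issue rate and keep only the riskiest.

    site_history:  {site_id: (issues_found, records_reviewed)}
    review_budget: number of sites the team can focus on this cycle
    """
    def risk_score(stats):
        issues, reviewed = stats
        # Add-one smoothing so sites with little history aren't
        # scored as zero risk (an illustrative choice)
        return (issues + 1) / (reviewed + 2)

    ranked = sorted(site_history,
                    key=lambda site: risk_score(site_history[site]),
                    reverse=True)
    return ranked[:review_budget]

# Hypothetical history: Site-B has a much higher issue rate than the others
site_history = {
    "Site-A": (1, 200),
    "Site-B": (15, 100),
    "Site-C": (0, 50),
}
focus = prioritize_sites(site_history, review_budget=2)
```

A production system would score individual data points as well as sites and would learn the risk model from data, but the principle is the same: spend reviewer attention where the historical evidence says it matters most.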
The approach, according to Sharma, is to embrace AI with the understanding that it needs to be embedded into current processes and the risk has to be proportionate to the benefits.
Build or Buy: Practical Considerations
Many sponsors and clinical research organizations (CROs) are evaluating whether to build AI internally or adopt purpose-built solutions. In regulated settings, internal development requires significant investments in validation, governance and documentation to ensure tools can withstand regulatory scrutiny.
While internal solutions can be customized to specific processes and workflows, building, testing and maintaining the tools all demand resources. Building internally also means taking responsibility for compliance and ongoing lifecycle management. In contrast, adopting purpose-built solutions speeds the deployment timeline and offers vendor expertise and predictable costs. Moreover, vendors support lifecycle management, validation documentation and inspection readiness, while sponsors retain ultimate regulatory responsibility.
“Regardless of whether you build or buy solutions, they need to earn trust, and they have to strengthen the existing workflows and operations, not disrupt them,” says Sharma. “Our Signals Clinical solution operationalized AI in a way that offers full transparency, full traceability and full reproducibility, and now that we're bringing AI into the mix, we have focused on making sure that capability is still very data-grounded…to allow end users to work faster and more efficiently whilst maintaining their rigor in the human oversight in clinical development.”
Signals Clinical operationalizes governed, human-in-command AI to deliver transparent, inspection-ready clinical oversight—without disrupting your existing ecosystem. To learn more, watch this video.