Why 85% of Clinical Trials Miss Enrollment Targets — And What Your Site Selection Strategy Is Getting Wrong
The brutal statistics have barely budged in a decade: between 80% and 85% of clinical trials fail to meet their enrollment targets, 37% of investigator sites under-enroll, and 11% of sites enroll zero patients. Yet the industry's response remains unchanged: more feasibility calls, more relationship-building, more hope.
This isn’t a recruitment problem. It’s a site selection problem. And it’s costing sponsors an estimated $50 million to $300 million per delayed program, depending on therapeutic area.
The real issue? Sponsors treat feasibility as a checkbox exercise. They ask the same questions they've asked for fifteen years. They accept historical enrollment data without context. They sign contracts with sites that have no business being in the trial. And then, when Week 8 arrives and half their sites are dead weight, they're shocked.
Meanwhile, the CRO and clinical operations industry is trying to solve an unsolvable problem: how to rescue enrollment at sites that should never have been selected in the first place.
Data-Driven Site Selection Is the Differentiator
The infrastructure exists to solve this problem. AI can cross-reference 15 years of enrollment data by therapeutic area, by indication, by disease prevalence in catchment areas, by investigator track record, by site infrastructure maturity, and by past performance. It can flag sites that performed well in similar trials at similar enrollment rates in similar seasons. It can predict dropout rates, protocol deviation likelihood, and data quality by site.
But most teams still use Excel. They still make final site decisions in September for a trial launching in March, because that's when the contract cycle works. They still rely on relationships with CRO site managers whose incentives favor keeping contracts full, not keeping site quality high.
The real differentiator isn’t access to better data. It’s the willingness to drop underperforming sites at Week 8 instead of hoping they’ll catch up at Week 32.
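To make the idea concrete, here is a minimal sketch of ranking candidate sites on historical performance before contracting. The fields, weights, and site names are illustrative assumptions, not a calibrated model; a real implementation would draw on the enrollment, prevalence, and infrastructure data described above.

```python
from dataclasses import dataclass

@dataclass
class SiteHistory:
    """Historical performance for one candidate site (illustrative fields)."""
    name: str
    trials_run: int                 # similar trials previously run
    avg_enrollment_ratio: float     # actual / target enrollment, averaged
    zero_enrollment_trials: int     # trials where the site enrolled no one
    deviation_rate: float           # protocol deviations per enrolled patient

def site_score(s: SiteHistory) -> float:
    """Composite score: reward enrollment track record, penalize
    dead trials and deviations. Weights are assumptions for illustration."""
    if s.trials_run == 0:
        return 0.0  # no history in this indication: treat as unproven
    dead_rate = s.zero_enrollment_trials / s.trials_run
    return s.avg_enrollment_ratio - 2.0 * dead_rate - 0.5 * s.deviation_rate

candidates = [
    SiteHistory("Academic Center A", trials_run=12, avg_enrollment_ratio=0.9,
                zero_enrollment_trials=1, deviation_rate=0.05),
    SiteHistory("Community Site B", trials_run=4, avg_enrollment_ratio=1.2,
                zero_enrollment_trials=0, deviation_rate=0.10),
    SiteHistory("Private Practice C", trials_run=6, avg_enrollment_ratio=0.4,
                zero_enrollment_trials=3, deviation_rate=0.02),
]

ranked = sorted(candidates, key=site_score, reverse=True)
for s in ranked:
    print(f"{s.name}: {site_score(s):.2f}")
```

The point of even a toy model like this is that it forces the hard conversation before contract signing: a large academic center with a history of zero-enrollment trials scores below a smaller site with a clean record.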
Treat Site Selection Like Portfolio Management
This requires a fundamental shift in how sponsors approach site strategy:
First, use predictive modeling before feasibility, not after. Use historical enrollment curves, disease prevalence, and site capability assessments to set realistic per-site targets before the contract is signed. Don't adjust targets down later; build in buffer upfront.
Second, establish performance thresholds with automatic trigger points. If a site hasn’t enrolled at 50% of target by Week 8, activate the contingency plan. The contingency plan isn’t “more frequent monitoring calls.” It’s site replacement, protocol simplification, or indication expansion in that geography.
Third, diversify your site portfolio like a financial portfolio. Don’t put 60% of your enrollment weight on three large academic centers hoping they’ll deliver. Spread enrollment across site types—academic medical centers, private practices, community sites—and across geographies where the disease prevalence actually supports your enrollment assumptions.
Fourth, invest in the enablement infrastructure. A site selected correctly but set up for failure is worse than a poorly selected site that never started. Provide enrollment training, patient screening tools, and real-time enrollment dashboards. Make the site’s job easier, not harder.
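The trigger rule in the second point can be sketched as a simple Week 8 review. The 50%-of-target cutoff comes from the text; the data structures and example sites are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SiteStatus:
    name: str
    target_by_week8: int   # enrollment target set at contract signing
    enrolled: int          # actual enrollment at the Week 8 review

TRIGGER_RATIO = 0.50  # from the rule: under 50% of target at Week 8

def week8_review(sites: list[SiteStatus]) -> dict[str, list[str]]:
    """Partition sites into keepers and contingency candidates at Week 8."""
    keep, contingency = [], []
    for s in sites:
        ratio = s.enrolled / s.target_by_week8 if s.target_by_week8 else 0.0
        (keep if ratio >= TRIGGER_RATIO else contingency).append(s.name)
    return {"keep": keep, "contingency": contingency}

portfolio = [
    SiteStatus("Academic Center A", target_by_week8=10, enrolled=7),
    SiteStatus("Community Site B", target_by_week8=8, enrolled=3),
    SiteStatus("Private Practice C", target_by_week8=6, enrolled=3),
]

print(week8_review(portfolio))
# Site B sits at 37.5% of target and lands on the contingency list
```

The value of encoding the rule is that it removes discretion: the contingency plan fires automatically at Week 8, rather than being renegotiated on a monitoring call at Week 32.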
The Sponsor’s Strategic Question
Enrollment failure isn't a supply problem; it's a demand problem. You've built your trial assuming that sites exist and that investigators want to participate, but your site selection strategy treats that assumption as a checkbox exercise rather than a portfolio management problem.
The question isn’t whether you have access to better data or better tools. The question is whether you’re willing to view site selection as a strategic decision with measurable ROI, which means being willing to make hard calls when the data says a site won’t work.
The sponsors whose trials consistently meet or exceed enrollment targets do one thing differently: they make site selection decisions based on data, not relationships. And they enforce those decisions with the discipline of portfolio managers, not the optimism of program managers.
The 85% failure rate persists because the industry optimizes for engagement, not enrollment. The 15% who succeed optimize for signal-to-noise ratio. That’s not a recruitment strategy. It’s a site selection strategy. And unlike recruitment, it’s actually controllable.