HTA Strategy

5 Common Pitfalls in HTA Submissions

Pier Lasalvia, MD, Co-founder, CTO & Co-CEO
February 10, 2026 · 6 min read

After reviewing hundreds of HTA submissions across multiple agencies, we have seen clear patterns emerge. The same mistakes appear again and again, not because teams lack expertise, but because the pressures of timelines and budgets push important details to the margins.

Here are five pitfalls we see most often, along with practical guidance on how to avoid them.

1. Misaligned Decision Problem

The most consequential mistake happens at the very beginning: framing the decision problem in a way that doesn't match the agency's perspective.

This typically manifests as:

  • Defining the population too broadly (or too narrowly) relative to the licensed indication
  • Choosing a primary outcome that the agency considers secondary
  • Framing the value proposition around clinical endpoints when the agency prioritizes patient-reported outcomes

Red Flag

If your decision problem reads like marketing copy rather than a clinical question, it's misaligned. HTA agencies want to see the question framed from the healthcare system's perspective, not the manufacturer's.

How to avoid it: Before building any model, map the decision problem against the agency's published methodology guide. NICE, CADTH, and PBAC all publish detailed scoping documents — use them as your template, not your product's clinical development plan.

2. Wrong Comparators

Choosing the wrong comparator is the second most common reason for negative HTA decisions. The issue isn't always obvious — your global clinical trial may have used a comparator that made perfect sense for regulatory approval but doesn't reflect local clinical practice.

Common mistakes include:

  • Using placebo when active comparators are standard of care
  • Using a branded comparator when generics dominate the market
  • Ignoring best supportive care as a relevant option
  • Selecting the weakest comparator to make your product look better

How to avoid it: Conduct a local clinical practice survey early in development. Talk to the physicians who will prescribe your product, not just the KOLs who designed the trial. Every market has its own treatment landscape.

3. Ignoring Structural Uncertainty

Most cost-effectiveness models lock in key structural assumptions early — the number of health states, the cycle length, the time horizon, the type of survival extrapolation. These choices often receive less scrutiny than parameter values, but they can have an outsized impact on results.

Agencies are increasingly aware of this. NICE's updated methods guide explicitly requires exploration of structural uncertainty, and CADTH routinely requests alternative model structures as scenario analyses.

How to avoid it: Present at least two structurally different approaches (e.g., partitioned survival vs. state-transition) and explain why you chose your base case. If you can't justify a structural choice with clinical logic, it's a vulnerability.

Structural vs. Parameter Uncertainty

Parameter uncertainty (e.g., what is the hazard ratio?) can be handled with sensitivity analysis. Structural uncertainty (e.g., should we model disease progression as discrete states or continuous time?) requires fundamentally different model architectures. Both matter, but structural uncertainty is harder to address after the fact.
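
To make the distinction concrete, below is a minimal Python sketch contrasting the two structures mentioned above: a partitioned survival model that reads state occupancy directly off the survival curves, and a state-transition (Markov) model that builds occupancy from transition probabilities. Every rate and probability here is an illustrative placeholder, not a value from any real submission.

```python
import numpy as np

# Illustrative inputs only: hypothetical monthly cycles over a 10-year horizon
cycles = 120
t = np.arange(cycles)
rate_pfs, rate_os = 0.05, 0.02            # assumed exponential event rates

# --- Partitioned survival: occupancy read directly from the curves ---
pfs = np.exp(-rate_pfs * t)               # progression-free survival
os_ = np.exp(-rate_os * t)                # overall survival
psm_progressed = np.clip(os_ - pfs, 0, None)

# --- State transition (Markov): occupancy built from transition probabilities ---
p_prog, p_die_pf, p_die_pd = 0.04, 0.01, 0.03   # assumed per-cycle probabilities
P = np.array([
    [1 - p_prog - p_die_pf, p_prog, p_die_pf],  # from progression-free
    [0.0, 1 - p_die_pd, p_die_pd],              # from progressed
    [0.0, 0.0, 1.0],                            # dead is absorbing
])
occupancy = np.zeros((cycles, 3))
occupancy[0] = [1.0, 0.0, 0.0]
for c in range(1, cycles):
    occupancy[c] = occupancy[c - 1] @ P

# Compare time spent in the progressed state under the two structures
print("PSM progressed life-years:", psm_progressed.sum() / 12)
print("STM progressed life-years:", occupancy[:, 1].sum() / 12)
```

Even with loosely matched inputs, the two structures rarely produce identical state occupancy, which is exactly why agencies ask to see both presented and the base-case choice justified on clinical grounds.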

4. Deterministic-Only Sensitivity Analysis

Running only one-way deterministic sensitivity analyses (DSA) is no longer acceptable for any major HTA agency. Yet we still see submissions where the probabilistic sensitivity analysis (PSA) is treated as an afterthought — run once at the end, with limited interpretation.

The problem with deterministic-only analysis:

  • It doesn't capture the interaction between parameters
  • It can't generate cost-effectiveness acceptability curves
  • It gives a false sense of certainty
  • It's increasingly seen as a signal of low methodological rigor

How to avoid it: Build your PSA from the start, not as a post-hoc addition. Define distributions for all key parameters at the time you populate the model. Run at least 5,000 iterations and present results as scatter plots, acceptability curves, and expected value of information analyses.
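
As a rough illustration of what "building the PSA from the start" looks like, the sketch below draws parameter values from assumed distributions and traces out a cost-effectiveness acceptability curve. The distributions, cost levels, and QALY figures are invented purely for the example; the point is the workflow, not the numbers.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 5_000                     # at least 5,000 PSA iterations

# Illustrative parameter distributions (all values are assumptions for this sketch)
cost_new = rng.gamma(shape=100, scale=400, size=n_iter)   # mean ~ 40,000
cost_cmp = rng.gamma(shape=100, scale=250, size=n_iter)   # mean ~ 25,000
qaly_new = rng.beta(80, 20, size=n_iter) * 5              # QALYs over the horizon
qaly_cmp = rng.beta(70, 30, size=n_iter) * 5

delta_cost = cost_new - cost_cmp
delta_qaly = qaly_new - qaly_cmp

# Acceptability curve: probability the new therapy is cost-effective
# at each willingness-to-pay threshold
thresholds = range(0, 100_001, 5_000)
ceac = [(wtp * delta_qaly - delta_cost > 0).mean() for wtp in thresholds]

for wtp, p in zip(list(thresholds)[::4], ceac[::4]):
    print(f"WTP {wtp:>7,}: P(cost-effective) = {p:.2f}")
```

Because the distributions are defined alongside the point estimates, the same inputs feed the deterministic base case, the scatter plot, and the acceptability curve, rather than being reverse-engineered at the end of the project.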

5. Missing Budget Impact Model

Many teams treat the budget impact model (BIM) as a secondary deliverable — something to produce after the cost-effectiveness model is done and only if the agency specifically requests it. This is a strategic error.

Most HTA agencies now require or strongly recommend budget impact analysis alongside cost-effectiveness. Even when a therapy demonstrates good cost-effectiveness, a high total budget impact can trigger:

  • Requests for managed entry agreements
  • Delayed or conditional approvals
  • Requirements for real-world evidence collection
  • Outright rejection on affordability grounds

How to avoid it: Build your budget impact model in parallel with your cost-effectiveness model. Use consistent assumptions, and be transparent about uptake projections. Agencies are more forgiving of high budget impact when the manufacturer demonstrates awareness of affordability constraints and proposes mitigation strategies.
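
The core budget impact calculation is simple once the uptake assumptions are explicit, which is why there is little excuse for leaving it until the end. The sketch below uses hypothetical population, uptake, and cost figures purely to show the structure.

```python
# Hypothetical inputs: eligible population, uptake ramp, and annual per-patient costs
eligible_patients = 10_000
uptake_by_year    = [0.05, 0.12, 0.20, 0.28, 0.35]   # assumed 5-year uptake curve
cost_new_per_yr   = 45_000    # annual cost of the new therapy (placeholder)
cost_soc_per_yr   = 18_000    # annual cost of current standard of care (placeholder)

print(f"{'Year':<6}{'Treated':>10}{'Incremental budget impact':>30}")
for year, uptake in enumerate(uptake_by_year, start=1):
    treated = eligible_patients * uptake
    # Incremental impact: patients switching from standard of care to the new therapy
    incremental = treated * (cost_new_per_yr - cost_soc_per_yr)
    print(f"{year:<6}{treated:>10,.0f}{incremental:>30,.0f}")
```

The value of keeping this next to the cost-effectiveness model is consistency: the same uptake curve, eligible population, and price assumptions should appear in both deliverables, so reviewers cannot find contradictions between them.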

Self-Assessment: Are You Ready?

Use this checklist to evaluate your submission readiness against the five pitfalls above:

  • Decision problem mapped against the agency's published scoping framework
  • Comparators validated against local clinical practice, not just the trial design
  • Structural uncertainty explored through at least two model structures
  • Probabilistic sensitivity analysis built in from the start, not bolted on
  • Budget impact model developed in parallel, with transparent uptake assumptions

Best Practice Summary

The common thread across all five pitfalls is timing. The decisions that determine submission success — comparator selection, model structure, analysis approach — are most impactful when made early and most expensive to fix when discovered late. Investing in robust HTA strategy during Phase II can save millions in Phase III and beyond.

The Bottom Line

HTA submissions fail not because the science is weak, but because the strategy is incomplete. By addressing these five pitfalls early in your development program, you dramatically improve your chances of a favorable recommendation — and ultimately, patient access to your therapy.