Key Takeaways: PHQ-9 Modality Comparison Framework
1. No Universal Winner: Paper, tablet, pre-visit digital, and AI voice are not ranked options; each is optimized for different clinic profiles. The choice depends on whether you prioritize completion rate, alert latency, or integration.
2. The Strengths of Paper: Paper remains highly effective in captive in-clinic settings, with completion rates as high as 99.8%. However, it falls short on alert latency, longitudinal tracking, and scoring accuracy.
3. The Risk of Digital Completion: Digital pre-visit forms increase overall completion when they work, but published completion rates are low (15-32%), with significant demographic inequities, particularly for Medicare-insured and minority populations.
4. Tablets and Kiosks: These offer strong in-clinic completion (~97%) and direct EHR integration. The trade-off is the overhead of hardware management and potential accessibility barriers for older patients.
5. AI Voice and Real-Time Latency: AI voice is the only modality where alerts can fire the moment a response is recorded. Evidence like the HopeBot study (ICC 0.91) suggests it matches self-administration reliability while offering superior safety routing.
6. The Equity Dimension: Patient populations complete modalities at different rates. Practices serving diverse groups must consider whether a single modality is sufficient or whether a tiered approach is needed to ensure accessibility.
PHQ-9 modality comparison in 2026 is a question every mental health and primary care clinic eventually has to answer, and it is a question that the vendor demos make harder to think through clearly, not easier. This guide cuts through the modality marketing and walks through the decision framework that actually matters.
A clinical director sits through three vendor pitches in a single afternoon.
The tablet vendor tells her that tablets are the future and paper is obsolete. The pre-visit digital vendor tells her that asynchronous screening is the only way to free up clinic time, and that any modality requiring an in-person interaction is a step backward. The AI voice vendor tells her that voice administration is the only modality with real-time alert routing, and that anything less is a workflow failure waiting to happen.
By 5 pm, she has heard three versions of “ours is the best.” None of them is wrong, exactly. None of them is right, either. Each vendor is describing the dimension on which their modality genuinely wins, and quietly skipping the dimensions on which it does not.
This is the structural problem with the PHQ-9 modality comparison content in 2026. The honest landscape is not a ranking. It is a multidimensional trade-off, and the right modality for a given clinic depends on which trade-offs that clinic can absorb and which ones it cannot.
This guide does the work that the vendor demos do not. It starts with the decision framework (the six dimensions that actually matter), then walks through what the published evidence shows about each of the four modalities in 2026, the equity considerations most posts skip entirely, and which clinic profiles each modality actually fits.
This is a buyer’s guide, not a sales pitch. MedLaunch operates in one of the four modalities discussed. The framework comes first.
1. Why “Which PHQ-9 Modality Is Best?” Is the Wrong Question

The framing problem starts at the top. A clinic owner researching PHQ-9 modalities encounters a category in which every vendor pitches their modality as universally superior, every comparison post ranks the options on a single scale, and every discussion implicitly assumes there is a winner waiting to be revealed.
There isn’t.
Paper PHQ-9 is not obsolete. Tablet PHQ-9 is not the only path forward. Pre-visit digital PHQ-9 is not always the right answer for the captive clinic moment. AI voice PHQ-9 is not categorically better than any of the above. Each of these claims is true in some clinic profiles and false in others, and treating them as universal truths is what causes practices to deploy the wrong modality and then conclude incorrectly that the modality category itself was flawed.
The right question is one the vendor demos rarely ask: what is your clinic optimizing for?
A clinic that operates a high-volume primary care check-in with a patient population including significant numbers of older patients on Medicare is optimizing differently from a digital-native therapy practice running 50-minute sessions with engaged, technology-comfortable clients. A multi-site mental health system running formal measurement-based care across 40 clinicians is optimizing differently from a solo psychiatrist managing medication for 80 long-term patients. The right modality is the one whose strengths align with what the practice values most and whose limitations the practice can absorb.
This guide is built around that question. The first part maps the decision framework. The second evaluates each modality against it. The third matches modalities to clinic profiles.
2. The Six Decision Dimensions That Actually Matter

Six dimensions distinguish the four PHQ-9 modalities from each other in 2026. Different practices weight them differently, which is the entire reason different practices land on different modalities for legitimate reasons.
Dimension 1 — Completion rate
What percentage of intended PHQ-9 administrations actually result in a completed, scored response?
This is the foundational dimension because a screening that does not happen produces no clinical signal. A modality with sophisticated scoring, perfect EHR integration, and real-time alert routing produces zero clinical value if the patient does not complete it.
Completion rates vary substantially across modalities and across patient populations. A 2025 cluster-randomized study across 39 French primary care practices reported completion rates between 93% and 100% across paper and digital in-clinic modalities. Asynchronous pre-visit digital modalities, by contrast, report completion rates between 15% and 32% in published primary care studies, a several-fold gap rather than a marginal one.
Dimension 2 — Alert latency
How quickly does a positive Question 9 response reach the clinician who needs to act on it?
This dimension matters more in some clinic contexts than others. For a primary care practice screening universally with PHQ-2 first and PHQ-9 only on positive screens, alert latency is important but not always critical. For a mental health or psychiatry practice where every visit involves PHQ-9 screening and Question 9 alert handling is a core safety workflow, alert latency directly determines whether the practice’s safety protocol fires reliably or fails on a busy Tuesday afternoon.
The honest spectrum: paper alerts arrive when the form is reviewed (often after the consultation has begun); tablet and pre-visit digital alerts arrive when the chart is opened (often during the consultation); AI voice alerts can fire the moment the response is recorded (before the patient enters the consultation room).
Dimension 3 — Workflow integration
How much friction does the modality add to the practice’s existing workflow?
A modality that requires the front desk to distribute, monitor, and collect forms during high-volume hours has different operational characteristics from one that requires no staff involvement. A modality that needs hardware management, integration projects, and EHR connectors has different setup overhead from one that operates within an existing EHR’s native form infrastructure. A modality that creates a parallel system the clinical team must monitor separately has different daily-cost characteristics from one that delivers results directly into the EHR they already use.
Dimension 4 — Patient accessibility
Which patient populations can complete the modality reliably, and which cannot?
This is the dimension most posts skip, and the one that most differentiates a serious comparison from a marketing one. Different patient populations have different accessibility profiles across modalities:
- Older patients often complete paper at higher rates than tablets, particularly when dexterity or visual acuity is a factor.
- Patients with limited digital literacy may complete in-clinic tablet but struggle with portal-based pre-visit screening.
- Hearing-impaired patients may complete paper, tablet, and portal modalities but cannot complete voice administration without accommodation.
- Patients with limited English proficiency require translated versions of the instrument across all modalities, but the practical implementation varies.
- Demographic factors influence asynchronous completion specifically. Published data shows Hispanic and Latino patients are 40% less likely to complete asynchronous PHQ-9 than non-Hispanic patients; Medicare-insured patients are 36% less likely than privately-insured patients (2024 quality improvement study, 33 Northern California clinic sites).
A practice serving a homogeneous population can optimize for that population. A practice serving a diverse population may face genuine modality-mismatch issues with any single solution.
Dimension 5 — Cost and infrastructure
What does the practice actually pay, in money and in operational overhead?
This includes obvious line items (subscription cost, hardware, integration setup) and hidden ones (staff time to distribute and collect forms, paper printing and storage, IT support for tablets, vendor BAA management, training, troubleshooting). Cost is rarely the determining factor for clinic owners who have already decided to invest in PHQ-9 automation, but it sets the floor for what modalities are operationally sustainable.
Dimension 6 — Data integrity
Once the screening is completed, how reliable is the data that reaches the clinician?
This includes scoring accuracy (manual scoring introduces a small error rate; automated scoring does not), longitudinal tracking infrastructure (does the system store and trend scores across visits, or does the clinician have to compile this manually?), EHR delivery (does the score arrive in the chart automatically, or must staff transcribe it?), and audit logging (is there a defensible record of when the screening occurred, who reviewed it, and what action followed?).
Modalities differ substantially across these sub-dimensions. A scoring error rate of 2-3% in manual paper administration is not a high rate, but at the volume of a busy practice across a year, the cumulative effect on treatment decisions is non-trivial.
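To make the scoring sub-dimension concrete, here is a minimal sketch of the arithmetic any automated modality performs: nine items scored 0-3, summed, banded against the standard PHQ-9 severity cut-points, and checked for a positive Question 9 response. The function and field names are illustrative, not any vendor's schema.

```python
# Minimal PHQ-9 scoring sketch: nine items, each scored 0-3, summed and banded.
# Severity bands are the standard PHQ-9 cut-points; the flag on Question 9
# is the alert condition every modality ultimately has to surface.

SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(items: list[int]) -> dict:
    """Score a completed PHQ-9; items is a list of nine integers, each 0-3."""
    if len(items) != 9 or any(i not in (0, 1, 2, 3) for i in items):
        raise ValueError("PHQ-9 requires nine responses, each scored 0-3")
    total = sum(items)
    severity = next(label for lo, hi, label in SEVERITY_BANDS if lo <= total <= hi)
    return {
        "total": total,
        "severity": severity,
        "item9_positive": items[8] > 0,  # any non-zero response to Question 9
    }

# Example: a moderate score with a positive Question 9 response.
print(score_phq9([2, 1, 2, 1, 1, 1, 1, 1, 1]))
# -> {'total': 11, 'severity': 'moderate', 'item9_positive': True}
```

The arithmetic is trivial, which is exactly why automating it removes the 2-3% manual error rate without adding clinical complexity.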
3. Paper PHQ-9: Where It Genuinely Wins in 2026
Paper PHQ-9 is not a strawman. It is the standard against which every other modality is measured, and it has real strengths that the marketing for digital alternatives consistently underplays.
Where paper genuinely wins
Completion rate in captive clinic settings is among the highest of any modality. The 2025 Sebo cluster-randomized study, conducted across 39 primary care practices in France, found a completion rate of 99.8% for paper administered in the waiting room. The mixed-mode comparison (paper-or-tablet) showed 96.8% completion. The difference is small but meaningful: when a patient is in the waiting room and a research assistant hands them a paper form, virtually every patient completes it.
Universal patient accessibility. Paper requires no device, no battery, no wifi, no app installation, no portal account, no hearing capability beyond the standard reading-and-writing capability the instrument already requires. It works in any clinic, in any country, for any patient who can read the language the instrument is printed in.
Zero technology cost or infrastructure overhead. The PHQ-9 is in the public domain. The form is free. The scoring is a sum of nine integers. There is no vendor BAA to negotiate, no integration project, no hardware to manage. For practices in resource-constrained settings or for practices unwilling to add another technology vendor to their stack, this is a real and underappreciated advantage.
Patient privacy in the most direct sense. No digital trail. No data flowing through a third-party system. No vendor with access to the patient’s responses. For some patient populations and some clinical contexts, this matters genuinely.
Familiarity for older and less digitally-literate patient populations. Patients who have completed paper questionnaires throughout their lives understand the format. Patients who experience tech anxiety, who have low digital literacy, or who simply prefer paper often produce more complete and more thoughtful responses on paper than on screens.
Where paper genuinely fails
Alert latency is the largest failure mode. A positive Question 9 response on a paper form is identified when the form is reviewed, typically by the medical assistant scoring it after the patient has been roomed, or by the clinician opening the chart at the start of the consultation. By that point, the patient may have been in the building for 20-30 minutes, and the clinical safety protocol has lost the most actionable window.
Manual scoring introduces error. A 2-3% scoring error rate is not high in absolute terms, but at the volume of a typical practice’s annual screenings, the cumulative effect on treatment decisions is meaningful, particularly when a miscalculated score crosses a severity threshold and changes the clinical decision that follows.
Illegibility, missing items, and incomplete forms. Paper administration produces a non-trivial rate of forms that cannot be scored due to handwriting issues, skipped items, or ambiguous responses. These are not catastrophic individually but cumulative.
No automated longitudinal tracking. Tracking PHQ-9 trends across visits requires either a clinician compiling the data by hand or a back-office staff member transcribing scores into a tracking system. Most practices intend to do this and most practices stop doing it within months of starting.
Workflow disruption during high-volume hours. The front desk distributes the form, monitors completion, retrieves the form, scores it, and routes it to the chart. This is real labor that scales linearly with volume.
The honest fit for paper
Paper remains the right answer for resource-constrained practices, for practices serving older or technology-uncomfortable patient populations, for low-volume practices where the workflow disruption is absorbable, and for practices unwilling to add a vendor relationship for the screening function. It is the wrong answer for practices where alert latency on Question 9 is a clinical priority, for practices that need automated longitudinal tracking for measurement-based care, and for high-volume practices where the workflow cost of distribute-and-collect is operationally significant.
4. Tablet and Kiosk PHQ-9: Where It Genuinely Wins

Tablet-based PHQ-9 is the most established digital modality, with deployments going back over a decade in primary care and large medical groups.
Where tablet genuinely wins
Strong in-clinic completion rates with automatic scoring. The Sebo 2025 study reported 96.8% completion for mixed paper-or-tablet modes, slightly below paper alone but with the substantial benefit of zero scoring errors and zero illegibility issues.
Direct EHR integration in well-implemented systems. Established tablet platforms offer EHR connectors that deliver the scored result into the patient’s chart automatically. The clinician opens the chart and sees the score; no transcription required.
Familiar interaction model. Touchscreens are nearly universally usable across patient populations, with a much smaller learning curve than mobile apps or portal logins.
Multiple intake forms in a single interaction. PHQ-9 is one of many forms a patient may need to complete at intake. Tablet-based intake platforms handle PHQ-9 alongside demographics, insurance verification, prior history, and other instruments, consolidating the front-desk workflow into a single patient interaction.
Captive-moment delivery. The patient is in the building. The form is in their hand. Completion happens during a window the practice controls, rather than depending on the patient to remember to complete a portal form three days before the visit.
Where tablet genuinely fails
Hardware management overhead. Tablets need to be charged, sanitized between patients, kept available, kept functional, kept updated. Tablets break, get dropped, get lost, get walked off with. The operational overhead is real, particularly at multi-site practices.
Demographic accessibility limitations. Patients with dexterity issues, vision concerns, or low digital literacy complete tablets at lower rates than paper. Older patient populations often prefer paper, and the practice that hands a 78-year-old a tablet may produce a less-complete or less-thoughtful response than the same patient would have produced on paper.
Alert latency still depends on chart-review timing. The completed PHQ-9 may flow into the EHR automatically, but the clinician’s awareness of a positive Question 9 still depends on when they open the chart. Real-time alerting requires additional configuration that many tablet platforms do not natively support.
In-person visit dependency. Tablet-based screening requires the patient to physically present at the clinic. For telehealth visits, follow-up between in-person visits, or for measurement-based care that screens at higher cadences than physical visits, tablet is not a viable single solution.
Hardware-and-environmental costs that compound. Acquisition, replacement, IT support, and software licensing combine into ongoing operational costs that scale with practice size.
The honest fit for tablet
Tablet is the right answer for primary care and multi-specialty groups that already use tablet-based intake platforms, for high-volume practices where automated scoring and EHR integration justify the hardware overhead, and for practices serving patient populations comfortable with touchscreens. It is less commonly the right answer for outpatient mental health practices where the workflow model is different and where Question 9 alert latency is a higher priority than universal in-clinic capture.
5. Pre-Visit Digital (Email, Portal, SMS) PHQ-9: Where It Genuinely Wins

Pre-visit digital PHQ-9, delivered before the visit by email, patient portal, or SMS and completed by the patient at home, has become the default modality embedded in mental-health-focused EHRs (SimplePractice, TherapyNotes, ICANotes, Owl Practice) and in many large primary care system implementations.
Where pre-visit digital genuinely wins
Score arrives before the visit begins. When the patient completes the PHQ-9 the evening before the appointment, the clinician opens the chart already informed about depression severity. The first three minutes of the consultation, which would otherwise be spent reviewing the form, are recovered.
Frees up in-clinic staff time during high-volume hours. No tablet to distribute. No paper to collect. No staff member monitoring waiting-room completion. The captive-moment workflow is removed entirely.
Higher follow-up assessment completion. A 2024 quality improvement study at 33 Northern California clinic sites found that patients who completed asynchronous (pre-visit web-based) PHQ-9 were 2.4 times more likely to also complete a Columbia Suicide Severity Rating Scale follow-up assessment than patients who completed PHQ-9 synchronously in clinic. Asynchronous delivery has a measurable positive effect on safety-protocol completion downstream.
Native to most mental-health EHRs. For practices already on SimplePractice, TherapyNotes, ICANotes, Owl Practice, or similar mental-health-focused EHRs, pre-visit PHQ-9 is included rather than a separate procurement decision.
Scales across in-person and telehealth visits. Unlike tablet, pre-visit digital works equally well for telehealth, and for many practices that single property makes it the practical default.
Where pre-visit digital genuinely fails
Variable and often disappointing completion rates. Published studies report completion rates between 15.5% and 32% for portal- or email-based health questionnaires, depending heavily on patient population, reminder cadence, and engagement strategy. The Northern California study found that overall PHQ-9 completion increased dramatically when pre-visit digital was added, but this was attributable to the asynchronous-plus-in-clinic combination, not asynchronous alone.
Significant demographic inequities. The same Northern California study found Hispanic and Latino patients were 40% less likely to complete asynchronous PHQ-9 than non-Hispanic patients. Medicare-insured patients were 36% less likely to complete asynchronously than privately-insured patients. These are not small effects, and they map onto the specific patient populations that the U.S. Preventive Services Task Force depression screening guidance is most concerned about.
Patient-driven completion. The PHQ-9 happens if the patient remembers to complete it, has the technological capability to access the portal or email, has the literacy to navigate the form, and has the motivation to finish it. Each of these is a real friction point that disproportionately affects vulnerable patients.
Alert latency depends on chart-review timing. Even when the score is in the chart before the visit, the Question 9 alert is only seen when the clinician actually opens the chart, which may not happen until the consultation begins.
Difficult to recover when completion fails. When a patient has not completed the pre-visit PHQ-9 and arrives at the clinic, the practice faces a choice: skip the screening, or fall back to a different modality (paper or tablet) for that patient. Without a fallback workflow, completion gaps simply persist.
The honest fit for pre-visit digital
Pre-visit digital is the right answer for practices already on a mental-health EHR with native PHQ-9 functionality, for therapy practices serving engaged digital-native patient populations, for telehealth-heavy practices, and for measurement-based care implementations that need to track PHQ-9 between in-person visits. It is the wrong answer when used as the only modality for a practice serving a diverse patient population: the demographic equity issues are real, and the completion gaps fall disproportionately on the patients screening guidance is most concerned about.
6. AI Voice PHQ-9: Where It Genuinely Wins
AI voice PHQ-9 is the newest of the four modalities and the smallest in terms of commercial deployment as of 2026. The defining features: the patient completes the assessment by speaking with an AI voice assistant rather than filling in a form, scoring is calculated in real time from the spoken responses, and the result, including any positive Question 9 response, is delivered directly into the EHR before the consultation begins.
Where AI voice genuinely wins
Real-time alert latency. This is the modality’s structural differentiator. The AI captures the patient’s response to Question 9 the moment it is recorded; the alert can fire to designated clinical staff within seconds, before the patient enters the consultation room. No other modality reaches this latency profile. For practices where Question 9 alert handling is a core safety workflow, this is the property that justifies adoption.
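To illustrate the structural difference, a hypothetical sketch of per-response alert routing follows. The point is not the implementation detail but where the alert condition is evaluated: at the moment each answer is captured, rather than at a later form-review or chart-open event. Function names and the notification channel are illustrative assumptions, not MedLaunch's or any other vendor's actual API.

```python
# Hypothetical sketch of per-response alert routing during voice administration.
# The structural point: the alert condition is evaluated the moment each answer
# is captured, not when a completed form is reviewed or a chart is opened.
# Function names and the notification channel are illustrative, not a vendor API.

from datetime import datetime, timezone

def notify_designated_staff(patient_id: str, message: str, captured_at: datetime) -> None:
    # Placeholder for whatever channel the practice's safety protocol uses
    # (secure message, EHR inbox task, pager), routed in seconds rather than
    # waiting for a chart-open event.
    print(f"[{captured_at.isoformat()}] ALERT for {patient_id}: {message}")

def on_response_captured(question_number: int, response_value: int, patient_id: str) -> None:
    """Called once per captured answer; fires the Question 9 alert immediately."""
    if question_number == 9 and response_value > 0:
        notify_designated_staff(
            patient_id=patient_id,
            message="Positive PHQ-9 Question 9 response captured",
            captured_at=datetime.now(timezone.utc),
        )

# Example: a non-zero response to Question 9 triggers the alert within the
# same call, before the remaining workflow steps even occur.
on_response_captured(question_number=9, response_value=2, patient_id="example-patient")
```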
Wording, completion, and scoring consistency. Every patient receives the validated nine items in the validated order, with the validated wording. The AI does not skip questions, paraphrase them, or move on before the patient has responded. Scoring is automatic with zero arithmetic error rate. The fidelity to the validated instrument is higher than any human-administered modality and matches or exceeds any digital form-based modality.
Captive-moment completion without staff involvement. The screening happens during a window the practice controls (waiting room or remote pre-visit), but does not require the front desk to distribute, monitor, or retrieve anything. The completion rate is high because the moment is captive; the workflow cost is low because no staff member is involved.
Direct evidence of agreement with self-administered PHQ-9. The HopeBot study from University College London (2025, currently a preprint) reported an intraclass correlation coefficient of 0.91 (95% CI 0.88-0.93) between voice-chatbot-administered and self-administered PHQ-9 in 132 adults across the UK and China, a level of agreement consistent with what the broader mode-of-administration literature would predict for any well-implemented delivery mode. 71% of participants reported greater trust in the chatbot version than in the self-administered version.
Pre-consultation EHR delivery. The structured score, severity classification, individual item responses, and any flagged Question 9 result land in the patient’s chart before the consultation begins. The clinician arrives at the visit already informed, with no manual reconciliation required.
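As one illustration of what structured delivery into the chart can look like, the sketch below represents a scored PHQ-9 as a FHIR-style Observation. This assumes the receiving EHR accepts FHIR resources; whether a given vendor delivers this shape, an HL7 interface, or a proprietary payload is exactly the kind of detail to verify during procurement. Identifiers and timestamps are placeholders.

```python
# Illustrative FHIR-style Observation for a scored PHQ-9, assuming the receiving
# EHR accepts FHIR resources. The LOINC code shown is the published code for the
# PHQ-9 total score; patient reference and timestamps are placeholders.

phq9_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "44261-6",  # PHQ-9 total score (verify against the target EHR)
            "display": "Patient Health Questionnaire 9 item (PHQ-9) total score [Reported]",
        }]
    },
    "subject": {"reference": "Patient/example-patient-id"},  # placeholder
    "effectiveDateTime": "2026-01-15T09:42:00Z",             # placeholder
    "valueInteger": 11,                                      # total score
    "note": [{"text": "Severity: moderate; Question 9 positive, alert routed pre-consultation"}],
}
```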
Where AI voice genuinely fails
Smallest evidence base of the four modalities. The HopeBot study is a single 132-person within-subject study and is currently a preprint. The broader mode-of-administration literature (the foundation on which voice administration sits) is well-established, but vendor-specific peer-reviewed validation of voice-AI PHQ-9 implementations is still sparse compared to paper, tablet, and portal modalities. Clinically vigilant readers should expect this evidence base to mature over the next two to three years.
Hearing capability required. Patients who are hearing-impaired or who experience age-related hearing loss may not be able to complete voice administration without accommodation. For these patients, an alternative modality is required.
Quiet environment required. The accuracy of speech recognition depends on environmental noise levels. A waiting room with multiple patients, ambient conversation, or operational noise may produce capture issues that a tablet or paper form would not.
Vendor and BAA dependency. Like any digital modality, voice administration requires a vendor relationship, a Business Associate Agreement signed before patient data flows, and ongoing vendor management. Practices unwilling or unable to take on additional vendor relationships will not adopt voice for procurement reasons regardless of clinical fit.
Newest category — implementation patterns less established. Tablet kiosks, portal-based screening, and EHR-native PHQ-9 forms have over a decade of operational best practices established. Voice administration is new enough that practices adopting it are often the first deployments in their region or specialty, with all the implementation friction that implies.
The honest fit for AI voice
AI voice is the right answer for outpatient mental health and psychiatry practices where Question 9 alert latency is a clinical priority, where pre-consultation delivery of the scored result has measurable impact on consultation quality, and where the captive-moment-without-staff-involvement property removes meaningful workflow load from a busy clinic. It is less commonly the right answer for primary care practices already on tablet-based intake platforms, for therapy practices where the EHR-native pre-visit form already meets clinical needs, and for practices serving patient populations with significant hearing or environmental constraints.
7. The Honest Comparison Matrix
A side-by-side view across the six dimensions. Specific implementations vary by vendor; verify product capabilities directly before procurement.
| Dimension | Paper | Tablet/Kiosk | Pre-Visit Digital | AI Voice |
|---|---|---|---|---|
| Completion rate | 99.8% in-clinic captive (Sebo 2025) | 96.8% in-clinic captive (Sebo 2025) | 15-32% async, varies by population | High in captive moments; HopeBot 132/132 completion |
| Alert latency | Form-review dependent (typically 15-30 min after completion) | Chart-review dependent (typically at consultation start) | Chart-review dependent | Real-time (seconds after response captured) |
| Workflow integration | High disruption (distribute, collect, score, route) | Medium (distribute, collect; auto-scored, EHR integration) | Low in-clinic; setup overhead and monitoring required | Low (no staff involvement during administration) |
| Patient accessibility | Universal (literate); preferred by older populations | Most patients (digitally literate); harder for low-dexterity, low-vision | Digitally literate, motivated, English-proficient; demographic inequities documented | Hearing-capable in quiet environments; alternative needed for hearing-impaired |
| Cost & infrastructure | Print + manual labor; no vendor | Hardware + maintenance + integration | EHR-native (often included) or vendor | Vendor + BAA + integration |
| Data integrity | Manual scoring (small error rate); no auto longitudinal | Auto-scored; EHR-integrated; longitudinal in vendor system | Auto-scored; EHR-integrated; longitudinal native to most EHRs | Auto-scored; EHR delivery; real-time alerting; longitudinal native |
A few observations about this matrix.
The four modalities are not ranked options. They are specialized tools, each strongest on different dimensions. Paper wins on accessibility and cost. Tablet wins on captive-moment completion and EHR integration. Pre-visit digital wins on follow-up assessment completion and telehealth scalability. Voice wins on alert latency and pre-consultation delivery.
The dimension that genuinely separates AI voice from the other three is alert latency. No other modality fires the Question 9 alert in real time. For practices where this is the priority, voice is the structural answer; for practices where it is not, the other three modalities are competitive on the dimensions that matter to that practice.
8. Which Modality Fits Which Clinic Profile
The decision framework, applied.
Solo therapist or small mental-health practice already on a competent EHR (SimplePractice, TherapyNotes, ICANotes, Owl Practice). Pre-visit digital PHQ-9 native to the EHR is almost always the right starting point. It is included in the platform, automatically scored, integrated into the chart, and supports longitudinal tracking. Adding a separate vendor on top of a competent native solution is a procurement decision that needs to be justified by something the native tool does not provide.
High-volume primary care practice with established tablet-based intake. Tablet PHQ-9 is the natural fit. The check-in workflow is already built around tablet-based forms; adding PHQ-9 to that workflow is a configuration question rather than a procurement one. The completion rate is high, the scoring is automatic, and the integration is established.
Mental health or psychiatry practice that values real-time Question 9 alert routing and pre-consultation delivery. AI voice PHQ-9 is the structural fit. The alert latency advantage matters most when the practice’s safety protocol is built around clinician notification before the consultation begins, and the pre-consultation delivery has measurable clinical value. This is the cluster MedLaunch was specifically built for.
Resource-limited practice or paper-comfortable patient population. Paper PHQ-9 remains the right answer. Universal accessibility, zero technology cost, and high captive-moment completion outweigh the alert-latency and longitudinal-tracking limitations for practices that do not need those features. The honest assessment is that paper is not obsolete; it is a specific solution for a specific profile.
Multi-site behavioral health organization running formal measurement-based care. A measurement-based-care platform is usually the right answer at this scale, with PHQ-9 delivered through whichever modality the platform supports best typically pre-visit digital with tablet or voice fallback. Single-modality deployments tend to underperform at this scale because of the demographic equity issues.
Telehealth-heavy practice. Pre-visit digital is the natural fit because tablet-based capture is not viable across telehealth visits. Voice administration is also viable for the captive moment of a telehealth check-in.
Practice serving a diverse patient population (mixed digital literacy, language, age, insurance type). No single modality fits all patients. The right design is multi-modality, with clinic-side decision rules about which modality is offered to which patient; see Section 10.
9. The Equity and Accessibility Dimension Most Posts Ignore

This is the section most modality-comparison content skips, and it is the section that most differentiates a thoughtful procurement decision from a feature-checklist comparison.
PHQ-9 modalities are not equally accessible across patient populations. The published evidence is direct.
The Northern California 33-clinic quality improvement study found Hispanic and Latino patients were 40% less likely to complete asynchronous (pre-visit digital) PHQ-9 than non-Hispanic patients. Medicare-insured patients were 36% less likely to complete asynchronously than privately-insured patients. These are not small effects, and they fall on the specific patient populations the U.S. Preventive Services Task Force depression screening guidance is most concerned about: older adults, patients with limited English proficiency, lower-income patients, and patients with chronic conditions.
The implication is uncomfortable: a practice that adopts pre-visit digital as its only modality may produce excellent screening rates among privately-insured, English-proficient, digital-native patients, and substantially worse screening rates among the patients most at clinical risk. The screening data the practice reviews looks fine in aggregate. The screening data segmented by demographic does not.
Tablet has its own accessibility profile. Patients with dexterity issues, vision concerns, or low digital literacy complete tablets at lower rates than paper. Older patients on average produce more thoughtful and complete responses on paper than on touchscreens.
Voice administration has yet another accessibility profile. Hearing-impaired patients cannot complete voice administration without accommodation. Patients with limited English proficiency require translated voice models, which not every vendor supports. Patients with social anxiety around speaking may produce different responses than the same patients would produce on a paper form.
Paper, despite its other limitations, remains the most universally accessible modality across patient populations.
The honest implication for any practice serving a diverse patient population: a single-modality deployment may produce equity issues that are not visible in aggregate metrics but show up clearly when screening data is segmented by demographic. Practices that take this seriously typically deploy more than one modality, with clinic-side rules about which modality is offered to which patient: paper as fallback for tablet failure modes, in-clinic capture for patients who do not complete pre-visit digital, voice for patients for whom the captive-moment alert latency matters and who can complete it accessibly.
The goal is not modality purity. The goal is that every patient receives an appropriately-administered PHQ-9 with the alert latency the practice’s safety protocol requires.
10. When the Right Answer Is More Than One Modality
The hybrid case is more common than single-modality deployments would suggest, and it is worth articulating directly.
Several common hybrid patterns:
Pre-visit digital primary, in-clinic fallback. The practice sends PHQ-9 by portal or email before the appointment. Patients who do not complete it receive a tablet or paper form at check-in. This pattern captures the engaged digital-native patients efficiently and recovers the gaps without forcing anyone into a modality that does not work for them.
Voice primary, paper fallback. The practice administers PHQ-9 by voice in a quiet check-in space for most patients. Hearing-impaired patients, patients in noisy environments, or patients who decline voice receive a paper form. This pattern captures the alert-latency advantage where it works and preserves universal accessibility where voice does not.
Tablet primary, portal between visits. Tablet handles the in-clinic measurement; pre-visit digital handles between-visit measurement-based care for patients in active treatment. This pattern matches the modality to the clinical context rather than imposing a single modality across all contexts.
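The clinic-side decision rules behind these hybrid patterns can be made explicit. The sketch below is a minimal, hypothetical example assuming three patient attributes (pre-visit completion status, hearing accommodation needs, visit type); every practice will define its own attributes and rules.

```python
# Minimal sketch of clinic-side modality routing for a hybrid deployment.
# The patient attributes and rules below are hypothetical; each practice
# defines its own, matching the patterns described above.

from dataclasses import dataclass

@dataclass
class Patient:
    completed_previsit_digital: bool
    needs_hearing_accommodation: bool
    visit_type: str  # "in_person" or "telehealth"

def choose_modality(p: Patient) -> str:
    if p.completed_previsit_digital:
        return "none_needed"        # score is already in the chart
    if p.needs_hearing_accommodation:
        return "paper"              # universal accessibility fallback
    if p.visit_type == "telehealth":
        return "voice_at_checkin"   # captive moment at session start
    return "voice_in_clinic"        # real-time Question 9 alert routing

# Example: an in-person patient who skipped the pre-visit form and needs
# no accommodation is routed to in-clinic voice administration.
print(choose_modality(Patient(False, False, "in_person")))  # -> voice_in_clinic
```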
The hybrid case is also where the multi-modality vendor question becomes operational. Practices managing two or three modalities through different vendors face integration overhead. Practices managing them through a single vendor or through a single EHR with multi-modality support have a substantially simpler operational profile.
For mental health and psychiatry practices specifically, the most common hybrid that produces the best clinical-and-equity outcome is voice for in-clinic captive-moment screening with real-time alert routing, plus pre-visit digital for between-visit measurement-based-care tracking, plus paper as the universal fallback for patients for whom the digital modalities do not work. This is the pattern MedLaunch is designed to fit into rather than replace: voice handles the alert-latency-critical layer; the existing EHR handles the longitudinal tracking; paper remains available where needed.
11. Frequently Asked Questions
Which PHQ-9 modality has the highest completion rate?
In captive in-clinic settings, paper has the highest documented completion rate (99.8% in the 2025 Sebo cluster-randomized study), slightly above mixed paper-or-tablet (96.8%). Asynchronous pre-visit digital modalities have substantially lower completion rates (15-32% in published studies) but reach patients who would not present in person for an in-clinic screening. AI voice administration in captive moments produces high completion rates comparable to other in-clinic modalities. Completion rate alone is not the right comparison metric; the right comparison includes which patients are completing in each modality and whether the practice’s overall screening reach is improving.
Is paper PHQ-9 really still acceptable in 2026?
Yes, for the right practice profile. Paper is universally accessible, requires no infrastructure, has the highest in-clinic completion rates of any modality, and produces clinically valid PHQ-9 scores when scored correctly. The trade-offs are alert latency on positive Question 9 responses, manual scoring error rate, and the absence of automated longitudinal tracking. For practices where these trade-offs are absorbable, paper remains a defensible choice. The vendor narrative that paper is obsolete in 2026 is a marketing position, not a clinical conclusion.
Why does pre-visit digital have such a wide completion-rate range?
Completion rates for portal-based or email-based PHQ-9 vary substantially based on patient population, reminder cadence, engagement strategy, and EHR-portal usability. Studies report rates between 15.5% and 32% across primary care populations, with significant demographic variation. Higher engagement strategies (multi-channel reminders, simplified portal flows, SMS delivery rather than email) tend to produce higher completion rates than minimal-effort deployments.
How fast does an AI voice alert reach the clinician compared to other modalities?
AI voice alerts can fire in real time, within seconds of the patient’s response being captured and before the patient enters the consultation room. Paper alerts arrive when the form is reviewed, typically 15-30 minutes after completion. Tablet and pre-visit digital alerts arrive when the chart is opened, typically at the start of the consultation. The latency difference is structural, not vendor-specific: only the voice modality captures the response in real time and routes the alert without waiting for a form-review or chart-open event.
Does AI voice produce the same PHQ-9 scores as paper?
The most direct evidence is the HopeBot study (University College London, 2025, currently a preprint), which reported an intraclass correlation coefficient of 0.91 (95% CI 0.88-0.93) between voice-chatbot-administered and self-administered PHQ-9 in 132 adults. This level of agreement is consistent with what the broader mode-of-administration literature would predict for any well-implemented delivery mode. The instrument’s underlying validity is preserved when the validated nine items are delivered in their validated form across modalities.
Can a clinic deploy more than one modality simultaneously?
Yes. Many practices deploy two or three modalities together: pre-visit digital primary with in-clinic fallback, voice for captive-moment screening with paper as accessibility fallback, or tablet for in-clinic with portal for between-visit measurement-based care. Multi-modality deployment matches the modality to the clinical context rather than imposing a single modality across all contexts. The operational overhead depends on whether the modalities are managed through a single vendor or platform versus separate vendors with separate integrations.
What about HIPAA and patient privacy across the four modalities?
Paper PHQ-9 has no digital privacy considerations beyond standard medical records management. Tablet, pre-visit digital, and voice administration all involve a vendor processing patient data on the practice’s behalf and require Business Associate Agreements, encrypted data flows, and appropriate access controls. Specific HIPAA implementation varies by vendor and should be verified directly during procurement. None of the digital modalities are categorically more or less HIPAA-aligned than the others; the relevant question is the specific vendor’s implementation.
Which modality is best for telehealth practices?
Pre-visit digital is the most natural fit for telehealth-only practices, since tablet and paper modalities require a physical visit. AI voice administration is also viable for telehealth, particularly when administered as the patient checks into the telehealth session. Some practices use a hybrid pattern: pre-visit digital primary, with voice administration as a backup at the start of the telehealth session for patients who did not complete the pre-visit form.
Which modality is best for primary care?
Tablet-based intake platforms have over a decade of operational track record in primary care and are often the right structural fit, particularly for practices already using tablet-based check-in workflows. Pre-visit digital is increasingly common as a complement, particularly for practices targeting USPSTF screening compliance with population-level workflow. AI voice is less commonly deployed in primary care as of 2026 because the workflow model differs from that of outpatient mental health.
Which modality is best for outpatient mental health?
The honest answer depends on what the practice is optimizing for. EHR-native pre-visit digital is the default for solo therapists already on SimplePractice, TherapyNotes, ICANotes, or Owl Practice. AI voice is the structural fit for practices where Question 9 alert latency is a clinical priority and where pre-consultation delivery to the clinician has measurable value. Larger multi-site practices running formal measurement-based care often deploy a measurement-based-care platform (NeuroFlow, Greenspace, Mirah) that handles PHQ-9 across multiple delivery modes.
12. Conclusion
The clinical director in the demo room is hearing three confident pitches and one honest conclusion: there is no universal best PHQ-9 modality.
Paper, tablet, pre-visit digital, and AI voice are not ranked options on a single scale. They are specialized solutions, each strongest on different dimensions, each weaker on others, and each appropriate for different clinic profiles. The right modality is the one whose strengths align with what the practice values most (completion rate, alert latency, workflow integration, patient accessibility, cost, or data integrity) and whose limitations the practice can absorb.
The published evidence is honest about the trade-offs. Paper has the highest captive-moment completion rate. Tablet automates scoring and integrates with the EHR. Pre-visit digital frees up in-clinic time and improves follow-up assessment completion, with documented demographic equity issues. AI voice is the only modality with real-time alert latency, with the smallest evidence base of the four and specific accessibility constraints.
The equity dimension matters more than most posts acknowledge. A practice serving a diverse patient population may produce excellent aggregate metrics with a single-modality deployment while quietly under-screening the patients most at clinical risk. Multi-modality deployment, with clinic-side rules about which modality fits which patient, is the answer most thoughtful practices arrive at.
For mental health and psychiatry practices in 2026, AI voice is the modality whose real-time Question 9 alert latency matches the clinical priority of safety-protocol fidelity. For practices where that latency is not the priority, the other three modalities offer competitive answers on the dimensions that matter to those practices.
The right question is not which modality wins. The right question is which dimensions matter most to your practice and which trade-offs you can live with.
Walk through which modality matches your clinic’s priorities.
Book a 20-minute call to walk through your practice’s specific completion-rate goals, alert-latency requirements, and EHR integration constraints to find the right fit.