v22.0: Non-Duplicate Value Decision Plan (CareConnect + 211 Complement Strategy)
Summary
This document converts three inputs into one objective path forward:
- The original strategic plan (ambitious differentiation through reliability/referrals/navigation).
- Devil's-advocate critique (how each element can fail or become duplicate/low value).
- Steelman critique (best-case strategic value for each element).
The output is a decision system, not a belief system:
- Pre-register hypotheses.
- Pre-register falsifiers.
- Score options with a weighted model.
- Execute in stage gates.
- Keep only initiatives that hit outcome thresholds.
This is designed to prevent CareConnect from becoming:
- a subset of 211,
- a duplicate of 211,
- or an inferior alternative to 211.
User Review Required
> [!IMPORTANT]
> This plan is decision-complete, but Phase 1 pilot operations must not start until these approvals are explicit.
Execution references:
Required approvals:
- Confirm v22.0 objective function (connection outcomes over directory breadth).
- Confirm hard constraints (no breadth race with 211, no query-text logging, no claim overstatement).
- Confirm pilot domain (default: housing intake).
- Confirm pilot partner target range (5-10 providers, 2-3 frontline orgs).
- Confirm stage-gate thresholds and kill rules as written.
- Confirm API integration redlines (no user query-text sharing; no forced user-identifying telemetry).
- Confirm integration-blocked contingency path (narrow scope or responsible deprecation criteria).
If any item changes, update this document before Gate 0 sign-off.
Phase 0 Progress Snapshot (2026-03-09)
Completed technical foundation:
- Pilot schema migration applied (`20260308120000_v22_pilot_phase0_tables`).
- Admin hardening migration applied (`20260308121000_v22_harden_bulk_update_service_status_admin_check`).
- Pilot storage verification scripts added (preflight + post-migration + rollback helper).
- Internal pilot APIs implemented and documented (`/pilot/events/*`, `/pilot/metrics/scorecard`, `/pilot/integration-feasibility`).
- Pilot API/schema/metrics tests implemented and passing.
- Step 1 approvals D1-D7 locked in approval checklist.
- Integration feasibility decision recorded as `conditional` with controls C1-C3.
- Offline/local threat model updated with mitigation owners/dates and no unresolved critical findings.
- Gate 0 minimum-mode baseline report artifact created.
- Gate 0 minimum-mode baseline M1/M3 execution completed with read-only linked-project queries.
Still required before Gate 0 exit:
- Complete the remaining conditional integration control C1 before any external integration activation.
Gate 0 Exit Readiness (Current)
- Current readiness: NO-GO (as of 2026-03-29)
Blocking items:
- C1 legal closure is pending: candidate partner terms are still needed for clause-level review.
- Partner-operation evidence for D4 (named partner list + outreach owner execution) is pending.
- Baseline report is complete, but M1/M3 are `NULL` in the current window due to zero observed events.
Canonical gate decision control:
Decision Log Authority
This document defines strategic intent and thresholds. Decision records and sign-off state are tracked in:
Evidence-Quality Protocol (Critical)
This plan now incorporates prior internal research and two external AI-agent research memos.
Rules for using those memos:
- External-agent outputs are hypothesis inputs, not facts, because those agents did not inspect the CareConnect codebase.
- Any externally sourced claim affecting architecture, governance, or go/no-go decisions must be re-validated using:
- primary source citation, and
- local repo evidence where relevant.
- Claims with weak/indirect sources must be tagged `investigate` and cannot gate critical decisions.
- Numeric scoring from external-agent reports is non-binding and advisory only.
Problem Statement
Current strategic risk:
- CareConnect can drift into "directory competition" with 211, where 211 has structural breadth and channel advantages.
- CareConnect can make claims that are stronger than current evidence (verification freshness, accessibility absolutes, comparative language coverage framing).
- CareConnect can spend engineering effort on features that improve UI but do not improve real-world service connection outcomes.
Desired strategic position:
- CareConnect complements 211.
- CareConnect owns local "last-mile access performance" in Kingston.
- CareConnect proves value with measurable outcome improvements.
Inputs (Source Synthesis)
Input A: Original Plan (Opportunity Thesis)
Strongest opportunities identified:
- Service Reliability Layer
- Warm Referral Layer
- Access Navigation Layer
- Hard scope discipline (stop breadth-first duplication)
Input B: Devil's Advocate (Failure Modes)
Primary risks:
- Rebranding without outcome gains.
- Operational complexity outpacing team capacity.
- Stale operational data causing trust erosion.
- Low partner adoption for referral workflows.
- Attribution bias in pilot outcomes.
Input C: Steelman (Strategic Upside)
Primary upside:
- Differentiation via connection success (not listing count).
- Durable value if CareConnect becomes frontline workflow infrastructure.
- High leverage if failed-contact and time-to-connection metrics improve materially.
Objective Function and Constraints
Primary Objective
Improve successful service connection outcomes for Kingston residents and frontline workers.
Secondary Objectives
- Preserve privacy-first architecture.
- Reduce access friction for high-need scenarios.
- Increase provider and frontline trust in data actionability.
Hard Constraints
- Do not build a breadth race with 211.
- Do not degrade privacy guarantees (no raw query logging).
- Do not introduce unverifiable comparative claims.
- Do not scale initiatives that fail stage-gate metrics.
- Do not accept third-party integration terms that require raw user query sharing.
- Do not represent external-agent claims as confirmed evidence without re-validation.
Non-Goals (v22.0)
- Provincial expansion as a core objective.
- Becoming a primary general-purpose directory for all service categories.
- Large-scale data ingestion volume growth as a success metric.
Decision Framework (Objectivity Protocol)
Every initiative must have:
- Hypothesis (steelman claim)
- Falsifier (devil's-advocate failure condition)
- Metric (quantitative test)
- Kill threshold (explicit stop condition)
- Evidence horizon (by when signal must appear)
No initiative proceeds to scale without passing all five fields.
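For concreteness, a minimal TypeScript sketch of the five required fields and the completeness check (illustrative names only, not existing CareConnect types):

```ts
// Illustrative only: a possible shape for entries in the initiative
// hypothesis register; field names are assumptions, not existing types.
export interface InitiativeHypothesis {
  id: string;                   // e.g. "H1"
  initiative: string;           // e.g. "Service Reliability Layer"
  hypothesis: string;           // steelman claim
  falsifier: string;            // devil's-advocate failure condition
  primaryMetric: string;        // quantitative test
  killThreshold: string;        // explicit stop condition
  evidenceHorizonWeeks: number; // by when the signal must appear
}

// An initiative may proceed to scale only if all five fields are filled in.
export function hasAllFiveFields(h: InitiativeHypothesis): boolean {
  return (
    [h.hypothesis, h.falsifier, h.primaryMetric, h.killThreshold].every(
      (field) => field.trim().length > 0,
    ) && h.evidenceHorizonWeeks > 0
  );
}
```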
Initiative Hypothesis Register
| ID | Initiative | Hypothesis (Steelman) | Falsifier (Devil) | Primary Metric | Kill Threshold | Evidence Horizon |
|---|---|---|---|---|---|---|
| H1 | Service Reliability Layer | Operational status signals materially reduce dead-end attempts. | Data becomes stale/noisy and harms trust. | Failed contact rate | <10% relative improvement after pilot cycle 1 | 8 weeks |
| H2 | Warm Referral Layer | Referral tokens/workflows increase completed connections. | Partner workflow adoption stays too low. | Referral completion capture rate | <30% partner usage by end of pilot cycle 1 | 8 weeks |
| H3 | Access Navigation Layer | Barrier-aware scripts reduce abandonment. | Scripts become stale/unused and create maintenance burden. | Search-to-contact conversion and repeat-failure rate | No statistically meaningful conversion lift and no repeat-failure reduction | 10 weeks |
| H4 | Scope Discipline | Narrowing scope to last-mile outcomes increases strategic fit. | Product becomes too thin and loses user utility. | Monthly active frontline sessions + connection outcomes | Active usage declines >20% with no outcome gains | 12 weeks |
| H5 | Privacy Positioning | Precise privacy claims strengthen trust without harming analytics utility. | Copy changes reduce stakeholder confidence or insight quality. | Trust survey + ability to monitor outcomes without query logging | Trust does not improve and governance team cannot monitor outcomes | 6 weeks |
| H6 | 211 Integration Feasibility | CareConnect can consume 211 baseline data while preserving privacy redlines. | Integration requires prohibited telemetry or is operationally blocked. | Signed technical/legal feasibility decision | No viable telemetry-safe path by end of Phase 0 | 2 weeks |
| H7 | Local Data Security | Offline/local storage can remain privacy-protective under device-loss threat model. | Local storage design introduces material confidentiality risk. | Security review findings (critical/high) | Any unresolved critical finding at pilot launch | 4 weeks |
| H8 | User Preference Fit | Target cohorts value private/offline self-serve for selected use cases. | High-need users overwhelmingly require human-assisted channel first. | Frontline-assisted preference and outcome study | No demonstrated fit for priority cohorts in pilot domain | 8 weeks |
Weighted Scoring Model
Each initiative is scored before build and at each stage gate.
Criteria and Weights
- Non-duplicate value vs 211: 35%
- Outcome impact potential (connection success): 25%
- Feasibility with current team/capacity: 20%
- Evidence speed (time to clear signal): 10%
- Risk/governance burden (inverse): 10%
Scoring Formula
Weighted Score = sum(criterion_score_1_to_5 * weight)
Thresholds
- >= 4.0: Green (build or scale)
- 3.2 - 3.9: Yellow (pilot only, tighten scope)
- < 3.2: Red (defer or kill)
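A minimal TypeScript sketch of the scoring formula and threshold bands above (criterion keys and weights mirror this document; not an existing module):

```ts
// Illustrative only: weighted scoring model as defined above.
type Criterion =
  | "nonDuplicateValue"
  | "outcomeImpact"
  | "feasibility"
  | "evidenceSpeed"
  | "riskBurdenInverse";

const WEIGHTS: Record<Criterion, number> = {
  nonDuplicateValue: 0.35,
  outcomeImpact: 0.25,
  feasibility: 0.2,
  evidenceSpeed: 0.1,
  riskBurdenInverse: 0.1,
};

// Each criterion is scored 1-5 before build and at each stage gate.
export function weightedScore(scores: Record<Criterion, number>): number {
  return (Object.keys(WEIGHTS) as Criterion[]).reduce(
    (total, c) => total + scores[c] * WEIGHTS[c],
    0,
  );
}

export function band(score: number): "green" | "yellow" | "red" {
  if (score >= 4.0) return "green";  // build or scale
  if (score >= 3.2) return "yellow"; // pilot only, tighten scope
  return "red";                      // defer or kill
}

// Example: criterion scores of 5, 4, 3, 4, 3 give 4.05 -> "green".
```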
Tie-breaker Rule
When two initiatives tie, pick the one with:
- higher non-duplicate score,
- lower operational burden,
- faster evidence horizon.
Stage-Gated Execution Plan
Phase 0: Baseline + Instrumentation (2 weeks)
Goal:
- Establish credible pre-intervention baseline.
- Build measurement plumbing before product expansion.
Deliverables:
- Baseline definitions for failed contact rate, time to connection, referral completion.
- Event schema and dashboards for pilot metrics.
- Partner onboarding packet and data-sharing boundaries.
- Pre-registered analysis plan (what counts as success/failure).
- 211 API/legal feasibility assessment with explicit privacy redline review.
- Offline/local-storage threat model and mitigation checklist.
- Evidence re-validation log for all external-agent-derived claims used in planning.
Gate 0 Exit Criteria:
- All primary metrics have baseline values.
- Privacy review approved.
- Pilot participants confirmed.
- Measurement queries validated end-to-end.
- Integration feasibility decision recorded (`go`, `conditional`, or `blocked`).
- No unresolved critical security findings in offline/local-storage design.
Phase 1: Focused Pilot Build (6-8 weeks)
Scope:
- Domain: housing intake (default pilot domain).
- Pillars in scope: Service Reliability + Warm Referral.
- Access Navigation in constrained mode (only for pilot services).
Deliverables:
- Provider status update workflow.
- Frontline referral token workflow.
- Outcome capture states and barrier reasons.
- Weekly pilot scorecard with trend analysis.
Gate 1 Exit Criteria (must pass all):
- Failed contact attempts reduced by at least 30% vs baseline.
- Time-to-successful-connection reduced by at least 25%.
- Reliability freshness SLA compliance at least 70%.
- Referral outcome capture at least 50% of pilot referrals.
- Data-decay audit fatal error rate at or below 10% (see M6).
Gate 1 Decisions:
- Pass all: proceed to Phase 2 scale.
- Miss 1 metric: iterate one more cycle with narrowed scope.
- Miss 2+ metrics: kill weakest initiative and re-run with one pillar.
Phase 2: Conditional Expansion (8-12 weeks)
Scope:
- Expand only successful initiative(s) to second domain.
- Keep strict stop conditions.
- Add Access Navigation only if reliability and referral foundations are stable.
Gate 2 Exit Criteria:
- Outcome gains persist in second domain.
- Operational burden remains within team capacity.
- No material privacy/governance regressions.
Gate 2 Decisions:
- Scale to broader Kingston coverage.
- Keep as focused infrastructure for specific domains.
- Sunset non-performing modules.
Pilot Design (Bias-Controlled)
Population
- 5-10 high-volume Kingston providers in selected domain.
- 2-3 frontline organizations (caseworkers, navigators, outreach staff).
Cohort Structure
- Pilot cohort: services/orgs using new workflows.
- Comparison cohort: similar services/orgs not yet using new workflows.
Measurement Window
- Baseline window: 4 weeks pre-pilot.
- Pilot window: 8 weeks.
- Optional extension window: 4 weeks for remediation cycle.
Bias Controls
- Pre-register metric formulas before pilot start.
- Avoid changing success thresholds mid-cycle.
- Control for seasonality where possible (same service categories).
- Document external shocks (policy/funding/weather/service closures).
- Separate adoption failure from product efficacy failure.
Metrics Catalog (Precise Definitions)
M1: Failed Contact Rate
Definition:
failed_contact_rate = failed_contact_events / total_contact_attempts
Failed contact events include:
- disconnected phone,
- no response after defined SLA window,
- intake not available when marked available,
- referral rejected due to invalid routing.
M2: Time to Successful Connection
Definition:
time_to_connection = timestamp(successful_connection) - timestamp(initial_search_or_referral)
Reported as:
- median,
- p75,
- p90.
M3: Referral Completion Capture Rate
Definition:
completion_capture_rate = referrals_with_terminal_state / total_referrals
Terminal states:
- connected,
- failed,
- client_withdrew,
- no_response_timeout.
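A minimal TypeScript sketch of the M1-M3 math, assuming simplified event records (not the actual pilot event schema):

```ts
// Illustrative pilot-scorecard math for M1-M3; record shapes are assumptions.
interface ContactAttempt {
  failed: boolean; // disconnected phone, SLA timeout, unavailable intake, invalid routing
}

interface Referral {
  createdAt: Date;    // initial search or referral timestamp
  connectedAt?: Date; // set when a successful connection is recorded
  terminalState?: "connected" | "failed" | "client_withdrew" | "no_response_timeout";
}

// M1: failed contact rate
export function failedContactRate(attempts: ContactAttempt[]): number | null {
  if (attempts.length === 0) return null; // mirrors the NULL zero-event case
  return attempts.filter((a) => a.failed).length / attempts.length;
}

// M2: time to successful connection, reported as median / p75 / p90 (hours)
export function timeToConnection(referrals: Referral[]) {
  const hours = referrals
    .filter((r) => r.connectedAt)
    .map((r) => (r.connectedAt!.getTime() - r.createdAt.getTime()) / 36e5)
    .sort((a, b) => a - b);
  const q = (p: number) =>
    hours[Math.min(hours.length - 1, Math.floor(p * hours.length))];
  return hours.length ? { median: q(0.5), p75: q(0.75), p90: q(0.9) } : null;
}

// M3: referral completion capture rate
export function completionCaptureRate(referrals: Referral[]): number | null {
  if (referrals.length === 0) return null;
  return referrals.filter((r) => r.terminalState !== undefined).length / referrals.length;
}
```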
M4: Freshness SLA Compliance
Definition:
freshness_compliance = services_meeting_status_sla / pilot_services_total
SLA tiers:
- crisis: 24h,
- high-demand non-crisis: 48h,
- others in pilot: 7 days.
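A minimal sketch of the M4 compliance check under the SLA tiers above (tier names and fields are illustrative):

```ts
// Illustrative freshness check for M4; the tier labels and
// statusUpdatedAt field are assumptions for the sketch.
type SlaTier = "crisis" | "high_demand" | "other";

const SLA_HOURS: Record<SlaTier, number> = {
  crisis: 24,
  high_demand: 48,
  other: 24 * 7,
};

interface PilotService {
  tier: SlaTier;
  statusUpdatedAt: Date;
}

export function freshnessCompliance(services: PilotService[], now = new Date()): number | null {
  if (services.length === 0) return null;
  const meeting = services.filter(
    (s) => (now.getTime() - s.statusUpdatedAt.getTime()) / 36e5 <= SLA_HOURS[s.tier],
  ).length;
  return meeting / services.length;
}
```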
M5: Repeat Failure Rate
Definition:
repeat_failure_rate = users_or_referrals_with_2plus_failures / total_users_or_referrals
Purpose:
- validates whether guidance/fallbacks reduce repeated dead ends.
M6: Data-Decay Fatal Error Rate
Definition:
fatal_error_rate = records_with_access_blocking_errors / records_sampled
Fatal errors include:
- wrong/disconnected phone number,
- invalid/defunct intake path,
- materially incorrect eligibility that blocks access,
- closed or unavailable service still presented as available.
Sampling protocol:
- minimum 20-record monthly random sample in pilot scope,
- dual verification (web source + call or provider confirmation),
- classify and log error severity.
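A minimal sketch of the M6 sampling and rate computation (record shape is hypothetical):

```ts
// Illustrative monthly data-decay audit for M6: draw a random sample of at
// least 20 in-scope records, then compute the fatal error rate after dual
// verification (web source + call or provider confirmation).
interface AuditedRecord {
  id: string;
  hasAccessBlockingError?: boolean; // set after dual verification
}

export function drawMonthlySample<T>(records: T[], minSize = 20): T[] {
  const pool = [...records];
  // Fisher-Yates shuffle, then take the first minSize records.
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, minSize);
}

export function fatalErrorRate(sample: AuditedRecord[]): number | null {
  if (sample.length === 0) return null;
  return sample.filter((r) => r.hasAccessBlockingError).length / sample.length;
}
```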
M7: Preference-Fit Indicator
Definition:
preference_fit = cohort_tasks_preferably_completed_via_careconnect / cohort_total_tasks
Purpose:
- tests where privacy/offline self-serve is materially preferred,
- prevents over-applying CareConnect in scenarios needing immediate human navigation.
Proposed Repo-Level Implementation Map
This section maps plan execution to likely code locations.
A) Types and Schemas
Potential changes:
- Extend `types/service.ts` with operational-status metadata.
- Add referral and barrier event types in `types/` (new files):
  - `types/referral.ts`
  - `types/service-operational-status.ts`
- Add Zod schemas in `lib/schemas/` for all new write endpoints (a hypothetical sketch follows this list):
  - `lib/schemas/referral.ts`
  - `lib/schemas/service-status.ts`
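A hypothetical sketch of one such schema; field names are assumptions, not the finalized pilot schema:

```ts
// lib/schemas/referral.ts (hypothetical sketch only)
import { z } from "zod";

export const referralEventSchema = z.object({
  serviceId: z.string().uuid(),
  referralToken: z.string().min(8),
  terminalState: z
    .enum(["connected", "failed", "client_withdrew", "no_response_timeout"])
    .optional(),
  barrierReason: z.string().max(200).optional(),
  // Deliberately no free-text query field: raw query text is never persisted.
});

export type ReferralEventInput = z.infer<typeof referralEventSchema>;
```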
B) API Routes
Add scoped endpoints under `app/api/v1/`:
- `app/api/v1/referrals/route.ts` (create/list referral events)
- `app/api/v1/referrals/[id]/route.ts` (update terminal states)
- `app/api/v1/services/[id]/status/route.ts` (provider status updates)
- `app/api/v1/pilot/metrics/route.ts` (pilot scorecard data)
- `app/api/v1/integration/feasibility/route.ts` (optional internal endpoint for integration readiness status)
Requirements:
- rate limits and auth checks where applicable,
- no raw query text persistence,
- `withCircuitBreaker` around Supabase calls,
- explicit `Cache-Control` behavior for sensitive flows,
- enforce integration redline checks in adapters (no raw query forwarding); a route-handler sketch follows this list.
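A hypothetical route-handler sketch illustrating the requirements above; auth, rate limiting, and the circuit-breaker wrapper are assumed to exist in the repo and are shown only as comments:

```ts
// app/api/v1/referrals/route.ts (hypothetical sketch only)
import { NextResponse } from "next/server";
// Hypothetical import path, matching the schema sketch above.
import { referralEventSchema } from "@/lib/schemas/referral";

export async function POST(request: Request) {
  // Auth checks, rate limiting, and withCircuitBreaker-wrapped persistence
  // are assumed to exist elsewhere in the repo and are omitted here.
  const parsed = referralEventSchema.safeParse(await request.json());
  if (!parsed.success) {
    return NextResponse.json({ error: "invalid_referral_event" }, { status: 400 });
  }

  // Persist only the validated, query-text-free event (e.g. a Supabase insert).

  return NextResponse.json(
    { ok: true },
    { headers: { "Cache-Control": "no-store" } }, // sensitive flow: never cache
  );
}
```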
C) Search and Ranking
Potential integration points:
Planned behavior:
- down-rank stale/unavailable services for non-crisis intents,
- promote high-confidence available alternatives,
- surface deterministic "Plan B" fallbacks.
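A minimal sketch of the planned ranking behavior (candidate shape and multipliers are assumptions):

```ts
// Illustrative ranking adjustment only: down-rank stale or unavailable
// services for non-crisis intents and boost confirmed-available ones.
interface ServiceCandidate {
  baseScore: number;
  operationalStatus: "available" | "unavailable" | "unknown";
  statusIsStale: boolean;
}

export function adjustedScore(s: ServiceCandidate, isCrisisIntent: boolean): number {
  if (isCrisisIntent) return s.baseScore; // never suppress options in crisis flows
  if (s.operationalStatus === "unavailable") return s.baseScore * 0.5;
  if (s.statusIsStale || s.operationalStatus === "unknown") return s.baseScore * 0.8;
  return s.baseScore * 1.1; // high-confidence available alternative
}
```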
D) Frontend Workflows
Candidate locations:
- `components/home/` (search UX integration)
- `components/services/` (service detail reliability + access path UI)
- `app/[locale]/dashboard/` (partner/frontline referral and status tooling)
E) Analytics and Observability
Candidate locations:
Actions:
- add pilot-specific aggregated metrics events,
- preserve no-query-text policy,
- track metric quality (missing terminal states, stale statuses).
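A minimal sketch of an aggregated pilot metrics event that preserves the no-query-text policy (field names are assumptions):

```ts
// Illustrative pilot analytics event: aggregated counts only, no query text,
// no user identifiers.
interface PilotMetricsEvent {
  week: string;                          // ISO week, e.g. "2026-W15"
  failedContactEvents: number;
  totalContactAttempts: number;
  referralsMissingTerminalState: number; // metric-quality signal
  staleStatusCount: number;              // metric-quality signal
}

export function emitPilotMetrics(event: PilotMetricsEvent): void {
  // Replace with the repo's analytics transport; shown as a console sink here.
  console.info("pilot_metrics", event);
}
```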
F) Governance and Documentation
Docs to update when implementation starts:
- `docs/governance/standards.md` (operational-status provenance rules)
- `docs/architecture.md` (new data flow for referral/status)
- `docs/api/openapi.yaml` (new referral/status endpoints)
- `docs/runbooks/` (status stale / referral pipeline incident handling)
- `docs/security/` (offline/local data threat model and controls)
Operating Model and RACI (Pilot)
Roles
- Product owner: pilot scope, success criteria, gate decisions.
- Engineering lead: implementation quality and operational stability.
- Data/governance lead: provenance audits and freshness SLA tracking.
- Partner success lead: provider onboarding and adoption support.
Decision Cadence
- Weekly metric review (operational).
- Bi-weekly strategic review (scope and risk).
- Gate decisions at end of each phase with recorded rationale.
Risk Register (Top Risks + Mitigations)
| Risk | Type | Trigger | Mitigation | Owner |
|---|---|---|---|---|
| Stale status data harms trust | Product/Governance | SLA compliance <70% | Auto-expire stale status, fallback messaging, provider reminders | Governance lead |
| Low partner adoption | Execution | <30% partner activity | Simplify workflow, onboarding support, reduce required fields | Partner success |
| Metrics are noisy/unattributable | Decision quality | Contradictory trend signals | Keep control cohort, pre-register analysis, annotate confounders | Product + Data |
| Privacy drift | Compliance | New fields risk re-identification | Privacy review before release, aggregate/minimize retained metadata | Eng lead |
| Team capacity overload | Delivery | Missed sprint goals for 2 cycles | Reduce to one pillar, defer secondary features | Product owner |
| API chokepoint | Strategic | 211 integration requires prohibited telemetry or restrictive terms | Enforce redlines; use conditional integration mode; predefine blocked-path fallback | Product owner + Eng lead |
| False-confidence from secondary research | Decision quality | Non-validated external-agent claims drive design | Evidence re-validation log; primary-source-only for gate decisions | Data/governance lead |
Go/No-Go Decision Trees
Gate 0 (After Baseline + Instrumentation)
- If metrics are not baseline-ready: do not start pilot build.
- If partner commitments are incomplete: shrink pilot scope before build.
- If integration is `blocked` and no safe fallback scope is approved: pause rollout and execute contingency decision.
Gate 1 (After Pilot Cycle 1)
- Pass all Gate 1 metrics: scale successful pillar(s).
- Miss one metric: one remediation cycle max.
- Miss two or more metrics: kill weakest pillar and re-run narrowly.
Gate 2 (After Expansion Cycle)
- If gains persist and operational load is manageable: expand.
- If gains regress or costs spike: keep as focused domain solution.
- If no durable gains: sunset pilot modules and preserve learnings.
Integration Contingency Paths
If integration is conditional:
- proceed only with documented compensating controls,
- set a re-negotiation milestone before Phase 2 expansion.
If integration is blocked:
- choose one of:
- narrow CareConnect to tightly bounded, high-confidence local workflows, or
- execute responsible deprecation plan.
- deprecation trigger defaults:
- repeated failure of primary outcome thresholds across two cycles, and
- fatal error rate above threshold without recoverability.
90-Day Execution Timeline
Weeks 1-2: Phase 0
- Instrumentation and baseline completion.
- Partner and cohort lock.
- Pre-registration of hypotheses/falsifiers/thresholds.
- API feasibility + legal/privacy redline assessment.
- Offline/local data security threat modeling.
Weeks 3-6: Phase 1 Build
- Service Reliability + Warm Referral core implementation.
- Limited Access Navigation for pilot services.
- Internal QA, privacy review, and runbook drafting.
Weeks 7-10: Phase 1 Live Pilot
- Weekly scorecard and issue triage.
- Adoption support and data quality interventions.
- End-of-cycle Gate 1 decision.
Weeks 11-13: Remediation or Scale Prep
- If yellow: one remediation cycle.
- If green: Phase 2 expansion prep.
- If red: narrowed continuation or sunset execution.
First 14 Days: Detailed Checklist
- Finalize v22.0 objective function and hard constraints.
- Approve metric definitions and formulas in this document.
- Select pilot domain and services (recommended: housing intake).
- Confirm pilot partners and assign owner per partner.
- Define data retention and privacy boundaries for new events.
- Create baseline dashboards and validate query accuracy.
- Publish internal claim language updates to avoid overstatement.
- Freeze non-pillar feature work except reliability/security defects.
- Complete evidence re-validation log for all external-agent-derived claims used in this plan.
- Record integration decision (`go` / `conditional` / `blocked`) with rationale.
External-Agent Research Assimilation (Hypothesis Backlog)
The following items were accepted from external-agent reports as investigate/validate inputs:
- Integration-first trajectory as default strategic path.
- API terms may create telemetry reciprocity risk.
- Offline-first systems require explicit lost/stolen-device safeguards.
- Directory decay rate is a first-order operational risk requiring monthly audits.
- Human-assisted vs self-serve preference must be tested with frontline cohorts.
None of the above are treated as confirmed facts until validated through Phase 0 evidence protocol.
Weekly Decision Journal Template (Post-Approval)
Use this only for ongoing weekly operating decisions after Step 1 is approved. Canonical Step 1 sign-offs remain in v22.0 Approval Checklist.
| Date | Decision | Rationale | Evidence Used | Alternatives Rejected | Owner |
|---|---|---|---|---|---|
| YYYY-MM-DD | Example: Keep Warm Referral, defer Access Navigation scale | Adoption hit threshold, navigation signal still weak | Pilot week 4 scorecard | Scale both now | Product owner |
Documentation Links
Strategic context and evidence base:
- CareConnect vs 211 Objective Evaluation (2026-02-27)
- CareConnect vs 211 Evidence Matrix (2026-02-27)
- CareConnect vs 211 Positioning Playbook (2026-02-27)
Final Recommendation
Proceed with v22.0 under one condition:
- Treat this as a measurable experiment with kill criteria, not a pre-committed multi-quarter build.
This is the best objective, non-biased path because:
- steelman defines upside,
- devil's advocate defines falsification,
- stage gates force evidence-based survival of initiatives.