Guided Sessions
This surface is intended for developer/owner use only. Each session walks you through one real platform capability, shows which backend process produces the visible output, and helps you decide what should be strengthened next.
Designed sessions
5
Curated walkthroughs for the current platform shape.
Live examples available
5
Session anchors currently backed by real stored cases or outputs.
Saved debriefs
0
Internal session takeaways already captured for later planning.
High-priority follow-ups
0
Notes currently marked as the strongest direction-setting changes.
Kolb session guide
Follow one case from the intake surface into stored source data and structured FHIR output.
Suggested duration
15-20 min
Saved debriefs: 0
Why this session matters for direction
You should come away understanding where raw source retention stops and where interoperable output begins.
Activity 1
See the front-door capture experience that creates the initial source payload.
Backend process
Submitting the intake form calls `/api/intake`, which validates the payload, writes the raw intake record, and kicks off intake indexing plus the intake-to-FHIR persistence flow.
How the visible output comes to be
The later FHIR bundle, search rows, and provenance links all depend on this initial validated source event.
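The intake step above can be sketched as a small handler. This is a minimal illustration, not the platform's real code; the names (`handle_intake`, `RAW_STORE`, `FOLLOW_UPS`) are hypothetical stand-ins for the actual `/api/intake` route, the raw record store, and the queued indexing and intake-to-FHIR steps.

```python
# Hypothetical sketch of the /api/intake flow: validate, persist the raw
# source record, then queue indexing and intake-to-FHIR persistence.
import uuid

RAW_STORE = {}    # stands in for the raw intake record store
FOLLOW_UPS = []   # (step, intake_id) pairs queued for downstream processing

def handle_intake(payload):
    """Validate the payload, write the raw intake record, queue follow-ups."""
    if not payload.get("patient_name"):
        return {"ok": False, "error": "patient_name is required"}
    intake_id = str(uuid.uuid4())
    RAW_STORE[intake_id] = payload            # raw source retention stops here
    FOLLOW_UPS.append(("index", intake_id))   # intake indexing
    FOLLOW_UPS.append(("to_fhir", intake_id)) # intake-to-FHIR persistence
    return {"ok": True, "intake_id": intake_id}
```

The key point the sketch makes is ordering: nothing downstream runs until the validated source event exists.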
Activity 2
Move from the user-entered starting point to the stored bundle that the rest of the platform reads.
Backend process
The FHIR store writes Bundle/Patient/Encounter resources plus an indexed table row that reviewer, benchmark, and export surfaces read later.
How the visible output comes to be
The current live example is a0863045-07f4-4090-a5bf-30ae77ba4a51, which shows how a recent intake became a bundle.
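The write described above can be illustrated with simplified resource shapes. These are not full FHIR R4 resources and the function name is hypothetical; the sketch only shows that one persistence step produces both the bundle and the indexed row that later surfaces read.

```python
# Illustrative sketch: one intake becomes a FHIR-style bundle plus an
# indexed table row. Resource shapes are simplified, not full FHIR R4.
def persist_intake_as_fhir(intake_id, intake):
    patient = {"resourceType": "Patient", "id": f"pat-{intake_id}",
               "name": [{"text": intake["patient_name"]}]}
    encounter = {"resourceType": "Encounter", "id": f"enc-{intake_id}",
                 "subject": {"reference": f"Patient/{patient['id']}"}}
    bundle = {"resourceType": "Bundle", "type": "collection",
              "entry": [{"resource": patient}, {"resource": encounter}]}
    # Indexed row that reviewer, benchmark, and export surfaces read later.
    index_row = {"bundle_id": intake_id,
                 "patient_name": intake["patient_name"],
                 "status": "waiting",
                 "source_intake_id": intake_id}
    return bundle, index_row
```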
Activity 3
See how QbitQure keeps later structured output tied back to its origin.
Backend process
Reviewer-case links expose a reviewer-safe preview of the source intake. The preview reads the retained blob inside the internal reviewer boundary and does not collapse that source layer into the structured bundle.
How the visible output comes to be
If no source preview is available yet, the case still shows the structured output layer without source replay.
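A rough sketch of that boundary, with a hypothetical `source_preview` helper: the source blob is read for display only, and a missing blob simply yields no preview rather than blocking the structured view.

```python
# Hypothetical reviewer-safe source preview: reads the retained blob for
# display, never merges it into the structured bundle.
def source_preview(case, blob_store):
    """Return a preview of the retained source blob, or None if absent."""
    blob = blob_store.get(case.get("source_intake_id"))
    if blob is None:
        return None  # case still shows structured output without source replay
    return {"fields": sorted(blob.keys()),
            "note": "source layer shown alongside, not inside, the bundle"}
```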
Questions to answer after the session
Do the current intake fields create the right source record for the downstream structures you want?
Is the provenance path obvious enough when you move from the bundle back to the raw intake layer?
Recent takeaways
No debriefs saved for this session yet.
Save debrief
Capture what Session 1 (Source Intake to Structured FHIR) clarified, what still feels weak or misleading, and what should change next.
Kolb session guide
Understand how queue state, ownership, handoff, sign-off, and workflow projection are produced from one stored reviewer case model.
Suggested duration
15-20 min
Saved debriefs: 0
Why this session matters for direction
You should be able to explain how the reviewer UI and the Task-like workflow projection stay in sync.
Activity 1
See which cases are waiting, in progress, or completed, and what readiness signals are exposed.
Backend process
The reviewer dashboard reads recent bundle rows, normalizes workflow defaults, and builds one reviewer-case model that every internal workflow surface reuses.
How the visible output comes to be
The queue cards are not a separate system. They are a view over stored review fields on the indexed case row.
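The "one reviewer-case model" idea can be sketched as a merge over defaults. The names (`WORKFLOW_DEFAULTS`, `build_reviewer_case`, the readiness rule) are illustrative assumptions, not the platform's actual fields.

```python
# Hypothetical reviewer-case builder: stored review fields are layered over
# workflow defaults, producing the single model every surface reuses.
WORKFLOW_DEFAULTS = {"status": "waiting", "owner": None, "signed_off": False}

def build_reviewer_case(row):
    """Normalize a stored bundle row into the canonical reviewer-case model."""
    stored = {k: v for k, v in row.items() if v is not None}
    case = {**WORKFLOW_DEFAULTS, **stored}
    # Readiness is derived, not stored separately (illustrative rule).
    case["ready_for_handoff"] = case["status"] == "completed" and case["signed_off"]
    return case
```

Because the queue cards are a view over this model, any status change is visible everywhere the model is read.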
Activity 2
Inspect one real case and see how status controls, provenance, pathway links, and readiness are assembled.
Backend process
The work surface for a0863045-07f4-4090-a5bf-30ae77ba4a51 is built from the canonical reviewer-case domain model, not ad hoc route-local logic.
How the visible output comes to be
Changing status updates the same stored row that later drives benchmark, export, and workflow views.
Activity 3
See how the same underlying state becomes a Task-like read-only artifact.
Backend process
The workflow projection route reads the reviewer-case model and emits a FHIR-aligned Bundle with Task-like and Provenance-style resources.
How the visible output comes to be
The projection exists because the workflow state is already explicit in storage; it is not a separate workflow engine.
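A minimal sketch of such a projection, assuming a hypothetical status mapping and simplified Task/Provenance shapes (not full FHIR R4 resources):

```python
# Hypothetical read-only projection: the reviewer-case model is re-emitted
# as a FHIR-aligned bundle with Task-like and Provenance-style resources.
def project_workflow(case):
    status_map = {"waiting": "requested",
                  "in_progress": "in-progress",
                  "completed": "completed"}
    task = {"resourceType": "Task",
            "status": status_map.get(case["status"], "requested"),
            "for": {"reference": f"Patient/pat-{case['bundle_id']}"}}
    provenance = {"resourceType": "Provenance",
                  "target": [{"reference": f"Task/{case['bundle_id']}"}]}
    return {"resourceType": "Bundle", "type": "collection",
            "entry": [{"resource": task}, {"resource": provenance}]}
```

Note that the function only reads the case; there is no workflow engine and no write path here.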
Questions to answer after the session
Which reviewer-state fields feel production-relevant, and which still feel like demo scaffolding?
Does the Task-like projection reveal the right workflow information for future interoperability work?
Recent takeaways
No debriefs saved for this session yet.
Save debrief
Capture what Session 2 (Reviewer Workflow and Governance) clarified, what still feels weak or misleading, and what should change next.
Kolb session guide
See how accepted PGx pathway signals flow into outcomes, retrospective cases, and benchmark comparison views.
Suggested duration
20-25 min
Saved debriefs: 0
Why this session matters for direction
You should understand how a pathway review becomes later benchmark material rather than staying trapped in a single case page.
Activity 1
See the current accepted pathway catalogue and how it connects back into case work surfaces.
Backend process
Accepted pathway registry entries, pathway snapshots, and reviewer-case mapping determine which pathway routes and benchmark cases appear later.
How the visible output comes to be
When accepted-pathway cases are available, they become the anchor for outcomes, retrospective views, and benchmark analysis.
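The registry-to-case mapping above reduces to a filter. This is a hedged sketch with invented field names (`accepted`, `pathway_id`); the real registry shape may differ.

```python
# Hypothetical filter: only cases whose pathway snapshot points at an
# accepted registry entry surface in outcomes and benchmark views.
def cases_with_accepted_pathway(cases, registry):
    accepted = {entry["id"] for entry in registry if entry.get("accepted")}
    return [case for case in cases if case.get("pathway_id") in accepted]
```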
Activity 2
Inspect how saved pathway direction, testing prompts, and signed-off final outcomes are compared retrospectively.
Backend process
The benchmark helper derives recommendation kind, testing signal, agreement level, and possible signed-off overrides from stored pathway snapshots plus the final clinician outcome.
How the visible output comes to be
The benchmark view is a read-model over completed cases, not a separate reviewer truth source.
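The derivation the helper performs can be sketched as a pure function over a snapshot and the final outcome. All field names and the three agreement levels here are illustrative assumptions, not the platform's actual vocabulary.

```python
# Hypothetical benchmark derivation: compare the saved pathway snapshot
# against the signed-off clinician outcome. A pure read-model computation.
def derive_benchmark_row(snapshot, final_outcome):
    recommended = snapshot.get("recommendation", "none")
    if recommended == final_outcome:
        agreement = "agrees"
    elif snapshot.get("signed_off_override"):
        agreement = "overridden"   # explicit sign-off replaced the suggestion
    else:
        agreement = "diverges"
    return {"recommendation_kind": recommended,
            "testing_signal": snapshot.get("testing_prompt") is not None,
            "agreement_level": agreement}
```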
Activity 3
See which completed cases are eligible for retrospective and benchmark work.
Backend process
Completion gates plus accepted-pathway presence decide whether a case enters the retrospective dataset.
How the visible output comes to be
The retrospective view exists because the reviewer-case model can be filtered into a de-identified benchmark-ready subset.
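The gate described above is essentially a predicate. A sketch, assuming hypothetical field names for the completion and pathway checks:

```python
# Hypothetical eligibility gate: completion + sign-off + an accepted
# pathway decide entry into the retrospective dataset.
def is_retrospective_eligible(case):
    return (case.get("status") == "completed"
            and case.get("signed_off", False)
            and case.get("accepted_pathway_id") is not None)
```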
Questions to answer after the session
Do the current accepted pathways expose the right kinds of signals for later benchmarking?
Where do you want pathway logic to stay narrow and explainable versus become more expressive?
Recent takeaways
No debriefs saved for this session yet.
Save debrief
Capture what Session 3 (Accepted Pathways, Outcomes, and Benchmark Comparison) clarified, what still feels weak or misleading, and what should change next.
Kolb session guide
Understand how reviewer-adjudicated benchmark labels and the advisory ranking pilot relate to one another.
Suggested duration
20-30 min
Saved debriefs: 0
Why this session matters for direction
You should be able to tell which benchmark outputs are derived fallback semantics and which are explicit reviewer gold-set labels.
Activity 1
Use the adjudication surface to save an explicit retrospective target label, confidence, and rationale.
Backend process
Benchmark adjudication writes dedicated benchmark-only fields back onto eligible retrospective case rows without altering live reviewer workflow state.
How the visible output comes to be
When no adjudication exists yet, the page still shows the derived suggestion that would otherwise be used as fallback.
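The "benchmark-only fields, workflow untouched" rule can be sketched as a copy-and-annotate write. The field names here are hypothetical; what matters is that the live reviewer fields on the row are never modified.

```python
# Hypothetical adjudication write: benchmark-only fields are added to a
# copy of the eligible case row; live workflow state is left untouched.
def save_adjudication(row, label, confidence, rationale):
    updated = dict(row)  # never mutate the live row in place
    updated["benchmark_label"] = label
    updated["benchmark_confidence"] = confidence
    updated["benchmark_rationale"] = rationale
    return updated
```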
Activity 2
See how the advisory benchmark pilot orders retrospective cases and which label source it prefers.
Backend process
The priority benchmark prefers adjudicated labels when they exist and falls back to derived workflow semantics when they do not.
How the visible output comes to be
The ranking is produced from stored retrospective features, benchmark labels, and scoring helpers, not from a live clinical decision engine.
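The label-source preference above amounts to a two-branch lookup. A minimal sketch with invented field names (`benchmark_label` for the adjudicated label, `derived_label` for the fallback semantics):

```python
# Hypothetical label resolution: prefer the explicit adjudicated label,
# fall back to derived workflow semantics, and report which source won.
def benchmark_label(row):
    if row.get("benchmark_label") is not None:
        return row["benchmark_label"], "adjudicated"
    return row.get("derived_label", "unknown"), "derived-fallback"
```

Surfacing the source alongside the label is what lets the advisory view stay honest about which rows rest on fallback semantics.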
Questions to answer after the session
What benchmark labels feel robust enough to become your real gold set?
Where should the advisory ranking remain descriptive, and where do you want stronger experimental evaluation next?
Recent takeaways
No debriefs saved for this session yet.
Save debrief
Capture what Session 4 (Benchmark Adjudication and Advisory Ranking) clarified, what still feels weak or misleading, and what should change next.
Kolb session guide
See how a completed governed case becomes a handoff artifact and export package.
Suggested duration
15-20 min
Saved debriefs: 0
Why this session matters for direction
You should understand which stored workflow, pathway, and bundle elements are already strong enough to share externally.
Activity 1
Inspect the clinician-facing summary layer that sits on top of the canonical reviewer-case model.
Backend process
The handoff view reuses stored case state, pathway context, and readiness semantics instead of inventing a new export-only model.
How the visible output comes to be
The current completed example is a0863045-07f4-4090-a5bf-30ae77ba4a51.
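The reuse described above can be sketched as a thin summary function over the reviewer-case model. Field names and the readiness rule here are illustrative assumptions.

```python
# Hypothetical handoff summary: a clinician-facing layer computed from the
# canonical reviewer-case model, not a new export-only model.
def handoff_summary(case):
    return {"case_id": case["bundle_id"],
            "status": case["status"],
            "pathway": case.get("accepted_pathway_id"),
            "ready": case.get("status") == "completed"
                     and case.get("signed_off", False)}
```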
Activity 2
See how the platform bundles the clinical bundle, workflow projection, handoff summary, and pathway snapshots together.
Backend process
The export route builds a compact package from the stored FHIR bundle plus read-only workflow and reviewer-case projections.
How the visible output comes to be
This output exists because provenance, workflow state, and pathway context have already been made explicit elsewhere in the platform.
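The packaging step can be sketched as pure assembly: every input is an artifact the platform already stores, and the export adds no new truth. The function and key names are hypothetical.

```python
# Hypothetical export assembly: bundle together artifacts that already
# exist in storage. No new state is computed at export time.
def build_export_package(bundle, workflow_projection, handoff_summary, snapshots):
    return {"fhir_bundle": bundle,
            "workflow_projection": workflow_projection,
            "handoff_summary": handoff_summary,
            "pathway_snapshots": list(snapshots)}
```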
Questions to answer after the session
Which parts of the export package feel ready for external explanation today?
Where do you want the interoperability story to get stronger before you share it widely?
Recent takeaways
No debriefs saved for this session yet.
Save debrief
Capture what Session 5 (Handoff and Interoperability Package) clarified, what still feels weak or misleading, and what should change next.