Logistics:

Short Description

This track will focus on testing proposed updates to the Bulk Data Implementation Guide (IG). Version 1.2 incorporates learnings from implementer experience with v1.0 and has been developed over the past few months through a series of open community meetings coordinated by the Argonaut FHIR Accelerator.

Long Description

Healthcare and payor organizations have many reasons to transmit data on large populations of patients, such as moving clinical data into an analytic data warehouse, sharing data between organizations, or submitting data to regulatory agencies. Today, bulk export is often accomplished with proprietary pipelines, and data transfer operations typically involve a custom engineering and field-mapping project. The Bulk Data Implementation Guide (IG) is an exciting effort by HL7, Argonaut, and SMART to bring the FHIR standard to bear on these challenges of bulk-data export. This track will focus on testing proposed updates for v1.2 that incorporate learnings from implementation experience with v1.0 of the IG and were developed over the last few months through a series of open community meetings coordinated by the Argonaut FHIR Accelerator.


Type

Submitting Work Group/Project/Accelerator/Affiliate/Implementer Group

Proposed Track Lead


Related tracks

Provide links to other tracks that are likely to have overlap of use-cases and/or participants (used to help guide seating arrangements and possibly drive track consolidation)


FHIR Version


Specification(s) this track uses


Resources


Clinical input requested (if any)

N/A


Patient input requested (if any)

N/A


Expected participants


Zulip stream


Track Orientation


Track details

System Roles


Scenarios - based on the current draft of the Bulk Data IG


Scenario 1: Bulk data export with retrieval of referenced files on an open endpoint

The Data Provider and Data Consumer follow the flow outlined in the Bulk Data Export IG to generate and retrieve a dataset using one or more of the new capabilities in v1.2.
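
For orientation, a minimal sketch of this flow (kick-off request, status polling, and NDJSON file retrieval) is shown below in Python using the requests library. The base URL is hypothetical, and any v1.2-specific kick-off parameters being tested are left out.

import time
import requests

# Hypothetical open (unauthenticated) endpoint; replace with the Data Provider's FHIR base URL.
FHIR_BASE = "https://bulk-data.example.org/fhir"

# 1. Kick-off: request an asynchronous export of Patient data.
kickoff = requests.get(
    f"{FHIR_BASE}/Patient/$export",
    headers={"Accept": "application/fhir+json", "Prefer": "respond-async"},
)
kickoff.raise_for_status()
status_url = kickoff.headers["Content-Location"]

# 2. Poll the status endpoint until the export completes (HTTP 200 with a manifest).
while True:
    status = requests.get(status_url, headers={"Accept": "application/json"})
    status.raise_for_status()
    if status.status_code == 200:
        manifest = status.json()
        break
    # 202 means the export is still in progress; honor Retry-After when provided
    # (this sketch assumes it is given in seconds).
    time.sleep(int(status.headers.get("Retry-After", 5)))

# 3. Retrieve each referenced NDJSON file listed in the manifest's output array.
for item in manifest.get("output", []):
    ndjson = requests.get(item["url"], headers={"Accept": "application/fhir+ndjson"})
    ndjson.raise_for_status()
    print(item["type"], len(ndjson.text.splitlines()), "resources")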

Scenario 2: Bulk data export with retrieval of referenced files on a protected endpoint

  1. Data Consumer registers with the Data Provider, per SMART Backend Services Authorization.
  2. Data Consumer obtains an access token, per SMART Backend Services Authorization.
  3. Data Consumer follows the flow outlined in the Bulk Data Export IG to generate and retrieve a dataset using one or more of the new capabilities in v1.2, passing in the access token (see the sketch after this list).
  4. Data Consumer attempts to retrieve a dataset using the flow outlined in the Bulk Data Export IG without providing a valid access token and fails.
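
A sketch of steps 2 and 3, assuming the Data Consumer has already registered its public key (step 1), might look like the following in Python using PyJWT and requests. The token endpoint, FHIR base URL, client ID, and key file are hypothetical; polling and file retrieval then continue as in Scenario 1, sending the same Authorization header.

import time
import uuid
import jwt        # PyJWT
import requests

# Hypothetical values; in practice these come from registration with the Data Provider
# and the server's published SMART configuration.
TOKEN_URL = "https://bulk-data.example.org/auth/token"
FHIR_BASE = "https://bulk-data.example.org/fhir"
CLIENT_ID = "example-client-id"
PRIVATE_KEY = open("private_key.pem").read()   # key pair registered in step 1

# Step 2: sign a JWT client assertion and exchange it for an access token,
# per SMART Backend Services Authorization.
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,
        "sub": CLIENT_ID,
        "aud": TOKEN_URL,
        "exp": int(time.time()) + 300,
        "jti": str(uuid.uuid4()),
    },
    PRIVATE_KEY,
    algorithm="RS384",
)
token_response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "scope": "system/*.read",   # scope syntax may vary with the server's SMART version
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Step 3: kick off the export, passing the access token; status polling and file
# retrieval proceed as in Scenario 1 with the same Authorization header.
kickoff = requests.get(
    f"{FHIR_BASE}/Patient/$export",
    headers={
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",
        "Authorization": f"Bearer {access_token}",
    },
)
kickoff.raise_for_status()
print("Status endpoint:", kickoff.headers["Content-Location"])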

Security and Privacy Considerations

SMART Backend Services Authorization will be required to participate in Scenario 2.
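
As a check on step 4 of Scenario 2, a kick-off request to the protected endpoint without an Authorization header would be expected to be rejected (typically with 401 Unauthorized). A minimal check, again using the hypothetical endpoint above, could be:

import requests

FHIR_BASE = "https://bulk-data.example.org/fhir"  # hypothetical protected endpoint

# Kick-off without an access token should be refused by the Data Provider.
resp = requests.get(
    f"{FHIR_BASE}/Patient/$export",
    headers={"Accept": "application/fhir+json", "Prefer": "respond-async"},
)
assert resp.status_code == 401, f"expected 401 without a token, got {resp.status_code}"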

Report Out: document