Logistics:
- Daily check-ins will take place at 10am and 3pm ET (GMT-4) on Zoom: https://zoom.us/j/6164548130
- Communication - all track discussions will take place on the Bulk Data Zulip Stream: https://chat.fhir.org/#streams/179250/bulk%20data
- Sign up sheet
- Overview for participants of other tracks: Friday at 11AM on the main GoToMeeting
Submitting WG/Project/Implementer Group
FHIR-I / Argonaut Bulk Data Workgroup
Justification and Objectives
Argonaut has selected Bulk Data as a 2020 project, and work has begun on v1.2 of the IG. Participants in this track will prototype and test some of the enhancements proposed for that release.
FHIR Bulk Data Resources
- Bulk Data Implementation Guide v1.0
- Bulk Data IG - Current Draft
- FHIR Overview Video (not Bulk Data specific)
- IG v1.0 Overview Presentation (Slides / Video)
- Server and Client Reference Implementation
- Inferno Community Edition Test Suite, Inferno Program Edition Test Suites
- IG v1.2 Proposed Enhancements Overview
This track will use FHIR R4
Track Leads
- Dan Gottlieb
- Josh Mandel
Participants
System Roles
Data Provider: Bulk Data server implementing the draft file export enhancements to v1.0 of the IG.
Data Consumer: Bulk Data client with the capability to retrieve binary and text files, following the draft file export enhancements to v1.0 of the IG.
Scenarios - based on the current draft of the Bulk Data IG
Scenario 1: Bulk data export with retrieval of referenced files on an open endpoint
- Data Consumer issues the following request:
GET [base]/Patient/$export?_outputFormat=ndjson
Accept: application/fhir+json
Prefer: respond-async
- Data Provider and Data Consumer follow the flow outlined in the Bulk Data Export IG to generate and retrieve a dataset. This dataset must include DocumentReference resources with a populated attachment.url element. The requiresAccessToken property in the Bulk Data manifest should be set to false.
- Data Consumer retrieves the binary and text files referenced from DocumentReference.attachment.url (a client sketch follows this list)
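For reference, here is a minimal Python sketch of the Scenario 1 client flow (kick-off, status polling, NDJSON retrieval, and attachment download). The base URL is a placeholder, the requests library is an assumption for illustration, and error handling plus Retry-After handling are deliberately simplified.

import json
import time
import requests

BASE_URL = "https://bulk-data.example.org/fhir"  # hypothetical open endpoint

# Kick-off request for a patient-level export
kickoff = requests.get(
    f"{BASE_URL}/Patient/$export",
    params={"_outputFormat": "ndjson"},
    headers={"Accept": "application/fhir+json", "Prefer": "respond-async"},
)
status_url = kickoff.headers["Content-Location"]  # 202 Accepted with a polling location

# Poll the status endpoint until the export completes and the manifest is returned
while True:
    status = requests.get(status_url, headers={"Accept": "application/json"})
    if status.status_code == 200:
        manifest = status.json()
        break
    time.sleep(int(status.headers.get("Retry-After", "5")))

# Scenario 1 manifests set requiresAccessToken to false, so no Authorization header is needed
assert manifest.get("requiresAccessToken") is False

# Download each NDJSON output file and collect attachment URLs from DocumentReference resources
attachment_urls = []
for output in manifest["output"]:
    ndjson = requests.get(output["url"], headers={"Accept": "application/fhir+ndjson"}).text
    for line in ndjson.splitlines():
        resource = json.loads(line)
        if resource.get("resourceType") == "DocumentReference":
            for content in resource.get("content", []):
                url = content.get("attachment", {}).get("url")
                if url:
                    attachment_urls.append(url)

# Retrieve the referenced binary and text files
files = {url: requests.get(url).content for url in attachment_urls}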
Scenario 2: Bulk data export with retrieval of referenced files on a protected endpoint
- Data Consumer registers with Data Provider, per SMART Backend Services Authorization
- Data Consumer obtains an access token, per SMART Backend Services Authorization
- Data Consumer issues the following request with the access token:
GET [base]/Patient/$export?_outputFormat=ndjson
Accept: application/fhir+json
Prefer: respond-async
- Data Provider and Data Consumer follow the flow outlined in the Bulk Data Export IG to generate and retrieve a dataset. This dataset must include DocumentReference resources with a populated attachment.url element. The requiresAccessToken property in the Bulk Data manifest must be set to true.
- Data Consumer attempts to retrieve files referenced from DocumentReference.attachment.url without an access token and fails
- Data Consumer retrieves binary and text files referenced from DocumentReference.attachment.url with an access token and succeeds (see the sketch following this list)
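A companion sketch of the Scenario 2 additions, assuming the Data Consumer has already registered with the Data Provider and holds an RS384 signing key per SMART Backend Services Authorization. The token endpoint, client id, key file, and attachment URL below are placeholders, and PyJWT is used only for illustration.

import time
import uuid
import jwt       # PyJWT (with the "cryptography" extra) for RS384 signing
import requests

# Placeholder values; real ones come from registration with the Data Provider
TOKEN_URL = "https://bulk-data.example.org/auth/token"
CLIENT_ID = "example-client-id"
PRIVATE_KEY = open("private_key.pem").read()

# Signed authentication JWT per SMART Backend Services Authorization
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,
        "sub": CLIENT_ID,
        "aud": TOKEN_URL,
        "exp": int(time.time()) + 300,
        "jti": str(uuid.uuid4()),
    },
    PRIVATE_KEY,
    algorithm="RS384",
)

# Exchange the assertion for an access token
token_response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "scope": "system/*.read",
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,
    },
)
access_token = token_response.json()["access_token"]

# With requiresAccessToken set to true, a file request without the token should be rejected...
attachment_url = "https://bulk-data.example.org/files/note-1.txt"  # taken from DocumentReference.attachment.url
assert requests.get(attachment_url).status_code in (401, 403)

# ...and should succeed when the Bearer token is supplied
file_response = requests.get(attachment_url, headers={"Authorization": f"Bearer {access_token}"})
assert file_response.status_code == 200

The same Bearer token is also sent on the $export kick-off and status-polling requests shown in the Scenario 1 sketch.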
Bonus Scenarios:
- Prototype group membership enhancements outlined on GitHub (also see notes here)
- Prototype deleted resource enhancements outlined on GitHub (also see notes here)
Security and Privacy Considerations
SMART Backend Services Authorization will be required to participate in Scenario 2.