For directory bulk data extraction, an entire copy of all content in the directory can be requested by defining the scope selection at the top level: simply specify the resource types to retrieve from the base of the FHIR server.
A healthcare directory may curate such an extract in a nightly process and simply return it, without needing to scan the live system. The transactionTime value in the result should contain the timestamp at which the extract was generated (including timezone information), and that value should be used in a subsequent call to retrieve changes since this point in time.
Once a system has a complete set of data, it is usually more efficient to ask for changes since a point in time; such a request should pass the transactionTime value above to bring the local directory up to date.
This behaves just the same as the initial request, with the exception of the content returned.
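For example, a delta request passing the transactionTime from the previous extract as the _since parameter (a sketch; the resource types and timestamp are illustrative):

```http
GET [base]/$export?_type=Practitioner,PractitionerRole&_since=2023-01-15T02:00:00Z
Accept: application/fhir+json
Prefer: respond-async
```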
Note: The current bulk data handling specification does not handle deleted items; the recommendation is that a complete download be done periodically to check for "gaps" and reconcile the deletions (which could also be due to security changes). However, content shouldn't usually be "deleted"; it should be marked as inactive, or end dated.
Proposal: Include a deletions bundle (or bundles) for each resource type to report all the deletions (when using the _since parameter), carried in a new "deletions" property in the process output, as demonstrated in the example status tracking output section below. This bundle would have a type of "collection", and each entry would be as per a deleted item in a history response, for example (a sketch; the resource reference and timestamp are illustrative):
```xml
<entry>
  <!-- no resource included for a delete -->
  <request>
    <method value="DELETE"/>
    <url value="PractitionerRole/example"/>
  </request>
  <response>
    <status value="204 No Content"/>
    <!-- response carries the instant the server processed the delete -->
    <lastModified value="2023-01-10T11:32:17+10:00"/>
  </response>
</entry>
```
The total in the bundle will just be the count of deletions in the file, while the total in the operation result will indicate the number of deletion bundles in the ndjson file (the same as for the other types).
List defined subsets
The previous sections cover all that is defined by the FHIR Bulk Data extract specification; however, we may choose to implement an additional parameter on this operation to permit the selection to also be filtered to resources that are included in a specified List resource. The approach is similar to the equivalent capability defined for FHIR search: http://hl7.org/fhir/search.html#list
This could be used by client applications, such as a Primary Care System, that want to periodically update using this technique, but only for the resources they currently have loaded in their "local directory" (an internal black book), cached there from previous searches against the system.
In this example the Primary Care System would be responsible for keeping List/45 up to date with what it is tracking. A national service may decide that permitting this List resource management is too much overhead; however, local enterprise directories may support this type of functionality.
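A sketch of what such a request could look like, assuming the parameter is named _list to mirror the search capability (like the deletions property, this would be an extension beyond the bulk data spec):

```http
GET [base]/$export?_type=Practitioner,PractitionerRole&_list=List/45
Accept: application/fhir+json
Prefer: respond-async
```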
Here I will only document the use of the global export, starting with the initial request.
The initial request:
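(A sketch; the resource types selected are those typically relevant to a directory.)

```http
GET [base]/$export?_type=Organization,Location,Practitioner,PractitionerRole,HealthcareService
Accept: application/fhir+json
Prefer: respond-async
```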
This will return either:
- a status 4XX or 5XX with an OperationOutcome resource body if the request fails, or
- a status 202 Accepted when successful, with a Content-Location header containing an absolute URL for subsequent status requests, and optionally an OperationOutcome in the resource body if desired.
After a bulk data request has been started, the client MAY poll the status URL provided in the Content-Location header.
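For example (the status URL here is hypothetical; use whatever the server returned):

```http
GET https://directory.example.org/fhir/bulk-status/12345
Accept: application/json
```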
This will return:
- an HTTP status code of 202 Accepted when still in progress (and no body returned)
- an HTTP status code of 5XX when a fatal error occurs, with an OperationOutcome in JSON format as the body giving the detail of the error (note this is a fatal error in processing, not an error encountered while processing files; a complete extract can contain errors)
- an HTTP status code of 200 OK when the processing is complete, with a JSON object as the result, as noted in the specification (and an example included below)
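A sketch of a completed status result, following the output format defined in the specification; the URLs, counts, and file breakdown are illustrative:

```json
{
  "transactionTime": "2023-01-15T02:00:00Z",
  "request": "https://directory.example.org/fhir/$export?_type=Practitioner,PractitionerRole",
  "requiresAccessToken": true,
  "output": [
    { "type": "Practitioner", "url": "https://directory.example.org/files/practitioner_1.ndjson" },
    { "type": "PractitionerRole", "url": "https://directory.example.org/files/practitionerrole_1.ndjson" }
  ],
  "error": [
    { "type": "OperationOutcome", "url": "https://directory.example.org/files/errors_1.ndjson" }
  ],
  "deletions": [
    { "type": "Bundle", "url": "https://directory.example.org/files/deletions_1.ndjson" }
  ]
}
```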
Note that the deletions property shown above is a proposal, not part of the bulk data spec.
Retrieving the complete extract
The complete extract is retrieved by downloading each of the files listed in the status result. While downloading, we also recommend including the header Accept-Encoding: gzip to compress the content as it comes down.
(Note: our implementation will probably always gzip encode the content, as we are likely to store the processing files gzip encoded to save space in the storage system.)
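For example, downloading one of the (hypothetical) files listed in the output array above:

```http
GET https://directory.example.org/files/practitioner_1.ndjson
Accept: application/fhir+ndjson
Accept-Encoding: gzip
```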
Cleaning up
This is the simplest part of the process: just calling DELETE on the status tracking URL.
This tells the server that we are all finished with the data, and it can be deleted/cleaned up. The server may also impose time-based limits, where it only keeps the data for a set period before automatically cleaning it up.
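For example, against the same hypothetical status URL (per the specification, the server should respond with 202 Accepted):

```http
DELETE https://directory.example.org/fhir/bulk-status/12345
```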