Date: Sept 20, 2022


Quarter: Q1


Minutes Approved as Presented 


This is to approve minutes via general consent. "You have received the minutes. Are there any corrections to the minutes? (pause) Hearing none, if there are no objections, the minutes are approved as printed."

Goals

ONC and North Star Architecture Building Blocks Update

Discussion items

Time: 90 min

Item: ONC and North Star Architecture Building Blocks Update

Who: Katie Tully (ONC) and USDS team

ONC-CDC-USDS UpdateH7 WGM_FNAL.pdf

USDS FHIR PHWG Building Blocks Presentation.pdf 


  • Background
    • Core issues with current state
      • Data use agreements
      • Data must be sent multiple times, in different ways, to multiple endpoints
      • PH information systems can have inconsistencies
    • Emerging state:
      • TEFCA- common agreements and rules of the road
      • Standardized data sent and received (Helios)
      • USCDI & USCDI+
    • North Star Arch- all current data initiatives driving towards this new state
      • USCDI & USCDI+- common data model
      • Helios- standardize exchange
      • TEFCA- policy framework
      • Health IT certification program enhancements
    • North Star Arch
        • CDC's vision for a future-state public health ecosystem
        • Coordinated and interoperable
        • Shared services developed on agile principles and human-centered design
      • See Big Picture diagram slide
      • US Digital Service- large team embedded in CDC
      • Phases of DMI strategic roadmap
        • Lay groundwork
        • Adopt standards, migrate systems, and establish impact (working near here today)
        • Expand foundation for broader impact
        • Improve ecosystem continuously
      • Questions
        • Shared cloud infrastructure- one of the challenges is dealing with multi-cloud environments within and across agencies; how are you dealing with this?
          • Dedicated team focusing on multi-cloud
        • USCDI+ (For Public Health)
          • Goals:
            • Improve data utility and availability, helping to save time and resources for end users and public health officials.
            • Quality of data available to public health to conduct disease surveillance and disease investigation
            • A unified response across local partners, jurisdictions, and all levels of government
            • Allowing flexibility to meet changing and emerging needs of public health.
          • What is the core set of data needs for public health data exchange?
          • Domains:
            • Case-based surveillance
            • Lab data
            • Multi directional exchange with healthcare and other partners
            • Maternal and child health
            • Resource reporting and situational awareness
            • Risk behaviors and drivers of inequity
            • Comments-
              • need to think about the people who overlap across activities
              • USCDI+ COULD accommodate PH-to-PH and PH-to-CDC exchange- needs further discussion and resource support to address. Incorporating this specific data could help address the variation seen across jurisdictions as well as across programs.
              • Interest in how tooling will impact prioritization and implementation
            • USCDI+ feedback process is rolling
              • Not tied to any requirement/regulation, currently is serving as an exploratory process
              • Harmonize data element names, data class names, and definitions across use cases
              • Need for core dataset showing elements that apply across all use cases
            • Opportunities
              • Implementation guidance analogous to US Core for USCDI+ for PH
              • Question- Since this is a moving target, what is the end state? Trying to find a different technical platform to talk through some of those examples within a 2-4 week period. To be continued.
              • Big gap with USCDI+ for implementers- Dan Chaput will follow up with ONC
              • Often data requirements included in guides vary due to program requirements. What can be done to help programs adopt common definitions for common data elements? This is a shared commitment amongst federal partners that requires a funding commitment and a time commitment (5 years). What does the change management look like, and what does it cost? Will be working with the PH orgs (CSTE, ASTHO, NACCHO....) and CDC to talk about harmonization. Had a similar conversation with SSA and VA, where they had to put work on hold to allow for harmonization. A small team with measures in place is dedicated to targeted use cases.
              • The CDC Cancer Program has been working to help standardize State Cancer Registries and NAACCR. When changes are proposed they force evaluation with US Core. NAACCR also provides an indication of whether data elements are in US Core. The CDC Cancer program does put requirements in their funding to state programs.
              • Profile and IG Development
              • Opportunity to map data element definitions in USCDI+ for PH to profiles represented in the FHIR PH Library
              • Long term commitment
            • US Digital Service- Public Health Data Infrastructure- Virginia Project
              • USDS- based in OMB but detailed to other agencies like CDC. Most staff come from the private tech sector
              • Was initially focused on COVID; now expanding to a broader focus on the time between data arrival and data being ready for analytics
              • HL7 V2 to FHIR R4
              • Name standardization
              • Geocoding
              • Tabularizing of FHIR Data
              • See Virginia Prototype slide
                • Raw data blob storage of eCR, ELR, and VXU (V2 and FHIR)
                • Convert data to FHIR
                • Standardize data
                • Geocode
                • Link records
                • Place in FHIR server
                • Apply schema to Data
                • Tabularize
                • Schema-compliant tables
                • Analytics
                • Interest in hearing not only about functionality, but also about deployment and mixing and matching
                • Built in Azure and Google platform
                • Please share the V2-to-FHIR and CDA-to-FHIR mappings, considering HL7 is doing this work.
                  • The HL7 V2-to-FHIR converter used is from Microsoft and is based on the HL7 V2-to-FHIR work. Will send any feedback on mapping to HL7
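The Virginia prototype steps listed above (convert, standardize, geocode, link, tabularize) can be sketched as a chain of small, composable building blocks. This is an illustrative assumption about the architecture, not the USDS code; all function names and data shapes here are hypothetical:

```python
# Hypothetical sketch of the prototype pipeline as composable building blocks.
# Each step takes and returns a FHIR-like dict, so steps can be reordered
# or swapped per local implementation, as described in the discussion.

def convert_to_fhir(raw_message: str) -> dict:
    # Stand-in for HL7 v2 / eCR CDA -> FHIR R4 conversion.
    return {"resourceType": "Patient", "name": [{"family": raw_message.strip()}]}

def standardize(resource: dict) -> dict:
    # Stand-in for name/phone/address standardization.
    for name in resource.get("name", []):
        name["family"] = name["family"].title()
    return resource

def geocode(resource: dict) -> dict:
    # Stand-in for a call to a third-party geocoding service.
    return resource

def link_record(resource: dict) -> dict:
    # Stand-in for assigning a common linkage identifier.
    resource.setdefault("identifier", []).append(
        {"system": "urn:example:linkage", "value": "hash-placeholder"})
    return resource

PIPELINE = [convert_to_fhir, standardize, geocode, link_record]

def run(raw_message: str):
    data = raw_message
    for step in PIPELINE:
        data = step(data)
    return data

patient = run("  SMITH  ")
```

The point of the sketch is the composition: because each atomic operation has the same interface, a jurisdiction can combine and configure only the steps it needs.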
                • Ingestion Pipeline
                  • Taking in Data in V2 (VXU, ORU), eCR CDA
                  • Picking up data from blob storage
                  • Developed common interface within their tool so they could write cloud specific implementations
                  • Cloud storage and authentication
                  • Both Azure and Google cloud platform
                  • Each atomic operation is its own thing and can be combined and configured to meet the needs of the local implementation
                  • Will end up at the FHIR server which will be central engine for storage
                  • Down the road, the vision is to choose a health data format that will make data interoperability easier
                  • Conversion HL7 v2 to FHIR
                    • Using Azure implementation to do this.
                    • Building blocks can also be applicable at an element level
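At the element level, v2-to-FHIR conversion amounts to mapping segment fields to resource elements. The team uses Microsoft's converter for this; the following hand-rolled mapping of a few PID fields is only a minimal illustration of the idea, not their implementation:

```python
def pid_to_patient(pid_segment: str) -> dict:
    """Map a small subset of HL7 v2 PID fields to a FHIR R4 Patient.

    Illustrative only: PID-5 is patient name (family^given) and
    PID-7 is date of birth (YYYYMMDD).
    """
    fields = pid_segment.split("|")
    family, _, given = fields[5].partition("^")
    dob = fields[7]
    return {
        "resourceType": "Patient",
        "name": [{"family": family, "given": [given] if given else []}],
        # Reformat YYYYMMDD to the FHIR date form YYYY-MM-DD.
        "birthDate": f"{dob[:4]}-{dob[4:6]}-{dob[6:8]}" if len(dob) == 8 else None,
    }

patient = pid_to_patient("PID|1||12345||DOE^JANE||19800115|F")
```

A real converter also handles vocabulary translation and the many repetition/escape rules of v2, which is why relying on the established converter (and sending mapping feedback to HL7) is the approach described above.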
                  • Once in FHIR, conduct name standardization; doing the same for phone and address
                    • Focused on normalization
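Normalization of this kind can be as simple as canonical casing for names and digit-stripping for phone numbers. A generic sketch (not the USDS implementation):

```python
import re

def normalize_name(name: str) -> str:
    # Trim, collapse internal whitespace, and upper-case for comparison.
    return " ".join(name.split()).upper()

def normalize_phone(phone: str) -> str:
    # Keep digits only; keep the last 10 digits to drop a US country code.
    digits = re.sub(r"\D", "", phone)
    return digits[-10:] if len(digits) >= 10 else digits
```

Normalizing before comparison matters downstream: the record-linkage step hashes these normalized demographics, so "Jane  Doe" and " jane doe " must reduce to the same value.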
                  • Address standardization and geocoding
                    • Built an interface around geocoding tools (3rd party services) to get geolocation data for an address and standardize the address. Using FHIR extensions to store lat and long
                    • When translating from V2 to FHIR, you find that elements in V2 may not yet exist in FHIR. Are you profiling those resources where they don't already exist, or using existing work? Yes for lat/long. They haven't yet found anything else they need to publish but are keeping metrics; will publish if needed.
                    • Using the FHIR converter and relying on it for vocab translations where the V2 vocab doesn't necessarily match the FHIR vocab. Trying to minimize data loss as much as possible
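Storing lat/long on an Address is done with FHIR extensions; HL7 publishes a standard geolocation extension for exactly this. A minimal sketch of attaching geocoder output (the extension URL is the published one; the surrounding code is illustrative):

```python
# HL7's published extension for latitude/longitude on an Address.
GEOLOCATION_EXT = "http://hl7.org/fhir/StructureDefinition/geolocation"

def attach_geolocation(address: dict, lat: float, lon: float) -> dict:
    """Store latitude/longitude on a FHIR Address via the standard extension."""
    address.setdefault("extension", []).append({
        "url": GEOLOCATION_EXT,
        "extension": [
            {"url": "latitude", "valueDecimal": lat},
            {"url": "longitude", "valueDecimal": lon},
        ],
    })
    return address

addr = attach_geolocation({"city": "Richmond", "state": "VA"}, 37.5407, -77.4360)
```

Using the standard extension (rather than a custom profile) is consistent with the answer above: profile only where nothing suitable already exists.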
                  • Adding a linking Identifier
                    • Developed a proof of concept for linking patients using normalized demographics and hashing.
                    • Have road map plans to make more robust and integrate with eMPI vendor
                    • Creating a single patient resource? Currently creating a patient per source and then linking by assignment of a common identifier
                    • For cancer- lab data doesn't have good identifiers; in conversation with LexisNexis. Trying to figure out how to deal with identifiers. USDS- there is some thought on how the order of the building blocks works: standardization does precede data linkage. Once standardized, then link/create the hash. Will be looking at how different geocoders perform and evaluate
                    • Operationally- you track the transactions, put them in a blob, and transform; how do you maintain fidelity? Conversation for another time
                    • NBS comment
                    • Measured improvement/benefits
                      • Demographic recovery
                      • Reducing the number of patients due to record linkage
                      • Containerized FHIR conversion (cloud agnostic packaging)
                    • Current logic would retain discrepant values between data sources
                    • Not connecting with eMPI yet but coming
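The linkage proof of concept described above (normalized demographics hashed into a common identifier, with standardization preceding linkage) can be sketched as follows; the field choice and hashing details are assumptions for illustration:

```python
import hashlib

def linkage_id(family: str, given: str, birth_date: str) -> str:
    """Derive a deterministic linkage identifier from normalized demographics.

    Illustrative only: a production system would likely use a keyed/salted
    hash and, per the roadmap discussed, hand off to an eMPI.
    """
    # Standardize first, then hash - matching the building-block ordering.
    normalized = "|".join(s.strip().upper() for s in (family, given, birth_date))
    return hashlib.sha256(normalized.encode()).hexdigest()

# Two differently-formatted records from two sources link to one identifier.
a = linkage_id("Doe", "jane ", "1980-01-15")
b = linkage_id(" DOE", "Jane", "1980-01-15")
```

Because the hash is deterministic, each per-source Patient resource can carry the same identifier, which is how "a patient per source, linked by a common identifier" works without yet merging into a single resource.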
                  • Analytics Pipeline
                    • FHIR data is nonrelational and analytics wants tabular data, so FHIR is converted to tabular form
                    • Dynamic schema generation
                    • Extract data from FHIR Server
                      • GET Request- FHIR server base URL, query parameters
                      • Tabularize
                      • Persist the data (parquet and csv)
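The extract-and-tabularize steps above amount to pulling a search Bundle from the FHIR server and flattening selected elements into rows. A minimal sketch (the column schema here is a hypothetical example of the dynamic schema idea, not the project's actual schema; CSV is shown since parquet needs a third-party library):

```python
import csv
import io

def tabularize_patients(bundle: dict) -> list:
    """Flatten Patient resources from a FHIR search Bundle into flat rows."""
    rows = []
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        name = (resource.get("name") or [{}])[0]
        rows.append({
            "id": resource.get("id"),
            "family": name.get("family"),
            "birthDate": resource.get("birthDate"),
        })
    return rows

def to_csv(rows: list) -> str:
    """Persist the tabularized rows as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "family", "birthDate"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# In practice the Bundle would come from a GET on the FHIR server base URL
# with query parameters; a hard-coded Bundle stands in for that here.
bundle = {"entry": [{"resource": {"resourceType": "Patient", "id": "1",
                                  "name": [{"family": "Doe"}],
                                  "birthDate": "1980-01-15"}}]}
rows = tabularize_patients(bundle)
```

Dynamic schema generation would derive the `fieldnames` list from configuration or the data itself rather than hard-coding it as done here.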
                    • FHIR server and targeted data extraction capabilities
                      • Schema-compliant table generation
                      • Custom schema generation + ease of integration with FHIR server
                      • More modern data storage, efficient and persistent data stores
                      • Arch improvements for our pilot partner
                        • Orchestration
                        • Reliability
                        • Ease of use
                        • Speed
                        • Monitoring
                        • Data quality and completeness
                      • But what if my org needs V2?
                        • ReportStream- FHIR R4 to V2 for ORU_R01, with validation to ensure no data loss, with possibility of value add
                      • Areas of improvement- see slide
                      • Where is this Headed-
                        • FHIR Developer community of practice (specific to building blocks, not necessarily the same as the current Wed PH FHIR CoP)
                        • Open-source building block library
                        • Pipeline as a product repositories




Action items