Attendees

Chris Baumgartner Washington State DOH
Craig Newman Altarum
Danny Wise Allscripts
Dave deRoode Lantana Consulting Group
Devann Kirkpatrick TN Dept of Health
Erin Holt Coyne TN Dept of Health
Gary Dickinson EHR WG
George Dixon Allscripts
Leslye Rauth IHS
Lura Daussat Public Health Informatics Institute
Mead Walker MWC
Melanie Epstein-Corbin California Department of Public Health
Nosipho Beaufort Public Health Informatics Institute
Rita Altamore State of Washington DOH
Sarah Gaunt Lantana Consulting Group
Shu McGarvey CDC/NGC


Discussion items





5 min: Welcome, agenda review, meeting minute approval (Erin)
  • Motion to approve last week's minutes (Mead Walker/Craig Newman; 14-0-0)
  • Reminder that the Jan/Feb 2020 ballot closes on 1/27
    • The WG has several documents out for ballot
  • Next week we will go back to FCC rather than Intercall
20 min: Measures of Interoperability Success
  • We’ve had several folks indicate an interest in continuing the discussion on how to measure the success of interoperability incentive programs (spurred on by a HIMSS blog post)
  • Points to consider for possible feedback to HIMSS (from a listserv email on 1/20):

    • Any assessment of overall interoperability success should include the impact on population health (public health) in addition to the benefits for providers and patients
    • Measures should consider both the quantity of messages and the quality of messages
      • Just increasing the number of messages flowing will not be sufficient for maximal impact
        • More messages may not equate to an improved experience if the result is more fragmented data requiring reconciliation
        • More complex patterns of data flow (e.g., from a provider to an HIE and then on to a public health agency) may complicate data analysis
          • Should a multi-hop exchange count as just a single exchange of data or multiple?
        • Many submitters to public health are still using batch submissions, which are still valuable (albeit perhaps not as valuable as real-time exchange) but harder to assess
      • The quality of the data is just as important (or more so)
        • Quality can include accuracy, timeliness, completeness, and standardization
          • The quality and quantity can change over time (both for the better and the worse)
            • Analysis shouldn't be a one-time assessment
        • Measuring the adoption of standard terminologies could be one way to assess improved quality
        • The proportion of discrete data (coded test results rather than a textual narrative, discrete name components, etc.) may also be a useful metric
      • A reduction in cost and time to onboard submitters is also a potential metric (with the ability to measure real world impact on public health programs)
      • Other reductions in cost are also good indicators of success
        • Does it require fewer resources to cleanse and deduplicate data?
      • Even within public health, there may be significant variation between programs
        • For example, electronic immunization reporting was well established prior to incentive programs but still saw a marked increase in data exchange. Electronic case reporting, by contrast, has developed only recently and is not as widely implemented (though getting off the ground is often the hardest part)
      • Ultimately, it is the impact on patients, providers, and public health programs which should be the true measure of interoperability success
      • Some assessment work is already happening in the public health space
        • Societies such as the American Immunization Registry Association (AIRA) may already be capturing data
        • Individual jurisdictions are already looking at assessing data quality in submitted data
    • The impact will also depend on the previous state (was there no exchange, UI-based exchange, etc.?)
    • AIRA is investigating data quality at rest, which might be stratified by time to look for increases in quality
    • Another aspect would be to try to assess the fidelity of exchanges (especially during multi-hop scenarios) to ensure that the data is not compromised during transmission and that the context is maintained
    • HL7 is unlikely to respond officially, so any responses would come from individual organizations
    • The WG doesn't have plans to formally pursue further projects to define measures of success, but these calls are good places for folks to lead further discussions or recruit others to join projects outside the WG
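As a rough illustration of the "proportion of discrete data" metric discussed above, one could compute the share of results in a batch of submissions that carry a standard code versus free-text narrative only. This is a hypothetical sketch; the record layout and field names are illustrative, not a real HL7 message parse:

```python
# Hypothetical "proportion of discrete data" metric: the fraction of result
# records that include a coded value (e.g., a LOINC code) rather than only
# a textual narrative. Record structure is illustrative only.

def discrete_data_proportion(results):
    """Return the fraction of result records that include a standard code."""
    if not results:
        return 0.0
    coded = sum(1 for r in results if r.get("code"))
    return coded / len(results)

# Illustrative sample batch: two coded results, one narrative-only result.
sample = [
    {"code": "718-7", "system": "LN", "value": "13.5"},      # coded (LOINC)
    {"code": None, "narrative": "hemoglobin looks normal"},  # text only
    {"code": "2345-7", "system": "LN", "value": "95"},       # coded (LOINC)
]
print(discrete_data_proportion(sample))  # 2 of 3 results are coded
```

Tracking a ratio like this over time (per submitter or per program) would support the point above that quality, not just message volume, should be assessed, and that the assessment should be repeated rather than one-time.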

Action items