Chairs: Virginia Lorenzi, Debi Willis
Scribe: Dave deBronkart
Attendance:
| Name | Present |
|---|---|
| Dave deBronkart | x |
| Jose Costa Teixeira | |
| Casey R. Thompson | x |
| Didi Davis (VP @ Sequoia Project) | |
| Laura Bright (OneRecord) | |
Meeting Minutes from Discussion
|Announcement||Rachel Richesson has students in last semester of a 2 year nursing informatics program who may be able to help. Reach out to her.|
|Organization||Approval of this Agenda||done|
|Prior call Minutes (7/16/20) approval||done|
|It's HL7 election time - through end of July.|
As discussed, Virginia voted (for our WG) for Riki Ulrich for ASD Co-chair.
A.I. survey (Virginia's email)
(https://confluence.hl7.org/download/attachments/63242358/Survey%20Questions%20-%20AI%20enabled%20systems%20standardization%20in%20health%20care.pdf?api=v2) There is a request from the policy workgroup to all workgroups related to AI and I thought the group might want to respond.
RESPOND TO HL7 BY JULY: The notes below from our meeting were transcribed into a Google Doc, "AI and Health Data - comments from Patient Empowerment WG," which was submitted to HL7 on July 27 (for inclusion in HL7's response by 7/31). If interested, see today's email with subject "ANSI RFI survey requesting information on Standardization of AI-enabled Systems in Healthcare". GET LINK FROM VIRGINIA
Jan: transparency and commented code. Biases and discrimination creep in (well documented, she says, e.g. people whose disability payments were cut in half). The info must be available to researchers too.
Dave: Ophthalmic migraine example from his own case; Morgan Gleason's false "pregnancies"; Dave's false billing codes
Dave: Also cite the book Weapons of Math Destruction by former Wall Street quant Cathy O'Neil. Significant harm including downward "death spirals" when non-transparent AIs talk to each other.
Debi: there are so many errors in EHRs (e.g. June JAMA article) that an AI could go seriously wrong.
Debi: Patients should have a right to make sure the data's correct before an AI uses it to make a diagnosis
Mikael: GDPR already has a right (Article 22) to say "Don't use automated decision making on me."
Mikael: Add that we want AI to help, but we've got to be sure it's reliable, not a bogus or biased algorithm or a good one that's reading bad data.
Nancy: We've got wrong billing codes in people's insurance history; also simple mistakes.
Nancy: granularity of consent on what HCPs can do with the chart data
Abigail: we should expect Explainable AI - Wikipedia: "contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation."
Abigail: treat the AI like a clinician with a role, never give it absolute authority.
Annual WG meeting - advance notice on planning
Your co-chairs have an early August deadline to submit our requests for the September 21-25 Working Group Meeting (WGM). High level details and how to register: http://www.hl7.org/events/working_group_meeting/2020/09/index.cfm.
There is also the Sept 9-11 virtual FHIR Connectathon: registration and info here: https://www.hl7.org/events/fhir/connectathon/2020/09/
Lisa proposes we might team up with an existing track doing harmonious work. Example: our concern about corrections would fit with the Care Coordination track, with a scenario where the patient injects a correction before the care plan gets made. [Abigail: what should we note here about the reference records you mentioned?]
Side note: Nancy mentions "Values this group holds dear" - let's crowdsource that! e.g. (these are only my guesses, not actual answers)
PE WG projects
Updated WG priorities document is here - reformatted to clarify next steps for each. Please review!
Instead of covering all 4 projects weekly, with a tiny time slot for each, your co-chairs decided we'll grant time slots each week to whoever has something to present. Default for now will be 1-2 projects each week, alternating as the need arises.
Our WG co-chairs (led by Virginia) will be shepherding our two PSS's through the HL7 process.
|(Quickie:)||Patient Corrections project: News!||From co-lead Debi: "We would like to start having weekly meetings to work on patient corrections. I would like to get a list of people interested in doing that so we can decide on a time slot." email@example.com|
Patient Contributed Data (Jan & Maria) - continue where Jan left off on July 7
Jan notes that we should (in a white paper?) take into account the evolving practice of OurNotes (co-generated visit notes).
(Dave thought: imagine the drafting and evolution of a visit note in the same way people collaborate on a Google Doc before its release ... or should this belong under patient CONTRIBUTED data below??)
New note: Purpose suggested by Virginia in the PSS: "to define patient contributed data and the work that has been done to standardize interoperability in the field up to this point as well as consideration of gaps and needs and recommendations." Note that the PSS for the white paper has been drafted and can be found here: Patient Contributed Data Whitepaper
|Organization||HL7 process (if time allows)||Did not get to this.|
See Virginia's email titled "HL7 101 Continued" re HL7 balloting process.
This is to approve minutes via general consent. "You have received the minutes. Are there any corrections to the minutes? (pause) Hearing none, if there are no objections, the minutes are approved as printed."