Quorum = chair + 4 (yes/no)

Co-chairs (x = present): x David Hay, x Lloyd McKenzie

Ex-officio: Wayne Kubick, CTO

Members (x = present)

  • x Hans Buitendijk
  • Brian Postlethwaite
  • Paul Knapp
  • x Josh Mandel
  • x John Moehrke
  • x Brian Pech
  • x Grahame Grieve

Observers/Guests

  • x Anne W., scribe
  • x Brian Phinney
  • x Bhagvan Kommadi
  • x Didi Davis
  • x Melva Peters

Agenda

  • Roll Call
  • Agenda Check
  • Minutes from 2020-10-14 FMG Agenda/Minutes
  • Action items
  • Review Items
  • Discussion Topics
    • Recording conformance to quality criteria within the resource vs. spreadsheet (continued from last week)
    • What connectathon will look like in 18 months
    • QA things that we need to tighten up (standing item)
  • Reports
    • Connectathon management (David/Brian)
    • SGB 
    • MnM 
    • FMG Liaisons 
  • Process management
  • AOB (Any Other Business)

Minutes

  • Roll Call
    • Guests noted above have no agenda items to add
  • Agenda Check
    • MOTION to approve: Hans/John
  • Minutes from 2020-10-14 FMG Agenda/Minutes
    • MOTION to approve: Hans/John
    • VOTE: All in favor
  • Action items
  • Review Items
  • Discussion Topics
    • Recording conformance to quality criteria within the resource vs. spreadsheet (continued from 2020-10-07 FMG Agenda/Minutes)
      • Lloyd: We really want to not have a shared Google spreadsheet with this information at all. There's concern about exposing to the outside world the list of applications that we used to declare our level 4 and 5 conformance. The information would manifest within the artifacts themselves instead. There would be a link to the relevant quality criteria that the WG is asserting it has vetted the content against based on maturity level. If it's in the artifact and being controlled, then we don't have to maintain the information in a spreadsheet.
      • David: So we need to identify what those are and change the publisher to expose them. Lloyd: Yes, and we would be migrating what is in the current Google spreadsheet into the source somewhere. We'd direct people authoring content to fill in these extensions. 
      • Josh: This feels like a lot of process and granularity. Is there some lighter-weight convention that would allow people to link back to whatever assessment they made on their maturity level? Lloyd: The challenge is maturity in practice applies to individual artifacts. Different resources, value sets, profiles, etc. are at different maturity levels. The criteria for each can be different. David suggests they could notate the justification for maturity level in text.
      • Josh shares a real-life example from https://json-ld.org/test-suite/reports/. They have conformance tables for all the main features that people self-report.
      • Lloyd: Right now we are not exposing maturity level for IG artifacts at all. We are exposing conformance levels in the core spec for everything, but by and large, aside from resources, it's Grahame's best guess or based on the conformance level of the resources. We do need to pay attention to maturity of profiles because eventually we're going to want to take them normative.
      • Grahame: The maturity level of the terminology products is tied to the profiles they're using. We can't have UTG including all the value sets and IGs, but we haven't formally discussed what the policy is for terminology and what stays in the IG. There's an area of the IG where it's modular. There might be separate parts of the IG where it makes sense to say this part isn't as mature as that part, but not at the individual profile level. Lloyd: If you have confidence that the implementation of the profiles is consistent then that's fine, but we might have maturity groups. Could have, in the IG, an extension that defines a maturity group and then have artifacts point to the maturity group. Then it doesn't have to be maintained at the artifact level. There are also some resources that essentially just take on the highest bar of anything that references them. So if you've got datatype profiles or value sets that are referenced in different places, if one of them hit normative then essentially they all do. Grahame notes you can have a resource in more than one grouping.
      • Grahame: We could build test reports like Josh shared. Discussion over what that would look like.
      • John: What I'm hearing proposed is more than just tracking if something has been tested. The evidence that builds your argument that maturity has been obtained should be managed in tooling independent of the implementation guide. When maturity is obtained the guide should go through some kind of process where it's blessed. If we get to the point where evidence is overwhelming that this should go Normative, what is the indicator that is in the implementation guide? Lloyd asserts that he does want the information to be in the guide. If you publicly declare the criteria and the evidence that you're providing on a per-artifact, group, or IG basis and that's part of the publication that the community can look at, then the community can hold the authors to account.
      • Grahame: Editors of IGs are going to say "no, we're not doing this." Lloyd: Then they would stay at a lower maturity level. If we capture it for a collection of artifacts as opposed to individual artifacts, it's not a horrendous amount of work.
      • Discussion over how we ask people to present the evidence at the normative level. There could be a form with places for these things to be entered, which the author would have to complete and present. Lloyd: Or they could include a note to balloters describing why it's sufficiently ready. Grahame: We could add a section in the IG with a defense of the maturity level.
      • Grahame: TSC should provide clarification on how much work editors should invest in defending maturity. Brian Pech: If they want to go normative they should provide an independently identifiable trail of evidence that shows that the criteria have been met. Lloyd: We could just tie it to our ballot approval process and say that if you have an IG planning to go normative, then you need to attest to all of the quality criteria up to and inclusive of level 5, and when you present it to the FMG we may ask to see the evidence.
      • Lloyd: So we should expose the ability to capture maturity levels with extensions on the artifact in IGs; we allow for maturity levels for certain artifacts to be inferred from the level of whatever is referencing them; and there should be a link to specific quality criteria for the artifact. Then our enforcement mechanism is when you're going normative you must formally present your evidence to the FMG that you've met the maturity level. We introduce something in the NIB form where, if you're planning to go normative and you're FHIR, you assert that you have reviewed and met all of the maturity criteria necessary for normative status for the relevant artifacts. When the NIB comes to FMG, we can confirm. It could also be part of the normative notification process instead, as it is difficult to update the NIB. Melva notes it could be easily added in to that step.
        • Lloyd McKenzie will draft a proposal for how WGs should document conformance to quality criteria
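To make the shape of the proposal concrete: a minimal sketch, under stated assumptions, of what per-artifact declarations might look like in an artifact's source. The `structuredefinition-fmm` URL is the real FHIR standard extension for maturity level; the quality-criteria link extension is hypothetical (example.org), standing in for whatever the drafted proposal actually defines.

```python
# Sketch only -- not the drafted proposal. The FMM extension URL is the real
# FHIR standard extension; the quality-criteria URL is a hypothetical
# placeholder for a link to the criteria the WG asserts it has vetted against.
FMM_EXT = "http://hl7.org/fhir/StructureDefinition/structuredefinition-fmm"
CRITERIA_EXT = "http://example.org/fhir/StructureDefinition/quality-criteria"  # hypothetical

# An artifact (here a profile) carrying its own maturity declaration,
# instead of that information living in a shared Google spreadsheet.
profile = {
    "resourceType": "StructureDefinition",
    "id": "example-profile",
    "status": "draft",
    "extension": [
        {"url": FMM_EXT, "valueInteger": 3},
        {"url": CRITERIA_EXT,
         "valueUri": "http://example.org/fhir/quality-criteria/fmm3"},
    ],
}

def maturity_of(artifact: dict):
    """Read the declared FMM level off an artifact, if present."""
    for ext in artifact.get("extension", []):
        if ext["url"] == FMM_EXT:
            return ext["valueInteger"]
    return None  # no declaration; could fall back to a maturity group

print(maturity_of(profile))  # → 3
```

Because the declaration travels with the artifact source, the IG publisher could surface it in the rendered guide, and a group-level variant (Lloyd's "maturity groups") would simply move the same extensions onto a shared grouping that artifacts point to.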
    • What connectathon will look like in 18 months
      • David reports that he and Wayne, Sandy, and Grahame discussed how to manage as Connectathons get larger. Wayne suggested considering what it might look like in 18 months or so when we meet physically. 
      • Tracks fall into a number of different categories. Some are about specific implementation guides; some are about building out the spec; some are about specific implementations. Should we have different Connectathons for the different categories?
      • Brian: Does some kind of hybrid option make sense? Given the success of the virtual Connectathon, it would make sense for people who don't want to travel. Hans: If the technology allows it, we should generally start to think about hybrid.
      • David: Outside of the hybrid question, would it be one connectathon or several? Josh: How helpful is it to have one giant connectathon vs. half a dozen small ones that a subset of the community participates in? Is the community aspect important enough to outweigh logistics? Grahame notes there are a lot of logistics involved in having multiple smaller ones too. Melva notes that survey feedback indicated some frustration because the event wasn't exactly what people had expected.
      • Brian: The other thing to consider is the calendar. Will we go back to 3 in-person meetings in 2022? That may influence how the connectathons are staged. From a process and electronic services perspective, are there capabilities HL7 needs to add to allow the organization to more effectively run these non-in-person events in the future? Just need to consider.
      • David: Do we have a strong opinion if we should be splitting up the Connectathons into clumps of categories? Discussion over tying the connectathons to the cadence of balloting. John: This presents an opportunity for us to cooperate with IHE as well. 
      • Could split up into general tracks and specialty tracks.
      • Virtual will likely be part of the future plan. Need to look at past trends and analyze. Not sure if we should have large events or multiple smaller events. Could possibly have regional events - Asia-Pacific events, for example.
      • Grahame: We should do a meeting and Connectathon in the European time zone as soon as we can. 
    • Carry forward
      • QA things that we need to tighten up (standing item)
  • AOB (Any Other Business)
    • Lloyd out next week
  • Adjourned at 5:27 pm Eastern