Friday, September 18, 2009

A Novel Way to Exchange Patient Health Information

An interesting post on THCB by Margalit Gur-Arie—titled "What if I Had to do HIT All Over Again?"—critiques the very large, very expensive, and very clunky monolithic EMR/Practice Management/Billing systems currently dominating the market. She concluded the post this way:
"So if I had to do it all over again, I would take a hard look at Microsoft Office. I would build multiple useful applications, like Word, Excel, Power Point, etc. I would make sure I can export data from one to the other. I would make sure that the user interface is consistent between them. I would allow others to create templates and integrate their software into my tool bars."
I replied:
Wow, Margalit, that's exactly what we've done! We've actually just presented the first live public demonstration of a prototype of our system to doctors, educators, and insurers. It went very well!
The demo showed, in real time, how this MS Office based system enables:
  1. Primary care physicians (PCPs) to send personalized referrals to specialists
  2. The specialists to reply to those referrals
  3. The PCPs to respond to the specialists' acceptance reply by sending them XML-based continuity of care documents (CCD) and other supporting data files
  4. The specialists to access and view the resulting patient information
This is all done with encrypted e-mail attachments, processed automatically by a small software program and macro routines. On the sending side, they encrypt, zip, and attach the files to an e-mail and place it in the outbox; on the receiving side, they retrieve the e-mail from the inbox, then unzip, decrypt, format, and display those files, and store them encrypted on the recipient's computer.
It requires as few as 5 mouse-clicks per end-user for the entire process. There is no need for central servers (or any other infrastructure build-out), little if any need for IT support, and no other costly complexities.
And all the data are stored locally in encrypted files, which are automatically retrieved and rendered at any time via a few button clicks. From a technical perspective, it's a simple node-to-node (peer-to-peer), publisher-subscriber, asynchronous, decentralized desktop solution that uses Office macros, .NET, and SMTP. It is literally the easiest, most convenient, and least costly way I know to exchange and present patient health information securely between any EHRs in a way that promotes care coordination.
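The zip-attach-send and retrieve-unzip steps just described can be sketched in a few lines. This is an illustrative Python sketch, not the actual Office/VBA implementation; the encryption step is elided (the post does not name a cipher), and the filenames and addresses are invented.

```python
# Sketch of the "zip, attach, e-mail" exchange between two nodes.
# A real deployment would encrypt the payload before zipping.
import io
import zipfile
from email import message_from_bytes
from email.message import EmailMessage

def publish(ccd_xml: bytes, sender: str, recipient: str) -> EmailMessage:
    """Zip a CCD payload and attach it to an outgoing e-mail message."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("ccd.xml", ccd_xml)
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["Subject"] = "Referral: CCD attached"
    msg.set_content("Patient document attached.")
    msg.add_attachment(buf.getvalue(), maintype="application",
                       subtype="zip", filename="ccd.zip")
    return msg  # in the real system, smtplib would place this in the outbox

def subscribe(raw_message: bytes) -> bytes:
    """Retrieve the attachment from a received message and unzip it."""
    msg = message_from_bytes(raw_message)
    for part in msg.walk():
        if part.get_filename() == "ccd.zip":
            payload = part.get_payload(decode=True)
            with zipfile.ZipFile(io.BytesIO(payload)) as zf:
                return zf.read("ccd.xml")
    raise ValueError("no CCD attachment found")
```

Because both ends work only with standard SMTP mail and zip files, any two desktops with e-mail access can participate without a central server.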
Another reader (Alexander) commented:
Margalit, what you describe, basically, reflects the principles on which the proposed NHIN infrastructure is based. The only difference is that it is supposed to connect RHIOs rather than separate EHR systems. Without a nationwide patient ID, though, it is going to be very challenging to find and link all medical records on the same person, since some important data fields used by matching algorithms can be empty or contain incorrect values. Besides, as I mentioned before, it is much more difficult to predict the availability of EHR systems installed in small medical offices or hospitals, unless they use cloud-based applications.
To which I wrote:
Margalit - What do you think about using a biometric index to create a unique patient identifier (medical record number)? It would negate the need to establish and connect to a central repository, and it would enable the fluid exchange of patient health info between any nodes in a mesh network architecture, similar to the way communication is done in telephone networks.
And she responded:
Dr. Beller, I think a biometric ID is probably a very good choice, short of implanting a chip :-)
The NCVHS has been tinkering with this for over a decade, but nothing happened. There seems to be some reluctance on the part of most people to have such an identifier. I'm not sure why, since we all get SSNs immediately after birth and think nothing of it.
I think the technology is available for biometrics and the logistics are not insurmountable (put a machine in every DMV).
Alexander, I know that availability is an issue with the current crop of EMRs, but I strongly believe that SaaS is the future. Besides, as Dr. Beller mentioned, we all use phones without the operator having to patch calls through anymore and without having to run to the telegraph office to send something. Technology changes fast and I can see a device or an executable installed in every office to ensure availability.
I'm not ruling out RHIOs or other intermediaries, but I believe the actual data need not reside anywhere other than the provider system.
I replied:
Yes! I suggest that important roles for RHIOs, HIEs, etc. would be:
  • To aggregate de-identified patient data
  • To make those data available to authorized research organizations (universities, etc.) who study the data to help develop and evolve evidence-based preventive, diagnostic, self-maintenance/management, and treatment guidelines that focus on bringing ever-increasing value (i.e., cost-effectiveness) to the patient/consumer
  • To disseminate the resulting guidelines to all parties.
In this scenario, using the decentralized node-to-node architecture, the patient data would be stripped of patient identifiers and shipped to a centralized research data warehouse. The stripping and shipping would be done by the nodes having direct access to where those data are stored, that is, to the nodes belonging to the clinician/provider that access the data from their EHRs, and to the patient nodes having access to their PHRs. Nodes having direct access to the research data warehouses would then receive the de-identified patient data. In other words, the clinician and patient nodes would implement their publisher (sender/transmitter) function to transmit the data, and the RHIO/HIE's data warehouse nodes would implement their subscriber (receiver) function to retrieve the data. And the resulting guidelines would be shipped via the RHIO/HIE nodes by implementing their publisher function; the guidelines would be received by the clinician nodes implementing their subscriber functions and subsequently be presented through clinical decision support software programs.
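The strip-and-ship step can be made concrete with a small sketch. This is a hypothetical illustration, assuming patient records are represented as simple key-value structures; the field names are invented for the example and are not from any standard.

```python
# Sketch of the clinician/patient node's de-identification step before
# publishing a record to a RHIO/HIE research data warehouse node.
# The identifier field names below are illustrative assumptions.
IDENTIFIER_FIELDS = {"name", "ssn", "mrn", "address", "phone", "email"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct patient identifiers removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
```

The publishing node would run every outbound record through this filter, so identified data never leave the provider system; the warehouse's subscriber node only ever receives the de-identified copy.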
This scenario is an example of a hybrid mesh node network architecture in which both centralized and decentralized networks work in harmony. BTW, another example of a hybrid mesh is when a multi-site healthcare organization with a centralized EHR system (behind a firewall) connects via nodes to the EHRs and PHRs of other parties outside their organization (beyond their firewall).
Margalit added:
Dr. Beller, it seems other folks are starting to think the same way. It's a start...
And Alexander added:
P2P communication works great when a PCP refers a patient to a specialist or orders a test. And there are already exchange formats widely used for that, such as HL7, CCD, and CCR. But in order to get all patient EHRs through P2P connections, (1) the requester has to somehow find out which peer systems have that information, (2) make sure they are connected, and (3) send a request to each of them. And every EHR application must have its own authentication and authorization module to handle external requests... I just don't see how this may work without an intermediary.
To which Margalit replied:
I agree, Alexander. It won't work on a very large scale without an intermediary or a super node or a translation gateway, whatever we end up calling it. What I like about eCW's announcement is the change in the way vendors are thinking. Exchanging information is finally becoming a worthy goal. As long as they are moving in that direction, every small step is an achievement.
I responded:
I also agree that an intermediary would be useful for larger-scale P2P implementations so that each peer/node can find other peers/nodes during the publisher-subscriber activation process (i.e., when two nodes connect with each other for the first time, which includes authentication and authorization). A RHIO/HIE would be an ideal intermediary supporting such P2P connectivity regionally. A Federal government agency, or even a “supra-RHIO/HIE” node that connects the regional ones, could do this nationwide.
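The intermediary's role during activation amounts to a directory lookup: a node asks the RHIO/HIE where to reach a peer it has never connected to before. A minimal sketch, with invented names, might look like this:

```python
# Toy sketch of a RHIO/HIE peer registry used during publisher-subscriber
# activation. In practice this would also handle authentication and
# authorization; here it only resolves node IDs to e-mail addresses.
class PeerRegistry:
    def __init__(self):
        self._peers = {}  # node id -> e-mail address

    def register(self, node_id: str, address: str) -> None:
        """A node announces its address to the regional intermediary."""
        self._peers[node_id] = address

    def lookup(self, node_id: str) -> str:
        """Resolve a peer's address; raises KeyError if unknown."""
        return self._peers[node_id]
```

A "supra" node connecting regional registries would simply forward lookups it cannot resolve locally to the other regions.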
Another reader (a physician) then commented:
The interesting thing is that the only 'standard' that clinicians use in the daily care of people is English. I think this is unlikely to change, Dr. Beller.
And I replied:
I'd go one step further: I believe our country should be engaged in international collaboration and research, so English isn't even a universal standard.
In any case, using a pub/sub node-to-node architecture, there can be one or more nodes between the publisher and subscriber that serve a data translation/conversion function via mapping methodology. That is, if the publisher uses a local terminology standard “A” and the subscriber uses local standard “B,” then the data can be sent to an intermediary node where corresponding terms are translated into the subscriber’s parlance. This would not only improve communications between clinicians in different regions and facilities, but also in between clinicians in different disciplines. Likewise, the terms could be translated into layman's language when communicating with patients!
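The translation node's job reduces to a mapping table between the two local vocabularies. Here is a minimal sketch; the abbreviation-to-term mapping is invented purely for illustration.

```python
# Sketch of an intermediary translation node: terms in the publisher's
# local vocabulary "A" are mapped into the subscriber's vocabulary "B".
# This sample table is a made-up example, not a real terminology standard.
A_TO_B = {
    "MI": "myocardial infarction",
    "HTN": "hypertension",
}

def translate(terms, mapping=A_TO_B):
    """Map each term; pass unknown terms through unchanged for review."""
    return [mapping.get(t, t) for t in terms]
```

The same mechanism works for clinician-to-patient communication: swap in a clinical-to-layman mapping table and route the message through the same kind of node.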
Another physician responded:
Interesting thread... Some thoughts...
1) The last post about A and B getting translated by an intermediary pretty well describes the desire behind RxNorm (input Multum, First Databank, or other and translate to RxNorm or one of the other systems) -- a good idea, which would be even better if the Government created an open wiki or similar to crowd-source a comprehensive drug-drug interaction system (it would cut about $20/doctor/month off the cost of e-prescribing, now paid to drug data manufacturers).
2) The pub/sub node with a central reporting store describes well the architecture proposed by Carol Diamond and lots of others for an HIE/NHIN infrastructure (e.g., hybrid federated: pub/sub nodes plus a centralized clinical repository).
3) The Vermont Blueprint and VITL exchange championed by Governor Douglas (Vermont's governor, also chair of the National Governors Association this year - different topic, but look at his RxReform platform for accessible, affordable, accountable healthcare - pretty interesting). The exchange started generating data for reporting and for community coordination by doing two things: 1) agreeing on transport (it started as CCR, then moved to CCD - both work, but as Phil Marshall from WebMD stated in his HIT Policy Committee testimony, CCR is easier to use unless one needs CCD for standards reasons), and 2) agreeing on a LIMITED semantic dictionary - collect a few important things in a structured, easy-to-manage fashion, and the system can be used by lots of parties.
Bottom line: it feels as if designing to solve the GOAL of the PROJECT or TASK ends up with a simple, effective solution - the heart of the original post, which was right on target.
To which I replied:
Concerning CCD vs. CCR, I think their reliance on XML makes them both more complicated and inefficient than necessary. I say this because the data they contain can more easily be laid out in a comma-separated value (CSV) file (including any parent-child hierarchies, although those are rarely, if ever, required for health data exchange).
In fact, I've developed an open source app that uses an MS Excel VBA macro to convert a CCD into a much slimmer and more human-readable CSV file. Note that the CSV could be used instead of the CCD for transmitting data from node to node. Nevertheless, CCDs/CCRs are today’s standards and thus cannot be dismissed.
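To illustrate the idea (this is not the VBA macro itself), flattening an XML clinical document into CSV rows can be sketched as follows. The element and attribute names here are invented for the example; a real CCD uses the HL7 CDA schema with namespaces and far more nesting.

```python
# Illustrative sketch: flatten a simplified CCD-like XML fragment into CSV.
# Element/attribute names are assumptions, not the real CDA schema.
import csv
import io
import xml.etree.ElementTree as ET

def ccd_to_csv(xml_text: str) -> str:
    """Emit one CSV row per <observation> element in the document."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["section", "code", "value"])
    for obs in root.iter("observation"):
        writer.writerow([obs.get("section", ""),
                         obs.get("code", ""),
                         obs.get("value", "")])
    return out.getvalue()
```

The CSV output carries the same facts in a fraction of the bytes and is directly readable by a human or a spreadsheet, which is the point being argued above.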
BOTTOM LINE: As our country struggles to transform healthcare into an efficient and effective system, there is great need for a convenient, low-cost, resource-conserving, and secure way to exchange any electronic data residing anywhere that does not require those data to pass through a central server or reside in a central database. This is precisely the kind of decentralized peer-to-peer mesh network architecture, publisher-subscriber communications, and desktop (standalone) applications that I've been describing. It's good to know others are beginning to see the wisdom of this approach!

Note that, while this system currently takes advantage of the power and ubiquity of the MS Office platform, it is not dependent on them, since the same functionality can be built on other platforms.


Recombinant said...

This is an interesting take on the NHIN, HIE, and research data warehousing world and further evidence of the coming convergence between HIE functions and healthcare data warehousing.

I prefer the decentralized peer-to-peer (P2P) thinking of the proposed solution, as well as the simplicity of using Microsoft Office as a platform to share continuity of care document (CCD) messages between physicians. P2P is now infamous from Napster, and it is an ideal way to exchange content without central hubs or repositories because it scales quickly and quietly as participants join. The general idea of an HIE system involves P2P data exchange, but most architectures of today utilize big hubs.

The Microsoft Office-style exchange may work best for small practices, but not for large integrated health networks. EMR implementations such as EPIC and heterogeneous application systems across hospitals and outpatient facilities require centralized interface engines and CCD factories to consolidate interoperability.

A new twist in the development of a national patient identifier is the use of biometrics. This would avoid reliance on patient reported information which is often inconsistent and the cause of privacy issues. Although I personally like the idea, patient privacy folks may not be pleased with the notion of each office keeping a biometric imprint of their patients with the intention of sharing data.

The thought of universal biometrics reminds me of the movie Gattaca. I find it difficult to imagine every hospital and clinic registration system adding a fingerprint swipe or retinal scan to their hardware and software infrastructure. However, it is a clever idea to address the daunting challenge of uniquely identifying patients amongst a few hundred million people before providing medical facts.

I like the idea of adding de-identified feeds at a patient-level into the mix of the NHIN/HIE/RHIO frameworks for the purpose of public health and research. This is the first time I’ve heard of that idea and it might work for some applications. It may only scale for certain applications, because a warehouse is needed to query complex questions such as cohort size estimations. That being said, ePCRN doesn’t differ much from this approach.

Thanks for the thoughtful posting!

Dan Housman
Managing Director, Analytical Applications
Recombinant Data Corp.

Dr. Steve Beller said...

I thank you, Dan, for your thoughtful comment!

The P2P pub/sub architecture I describe is not meant to replace the big-hub centralized architectures currently in use by large integrated health networks. Nor am I saying that centralized EMR implementations should be scrapped for an MS Office-style exchange. Instead, the architecture I'm discussing would complement them. This can be done by adding a node to the hub that has access rights to the central database, thereby enabling communication and information exchange with other nodes outside the network (beyond the VPN). This is ideal for connecting disparate networks (including RHIOs and HIEs) with one another, as well as connecting clinician to clinician, clinician to patient, patient to patient, and authorized individuals outside a network to those inside that network.

Regarding the biometrics, the imprint will be converted to a unique alphanumeric string (a biometric index ID). Neither the string nor the actual biometric imprint will ever be exposed, at rest or in transit, as both will be stored in secure databases and encrypted files.
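One way to picture the imprint-to-string conversion is as a one-way hash of the stored biometric template. This is a simplified sketch under a strong assumption (that the template is byte-for-byte stable across enrollments, which real biometric matching does not guarantee); real systems use dedicated template-protection schemes.

```python
# Hypothetical sketch: derive a fixed-length alphanumeric biometric index
# ID by hashing a stored template. Assumes a stable template; real
# biometric systems require fuzzy matching and template protection.
import hashlib

def biometric_index_id(template: bytes) -> str:
    """Derive a deterministic 16-character alphanumeric ID from a template."""
    return hashlib.sha256(template).hexdigest()[:16]
```

The resulting string can serve as a record-matching key across nodes without the biometric imprint itself ever leaving the enrolling system.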