Tuesday, December 01, 2009

Who should Own a Patient’s Health Data, Where should they be Stored, and How should they be Exchanged (Part 2 of 2)

This post is a follow-up to a prior post at this link. It continues a thought-provoking conversation examining whether the patient data stored in the national health information network (NHIN) will, within the next five years, likely be "owned" by major firms such as Oracle, Google, and Microsoft.
I wrote:
While a patient ought to "own" all their health data, such ownership is not the same as having physical possession of them all. After all, each healthcare provider...has physical possession of the data that they collect. It's UNREASONABLE to expect that all those data (including images) be shipped to the patient for local storage and to ask the patient to release those data each time a provider needs them. Instead, the data should be stored where they are collected.
There is one exception, however: the PHR. All PHR data should always be stored with (i.e., physically possessed by) the patient (preferably, imo, in an encrypted data file), even if collecting data through the PHR is done via a kiosk in a doctor's office or through a provider's web site. Furthermore, all EMR/EHR data (with some possible exceptions, such as a psychotherapist's notes) should be sent automatically to the patient's PHR; and the PHR should have the means to help the patient understand what those clinical data mean.
To deal with the privacy issue, the PHR should possess functionality that enables a patient to identify the particular data that may be shared with particular types of providers. In addition, patients' PHRs should give them guidance and warnings about who should have access to particular data based on their roles and responsibilities. That way, any data stored in a provider's database/warehouse could be shared with third parties only when explicitly authorized by the patient.
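As a rough illustration of the role-based sharing controls described above, a PHR could keep a simple rule table mapping provider types to the data categories the patient has approved. This is only a sketch; the role names and data categories are hypothetical, not a proposed standard:

```python
# Hypothetical sketch: patient-defined sharing rules by provider type.
# Role names and data categories below are illustrative only.
SHARING_RULES = {
    "cardiologist": {"diagnoses", "medications", "labs", "vitals"},
    "dentist": {"medications", "allergies"},
}

def authorized_subset(record, provider_role):
    """Return only the portions of a PHR record that the patient has
    authorized for this type of provider; unknown roles get nothing."""
    allowed = SHARING_RULES.get(provider_role, set())
    return {k: v for k, v in record.items() if k in allowed}
```

A provider's request would then be answered with something like `authorized_subset(record, "dentist")`, which would automatically omit, say, psychotherapy notes.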
Another commenter wrote then:
From a health delivery context, there are a number of stakeholders and providers who use patient information and who contribute to it...But to me ownership also means who decides where the information is to come from, what form it should take, the analysis of it, etc., all questions related to the skills of the medical practitioner. The family physician is the medical practitioner who oversees and looks after the patient's overall health and as such has access to all information contained in a patient's medical record. It is the role of the GP to make diagnoses and recommend treatment, prescribe medications, monitor patient health, refer treatment to other clinical specialists, and give other health-related advice. It seems to me that ownership of the patient's medical health record is a collaboration between the patient and the family physician. The patient has the right to know what is contained in that record, but ultimately it is the GP who decides what goes there and how best to use it.
...From an NHIN or network perspective, there is a physical ownership component. An administrative entity is needed to manage where a medical health record resides, what it will look like, and where it is to be distributed. Different parts of the health record will be supplied by different providers. Standards need to be applied and privacy concerns need to be satisfied. Time is another element: accuracy and timeliness of access to medical information are the two overarching considerations of the NHIN.
I replied:
It sounds like you're describing the Medical Home model with the GP controlling the flow of patient data. In that scenario, the patient would authorize a "community of referral," i.e., "trusted partner" clinicians to which the GP can refer and exchange patient data. I agree that the patient need not specify which data should be exchanged with a particular specialist every time the GP makes a referral. But the patient should indicate, at least once, which data can be shared with different types of clinicians. This can be done, for example, by having the patient approve (or modify) a recommended data set and let the GP decide the particulars within that set of data.
...I see the NHIN containing minimal data sets as defined by standard CCR/CCDs. This patient data subset includes provider and patient identifying data; contacts and advance directives; patient's insurance and financial data; and patient health status, which includes codes for diagnoses, problems, and conditions; allergies; medication prescription information; immunizations; vital signs; recent laboratory results; codes/descriptions for procedures and assessments rendered; history of encounters; and care plan recommendations. By contrast, here's a link to what I consider a comprehensive data set, which includes advanced PHR data and addresses the information needs of the multidisciplinary teams comprising a medical home.
Although an NHIN could make certain important data available to clinicians at great distances, the vast majority of communications are between providers within local/regional HIEs (and other communities of referral), not between those at great distances. So, there's no need for the complexities of a monolithic centralized system for everyday data exchange. It's much simpler, more convenient, and less costly to use a node-to-node pub/sub architecture that relies on desktop/standalone apps and encrypted e-mail attachments. Such a mesh node network model (which resembles the telephone system) makes more sense than forcing all transactions through a central server. The NHIN would be most useful for biosurveillance and for clinical research, since a centralized data warehouse provides an easy way to aggregate huge numbers of de-identified records from around the country. The NHIN would also be a good way to store backups of patients' encrypted data files. And since an NHIN would not contain comprehensive data sets, connecting pub/sub nodes with local data stores to one another in a decentralized manner is a more efficient and secure way to exchange extensive patient data. This is why I propose a hybrid cyber-architecture in which nodes connected to central data stores, along with nodes connected to local data stores, are the primary vehicles of data exchange.
And he then wrote:
Some of the models that I have seen rely on a central backbone for communication and coordination. It follows the SOA pattern and would have nodes connecting to a central highway. It seems that connectivity is a big consideration in being able to collect patient information from a variety of sources and providing front end interfaces for people to access information. Collection might be more onerous in a decentralized model. Implementing a monolithic centralized system certainly has its challenges though. For one, there is a larger burden to get a consensus from all of the stakeholders and to determine the most efficient architecture. I suppose there are disadvantages and advantages to both centralized and decentralized approaches. For example if my home is in New York and I travel to San Francisco and get sick, presumably the hospital in SF would have ready access to my health record in the centralized NHIN. I am not sure how transparent that would be in the more decentralized or node to node implementation. There would be connection issues, knowing who to connect to and login issues etc. But I agree with you there are certainly merits to a hybrid (best of both worlds) approach.
To which I replied:
...I think of a central communication backbone as being the Internet with pub/sub nodes connecting to each other across the central highway by exchanging encrypted e-mail attachments asynchronously.
The front end interfaces I'm proposing are programmable data grid templates used by the node to produce the data files (via a node's publisher function) and to consume and present the data files (via a node's subscriber functions). The software programs used by the publishing nodes automatically (a) retrieve data from any necessary data store (local and remote) by whatever means required (SQL, XML or CSV parsing, etc.); (b) perform any necessary data pre-processing (i.e., data transformations and translations, analytics, etc.); (c) package the resulting data set in an encrypted data file; and (d) attach the file to an e-mail, address the e-mail, put it in the outbox, and ship it to the appropriate subscribing node(s). Corresponding data grid templates, residing with the subscribing node(s), then consume and render the e-mail attachment. All this uses local resources, without the complexities of a big centralized system.
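Steps (c) and (d) of that publishing workflow can be sketched with Python's standard e-mail library. The cipher below is a placeholder only (a real node would use a vetted scheme such as AES or PGP), and the addresses and field names are hypothetical:

```python
import json
from email.message import EmailMessage

def encrypt(payload, key):
    # Placeholder cipher for illustration only -- a real node would use
    # a vetted scheme such as AES or PGP, not this XOR keystream.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def publish(data_set, subscriber_address, key):
    """Steps (c) and (d): package the (already retrieved and
    pre-processed) data set in an encrypted file and attach it to an
    addressed e-mail, ready for the node's outbox."""
    blob = encrypt(json.dumps(data_set).encode("utf-8"), key)
    msg = EmailMessage()
    msg["To"] = subscriber_address
    msg["Subject"] = "Encrypted patient data set"
    msg.set_content("Encrypted data set attached.")
    msg.add_attachment(blob, maintype="application",
                       subtype="octet-stream", filename="dataset.enc")
    return msg  # in practice, handed to an SMTP client for delivery
```

The subscribing node's template would reverse the process: decode the attachment, decrypt it with the shared key, and render the rows in its data grid.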
[Alternatives to having everything stored in a centralized NHIN include]...carrying your encrypted data file (containing anything from a lifetime of health data down to just an emergency data subset) and respective templates on a memory stick or smart card. Another is to have a centralized directory of GP e-mail addresses and patient identifiers whereby your GP's address can be located.
He then responded:
GE Healthcare refers to eHealth as the total healthcare IT infrastructure that connects and adds value to the healthcare delivery system across multiple hospitals or a region, including physicians, care providers, patients, and others. http://www.hospitalhealthcare.com/default.asp?title=Highfocusonpartnershipsandinnovativetechnologies&page=article.display&article.id=19448
Applying the GE definition to an overall strategy not dependent on any one technology but encompassing a number of value-added solutions (a best-of-breed approach, if you will), which could be applied to the design and deployment of an efficient, cost-effective, and improved healthcare IT infrastructure, is close to what you are advocating, I think, Steven. A strategy in which a solution is not locked into any one particular vendor, which rules out the Oracle, Google, and Microsoft monopolies, but matches vendor strengths and functionality to the task at hand.
Another commenter then wrote:
MSFT, Google, and Oracle would not want to "own" or be responsible for the safekeeping of the data. I expect the NHIN will end up being a decentralized network. No one will own the NHIN. The US Government will serve an administrative role.
I then added:
The GE model is close to what I'm advocating. I didn't notice any mention by GE of the inclusion of decentralized, asynch, P2P, pub/sub, mesh node networks--which I claim are essential for connecting all parties--but they didn't exclude it either.
I envision all vendors of health IT apps providing APIs that connect to the nodes, i.e., PHR/PHA apps would connect to consumer-facing nodes, EHR/EMR would connect to provider-facing nodes, and CDS (clinical decision support) apps would connect to the aforementioned apps. In addition, APIs for research-related analytic apps would connect those apps to nodes accessing the centralized NHIN data warehouse for which the Feds have the administrative role. I think this is consistent with the previous comments.
Another commenter then wrote:
The system will need to be portable, secure, and inexpensive. While I have a dog in this fight, I feel smart cards are the way things will/should turn out. The system needs to be architected in a manner in which the data/information follows the patient - the only way to do that is to make it portable, i.e., a smart card (like most of the rest of the world uses). It will need to be secured using the most modern web-based technologies, such as PKI. The solution, we feel, is smart cards designed for healthcare.
And I replied:
IMO, use of smart cards and memory sticks is certainly part of the solution, and numerous vendors are in this niche. Inclusion of PKI is a good idea. The primary issue, I believe, has to do with determining the best ways to get the data stored in such portable storage devices (as well as in other data stores, including DBMSs, XML files, and delimited data files) shipped around the country as needed and accessed by any number of diverse third-party software programs. And that issue has to do with factors such as available bandwidth and connectivity, security, privacy, convenience, simplicity, and, of course, cost. I contend that the node-to-node model I'm proposing provides the greatest overall benefits in those terms.
As such, the smart card reader would be connected to a node, in the same way PCs, servers, memory cards, smart phones, etc. have their node connections. The hybrid mesh node architecture, I further contend, would be the most flexible and useful (see this link).
Where I (my company) have a vested interest is in having the nodes utilize optimally efficient delimited data files, modular data grids templates, and email (SMTP) transport to minimize resource consumption, expense, hassle, etc.
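The delimited-file idea mentioned here can be sketched with the standard csv module: a publishing node's data grid template flattens a data set to compact delimited text, and the subscribing node's template rebuilds the rows. The field names are hypothetical:

```python
import csv
import io

def to_delimited(rows, fieldnames):
    """Publisher side: serialize a data set to a compact delimited file
    (CSV here, purely for illustration)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def from_delimited(text):
    """Subscriber side: consume the delimited file and rebuild the rows
    for presentation in the data grid."""
    return list(csv.DictReader(io.StringIO(text)))
```

The resulting text is far smaller than an equivalent XML document, which is the resource-consumption argument being made above.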
A previous commenter then added:
Many folks including GE, and many of us here are advocating mechanisms to provide an appropriate healthcare IT infrastructure...I was involved for three years on a comprehensive project at a cost of millions to build an eHealth system...The eHealth architecture was a centralized model. Cost was a major factor in this project and as I was leaving a re-think and re-planning effort was being carried out to keep the costs down. It seems that flexibility is one of the key words. I think it is terribly important to [be]...thinking outside the box. 
I then wrote:
I would want my de-identified info sent to a regional HIE and the NHIN for research purposes (at least a minimal data set). And I would consider storing a backup of my entire health info over my lifetime remotely in the NHIN, but ONLY if it was in an encrypted data file for which only I had authorized access. Then--in case I could not access my local copy of the file (e.g., if it was destroyed, if I didn't have it with me on a smart card or memory stick and my PC was unavailable, or if I was unconscious or otherwise incapacitated and the ER docs needed my emergency data)--data sets that I've (previously) authorized could be extracted from that remotely stored file and sent to appropriate providers. I would want this to be done in a node-to-node (n2n) network, so that no human would have direct access to my data file, and I would also want to use biometric indicators as the universal IDs.
Another commenter then wrote:
All those involved in the management of a patient, including the patient (if compos mentis), should be able to have variable access to the patient's data. Ideally the patient should have a health manager (typified previously by the "Family General Practitioner") who delegates the relevant access to the necessary data in order to optimize the patient's management...The patient needs to take responsibility for his own health care management and thus should hold all the keys in all but emergency situations, and this is where biometrics could be used to review critical data.
My thought is that while the patient should have the option to give the GP authorization to have full and complete control of one's health data without any constraints, such global authorization is not mandatory. If a compos mentis patient refuses to allow certain data to be accessed and/or shared, even though it puts the patient in jeopardy, the patient, with ample warning and education, can still prevent those data from being used; doing so, however, would release the providers from liability and may even increase the patient's liability/cost if lack of those data results in worsening health.
Another commenter then wrote:
The NHIN concept will need to involve a lot of technologies to make it work, including patient identification, information access, information sharing, as well as data storage. Concepts including cloud computing, smart cards and/or memory sticks, mesh node networks, and many others will all play into the NHIN in one form or another.
From an historical IT perspective, there has been a long-standing conflict between the "functionally driven" and the "data driven" development models. My position is that a data-driven infrastructure is, in the long run, more effective, secure, and adaptable. It facilitates innovation among vendors and regions, and accommodates changing trends in healthcare services, patient needs, and ultimately the quality of care.
From my "user/patient" perspective, I want to ensure that the information from care received while in the military, as well as the care I received as a child (before I even understood the long-term ramifications involved), is available to my current primary care physician and any specialists. I also want to ensure that they have information that I have forgotten or may not realize is pertinent to any pending care I am about to receive.
To support this, I believe a decentralized model can be built more affordably. However, care must be taken to ensure that a cumbersome set of duplicated data is not created. The worst thing that could happen in the NHIN design would be allowing multiple versions of information to exist for a single patient.
Here are a few of my proposed design requirements:
1) Each provider or stakeholder would continue to have a data repository that is built for speed to allow "current care" efficiencies and reliability (the various EHR initiatives in progress today).
2) Regionally, data warehouses would be created using a common standard for the data architecture (but remaining agnostic from a vendor point of view, such that in one place it may be a Microsoft solution and in another it could be Oracle, etc.). These would form the regional HIOs and become the backbone of the HIE. The "primary" data warehouse for each patient should be located in the region where the most frequent access would occur, such as the one associated with their primary care physician.
3) To complete the NHIN concept, various applications would then be developed that would aggregate the appropriate collections of data from multiple data warehouses for the purpose of satisfying their objectives. I would assume these applications would usually exclude any patient-identifiable data. Otherwise, there needs to be a mechanism for patient authorization of access.
4) As patients travel outside of their regions, local clinics and hospitals who need access to information from the data warehouse would use applications to pull pertinent information specifically associated to the patient for the purpose of providing quality of care (this is where a smart card or some other form of secured patient access tool would be needed). Once this link is established, the regional data warehouse would pull any new data from that facility's repository.
5) If a patient makes a permanent move from one region to another, a set of applications would also exist to move (not copy) the data warehouse information from one region to another. When this happens, some form of an alert could be provided to the local systems/data repositories to place their information in an "inactive" status, or re-link it with the new warehouse.
All of the other technologies and applications associated with the Health IT Infrastructure would then be built and designed based on this model. Some may link to a specific repository associated with a single hospital or provider, relying on the link between it and the regional data warehouse for any long-term information; while others may link directly to the appropriate regional data warehouse.
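Design requirement 5's move-not-copy semantics might look like the following sketch, with region warehouses and local repositories modeled as in-memory dicts purely for illustration:

```python
def relocate_patient(patient_id, source_region, dest_region, local_repos):
    """Move (not copy) a patient's warehouse record to a new region,
    then alert local repositories to inactivate or re-link their data.
    All structures here are hypothetical stand-ins for real stores."""
    if patient_id not in source_region:
        raise KeyError(f"no warehouse record for {patient_id}")
    # pop() guarantees nothing is left behind in the old region
    dest_region[patient_id] = source_region.pop(patient_id)
    for repo in local_repos:
        repo.setdefault("alerts", []).append(
            {"patient": patient_id, "status": "inactive"})
```

The point of the pop-then-assign pattern is that the transfer cannot leave two live copies, which is exactly the duplicated-data hazard the commenter warns about.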
And another added:
Can I throw an exception here? We have a significant number of people in the U.S. who are mentally competent legally but who either won't understand that they have control over their healthcare data or how to exercise that control, or who simply can't be bothered with it. That doesn't mean they have made the decision to relinquish control, however...Any health info policies and technical infrastructures need to take these folks into account...Poor judgment on the part of a not-terribly-bright or enfranchised patient could lead to disastrous medical care.
A commenter then added this:
I am a firm believer that the data should follow the patient and that the patient should retain control in an entirely decentralized manner. Centralizing the data in any way in the US is fraught with failure. Even in England, in a one payer system, they cannot get it done and that project is now over budget by billions of pounds.
Security is an entirely separate subject but the reality is that a username and password...is not going to work. The system will not work if people do not trust it. So trust and encryption and authentication will be paramount.
...In a smart card system, the identities of the patient (regardless of how many institutions they have been treated at) are federated on the card. The card can act as a much stronger security mechanism than anything else being proposed (offering PKI keys, the obvious two-factor authentication model, and a photo on the card itself!), can offer portability and interoperability, is inexpensive, and is both scalable and sustainable.
And I chimed in with:
Although we've been having a largely technical discussion to this point, the last two comments reflect the need for sound governance concerning health data at rest and in transit. Determining whether someone is able, willing, and competent to make decisions about controlling their personal data, and if not, what should be done, is an example of an area for which policy and procedure are necessary. Whatever architectural models are used, they must be flexible enough to accommodate policies that have yet to be established.
I'd like to add to the proposed design the three-tier architectural requirements proposed, I believe, by CMS:
(1) RHIO/regional HIE, (2) state-level HIE, and (3) NHIN.
This goes beyond the local data stores, of course, and as I understand it, the data to be managed by each of these has to do with the relevancy of the data for certain purposes. For example, level 1 would be focused on data related to the local 'community of referral,' i.e., PCP/GPs exchanging patient data with the specialists to whom they refer, as well as data shared between hospitals and outside clinicians. Level 2 focuses on data required for public health, as well as for people in state facilities (nursing homes, prisons, etc.). And the NHIN would be focused on data for people in federal facilities, as well as nationwide biosurveillance (e.g., for communicable disease) and other things affecting public safety. I believe there's more to it, but I think this is the general concept.
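That tiered relevancy could be reduced to a simple routing table: each category of data is directed to the tier where it is most useful, and anything unlisted stays in the local store. The category names here are hypothetical illustrations of the examples above, not standard codes:

```python
# Hypothetical routing of data categories to the most relevant tier.
TIER_OF = {
    "referral_ccr": "regional_hie",      # community-of-referral exchange
    "immunization_report": "state_hie",  # public-health reporting
    "biosurveillance_feed": "nhin",      # nationwide surveillance
}

def route(category):
    """Return the tier that should receive this category of data;
    unlisted categories remain in the local data store."""
    return TIER_OF.get(category, "local_store")
```

Keeping "local_store" as the default is one way to satisfy the goal below of never centralizing data that has no business leaving the local node.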
What particular data sets would be managed by each tier, what data can and cannot be de-identified, the process for feeding data to each tier and exchanging data between the tiers, and issues related to privacy and security are all governance-related decisions. I'm seeking an architecture that would provide the necessary data relevant to the needs of each tier, but in a way that eliminates (or at least minimizes) overlap and (a) avoids storing patient-identifiable data in centralized databases at any of the tiers while (b) transmitting and presenting the necessary data with minimal resource consumption and cost.
A commenter then wrote:
Biometrics will obviate the need to carry data storage devices...The big hurdle will be getting historical data on file and in the format necessary to access it....Education around responsible healthcare and the results of ignorance would be cheaper for governments than adopting multiple methods and levels of responsibility-taking for patients. Determining a level of "legal competence" to decide if a patient retains or loses their right to determine how their data are distributed is a difficult task and requires developing a robust test which takes into account the origin and education of the individual, i.e., special tests formulated for different races/nationalities/religions, etc.
Another one wrote:
The points about compos mentis patients: I am familiar with a term called "breaking the glass." Patients would normally make decisions about their healthcare, but when a patient is incapacitated there is a policy in place to allow other clinical caregivers to make those decisions.
[The]…comments about governance and security are well taken. It would require some form of legislation to be passed that would enact policies for information privacy. Nobody wants Big Brother watching. Security is probably one of the most overarching concerns affecting the implementation of an NHIN.
From what I am reading, aggregated data used for historical trend analysis could be retained in a centralized repository, whereas current data would be local and accessed only by the family doctor and other clinical specialists pertinent to patient care. There are still issues of portability, where a patient's medical information needs to be accessed in locales other than where he resides. Encrypted memory sticks, node-to-node access, etc., are options.
And this:
From a security and privacy perspective, the smart card suggestion has a lot of merit. The readers and updaters would have to be implemented on a national scale to allow the smart card to be read and updated anytime, anywhere. Possibly something accessible through USB would be the most appropriate. With every medical visit the card could be updated. There could be software running in the provider's office to take information from office records for that patient, aggregate it, reformat it, etc., to fit with the electronic health record on the smart card. This approach would be simpler and uses a medium that folks are familiar with. In terms of adding aggregated information to a national repository, providers could download software that would perform the aggregation function. That would probably be voluntary, but the information would aid in formulating more effective healthcare policy.
...Also we need an electronic solution for managing drug prescriptions. There would have to be a system for the doctor to electronically transmit a prescription to a pharmacy...Again security and privacy concerns are central issues...conformance is also a major challenge in getting both clinicians and pharmacists to agree to a standard data format.
To which a commenter responded:
Your comment below is exactly what our HealthID software solution does...We aggregate the data (using HL7 or SOAP/REST) from the HIS or EMR, make it usable for rules and workflow and CCR, and then have some very capable encryption software to write those data to the cards and federate the identities among trusted organizations.
On another blog, a similar conversation was taking place. In it, someone wrote:
I think everybody can agree that patients have a right to see all their medical data and a right to decide who can see what portions of it and be notified of all disclosures of their medical records. I also think that HIPAA already mandates this...My pain point with these new proposals is...it is way too complicated...Unless, we make Internet healthcare equally simple for both doctors and patients, it will not gain adoption...One of the main reasons doctors are not jumping on the EHR bandwagon is the inherent complexity and the lack of proven hard ROI to the doctor. I submit that the same will happen with consumers and PHRs.
...The PHRs that are discussed here and elsewhere require patients to take control of the data. That means setting up the PHR, coming up with provider lists and entering them in the software with proper authorizations for various levels of access. Keeping these authorization lists current. Managing one's credentials and also family members credentials. Making sure that all is up to date. Changing authorizations to various providers and care givers based on changes in health status and on and on....
To which I replied:
It seems to me that with a little creativity and adequate field testing, PHRs can accomplish all that's required...via simple P2P pub/sub node networks.
Let's take the medical home model, for example. Every PCP (GP) establishes a community of referral, i.e., specialists to whom s/he refers patients as needed. The PCP and specialists would establish connections between their decentralized pub/sub nodes, which would enable them to exchange patient data with a few button clicks. The node-based software they use would automatically populate lists of these network connections. By using the e-mail based system I've been presenting, the lists would need little more than each specialist's name, e-mail address, area of clinical licensure, and other possible metadata.
Prior to making a referral, the PCP would discuss with the patient why the referral is being made and explain why a particular specialist is being selected, just like things are currently done. Although no authorization by the patient is needed at this point, the patient may request a different specialist for whatever reason. The PCP would then click a button and the referral e-mail is sent.
Once the PCP receives the specialist's referral acceptance e-mail, the data for a CCR or CCD (or some similar data set) would be sent in an encrypted data file via e-mail to the specialist. But prior to sending it, the PCP's node software would determine which data must be excluded from the data file based on the patient's privacy wishes. These data-sharing authorizations would have previously come from the patient's PHR, the patient's node having sent that information to the PCP's node at an earlier date. The patient would establish the authorizations by, for example, (a) viewing lists showing the types of data that are appropriate for particular types of specialists (and why they are needed) and (b) modifying the list at any time (with appropriate warnings when data elements are deselected). The lists could be organized hierarchically to ease the viewing and selection process. It would even be possible (although I don't know if necessary) to have the data set descriptions e-mailed to the patient for approval prior to routing the data file to the specialist.
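The deselection warnings and the exclusion step described above might be sketched like this; the safety-critical categories are hypothetical examples, not a clinical recommendation:

```python
# Hypothetical categories whose deselection should trigger a warning.
SAFETY_CRITICAL = {"allergies", "medications"}

def deselect(authorized, category):
    """Remove a category from the patient's sharing list, returning a
    warning string when the deselection could affect care."""
    authorized.discard(category)
    if category in SAFETY_CRITICAL:
        return f"Warning: withholding '{category}' may put your care at risk."
    return None

def referral_data_set(ccr, authorized):
    """PCP node: build the specialist's data file, honoring the
    authorizations the patient's PHR node sent earlier."""
    return {k: v for k, v in ccr.items() if k in authorized}
```

The PCP's node would run `referral_data_set` just before the encrypt-and-e-mail step, so the exclusion is enforced mechanically rather than relying on anyone remembering the patient's wishes.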

Related posts:

Thursday, November 19, 2009

Who should Own a Patient’s Health Data, Where should they be Stored, and How should they be Exchanged (Part 1 of 2)


A thought-provoking conversation on LinkedIn (see this link) examines whether the patient data stored in the national health information network (NHIN) will, within the next five years, likely be "owned" by major firms such as Oracle, Google, and Microsoft.

While most (though not all) commenters replied NO, the discussion covered some very interesting topics about data ownership, storage, privacy, and exchange. Following are excerpts.
I replied to the original question this way:
I contend that the ONLY people who should be allowed to OWN a patient's (consumer's) identifiable health data are the patients themselves. The patient may allow other people (i.e., "Trusted Partners") to have access to certain data and to store it securely in centralized databases behind a firewall and/or in distributed encrypted data files stored locally. And when it comes to de-identified data for research purposes, I suggest that those data be available to anyone (e.g., under government control).
The best model for data exchange between the Trusted Partners (TPs) and between the patient and the TPs, imo, is a P2P pub/sub mesh node network resembling telephone networks.
Several others concurred with me about patient ownership and added comments such as:
…interoperability and access to longitudinal patient health data across physicians and time is a burden on bandwidth and very costly…the ownership of data will ultimately be the patient's, but the help of the government in providing a not-for-profit repository where the data sit and are maintained is a must. France is a good example of how this is possible…The best architecture for a national Health Information Exchange will be a technology-agnostic infrastructure, where EHRs are easily aggregated from multiple data sources simultaneously upon request by an authorized healthcare organization…All seem to agree that the "networks of networks" model is a bit cumbersome and that patients should own the data…While I agree that the patient is the ultimate owner of the data, I do not agree that they should be the data aggregator - which means that patients should not be held accountable or responsible for the collection, entry, and management of all their health information.
Another commenter suggested this scenario:
Patient data "lives" in an encrypted "cloud", identified using a Universal Healthcare Identifier as is being developed by Global Patient Identifier, Inc. In order to render the data useless to hackers & thieves, financial & social security data are NOT stored with it.
EHR standards similar to ASC X12 HIPAA transactions, such that an entity (i.e., a Provider) can request the Patient data via a standard internet web-part that is populated based on selected parameters such as: all data, data for a specified time period, specified type of data (Radiology only, Lab results only) - or a combination of these parameters. In this way - data is available anywhere on the globe it may be needed.
We'll still need to devise a way for the Patient to grant access. Also, we'll have to think about controlling the data which was requested and locally populated. Can it then be stored locally / should it be erased? Perhaps this is managed via the TYPE of access the Patient grants to the requesting entity.
To which I replied:
…I suggest that any database in a public cloud should only contain de-identified data from multiple sources for research (aggregate analyses); the cloud could also store back-ups of encrypted data files for each patient with the originals residing on each end user's (clinician's, patient's, organization's) computer hard drive (or network server). These data files would contain patient records made up of data fed from locally stored sources (e.g., EHRs, PHRs, CPOEs), manual inputs, medical device data streams, and so on. A P2P, pub/sub, node network cyber-infrastructure would enable authorized nodes/users to conveniently exchange data sets from patients' data files; to minimize cost and complexity, the files can be exchanged via encrypted e-mail attachments. I'll be offering more details on this novel health data exchange model over the next couple of weeks. See this link.
Note that patient control is enabled by decompositing a locally stored data file based on rules reflecting a patient's privacy wishes, so that only the portions authorized by the patient are exchanged. See this link.
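To make the decompositing idea concrete, here is a minimal Python sketch of extracting only the authorized portions of a locally stored record. The section names, roles, and rules table are illustrative placeholders of my own, not drawn from any real standard or product:

```python
# Sketch: split a locally stored patient record so a recipient receives
# only the sections the patient has authorized for that recipient's role.
# Section names and the rules table are invented for illustration.

PRIVACY_RULES = {
    # role -> sections the patient has authorized for that role
    "primary_care": {"demographics", "medications", "labs", "psychotherapy_notes"},
    "specialist":   {"demographics", "medications", "labs"},
    "researcher":   {"labs"},  # de-identified use only
}

def decomposite(record: dict, recipient_role: str) -> dict:
    """Return only the sections of the record authorized for this role."""
    allowed = PRIVACY_RULES.get(recipient_role, set())
    return {section: data for section, data in record.items() if section in allowed}

record = {
    "demographics": {"name": "John Jones", "age": 55},
    "medications": ["lithium"],
    "labs": {"TSH": 2.1},
    "psychotherapy_notes": "private",
}

for_specialist = decomposite(record, "specialist")
# the psychotherapy notes are withheld from the specialist's data set
assert "psychotherapy_notes" not in for_specialist
```

A real implementation would apply such rules before encrypting and transmitting the extracted data set; the point of the sketch is only that the filtering is mechanical once the patient's wishes are captured as rules.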
Another commenter then wrote:
Consider the information related to a patient to be an "object" in a massive data warehouse. Different data attributes associated with this object are (at least today) "owned" by different people/organizations. For example, some of the provider data is and should be "owned" by the provider and not available to other providers, payers, or patients. I see this as one of today's key perceived barriers to physician/practitioner acceptance of the NHIN model.
Conceptually this design is possible, but a challenge remains with the "physical owner" of the data warehouse, its database and application design characteristics, and its security administration. Ultimately, in any Information System, someone needs to be the "master administrator". Furthermore, the patient does have ownership over who may be authorized to "tag onto" their patient-object (i.e., who have they authorized to provide them care).
One suggestion may be to encrypt the content-data so that the "master administrator" can set the security for various attributes (between provider, patient, payer, government, or other users) without having the ability to access the content. This service can be designed so that the patient may designate these roles, but only for their own patient-object.
I propose that the HIO [health information organization] provide for physical ownership at a "relatively" local level (by metropolitan area or rural region), using cloud computing principles that are updated to incorporate HITECH and related concerns. There needs to be a common interface between these HIO's in order to achieve the NHIN.
To which I replied:
While a patient ought to "own" all their health data, it doesn't mean that such ownership is the same as having actual physical possession of them all. After all, each healthcare provider (from an individual clinician to hospitals to large health systems such as Kaiser and Geisinger) has physical possession of the data that they collect. It's unreasonable to expect that all those data (including images) be shipped to the patient for local storage and to ask the patient to release those data each time a provider needs them. Instead, the data should be stored where they are collected.
There is one exception, however: the PHR. All PHR data should always be stored with (i.e., physically possessed by) the patient (preferably, imo, in an encrypted data file), even if collecting data through the PHR is done via a kiosk in a doctor's office or through a provider's web site. Furthermore, all EMR/EHR data (with some possible exceptions, such as a psychotherapist's notes) should be sent automatically to the patient's PHR; and the PHR should have the means to help the patient understand what those clinical data mean.
To deal with the privacy issue, the PHR should possess functionality that enables a patient to identify the particular data able to be shared with particular types of providers. In addition, patients' PHRs should give them guidance and warnings about who should have access to particular data based on their roles and responsibilities. In that way, any data stored in a provider's database/warehouse could only be shared with third parties when explicitly authorized by the patient.
And please confirm that I understand your proposal: Patient data can be considered an "object" with attribute tags defining those authorized to access data from that object. You say the object would be stored in a massive data warehouse, but there are problems with determining who should physically own the warehouse database and be the "master administrator," as well as with the patient's inability to control who is authorized to add tags to a patient-object. The suggestion is to encrypt the content while allowing the master admin to set those authorization tags in accord with the patient's wishes. Using cloud computing principles to support ownership by regional centers, with a common interface between them, would enable a NHIN.
Assuming my understanding is correct, then what about the following data ownership and exchange model: I agree that patient data ought to be managed as an object. I contend that the object ought to be a data file, preferably an encrypted delimited text file (such as comma-separated value format) to minimize size and overhead. There would likely be multiple data file objects for each patient, stored very locally depending on who entered/collected the data (e.g., on a patient's or clinician's computer, smart phone, memory stick/card, or on a health organization's server, etc.).
For everyday transactions (e.g., when a primary care physician exchanges patient data with the specialists to whom they refer, or when a patient and clinician share data between a PHR and EHR) a desktop or network-based software program would automatically decomposite (break apart) the local data file, extract the authorized data, and ship that data set via an encrypted e-mail attachment using PKI to assure the correct recipient gets it. The recipient can then view those data in a personalized, template-based report. A decentralized node-to-node, pub/sub mesh network could do this exceptionally cost-effectively and with minimal complexity, in addition to increasing security and privacy since the nodes' actions are guided by a rules base requiring no human intervention.
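The "ship it as an e-mail attachment" step can be sketched with Python's standard library. Note this sketch only packages the delimited data set as an attachment; in the model described above, the file would first be encrypted with the recipient's public key (PKI / S/MIME), a step elided here, and the addresses and filenames are illustrative:

```python
# Sketch: package an extracted patient data set as a delimited text file
# attached to an e-mail. The PKI encryption step described in the post is
# deliberately elided; addresses and filenames are illustrative only.
import csv, io
from email.message import EmailMessage

def build_referral_email(sender: str, recipient: str, rows: list) -> EmailMessage:
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)          # the delimited data set itself
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["Subject"] = "Referral data set"
    msg.set_content("Patient data set attached (would be encrypted in practice).")
    msg.add_attachment(buf.getvalue().encode(), maintype="application",
                       subtype="octet-stream", filename="patient_dataset.csv")
    return msg

msg = build_referral_email("pcp@example.org", "specialist@example.org",
                           [["field", "value"], ["age", "55"]])
```

Sending the resulting message via any SMTP server (e.g., with `smtplib`) completes the transfer; the recipient's software would decrypt the attachment and import it automatically.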
Continuing with my proposed model, the NHIN data warehouse would be fed by the same software program in the same manner, with each NHIN server being connected to a node in the network, and with e-mail being the "common interface." When invoking its subscriber function, the NHIN node(s) would automatically retrieve data files sent to it and import those data into its database(s). These files would contain a standardized minimal data set (MDS) based on the CCD/CCR, whereas the data exchanged between healthcare providers, and between patients and providers, would include but not be limited to the MDS. When invoking its publisher function, the NHIN node(s) would send the appropriate data to the appropriate subscriber (provider or researcher) nodes, which may include immunization and disease registry data, biosurveillance data, and de-identified data for cost & quality research. The NHIN would also enable any authorized clinician to access certain patient data residing beyond the confines of the regional data centers. By using these unmanned nodes to carry out the data exchange processes, security and privacy are strengthened, as mentioned above, and the problems associated with a master administrator are eliminated.
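The publisher/subscriber roles described above can be reduced to a small sketch. The topic name "mds" and the payload fields are illustrative; a real node would map topics to standardized document types such as the CCD/CCR and deliver them via e-mail rather than direct calls:

```python
# Sketch of unmanned pub/sub nodes: each node registers handlers for the
# topics it subscribes to, and a publish event is dispatched to every
# subscribing node with no human intervention. Topic and field names are
# illustrative, not from any standard.

class Node:
    def __init__(self, name):
        self.name = name
        self.subscriptions = {}   # topic -> handler function
        self.inbox = []

    def subscribe(self, topic, handler):
        self.subscriptions[topic] = handler

    def receive(self, topic, payload):
        handler = self.subscriptions.get(topic)
        if handler:                       # rules-based: ignore unsubscribed topics
            self.inbox.append(handler(payload))

class Network:
    def __init__(self):
        self.nodes = []
    def attach(self, node):
        self.nodes.append(node)
    def publish(self, topic, payload):
        for node in self.nodes:           # in the post's model, delivery is by e-mail
            node.receive(topic, payload)

net = Network()
nhin = Node("nhin-server")
nhin.subscribe("mds", lambda data: {"imported": data})
net.attach(nhin)
net.publish("mds", {"patient": "anon-123", "labs": {"TSH": 2.1}})
```

Because every node carries both publisher and subscriber functions, there is no central broker whose failure could halt the exchange, which is the robustness argument made later in this post.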
The conversation is continued at this link.

Related posts:

Monday, November 16, 2009

Screenshots of Two Novel Health IT Programs


In an effort to "raise the bar" of health IT creativity and utility, I'm posting screenshots of two novel health IT software programs we've been developing: the Life Chart and CCR-Plus.

Life Chart Program


Our LifeChart program depicts a patient's mental health status, treatments, and significant life events over time in a format that is easy to understand. The example below covers seven years in the life of a bipolar patient, but similar charts can be constructed for patients with other disorders. The data can be entered manually or, if available, imported from any database.



Continuity of Care Record Program


The figure below contains screen shots of the CCR-Plus program, which combines a standard CCR with (a) a unique Warnings section that identifies associations between lab results, signs/symptoms, and medication side effects, and (b) a section displaying many lab test result panels and imaging studies in a uniquely clinically useful manner.

Thursday, November 12, 2009

Simple, Low-Cost, Secure Health Data Exchange

The following diagram presents our novel, economical way to exchange patient health information, from anywhere, easily and securely. The system uses Microsoft Office to send comprehensive patient data between clinicians, via e-mail, with just a few mouse-clicks. This convenient, easy-to-use system does not require additional hardware and works with other software programs.

The system can be used in many ways[1]. The process for sending referral information between a primary care physician (PCP) and one or more specialists, for example, is shown below: (1) After the PCP selects a patient, the specialist(s) to receive the referral, and the reason for it, the referral data are automatically retrieved from his/her EMR. (2) The data are then automatically sent to each specialist via an encrypted e-mail attachment. (3) Once the referral is received, each specialist automatically sends a reply back to the PCP with a few mouse-clicks. (4) After reviewing each reply, the PCP, with one mouse-click, automatically sends patient data to each appropriate specialist in a continuity of care document (CCD). (5) A couple of mouse-clicks by the specialist and the CCD is automatically displayed. The CCD data may then be loaded automatically into his/her EMR or EHR.



What Makes the System Unique

The system employs the patented CP Split™ software method to assemble the data in an organized (meaningful/logical) way using electronic containers ("objects"). It's like arranging a child's building blocks according to some thoughtful plan. But instead of actual blocks (physical containers), a grid template software program is used, which consists of electronic containers into which data, from any sources, are organized sensibly and efficiently in preplanned structures. Step 1 (below) shows sample data for patient John Jones in the grid software.

The software program can not only present the data in the grid (as per step 4), but it can also share any of the data. It does the data sharing by automatically taking the data from the grid template and storing them in a simple encrypted data file, which it attaches to an e-mail and sends to a collection of trusted partner recipients (as per Step 2). When the e-mail arrives, each recipient's software program automatically decrypts the data file and extracts the data it contains. Next, it copies those data to the recipient's corresponding grid template, which is organized according to the same preplanned structure (as per Step 3). It then rapidly presents those data in dynamic (interactive) reports by performing any required analyses[2], adding the labels, and formatting the data—based on their grid locations—in a way that assures the resulting information is readily understandable and useful to each recipient (as per Step 4). Furthermore, the software can easily update the data as needed, as well as exchange the data with other software systems and databases. This unique (patented) process minimizes complexity and cost, while maximizing convenience, security, usability and interoperability[3].
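The shared-template idea is the crux: because sender and recipient hold the same preplanned grid structure, only the cell values need to travel in a small delimited file. A toy sketch (the template layout and field names are invented for illustration; the actual patented CP Split method is more elaborate):

```python
# Sketch: both parties hold the same grid template, so a tiny delimited
# line of values is enough to reconstruct the full, labeled grid on the
# receiving side. Cell addresses and field names are illustrative.

TEMPLATE = [("A1", "patient_name"), ("A2", "age"), ("B1", "systolic_bp")]

def grid_to_file(grid: dict) -> str:
    # serialize the values in template order as one comma-separated line
    return ",".join(str(grid[field]) for _cell, field in TEMPLATE)

def file_to_grid(line: str) -> dict:
    # the recipient rebuilds the grid using the identical template
    values = line.split(",")
    return {field: value for (_cell, field), value in zip(TEMPLATE, values)}

sent = grid_to_file({"patient_name": "John Jones", "age": 55, "systolic_bp": 150})
restored = file_to_grid(sent)
```

The design choice worth noting is that no labels, schema, or markup travel with the data, which is why the exchanged file stays small enough to ride along as an e-mail attachment.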


Footnotes:

[1] Ways the system can be used include: (a) automating the patient referral process; (b) sharing patient data between personal health records (PHRs), electronic medical records (EMRs), and electronic health records (EHRs); (c) exchanging health data within and between health organizations and exchanges (e.g., RHIOs and HIEs); and (d) delivering de-identified data to research and public health databases.

[2] Analyses may include logical (e.g., if-then-else) and mathematical (e.g., statistical) operations to provide useful information in a report that goes beyond the raw data. In the sample report above, a simple example of such analysis is coloring the patient’s age (55) red because he is older than 50. And in the blood pressure graph, levels that are too high are colored red, otherwise green.
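The footnote's coloring rules are simple enough to sketch directly. The age threshold (over 50) comes from the footnote itself; the blood pressure threshold is my illustrative assumption, not a value stated in the post:

```python
# Sketch of the rule-based display logic in footnote 2. The age rule
# (red if over 50) is from the footnote; the systolic threshold of 140
# is an assumed illustrative value.

def age_color(age: int) -> str:
    return "red" if age > 50 else "black"

def bp_color(systolic: int) -> str:
    return "red" if systolic >= 140 else "green"   # "too high" threshold assumed

assert age_color(55) == "red"   # patient John Jones, age 55, shown in red
```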

[3] Interoperability is the ability of a software system to work with other systems without special effort on the part of the end-user.

Related posts:

Monday, November 09, 2009

Health IT: Comparing Cloud Computing and Desktop Applications (Part 2 of 3)


This post is a continuation of the debate about the pros and cons of Cloud versus Desktop computing, which starts at this link.

The cloud computing expert I've been debating responded to my comments by writing:
We are just from different camps of software thinking. I have a CS degree from a college that taught on Unix machines, not desktops, and my entire experience is working with enterprise systems, not desktops. I was an early adopter of Java and attended the first JavaOne, where the mantra was "The Network is the Computer". Eric Schmidt, CEO of Google, is doing his best to make this happen. Yes, they are looking to the future, but Microsoft seems to be stuck in the past.
I reply:
Yes, we are from different schools and thus have very different points of view. We both see the value of networks, with you focusing on networking computers within an enterprise ("behind the firewall"), and me on enabling information exchange between "disparate islands of information" owned/controlled by (a) vastly different organizations and individuals with vastly different communications (from continuous broadband to occasionally connected dial-up) and (b) widely diverse information needs (including linking hospitals and clinics, individual clinicians across all healthcare disciplines, public health institutions, research organizations, as well as individual patients).
And while I am discussing the value of novel applications that use e-mail to enable all standalone (desktop/laptop/notebook) computers to have publisher and subscriber functionality (i.e., they have server-like functionality that enables P2P data transmission), the technology I'm proposing need not be Microsoft-based since it can use any kind of automated spreadsheet grids and e-mail (although I do think the MS Office suite is currently the best, or at least the most ubiquitous; OpenOffice or other tools could be used instead).
The conversation continues…

In response to my comment: "Since there are concerns about the security of data stored in the Internet cloud, people may feel more secure if they have complete control over their private information (such as personal health information), which is stored in encrypted data files in their own computers and other computerized devices," he wrote:
Cloud is just as safe or safer than office machines under desk. You can just take one home
To which I reply:
It's true that no matter what solution is deployed, security must be taken seriously. Examples of Cloud security risks can be found at http://tinyurl.com/ykv6s6o and http://tinyurl.com/lzr3gg; it's good that many people are working hard to manage those risks. My point is that storing a patient's health record in an encrypted data file residing on a local HD (hard drive) is much less complex and costly than securing the Cloud. This is especially true when data have to be shared with people outside an enterprise's firewall.
In response to my comment: "Total cost of ownership is minimized since there is no need to rely on expensive central servers, databases, and server administrators," he wrote:
The cost [of expensive central servers, databases, and server administrators] is amortized, that is the point. And single machine are much more expensive than enterprise (depending on the size of staff).
To which I reply:
I assume you're comparing thin versus thick/fat clients. With full-power PCs continually coming down in price (many just a few hundred dollars), the cost difference can be minimal. And with the PC, you get the added benefits of fewer server requirements (a thick client server does not require as high a level of performance as a thin client server, resulting in drastically cheaper servers); offline working, meaning a constant connection to the central server is often not required; better multimedia performance (thick clients have advantages in multimedia-rich applications that would be bandwidth intensive if fully served); more flexibility (on some operating systems, software products are designed for PCs that have their own local resources, which are difficult to run in a thin client environment); use of existing infrastructure (as many people now have very fast local PCs, they already have the infrastructure to run thick clients at no extra cost); and higher server capacity (the more work that is carried out by the client, the less the server needs to do, increasing the number of users each server can support) [see http://tinyurl.com/yl7tqxw]. And when it comes to exchanging data with people beyond an enterprise's firewall (e.g., connecting with an independent clinician's EHR or patient's local PHR), the thin client is not an option for the remote individual!
In response to my prior comment: "All the information can be accessed anywhere/anytime, even if there is no Internet or other network connections," he wrote:
?? no network, are you talking sneakerNet? You can do the same with servers but....
To which I reply:
No, I'm not talking about handing a disk to someone. What I am talking about is using an encrypted delimited data file, stored locally on a PC hard drive, which contains all the information needed on a patient (or on multiple patients for an aggregate report). No network, no problem. And when there is even a momentary resumption of the network, an e-mail with an attachment containing a portion of a patient's health dataset can be delivered or received with only a second or two of connectivity. I'm not talking about a centralized enterprise system, however.
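The "works offline, sends the moment the network returns" behavior amounts to an outbox queue. A minimal sketch (the transport callable is a stand-in for a real e-mail send; names are illustrative):

```python
# Sketch: queue outgoing data sets while offline and flush them as soon
# as connectivity resumes. The transport function is a stand-in for a
# real e-mail send that raises ConnectionError when the network is down.

class Outbox:
    def __init__(self, send):
        self.send = send
        self.queue = []

    def submit(self, message):
        self.queue.append(message)
        self.flush()                    # try immediately; harmless if offline

    def flush(self):
        while self.queue:
            try:
                self.send(self.queue[0])
            except ConnectionError:
                return                  # still offline; keep the message queued
            self.queue.pop(0)           # delivered; drop from the queue

sent = []
online = False

def transport(msg):
    if not online:
        raise ConnectionError
    sent.append(msg)

box = Outbox(transport)
box.submit("patient-dataset.csv")       # queued while the network is down
online = True
box.flush()                              # delivered once connectivity resumes
```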
In response to my prior comment: "Unlike communications requiring continuous connectivity, there is no loss of data when a network connection drops out (i.e., unexpected disconnection)," he wrote:
How about when you lose an HD or power in the building?
To which I reply:
If there's no UPS (uninterrupted power supply) on the PC or the HD fails, then you're right, there is no data to be viewed or shared.
In response to my prior comment: "There is no single point of failure to disrupt an entire network when a central server develops problems," he wrote:
Same argument but desktop have no redundancy.
To which I reply:
In addition to having backups (on- and off-premises) and UPSs, the kind of P2P, pub/sub, desktop-based mesh node networks I'm discussing means that no single node (peer) on the network can prevent other nodes from working, since there is no central server (i.e., each desktop node functions like its own server through use of e-mail). If the entire Internet fails (due to a natural disaster, terrorist attack, etc.), the local data files would still be available for local data access. But what about data exchange? In such a situation, the networks I'm proposing would have a communications "auto-failover" process by which the best available alternative method of data transmission (such as dial-up, radio, or satellite communication) would be used to send the e-mail attachments from anywhere to anywhere, thereby enabling efficient emergency data exchange.
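The auto-failover process reduces to trying each transmission method in order of preference until one succeeds. A sketch, with the transport names purely illustrative:

```python
# Sketch of communications "auto-failover": attempt each available
# transmission method in order of preference until one carries the
# message. Transport names are illustrative placeholders.

def send_with_failover(message, transports):
    for name, send in transports:
        try:
            send(message)
            return name                 # report which channel succeeded
        except ConnectionError:
            continue                    # channel down; try the next one
    raise RuntimeError("all transports failed")

def broken(msg):
    raise ConnectionError               # simulates an unavailable channel

delivered = []
channel = send_with_failover(
    "emergency data set",
    [("internet", broken), ("dialup", broken), ("satellite", delivered.append)],
)
```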
Also consider that central servers can never be immune to failure as evidenced by a recent disruption of an online EMR reported at this link.
In response to my prior comment "Since copies of the encrypted data files can be stored in many different locations (widely distributed), data survivability is enhanced," he wrote:
??? more sneakerNet?
To which I reply:
While physically exchanging disks or memory sticks containing the data file is one way to do it, by "data survivability" I'm referring to the type of emergency situation in my previous reply. When that happens, transmitting locally stored data files via e-mail using auto-failover communication methods provides a better solution than systems requiring Internet access to a centralized database.
In response to my prior comment "Maintenance has improved with automated desktop updates. On hosted systems, furthermore, users are at the mercy of the host; so, if an upgrade does not go well, or the individual user doesn't want or need the new features, the upgrade will still go forward," he wrote:
automated desktop updates require a host.
To which I reply:
Yes, but you don't have to install the update/upgrade, and can even reverse it if you installed it but don't like it.
In response to my prior comment: "There is greater security risk when running a web application online over the Internet than when running a desktop application offline," he wrote:
This is incorrect. Only if the system is not been setup correctly
To which I reply:
I realize there are ongoing debates about security, so here are a few quotes from people who argue that web applications are less secure than desktop applications:
  • "There are always risks involved when dealing with working online; regardless of how secure a host might say a web application is, the fact of the matter stands that the security risk of running an application over the Internet is more significant than when running an application on a standalone desktop computer. Some applications require more security than others: playing Sudoku on a web application would cause little concern, but dealing with sensitive corporate formulas or accounting details in a web environment might be deemed risky." [Reference]
  • "Security - web applications are exposed to more security risks than desktop applications. You can have total control over the standalone applications and protect them from various vulnerabilities. This may not be the case with web applications as they are open to a large number of users in the Internet community, thus widening the threat." [Reference]
  • "Security: Working online has its own set of risks like hacking and virus threats. The risk is higher compared to a desktop computer, since a malfunction of the desktop can result in loss of partial data. The crash of a web server can result in consequences beyond the control of a business." [Reference]
  • "Local applications installed on your computer give you better security and do not require a connection to the web. Also, in many cases, local applications provide better integration with the operating system." [Reference]
In response to my prior comment: "Over the life of the software use, web applications are typically significantly more expensive because desktop applications are purchased outright and there are rarely recurring fees for the software use," he wrote:
This depends on a lot of factors but normally subscription based software is cheaper.
To which I ask: On what do you base this conclusion?

In response to my prior comment: "Desktop applications typically operate faster because they are not affected by Internet traffic, server use, and latency," he wrote:
Only if the system was not setup correctly. You need Enterprise engineers to setup enterprise software.
To which I reply:
I guess by spending enough money and time on maximizing speed, you could reduce server processing time, queue waiting time, and network latency during high traffic. But even then, you'd be hard-pressed to have a sophisticated healthcare web application operate as quickly as a desktop application on a moderate-power PC.
In response to my prior comment: "When using a web application that is hosted by a third party, privacy policies should be in place to prevent that data from being used by the web host," he wrote:
True, same for desktop privacy policies
To which I agree.

In response to my prior comment: "Multiple desktop applications can be integrated and used, enabling a model's functionality to be enhanced by other software programs. This cannot be done securely using a web browser," he wrote:
Web apps can talk to each other that is the purpose of EAI.
To which I replied:
The main challenges of enterprise application integration are reported to be:
  • Constant change - The very nature of EAI is dynamic and requires dynamic project managers to manage their implementation.
  • Shortage of EAI experts - EAI requires knowledge of many issues and technical aspects.
  • Competing standards - Within the EAI field, the paradox is that EAI standards themselves are not universal.
  • EAI is a tool paradigm - EAI is not a tool, but rather a system and should be implemented as such.
  • Building interfaces is an art - Engineering the solution is not sufficient. Solutions need to be negotiated with user departments to reach a common consensus on the final outcome. A lack of consensus on interface designs leads to excessive effort to map between various systems data requirements.
  • Loss of detail - Information that seemed unimportant at an earlier stage may become crucial later.
  • Accountability - Since so many departments have many conflicting requirements, there should be clear accountability for the system's final structure.
  • Emerging Requirements - EAI implementations should be extensible and modular to allow for future changes.
  • Protectionism - The applications whose data is being integrated often belong to different departments that have technical, cultural, and political reasons for not wanting to share their data with other departments.
Furthermore, there are high initial development costs, especially for small and mid-sized businesses, and a fair amount of up-front business design is required, which many managers are not able to envision or not willing to invest in. Most EAI projects start off as point-to-point efforts, very soon becoming unmanageable as the number of applications increases. [Reference]
Also see http://www.sdtimes.com/content/article.aspx?ArticleID=30776
The bottom line, imo, is that there is a place for centralized web-based enterprise networks residing in the cloud. But they don't even come close to supplanting the need for low-cost, secure, standalone/desktop-based, P2P mesh node networks that can cross firewalls without hassle, work offline using local computer resources, and exchange health information via multiple communication methods using e-mail.

The conversation continues at this link.

Related posts:

Sunday, November 08, 2009

Health IT: Comparing Cloud Computing and Desktop Applications (Part 1 of 3)


I just responded to a comment posted to Linked-In at this link. The comment began:
Health care IT is moving from desktop applications to complex, multi-faceted enterprise systems. The OS, wireless devices, and databases are much more sophisticated than even a few years ago. Enterprise security takes a skill set that is not readily available in much of the health care industry. Staff HIT most likely will not have the expertise to harden these systems. The needed skills vary with the system(s) and network. As the HIE rolls out, security will become more complex. But in saying that, many non-technical requirements can be covered today with written policies and enforcement.
I replied:
Better yet ... Stick with desktop applications that exchange patient data via secure email using innovative decentralized peer-to-peer cyberarchitectures. See, for example, http://curinghealthcare.blogspot.com/2009/09/novel-way-to-exchange-patient-health.html
To which the commenter responded:
Stephen, Interesting, but I see desktop apps such as Microsoft Excel giving way to Cloud computing. Desktop apps no longer make financial sense from a maintenance standpoint.

A federated system is a great way to exchange data but we will have to see what HITSP, (for one) comes up with.
I then replied with the following:
While desktop apps are becoming Cloud enabled (which can be a good thing in certain circumstances), I do not see desktop apps ever being replaced by the Cloud for many reasons. There are distinct pros and cons of each depending on the situation and use case. For example, accessing and processing data from local storage using local computer resources (as opposed to the cloud) has many benefits, including the following:
  • Since there are concerns about the security of data stored in the Internet cloud [Reference1, Reference2, Reference3], people may feel more secure if they have complete control over their private information (such as personal health information), which is stored in encrypted data files in their own computers and other computerized devices.
  • Total cost of ownership is minimized since there is no need to rely on expensive central servers, databases, and server administrators. Also, there are times when the cloud is more expensive than alternatives (Reference1, Reference2, Reference3).
  • Performance is greatly increased when performing complex, intensive computations, since all data processing is done quickly and easily using local computer resources, rather than waiting for a strained central server or paying for expensive racks of servers. However, when massive computations against huge centralized databases must be done, for which a local PC is inadequate, cloud and grid-cloud computing may serve an important function (although grid computing alone may suffice). [Reference]
  • All the information can be accessed anywhere/anytime, even if there is no Internet or other network connections.
  • Unlike communications requiring continuous connectivity, there is no loss of data when a network connection drops out (i.e., unexpected disconnection) [Reference]
  • A node-to-node network [e.g., a mesh node network] is more robust. In web-based networks, a central server breakdown may cause the entire network to shut down and prevent anyone from exchanging data. In the node network, however, a malfunction in one or even many individual computers may have little or no effect on the network as a whole since functioning nodes can still communicate with each other. In other words, there is no single point of failure to disrupt an entire network when a central server develops problems.
  • Since copies of the encrypted data files can be stored in many different locations (i.e., widely distributed), information survivability is enhanced in the face of terrorism and natural disasters.
  • And following are key advantages and disadvantages of web versus standalone applications:
  • Web applications are easily accessible because they can be reached from any computer or location that has Internet access; a standalone application must first be installed on the computer. On the other hand, once the standalone application is installed, it is accessible anywhere/anytime, even when there is no adequate Internet connection; web applications, however, typically rely on a persistent Internet connection, or else the data are inaccessible.
  • Maintenance and forced-upgrade costs are lower with web applications when a company must manage hundreds or thousands of desktop computers, although this has become less of a problem with improvements in automated desktop updates. On hosted systems, furthermore, users are at the mercy of the host: if an upgrade does not go well, or an individual user doesn't want or need the new features, the upgrade will still go forward.
  • Over the life of the software, web applications are typically significantly more expensive over time because desktop applications are purchased outright and rarely carry recurring fees (aside from possible maintenance fees or fee-based upgrades). Many corporate web applications, however, charge users monthly service fees (i.e., "subscription fees") to operate the software. [Reference]
  • Web applications, relying on the Internet to transfer data rather than using a computer's local hard drive, may operate more slowly. Speed may also vary with the number of users accessing the application (i.e., network traffic). Standalone applications have no such constraints; the application will operate as fast as the person's computer allows.
  • When using a web application that is hosted by a third party, privacy policies should be in place to prevent that data from being used by the web host. This is not an issue with standalone applications.[Reference]
  • Multiple desktop applications can be integrated and used together, enabling a model's functionality to be enhanced by other software programs. This cannot be done securely using a web browser. [Reference]
  • "Because all computation is done on the computer that the application is running [offline], the amount of data transmitted over the internet is reduced…In the case of web based application the data is passed back and forth between the client and the server each time a new calculation is to be done. If many clients are connected to the server at the same time this leads to allot of processing on the server and the power of the clients is not used."[Reference]
  • While standalone applications may be platform dependent (e.g., able to operate only on Windows computers), it is possible to build platform-neutral applications that avoid this constraint.
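As a rough sketch of the encrypted-local-file approach argued for above, the example below shows a standalone application writing and reading a password-protected PHR record using only the standard library. Everything here is an assumption for illustration: the record fields are invented, and the SHA-256-based XOR keystream is a toy stand-in — a real PHR would use a vetted encryption library (e.g., authenticated AES), not this construction.

```python
import hashlib
import json
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream built from SHA-256 (illustration only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_record(record: dict, password: str) -> bytes:
    """Serialize a PHR record and XOR it against a password-derived keystream."""
    nonce = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), nonce, 100_000)
    plain = json.dumps(record).encode()
    cipher = bytes(a ^ b for a, b in zip(plain, _keystream(key, nonce, len(plain))))
    return nonce + cipher  # nonce travels with the file so it can be decrypted later

def decrypt_record(blob: bytes, password: str) -> dict:
    """Reverse of encrypt_record: re-derive the key and XOR the ciphertext back."""
    nonce, cipher = blob[:16], blob[16:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), nonce, 100_000)
    plain = bytes(a ^ b for a, b in zip(cipher, _keystream(key, nonce, len(cipher))))
    return json.loads(plain)

blob = encrypt_record({"patient": "Jane Doe", "bp": "120/80"}, "correct horse")
print(decrypt_record(blob, "correct horse"))  # round-trips to the original record
```

Because the encrypted blob is an ordinary file, the patient can keep copies in many locations (a USB key, a home PC, a backup disk), which is exactly the survivability property discussed earlier.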
Also see, for example: http://www.slate.com/id/2188015/, http://www.itbusinessedge.com/cm/blogs/byron/in-cloud-computing-vs-desktop-its-the-data-stupid/?cs=31286, http://www.filterjoe.com/2009/05/29/the-desktop-or-the-cloud/, and http://www.inquisitr.com/26717/the-cloud-vs-the-desktop-an-irrelevant-argument/
And I don't understand your claim that "desktop apps no longer make financial sense from a maintenance standpoint." After all, as mentioned above, with the automated update methods now built into many desktop apps (including MS Office), maintenance is easy, reliable and free.

We do agree, however, that a federated system is a great way to exchange data within and between healthcare organizations, as well as between them and individual clinicians and small practices. It would be foolish for HITSP or other standards bodies to eliminate email (SMTP) transport as a viable means of health data exchange, because it is a simple, low-cost and secure method everyone understands.
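A minimal sketch of what SMTP-based health data exchange could look like, assuming hypothetical addresses and server names: the record is encrypted before it ever leaves the sender, and the e-mail itself is just a carrier for an opaque attachment.

```python
import smtplib  # imported to show the transport step; sending is commented out below
from email.message import EmailMessage

def build_exchange_message(sender: str, recipient: str, encrypted_payload: bytes):
    """Package an already-encrypted health record as an ordinary e-mail attachment."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Encrypted health record"
    msg.set_content("The attached file is encrypted; decrypt it with your shared key.")
    msg.add_attachment(encrypted_payload,
                       maintype="application", subtype="octet-stream",
                       filename="record.enc")
    return msg

msg = build_exchange_message("clinic@example.org", "patient@example.org", b"\x00\x01...")
# An actual deployment would send over a TLS-protected connection, e.g.:
# with smtplib.SMTP("smtp.example.org", 587) as s:
#     s.starttls(); s.login(user, password); s.send_message(msg)
```

Since the payload is encrypted end-to-end before mailing, the intermediate mail servers never see plaintext health data, which is what makes this simple transport defensible.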
The debate continues at this link.

Related posts:

Wednesday, October 14, 2009

Promoting Community Wellbeing by Fostering Mind-Body-Spirit Development

I just submitted an idea to Changemakers, an organization that describes itself as "...a community of action where we all collaborate on solutions." They are looking for ideas about how to improve mental health and community wellbeing throughout the world. The title of my idea is "Promoting Community Wellbeing by Fostering Mind-Body-Spirit Development." Following are several sections of my submission; the bold headings are the questions they wanted answered. For the complete submission, please visit this link.

What is your idea?

To provide tools that help transform people's lives, in ways that promote community wellbeing, by enabling and motivating individuals to develop their minds (including their beliefs, behaviors and emotions), bodies, and spirit by following life paths guided by profound insight, rational thought, empathy and compassion.

Innovation: What makes your idea unique?

The information it collects includes details about people's core thoughts, feelings, behaviors and health status, as well as the influences of their psychosocial (including economic, political, cultural and religious) and physical environments. In addition to helping improve people's physical and mental health, knowledge gained from this information would likely have a global societal benefit by revealing how much alike people are—regardless of the societies in which they live—by (a) showing how we share many similar core beliefs, perceptions, values, desires and emotions, and (b) clarifying the influence of psychosocial and natural environments. This knowledge could lead to profound understanding of what it means to be human, as well as dramatically increase the empathy we have toward each other by enabling each of us to "put ourselves in others' shoes." And empathy can breed compassion; together they are essential for the wellbeing of communities and express humankind's spiritual potential.

Problem

Six key problems addressed are:

  • The healthcare industry's failure to adequately address the mind-body connection when providing care (link)
  • The difficulty consumers have making informed decisions, solving personal problems, and taking better care of themselves (link)
  • The need for better evidence-based research, as well as more focus on guideline development, dissemination, and use
  • The need for better protection of populations via biosurveillance
  • Our culture's drift away from empathy and compassion toward a less spiritual set of values (link)
  • The healthcare system's failure to focus on delivering high value (top quality at low cost) to the consumer (link).

What was the defining moment that led you to this innovation?

The idea for the innovation began in the early 1980s as I (Dr. Stephen Beller) began my clinical psychology practice, and it has been evolving ever since. At that time I started wondering how to obtain, manage and use comprehensive details about my patients' psychological conditions to help me deliver the best possible care: by enabling profound understanding of my patients' problems, determining the best courses of action (treatment planning), evaluating outcomes (the results/consequences of such actions), and continually learning from experience. Since the personal computer (PC) had just entered the market, I figured that using a PC for this purpose was a reasonable thing to do, so I purchased one and began working with spreadsheets.

By the mid 1980s, my efforts led to the creation of a software program that I used in my practice to collect, analyze, and report information about people's stressful/troublesome life situations, emotional disturbances, maladaptive ways of thinking and acting, psychosocial experiences, and traumatic events. I soon realized that I not only wanted to learn about my patients/clients' mental health problems, but I also wanted a way to know about any related physiological/biomedical factors that were affecting them. My colleagues and I then set out to create the first information technology providing a comprehensive, in-depth, "biopsychosocial" view of patients' conditions and treatments.

This led to a 15-year journey of intensive, cross-disciplinary research and health IT innovation. During that time, I:

  • Created a universal lifetime computerized patient record system and a suite of decision-support tools for healthcare professionals and consumers
  • Published a blueprint for a national health information network
  • Used the knowledge gained over the years to obtain a patent for a novel process for exchanging and presenting information
  • Presented my ideas and creations to others while establishing international relationships.

Tell us about the social innovator behind this idea.

As the innovator behind this idea, my life goal is to work with others to help re-direct the course of humankind, so we don't have to be ashamed of the world we're leaving our children. Toward that end, I've spent the past three decades in creative pursuits, including inventing unique software systems, writing about the healthcare crisis and cures, and developing close personal and professional bonds with fine individuals across the globe.

I'm currently involved in a wide range of activities devoted to:

  • Healthcare reform
  • Consumer empowerment
  • Continuous improvement of care quality and efficiency using evolving evidence-based guidelines
  • Improving the health and wellbeing of the elderly and impoverished, promoting community wellness, and providing diabetes education
  • Development of novel cost-effective software tools and cyber-infrastructure for the secure exchange, analysis and presentation of meaningful information to healthcare professionals and patients
  • Protecting populations and supporting first responders and trauma department staff in disaster situations.

Being an outspoken critic of our current healthcare system for the past fifteen years, offering disruptive health IT innovations whose full appreciation requires a paradigm shift in thinking, and focusing on bringing high value to the consumer—all these things have made my journey very difficult and frustrating, as well as spiritually fulfilling. Nevertheless, thanks to the Internet, I've been able to develop many wonderful relationships with people in our country and abroad. And thanks to the "flattening of the world" and growing awareness that we must change the way business is done and people are treated, I am for the first time optimistic that social innovation can have a positive and sustainable impact on our species.

Monday, October 12, 2009

Increasing Global Empathy by Understanding the Essence of Humanity

For the past three decades, I've been envisioning ways to foster the realization of positive human potential and wellbeing. Toward that end, I've long imagined a world where people from all nations, societies, ethnicities and faiths have a deep comprehension of the thoughts and feelings common throughout all humankind (i.e., the "essence of humanity").

This knowledge would reveal how much alike we all are by showing how we share many similar core beliefs, perceptions, desires and emotions. It would also clarify the ways in which psychosocial (including economic, political, cultural and religious) and natural environments influence and differentiate people's mental states. And it could lead to a profound understanding of what it means to be human, as well as dramatically increase the empathy we have toward each other by enabling each of us to "put ourselves in others' shoes." And empathy can breed compassion; together they are essential ingredients for the wellbeing of communities.

Collecting information about people's core thoughts and feelings, and the influences of their psychosocial and natural environments, can be done through the use of anonymous questionnaires. The questionnaires would have to contain adequate detail, be written in many different languages, and take into account the customs of different cultures. Developing the questionnaires would be an international collaborative project that could very well promote wellbeing among the participants as they focus on mutual understanding. For people with computers and Internet access, the information can be input into web forms or sent via e-mail. For others, field workers would be used (similar to the census).

Analysis of the resulting information would reveal common human thoughts and feelings about oneself, family, community and obligation to others; about people's perceptions of the past, present and future; about our wishes, virtues (what's good/right) and values (what's important), degrees of optimism and pessimism, sense of empowerment and helplessness & hopelessness; about our physical and emotional health, our pains and pleasures, our hopes and fears, our angers and frustrations, and feelings of shame, guilt, jealousy and envy; about our beliefs concerning life meaning and purpose; etc. The analysis would also delineate differences in our thoughts and feelings based on our demographics and the psychosocial and natural environments in which we were raised and now live.
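A toy sketch of the kind of aggregate analysis described above — all regions, measures, and ratings below are invented purely for illustration, not drawn from any real questionnaire:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical anonymous responses: a demographic field plus a 1-5 optimism rating.
responses = [
    {"region": "north", "optimism": 4},
    {"region": "north", "optimism": 5},
    {"region": "south", "optimism": 3},
    {"region": "south", "optimism": 4},
]

def summarize(responses, group_key, measure):
    """Average a rating within each demographic group, and overall."""
    groups = defaultdict(list)
    for r in responses:
        groups[r[group_key]].append(r[measure])
    by_group = {g: mean(vals) for g, vals in groups.items()}
    overall = mean(r[measure] for r in responses)
    return by_group, overall

by_region, overall = summarize(responses, "region", "optimism")
print(by_region, overall)
```

The same `summarize` call could be repeated for any demographic key (age band, language, faith), which is how the analysis would delineate both common ground and environment-driven differences.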

By disseminating the results throughout the world, by understanding how much alike all humans are in their core ways of thinking and feeling, and by encouraging ongoing global discussion about them, I contend that empathy will increase throughout the world as ethno-cultural barriers are breached to give rise to a greater sense of oneness/unity. This would foster greater wellbeing in the worldwide community!

Thursday, October 08, 2009

Convergence of 3 Core Healthcare Reform Issues: American values, personal responsibility, and pragmatic solutions

In the past few weeks, there's been a wonderful convergence of discussions focused on core issues underlying our country's healthcare reform debates; these issues are: American values, personal responsibility, and pragmatic solutions for a sustainable healthcare system.

For example, in the past two weeks, the Hastings Center posted the following blogs:
And the Health Affairs blog posted this: American Values And Health Reform

What's most exciting is that these issues go to very heart of who we are as a society, what we truly consider important as individuals, our level of spirituality, our degree of empathy and compassion, and our ability to think rationally/sensibly about the present and future. I've been writing extensively about such topics on this blog for several years; following are some links and summaries:
  • Personal Responsibility: A Thorny Issue in Healthcare Transformation – Explains why there's so much to consider when judging healthcare reform policies in terms of promoting personal responsibility.

  • Criteria for a Sustainable Health System – Presents 4 goals that any government healthcare reform proposals ought to focus on achieving, i.e., promoting greater Self-Discipline, Personal Responsibility, Empathy and Compassion for the least advantaged (social responsibility), and Public Accountability (transparency). And offers 8 objectives that relate to achieving those goals, i.e., Balance Investment & Spending, Balance Savings & Borrowing, Balance Conservation & Consumption, Balance Endowments & Entitlements, Connect Ends & Means (resource availability), Connect Should/Must Dos & Can Dos (priorities), Preserve Security/Protection, and Preserve Rights/Freedoms (opportunity and liberty).

  • A Principled and Pragmatic Approach to Healthcare Reform – Discusses two related issues:

    (1) How principled strategies for healthcare reform should be guided by empathy ("putting yourself in others' shoes" to understand what they are going through) and compassion (caring what others are going through and doing what we reasonably can do to help those in distress).

    (2) How a pragmatic strategy ought to find fair and effective ways to pay for the tactics aimed at realizing the two main objectives of a principled strategy: (a) providing universal coverage and (b) continually improving care effectiveness and efficiency leading to ever-better and more affordable approaches to care. Explains how this is made difficult by our society's tendency to focus on short-sighted, quick-fix solutions that are short on empathy and compassion for the public good, and also by our culture's failure to promote self discipline and personal responsibility & accountability. And it points to the need for substantial governmental reform aimed at minimizing lobbyists' influence, quid pro quo favors to party benefactors, operational inefficiencies, etc.

  • How to Reform Healthcare Sensibly: Focus on Two Clear Goals and Low-Cost, High-Quality Care In America: A Reply – Discusses how the focus of the current healthcare reform debate is out of balance, since (a) issues of money and insurance are by far the main focus, (b) issues of quality and knowledge are a minor focus, and (c) issues of empathy and compassion are mostly out of focus. Explains how focusing on all these issues in a more balanced way is absolutely essential for creating a sustainable, high value system in which everyone: (a) has access to excellent affordable healthcare, (b) gets the knowledge and guidance needed to make informed decisions and take responsible action, and (c) is incentivized to "do the right thing." That is, healthcare reform MUST FAIL UNLESS we balance (a) economic strategies that focus primarily on cost-control with (b) strategies aimed at filling the knowledge gap. As the article discussed, likely consequences of this failure include reduced care quality and productivity, as well as provider resistance.

  • Healthcare Reform's Most Important Issue: How to Make it a High-Value System – Discusses why a deep, rational debate about universal insurance versus single payer systems ought to be balanced by focusing on an equally (if not more) important core issue, i.e., how to dramatically increase cost-effectiveness (value to the consumer).

  • Empathy, Taxes, Personal Responsibility, and Healthcare Reform and Empathy, Taxes, Personal Responsibility, and Healthcare Reform – A Timely Debate (part 1) and (part 2) – Discusses how, from a psychological perspective, there is a lack of empathy (i.e., the ability to put oneself in the shoes of another) reflected by the fact that fortunate people with plenty of money or a secure job with an excellent health plan do not want to pay more taxes nor to risk changing the coverage they believe benefits them; even if such changes may benefit many others who are suffering. Includes a contentious debate I had with two people working in the insurance industry.

  • Healthcare Reform: Where to Focus? – Explains why Robert J. Samuelson misinterprets healthcare statistics in a Washington Post article, and why he is erroneous in his conclusions that (a) controlling cost is the central problem, (b) healthcare for the poor in our country is actually quite good, and (c) we cannot afford to view healthcare as a "right" that demands universal insurance for every American.

  • Aligning the Ought-To's with the Can-Do's – Argues that we had better focus on answering the questions: What OUGHT TO BE done to guarantee everyone has access to affordable, high-quality healthcare? and What CAN BE done, realistically, to make that happen? And then, wherever there is a misalignment between these Ought To's and the Can Be's (i.e., when we can't do what we ought to be doing), we'd be wise to ask ourselves: WHAT'S PREVENTING US and HOW CAN WE overcome those obstacles? I then explain why, when it comes to healthcare (as well as other domestic issues and even foreign policy) answering these questions isn't easy because it requires that we stop deceiving ourselves, and start critically and objectively evaluating the values, priorities, goals, and underlying beliefs of our culture.

  • The Whole-Person Integrated-Care (WPIC) Wellness Solution – The first of a series of posts that describes four types of people, with different character traits, who require different approaches to wellness due to their different thoughts, emotions, behaviors, knowledge & understanding, and coping strategies.

  • Are you worthy of health insurance and high-value care? and Worthiness, Socialized Medicine, and Individual Responsibility – Examines and debates the questions: Who is worthy of having adequate health insurance and high-value (safe, cost-effective) care; what makes them deserving? And who, on the other hand, is unworthy; what makes them undeserving?