Thursday, November 19, 2009

Who should Own a Patient’s Health Data, Where should they be Stored, and How should they be Exchanged (Part 1 of 2)


A thought-provoking conversation on LinkedIn (see this link) examines whether the patient data stored in the national health information network (NHIN) will, within the next five years, likely be "owned" by major firms such as Oracle, Google, and Microsoft.

While most (though not all) commenters replied NO, the discussion covered some very interesting topics about data ownership, storage, privacy, and exchange. Following are excerpts.
I replied to the original question this way:
I contend that the ONLY group of people who should be allowed to OWN a patient's (consumer's) identifiable health data is the patient him/herself. The patient may allow other people (i.e., "Trusted Partners") to have access to certain data and to store it securely in centralized databases behind a firewall and/or in distributed encrypted data files stored locally. And when it comes to de-identified data for research purposes, I suggest that those data be available to anyone (e.g., under government control).
The best model for data exchange between the Trusted Partners (TPs) and between the patient and the TPs, imo, is a P2P pub/sub mesh node network resembling telephone networks.
Several others concurred with me about patient ownership and added comments such as:
…interoperability and access to longitudinal patient health data across physicians and time is a burden on bandwidth and very costly…the ownership of the data will ultimately belong to the patient, but the help of the government in providing a not-for-profit repository where the data sit and are maintained is a must. France is a good example of how this is possible…The best architecture for a national Health Information Exchange will be a technology-agnostic infrastructure, where EHRs are easily aggregated from multiple data sources simultaneously upon request by an authorized healthcare organization…All seem to agree that the "networks of networks" model is a bit cumbersome and that patients should own the data…While I agree that the patient is the ultimate owner of the data, I do not agree that they should be the data aggregator - which means that patients should not be held accountable or responsible for the collection, entry and management of all their health information.
Another commenter suggested this scenario:
Patient data "lives" in an encrypted "cloud", identified using a Universal Healthcare Identifier as is being developed by Global Patient Identifier, Inc. In order to render the data useless to hackers & thieves; financial & social security data is NOT stored with it.
EHR standards would be similar to ASC X12 HIPAA transactions, such that an entity (i.e., a provider) can request the patient data via a standard internet web-part that is populated based on selected parameters such as all data, data for a specified time period, a specified type of data (radiology only, lab results only), or a combination of these parameters. In this way, data are available anywhere on the globe they may be needed.
We'll still need to devise a way for the Patient to grant access. Also, we'll have to think about controlling the data which was requested and locally populated. Can it then be stored locally / should it be erased? Perhaps this is managed via the TYPE of access the Patient grants to the requesting entity.
To which I replied:
…I suggest that any database in a public cloud should only contain de-identified data from multiple sources for research (aggregate analyses); the cloud could also store back-ups of encrypted data files for each patient, with the originals residing on each end user's (clinician's, patient's, organization's) computer hard drive (or network server). These data files would contain patient records made up of data fed from locally stored sources (e.g., EHRs, PHRs, CPOEs), manual inputs, medical device data streams, and so on. A P2P, pub/sub, node network cyber-infrastructure would enable authorized nodes/users to conveniently exchange data sets from patients' data files; to minimize cost and complexity, the files can be exchanged via encrypted e-mail attachments. I'll be offering more details on this novel health data exchange model over the next couple of weeks. See this link.
Note that patient control is enabled by decompositing a locally stored data file based on rules reflecting a patient's privacy wishes, so that only the portions authorized by the patient are exchanged. See this link.
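To make the decomposition idea more concrete, here is a minimal sketch (in Python, with hypothetical section names, roles, and rules) of how a locally stored record might be filtered against a patient's privacy rules before any portion of it is exchanged:

# Minimal sketch: rule-based "decomposition" of a locally stored patient
# record so that only the portions the patient has authorized are exchanged.
# The section names, roles, and rules below are hypothetical illustrations.

PATIENT_RECORD = {
    "demographics": {"name": "John Jones", "age": 55},
    "medications": ["lisinopril 10 mg daily"],
    "lab_results": [{"test": "HbA1c", "value": 6.9, "units": "%"}],
    "psychotherapy_notes": ["(sensitive narrative notes)"],
}

# The patient's privacy rules: which sections each recipient role may receive.
PRIVACY_RULES = {
    "primary_care_physician": {"demographics", "medications", "lab_results"},
    "pharmacist": {"demographics", "medications"},
    "researcher": {"lab_results"},  # would also be de-identified downstream
}

def extract_authorized(record, recipient_role, rules):
    """Return only the record sections the patient authorized for this role."""
    allowed = rules.get(recipient_role, set())
    return {section: data for section, data in record.items() if section in allowed}

shareable = extract_authorized(PATIENT_RECORD, "pharmacist", PRIVACY_RULES)
print(shareable)  # demographics and medications only; the notes are withheld

Only the "shareable" portion would then be packaged and sent; the full record never leaves the patient's (or clinician's) local storage.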
Another commenter then wrote:
Consider the information related to a patient to be an "object" in a massive data warehouse. Different data attributes associated with this object are (at least today) "owned" by different people/organizations. For example, some of the provider data is and should be "owned" by the provider and not available to other providers, payers, or patients. I see this as one of today's key perceived barriers to physician/practitioner acceptance of the NHIN model.
Conceptually this design is possible, but a challenge remains with the "physical owner" of the data warehouse, its database and application design characteristics, and its security administration. Ultimately, in any Information System, someone needs to be the "master administrator". Furthermore, the patient does have ownership over who may be authorized to "tag onto" their patient-object (i.e., who have they authorized to provide them care).
One suggestion may be to encrypt the content-data so that the "master administrator" can set the security for various attributes (between provider, patient, payer, government, or other users) without having the ability to access the content. This service can be designed so that the patient may designate these roles, but only for their own patient-object.
I propose that the HIO [health information organization] provide for physical ownership at a "relatively" local level (by metropolitan area or rural region), using cloud computing principles that are updated to incorporate HITECH and related concerns. There needs to be a common interface between these HIOs in order to achieve the NHIN.
To which I replied:
While a patient ought to "own" all their health data, it doesn't mean that such ownership is the same as having actual physical possession of them all. After all, each healthcare provider (from an individual clinician to hospitals to large health systems such as Kaiser and Geisinger) has physical possession of the data that they collect. It's unreasonable to expect that all those data (including images) be shipped to the patient for local storage and to ask the patient to release those data each time a provider needs them. Instead, the data should be stored where they are collected.
There is one exception, however: the PHR. All PHR data should always be stored with (i.e., physically possessed by) the patient (preferably, imo, in an encrypted data file), even if collecting data through the PHR is done via a kiosk in a doctor's office or through a provider's web site. Furthermore, all EMR/EHR data (with some possible exceptions, such as a psychotherapist's notes) should be sent automatically to the patient's PHR; and the PHR should have the means to help the patient understand what those clinical data mean.
To deal with the privacy issue, the PHR should possess functionality that enables a patient to identify the particular data able to be shared with particular types of providers. In addition, patients' PHRs should give them guidance and warnings about who should have access to particular data based on their roles and responsibilities. In that way, any data stored in a provider's database/warehouse could be shared with third parties only when explicitly authorized by the patient.
And please confirm that I understand your proposal: Patient data can be considered an "object" with attribute tags defining those authorized to access data from that object. You say the object would be stored in a massive data warehouse, but there are problems with determining who should physically own the warehouse database and be the "master administrator," as well as with the patient's limited ability to control who is authorized to add tags to a patient-object. The suggestion is to encrypt the content while allowing the master admin to set those authorization tags in accord with the patient's wishes. Using cloud computing principles to support ownership by regional centers, with a common interface between them, would enable an NHIN.
Assuming my understanding is correct, then what about the following data ownership and exchange model: I agree that patient data ought to be managed as an object. I contend that the object ought to be a data file, preferably an encrypted delimited text file (such as comma-separated values format) to minimize size and overhead. There would likely be multiple data file objects for each patient, which are stored very locally depending on who entered/collected the data (e.g., on a patient's or clinician's computer, smart phone, memory stick/card, or on a health organization's server, etc.).
For everyday transactions (e.g., when a primary care physician exchanges patient data with the specialists to whom they refer, or when a patient and clinician share data between a PHR and EHR), a desktop or network-based software program would automatically decomposite (break apart) the local data file, extract the authorized data, and ship that data set via an encrypted e-mail attachment using PKI to assure the correct recipient gets it. The recipient can then view those data in a personalized, template-based report. A decentralized node-to-node, pub/sub mesh network could do this exceptionally cost-effectively and with minimal complexity, in addition to increasing security and privacy since the nodes' actions are guided by a rules base requiring no human intervention.
Continuing with my proposed model, the NHIN data warehouse would be fed by the same software program in the same manner, with each NHIN server being connected to a node in the network, and with e-mail being the "common interface." When invoking its subscriber function, the NHIN node(s) would automatically retrieve data files sent to it and import those data into its database(s). These files would contain a standardized minimal data set (MDS) based on the CCD/CCR, whereas the data exchanged between healthcare providers, and between patients and providers, would include but not be limited to the MDS. When invoking its publisher function, the NHIN node(s) would send the appropriate data to the appropriate subscriber (provider or researcher) nodes, which may include immunization and disease registry data, biosurveillance data, and de-identified data for cost & quality research. The NHIN would also enable any authorized clinician to access certain patient data residing beyond the confines of the regional data centers. By using these unmanned nodes for carrying out the data exchange processes, security and privacy are enhanced, as mentioned above, and the problems associated with a master administrator are eliminated.
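As a rough illustration of the pub/sub behavior described above, the sketch below (Python; the node names, topic, and CSV layout are hypothetical) shows a clinic node publishing a minimal data set and an NHIN node, acting as subscriber, importing it into its local database. A real deployment would add encryption, PKI signatures, and e-mail transport in place of the in-memory hand-off.

import csv
import io
from collections import defaultdict

class Node:
    """Simplified pub/sub node: publishes data sets and imports received ones."""
    def __init__(self, name):
        self.name = name
        self.subscribers = defaultdict(list)  # topic -> list of subscriber nodes
        self.database = []                    # records imported by this node

    def subscribe(self, publisher, topic):
        publisher.subscribers[topic].append(self)

    def publish(self, topic, csv_payload):
        for subscriber in self.subscribers[topic]:
            subscriber.receive(topic, csv_payload)

    def receive(self, topic, csv_payload):
        rows = list(csv.DictReader(io.StringIO(csv_payload)))
        self.database.extend(rows)
        print(f"{self.name} imported {len(rows)} record(s) on '{topic}'")

clinic = Node("clinic_node")
nhin = Node("nhin_node")
nhin.subscribe(clinic, "immunization_registry")

payload = "patient_id,vaccine,date\nP001,influenza,2009-10-15\n"
clinic.publish("immunization_registry", payload)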
The conversation is continued at this link.

Related posts:

Monday, November 16, 2009

Screenshots of Two Novel Health IT Programs


In an effort to "raise the bar" of health IT creativity and utility, I'm posting screenshots of two novel health IT software programs we've been developing: the Life Chart and CCR-Plus.

Life Chart Program


Our LifeChart program depicts a patient's mental health status, treatments, and significant life events over time in a format that is easy to understand. The example below covers seven years for a patient with bipolar disorder, but similar charts can be constructed for patients with other disorders. The data can be entered manually or, if available, imported from any database.



Continuity of Care Record Program


The figure below contains screenshots of the CCR-Plus program, which combines a standard CCR with (a) a unique Warnings section that identifies associations between lab results, signs/symptoms, and medication side effects, and (b) a section displaying many lab test result panels and imaging studies in a uniquely clinically useful manner.

Thursday, November 12, 2009

Simple, Low-Cost, Secure Health Data Exchange

The following diagram presents our novel, economical way to exchange patient health information from anywhere, easily and securely. The system uses Microsoft Office to send comprehensive patient data between clinicians, via e-mail, with just a few mouse-clicks. This convenient, easy-to-use system requires no additional hardware and can work alongside other software programs.

The system can be used in many ways[1]. The process for sending referral information between a primary care physician (PCP) and one or more specialists, for example, is shown below:
  1. After the PCP selects a patient, the specialist(s) to receive the referral, and the reason for it, the referral data are automatically retrieved from his/her EMR.
  2. The data are then automatically sent to each specialist via an encrypted e-mail attachment.
  3. Once the referral is received, each specialist automatically sends a reply back to the PCP with a few mouse-clicks.
  4. After reviewing each reply, the PCP, with one mouse-click, automatically sends patient data to each appropriate specialist in a continuity of care document (CCD).
  5. A couple of mouse-clicks by the specialist and the CCD is automatically displayed. The CCD data may then be loaded automatically into his/her EMR or EHR.
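To show how little machinery this workflow really needs, here is a simplified sketch (Python; the message fields and the in-memory "inbox" stand in for the encrypted e-mail attachments and EMR integration that the actual system provides):

class Inbox:
    """Stands in for a clinician's secure e-mail inbox."""
    def __init__(self):
        self.messages = []
    def deliver(self, msg):
        self.messages.append(msg)

def pcp_send_referral(patient_id, reason, specialist_inboxes):
    # Steps 1-2: referral data are pulled from the EMR and sent to each specialist.
    referral = {"type": "referral", "patient_id": patient_id, "reason": reason}
    for inbox in specialist_inboxes:
        inbox.deliver(referral)

def specialist_reply(specialist_inbox, pcp_inbox, accept=True):
    # Step 3: the specialist reviews the referral and replies to the PCP.
    referral = specialist_inbox.messages[-1]
    pcp_inbox.deliver({"type": "reply", "patient_id": referral["patient_id"],
                       "accepted": accept})

def pcp_send_ccd(pcp_inbox, specialist_inbox, ccd_document):
    # Step 4: after reviewing the reply, the PCP sends the CCD to the specialist.
    if pcp_inbox.messages[-1]["accepted"]:
        specialist_inbox.deliver({"type": "ccd", "document": ccd_document})

pcp_inbox, cardiology_inbox = Inbox(), Inbox()
pcp_send_referral("P001", "suspected arrhythmia", [cardiology_inbox])
specialist_reply(cardiology_inbox, pcp_inbox)
pcp_send_ccd(pcp_inbox, cardiology_inbox, "<CCD content would go here>")
print(cardiology_inbox.messages[-1]["type"])  # "ccd" -- step 5: display or import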



What Makes the System Unique

The system employs the patented CP Split™ software method to assemble the data in an organized (meaningful/logical) way using electronic containers ("objects"). It's like arranging a child's building blocks according to some thoughtful plan. But instead of actual blocks (physical containers), a grid template software program is used, which consists of electronic containers into which data, from any sources, are organized sensibly and efficiently in preplanned structures. Step 1 (below) shows sample data for patient John Jones in the grid software.

The software program can not only present the data in the grid (as per Step 4), but it can also share any of the data. It does the data sharing by automatically taking the data from the grid template and storing them in a simple encrypted data file, which it attaches to an e-mail and sends to a collection of trusted partner recipients (as per Step 2). When the e-mail arrives, each recipient's software program automatically decrypts the data file and extracts the data it contains. Next, it copies those data to the recipient's corresponding grid template, which is organized according to the same preplanned structure (as per Step 3). It then rapidly presents those data in dynamic (interactive) reports by performing any required analyses[2], adding the labels, and formatting the data, based on their grid locations, in a way that assures the resulting information is readily understandable and useful to each recipient (as per Step 4). Furthermore, the software can easily update the data as needed, as well as exchange the data with other software systems and databases. This unique (patented) process minimizes complexity and cost, while maximizing convenience, security, usability and interoperability[3].
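For readers who want a feel for that round trip, here is a minimal sketch (Python; it requires the third-party "cryptography" package, and the grid coordinates, field contents, and key handling are hypothetical). It illustrates the general idea only, not the patented CP Split method itself: the sender serializes the grid data, encrypts it for transport (e.g., as an e-mail attachment), and the recipient decrypts it and restores the values to the same preplanned grid positions.

import json
from cryptography.fernet import Fernet  # pip install cryptography

# Sender side: data organized by preplanned grid positions (column + row).
sender_grid = {
    "A1": "John Jones",      # patient name
    "B1": 55,                # age
    "A2": "Blood pressure",  # label
    "B2": "142/90 mmHg",     # value
}

key = Fernet.generate_key()  # in practice, keys/certificates are exchanged out of band
cipher = Fernet(key)
attachment = cipher.encrypt(json.dumps(sender_grid).encode("utf-8"))

# Recipient side: decrypt the attachment and copy the values back into the
# same grid positions, where the report template knows how to label and format them.
recipient_grid = json.loads(cipher.decrypt(attachment).decode("utf-8"))
print("Recipient grid restored:", recipient_grid)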


Footnotes:

[1] Ways the system can be used include: (a) automating the patient referral process; (b) sharing patient data between personal health records (PHRs), electronic medical records (EMRs), and electronic health records (EHRs); (c) exchanging health data within and between health organizations and exchanges (e.g., RHIOs and HIEs); and (d) delivering de-identified data to research and public health databases.

[2] Analyses may include logical (e.g., if-then-else) and mathematical (e.g., statistical) operations to provide useful information in a report that goes beyond the raw data. In the sample report above, a simple example of such analysis is coloring the patient’s age (55) red because he is older than 50. And in the blood pressure graph, levels that are too high are colored red, otherwise green.
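As a concrete (hypothetical) version of such report rules, the snippet below shows the kind of threshold logic involved; the cutoffs and colors are illustrative only, not clinical guidance:

def age_color(age):
    """Color the age cell red for patients older than 50, black otherwise."""
    return "red" if age > 50 else "black"

def bp_color(systolic, diastolic, limits=(140, 90)):
    """Color a blood-pressure reading red if either value exceeds its limit."""
    return "red" if systolic > limits[0] or diastolic > limits[1] else "green"

print(age_color(55))      # red
print(bp_color(142, 92))  # red
print(bp_color(118, 76))  # green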

[3] Interoperability is the ability of a software system to work with other systems without special effort on the part of the end-user.

Related posts:

Monday, November 09, 2009

Health IT: Comparing Cloud Computing and Desktop Applications (Part 2 of 3)


This post is a continuation of the debate about the pros and cons of Cloud versus Desktop computing, which starts at this link.

The cloud computing expert I've been debating responded to my comments by writing:
We are just from different camps of software thinking. I have a CS degree from a college that taught on Unix machines, not desktops, and my entire experience is working with enterprise systems, not desktops. I was an early adopter of Java and attended the first JavaOne, where the mantra was "The Network is the computer." Eric Schmidt, CEO of Google, is doing his best to make this happen. Yes, they are looking to the future, but Microsoft seems to be stuck in the past.
I reply:
Yes, we are from different schools and thus have very different points of view. We both see the value of networks, with you focusing on networking computers within an enterprise ("behind the firewall") and me focusing on enabling information exchange between "disparate islands of information" owned/controlled by (a) vastly different organizations and individuals with vastly different communications capabilities (from continuous broadband to occasionally connected dial-up) and (b) widely diverse information needs (including linking hospitals and clinics, individual clinicians across all healthcare disciplines, public health institutions, research organizations, and individual patients).
And while I am discussing the value of novel applications that use e-mail to enable all standalone (desktop/laptop/notebook) computers to have publisher and subscriber functionality (i.e., they have server-like functionality that enables P2P data transmission), the technology I'm proposing need not be Microsoft-based, since it can use any kind of automated spreadsheet grids and e-mail (I do think the MS Office suite is currently the best, or at least the most ubiquitous, but OpenOffice or other tools could be used instead).
The conversation continues…

In response to my comment: "Since there are concerns about the security of data stored in the Internet cloud, people may feel more secure if they have complete control over their private information (such as personal health information), which is stored in encrypted data files in their own computers and other computerized devices," he wrote:
The cloud is just as safe as, or safer than, office machines under a desk. You can just take one home.
To which I reply:
It's true that no matter what solution is deployed, security must be taken seriously. Examples of Cloud security risks can be found at http://tinyurl.com/ykv6s6o and http://tinyurl.com/lzr3gg; it's good that many people are working hard to manage those risks. My point is that storing a patient's health record in an encrypted data file residing on a local HD (hard drive) is much less complex and costly than securing the Cloud. This is especially true when data have to be shared with people outside an enterprise's firewall.
In response to my comment: "Total cost of ownership is minimized since there is no need to rely on expensive central servers, databases, and server administrators," he wrote:
The cost [of expensive central servers, databases, and server administrators] is amortized; that is the point. And single machines are much more expensive than enterprise systems (depending on the size of staff).
To which I reply:
I assume you're comparing thin versus thick/fat clients. With full-power PCs continually coming down in price (many cost just a few hundred dollars), the cost difference can be minimal. And with the PC, you get the added benefits of fewer server requirements (a thick-client server does not require as high a level of performance as a thin-client server, resulting in drastically cheaper servers); offline working (a constant connection to the central server is often not required); better multimedia performance (thick clients have advantages in multimedia-rich applications that would be bandwidth intensive if fully served); more flexibility (on some operating systems, software products are designed for PCs that have their own local resources, which are difficult to run in a thin-client environment); use of existing infrastructure (as many people now have very fast local PCs, they already have the infrastructure to run thick clients at no extra cost); and higher server capacity (the more work that is carried out by the client, the less the server needs to do, increasing the number of users each server can support) [see http://tinyurl.com/yl7tqxw]. And when it comes to exchanging data with people beyond an enterprise's firewall (e.g., connecting with an independent clinician's EHR or a patient's local PHR), the thin client is not an option for the remote individual!
In response to my prior comment: "All the information can be accessed anywhere/anytime, even if there is no Internet or other network connections," he wrote:
?? no network, are you talking sneakerNet? You can do the same with servers but....
To which I reply:
No, I'm not talking about handing a disk to someone. What I am talking about is using an encrypted, delimited data file, stored locally on a PC hard drive, which contains all the information needed on a patient (or on multiple patients for an aggregate report). No network, no problem. And when there is even a momentary resumption of the network, an e-mail with an attachment containing a portion of a patient's health dataset can be delivered or received with only a second or two of connectivity. I'm not talking about a centralized enterprise system, however.
In response to my prior comment: "Unlike communications requiring continuous connectivity, there is no loss of data when a network connection drops out (i.e., unexpected disconnection)," he wrote:
How about when you lose an HD or power in the building?
To which I reply:
If there's no UPS (uninterruptible power supply) on the PC or the HD fails, then you're right, there are no data to be viewed or shared.
In response to my prior comment: "There is no single point of failure to disrupt an entire network when a central server develops problems," he wrote:
Same argument, but desktops have no redundancy.
To which I reply:
In addition to having backups (on- and off-premises) and UPSs, the kind of P2P, pub/sub, desktop-based mesh node networks I'm discussing means that no single node (peer) on the network can prevent other nodes from working, since there is no central server (i.e., each desktop node functions like its own server through use of e-mail). If the entire Internet fails (due to a natural disaster, terrorist attack, etc.), the local data files would still be available for local data access. But what about data exchange? In such a situation, the networks I'm proposing would have a communications "auto-failover" process by which the best available alternative methods of data transmission would be used, such as dial-up, radio, or satellite communication, to send the e-mail attachments from anywhere to anywhere, thereby enabling efficient emergency data exchange.
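Such an auto-failover process could be as simple as trying each available transport in order of preference. The sketch below (Python, with placeholder transport functions, since the actual methods would depend on the hardware available at each node) illustrates the idea:

def send_via_broadband(message):
    raise ConnectionError("Internet unavailable")

def send_via_dialup(message):
    raise ConnectionError("no dial tone")

def send_via_satellite(message):
    print("sent via satellite:", message)
    return True

TRANSPORTS = [send_via_broadband, send_via_dialup, send_via_satellite]

def send_with_failover(message, transports=TRANSPORTS):
    """Try each transport in turn; return True as soon as one succeeds."""
    for transport in transports:
        try:
            return transport(message)
        except ConnectionError as err:
            print(f"{transport.__name__} failed ({err}); trying next transport")
    return False

send_with_failover("encrypted patient data file attachment")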
Also consider that central servers can never be immune to failure as evidenced by a recent disruption of an online EMR reported at this link.
In response to my prior comment "Since copies of the encrypted data files can be stored in many different locations (widely distributed), data survivability is enhanced," he wrote:
??? more sneakerNet?
To which I reply:
While physically exchanging disks or memory sticks containing the data file is one way to do it, by "data survivability" I'm referring to the type of emergency situation described in my previous reply. When that happens, transmitting locally stored data files via e-mail using auto-failover communication methods provides a better solution than systems requiring Internet access to a centralized database.
In response to my prior comment "Maintenance has improved with automated desktop updates. On hosted systems, furthermore, users are at the mercy of the host; so, if an upgrade does not go well, or the individual user doesn't want or need the new features, the upgrade will still go forward," he wrote:
automated desktop updates require a host.
To which I reply:
Yes, but you don't have to install the update/upgrade, and can even reverse it if you installed it but don't like it.
In response to my prior comment: "There is greater security risk when running a web application online over the Internet than when running a desktop application offline," he wrote:
This is incorrect. Only if the system has not been set up correctly.
To which I reply:
I realize there are ongoing debates about security, so here are a few quotes from people who argue that web applications are less secure than desktop applications:
  • "There are always risks involved when dealing with working online, regardless of how secure a host might say a web application is, that fact of the matter stands that the security risk of running an application of the Internet is more significant than when running an application on a standalone desktop computer. Some applications require more security than others, playing Sudoku on a web application would cause little concern, but dealing with sensitive corporate formulas or accounting details in a web environment might be determined risky." [Reference]
  • "Security - web applications are exposed to more security risks than desktop applications. You can have a total control over the standalone applications and protect it from various vulnerabilities. This may not be the case with web applications as they are open to a large number of users in the Internet community thus widening the threat." [Reference]
  • "Security: Working online has its own set of risks like hacking and virus threats. The risk is higher compared to a desktop computer, since a malfunction of the desktop can result in loss of partial data. The crash of a web server can result in consequences beyond the control of a business." [Reference]
  • "Local applications installed on your computer give you better security and do not require a connection to the web. Also, in many cases, local applications provide better integration with the operating system." [Reference]
In response to my prior comment: "Over the life of the software use, web applications are typically significantly more expensive because desktop applications are purchased outright and there are rarely recurring fees for the software use," he wrote:
This depends on a lot of factors, but normally subscription-based software is cheaper.
To which I ask: On what do you base this conclusion?

In response to my prior comment: "Desktop applications typically operate faster because they are not affected by Internet traffic, server use, and latency," he wrote:
Only if the system was not set up correctly. You need enterprise engineers to set up enterprise software.
To which I reply:
I guess by spending enough money and time on maximizing speed, you could reduce server processing time, queue waiting time, and network latency during high traffic. But even then, you'd be hard-pressed to have a sophisticated web-based healthcare application operate as quickly as a desktop application running on a moderate-power PC.
In response to my prior comment: "When using a web application that is hosted by a third party, privacy policies should be in place to prevent that data from being used by the web host," he wrote:
True, same for desktop privacy policies
To which I agree.

In response to my prior comment: "Multiple desktop applications can be integrated and used, enabling a model's functionality to be enhanced by other software programs. This cannot be done securely using a web browser," he wrote:
Web apps can talk to each other; that is the purpose of EAI.
To which I replied:
The main challenges of enterprise application integration are reported to be:
  • Constant change - The very nature of EAI is dynamic and requires dynamic project managers to manage their implementation.
  • Shortage of EAI experts - EAI requires knowledge of many issues and technical aspects.
  • Competing standards - Within the EAI field, the paradox is that EAI standards themselves are not universal.
  • EAI is a tool paradigm - EAI is not a tool, but rather a system and should be implemented as such.
  • Building interfaces is an art - Engineering the solution is not sufficient. Solutions need to be negotiated with user departments to reach a common consensus on the final outcome. A lack of consensus on interface designs leads to excessive effort to map between various systems data requirements.
  • Loss of detail - Information that seemed unimportant at an earlier stage may become crucial later.
  • Accountability - Since so many departments have many conflicting requirements, there should be clear accountability for the system's final structure.
  • Emerging Requirements - EAI implementations should be extensible and modular to allow for future changes.
  • Protectionism - The applications whose data is being integrated often belong to different departments that have technical, cultural, and political reasons for not wanting to share their data with other departments.
Furthermore, there are high initial development costs, especially for small and mid-sized businesses, and a fair amount of up-front business design is required, which many managers are not able to envision or not willing to invest in. Most EAI projects start off as point-to-point efforts and very soon become unmanageable as the number of applications increases. [Reference]
Also see http://www.sdtimes.com/content/article.aspx?ArticleID=30776
The bottom line, imo, is that there is a place for centralized web-based enterprise networks residing in the cloud. But they don't even come close to supplanting the need for low-cost, secure, standalone/desktop-based, P2P mesh node networks that can cross firewalls without hassle, work offline using local computer resources, and exchange health information via multiple communication methods using e-mail.

The conversation continues at this link.

Related posts:

Sunday, November 08, 2009

Health IT: Comparing Cloud Computing and Desktop Applications (Part 1 of 3)


I just responded to a comment posted to Linked-In at this link. The comment began:
Health care IT is moving from desktop applications to complex, multi-faceted enterprise systems. The OS, wireless devices, and databases are much more sophisticated than even a few years ago. Enterprise security takes a skill set that is not readily available in much of the health care industry. Staff HIT most likely will not have the expertise to harden these systems. The needed skills vary with the system(s) and network. As the HIE rolls out, security will become more complex. But in saying that, many non-technical requirements can be covered today with written policies and enforcement.
I replied:
Better yet ... Stick with desktop applications that exchange patient data via secure email using innovative decentralized peer-to-peer cyberarchitectures. See, for example, http://curinghealthcare.blogspot.com/2009/09/novel-way-to-exchange-patient-health.html
To which the commenter responded:
Stephen, interesting, but I see desktop apps such as Microsoft Excel giving way to Cloud computing. Desktop apps no longer make financial sense from a maintenance standpoint.

A federated system is a great way to exchange data but we will have to see what HITSP, (for one) comes up with.
I then replied with the following:
While desktop apps are becoming Cloud enabled (which can be a good thing in certain circumstances), I do not see desktop apps ever being replaced by the Cloud, for many reasons. There are distinct pros and cons of each depending on the situation and use case. For example, accessing and processing data from local storage using local computer resources (as opposed to the cloud) has many benefits, including the following:
  • Since there are concerns about the security of data stored in the Internet cloud [Reference1, Reference2, Reference3], people may feel more secure if they have complete control over their private information (such as personal health information), which is stored in encrypted data files in their own computers and other computerized devices.
  • Total cost of ownership is minimized since there is no need to rely on expensive central servers, databases, and server administrators. Also, there are times when the cloud is more expensive than alternatives (Reference1, Reference2, Reference3).
  • Performance is greatly increased when performing complex, intensive computations since all data processing is done quickly and easily using local computer resources, rather than waiting for a strained central server or paying for expensive racks of servers. However, when massive computations on huge centralized databases must be done, for which a local PC is inadequate, cloud and grid-cloud computing may serve an important function (although grid computing alone may suffice). [Reference]
  • All the information can be accessed anywhere/anytime, even if there is no Internet or other network connections.
  • Unlike communications requiring continuous connectivity, there is no loss of data when a network connection drops out (i.e., unexpected disconnection) [Reference]
  • A node-to-node network [e.g., a mesh node network] is more robust. In web-based networks, a central server breakdown may cause the entire network to shut down and prevent anyone from exchanging data. In the node network, however, a malfunction in one or even many individual computers may have little or no effect on the network as a whole since functioning nodes can still communicate with each other. In other words, there is no single point of failure to disrupt an entire network when a central server develops problems.
  • Since copies of the encrypted data files can be stored in many different locations (i.e., widely distributed), information survivability is enhanced in the face of terrorism and natural disasters.
  • The following are key advantages and disadvantages of web versus standalone applications:
  • Web applications are easily accessible from any computer or location that has Internet access. With a standalone application, the computer must have the application installed. On the other hand, once the standalone application is installed, it is accessible anywhere/anytime, even when there is no adequate Internet connection; web applications, however, typically rely on persistent and uninterrupted Internet connections, or else the data are inaccessible.
  • Maintenance costs are lower with web applications when a company must manage hundreds or thousands of desktop computers, although this has become less of a problem with improvements in automated desktop updates. On hosted systems, furthermore, users are at the mercy of the host; so, if an upgrade does not go well, or the individual user doesn't want or need the new features, the upgrade will still go forward.
  • Over the life of the software use, web applications are typically significantly more expensive because desktop applications are purchased outright and there are rarely recurring fees for the software use (except for possible maintenance fees or fee-based upgrades associated with them). Many corporate web applications, however, charge users monthly service fees (i.e., "subscription fees") to operate the software. [Reference]
  • Web applications, relying on the Internet to transfer data rather than using a computer's local hard drive, may operate more slowly. The speed may also vary depending on the number of users accessing the application (i.e., network traffic). Standalone applications have no such constraints; the application will operate as fast as the person's computer power allows.
  • When using a web application that is hosted by a third party, privacy policies should be in place to prevent that data from being used by the web host. This is not an issue with standalone applications.[Reference]
  • Multiple desktop applications can be integrated and used together, enabling a model's functionality to be enhanced by other software programs. This cannot be done securely using a web browser. [Reference]
  • "Because all computation is done on the computer that the application is running [offline], the amount of data transmitted over the internet is reduced…In the case of web based application the data is passed back and forth between the client and the server each time a new calculation is to be done. If many clients are connected to the server at the same time this leads to allot of processing on the server and the power of the clients is not used."[Reference]
  • While standalone applications may be platform dependent (e.g., able to operate only on Windows computers), it is possible to build platform-neutral applications that avoid this constraint.
Also see, for example: http://www.slate.com/id/2188015/, http://www.itbusinessedge.com/cm/blogs/byron/in-cloud-computing-vs-desktop-its-the-data-stupid/?cs=31286, http://www.filterjoe.com/2009/05/29/the-desktop-or-the-cloud/, and http://www.inquisitr.com/26717/the-cloud-vs-the-desktop-an-irrelevant-argument/
And I don't understand your claim that "desktop apps no longer make financial sense from a maintenance standpoint." After all, as mentioned above, with the automated update methods now built into many desktop apps (including MS Office), maintenance is easy, reliable, and free.

We do agree, however, that a federated system is a great way to exchange data within and between healthcare organizations, as well as between them and individual clinicians and small practices. It would be foolish for HITSP or other standards bodies to eliminate e-mail (SMTP) transport as a viable means of health data exchange, because it is a simple, low-cost, and secure method everyone understands.
The debate continues at this link.

Related posts: