Monday, November 09, 2009

Health IT: Comparing Cloud Computing and Desktop Applications (Part 2 of 3)


This post is a continuation of the debate about the pros and cons of Cloud versus Desktop computing, which starts at this link.

The cloud computing expert I've been debating responded to my comments by writing:
We are just from different camps of software thinking. I have a CS degree from a college that taught on Unix machine not desktop and my entire experience is working with enterprises systems not desktops I was an early adopter of Java and attended the first Java one where the mantra was "The Network is the computer". Eric Schmidt CEO of Google is doing his best to make this happen. Yes they are looking to the future but Microsoft seem to be stuck in the past.
I reply:
Yes, we are from different schools and thus have very different points of view. We both see the value of networks, with you focusing on networking computers within an enterprise ("behind the firewall"), and me coming from the perspective of enabling information exchange between "disparate islands of information" owned/controlled by (a) vastly different organizations and individuals with vastly different communications (from continuous broadband to occasionally connected dial-up) and (b) widely diverse information needs (including linking hospitals and clinics, individual clinicians across all healthcare disciplines, public health institutions, research organizations, as well as individual patients).
And while I am discussing the value of novel applications that use e-mail to enable all standalone (desktop/laptop/notebook) computers to have publisher and subscriber functionality (i.e., they have server-like functionality that enables P2P data transmission), the technology I'm proposing need not be Microsoft-based, since it can use any kind of automated spreadsheet grids and e-mail (I do think the MS Office suite is currently the best, or at least the most ubiquitous, but OpenOffice or other tools could be used instead).
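To make the publisher/subscriber idea concrete, here is a minimal sketch (not the actual product described in the post) of how a desktop "publisher" node might package its local spreadsheet data as a delimited file attached to an ordinary e-mail. The addresses and filename are hypothetical; actual delivery would go through whatever SMTP relay the node has available.

```python
import csv
import io
from email.message import EmailMessage


def build_update_message(sender, recipient, rows):
    """Package a delimited dataset as an e-mail attachment.

    In the P2P model discussed above, each desktop acts as a
    'publisher': it exports its local spreadsheet grid to a
    delimited file and mails it to subscribing nodes, which
    import the attachment into their own local grids.
    """
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)

    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Dataset update"
    msg.set_content("Delimited data file attached.")
    msg.add_attachment(buf.getvalue().encode("utf-8"),
                       maintype="text", subtype="csv",
                       filename="update.csv")
    return msg


# Hypothetical example; real delivery would use any SMTP relay, e.g.:
#   smtplib.SMTP("mail.example.org").send_message(msg)
msg = build_update_message("clinic@example.org", "hospital@example.org",
                           [["patient_id", "bp_systolic"], ["A17", "128"]])
```

Because the payload travels as a standard e-mail attachment, it crosses firewalls the same way any message does, with no central server in the loop.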
The conversation continues…

In response to my comment: "Since there are concerns about the security of data stored in the Internet cloud, people may feel more secure if they have complete control over their private information (such as personal health information), which is stored in encrypted data files in their own computers and other computerized devices," he wrote:
Cloud is just as safe or safer than office machines under desk. You can just take one home
To which I reply:
It's true that no matter what solution is deployed, security must be taken seriously. Examples of Cloud security risks can be found at http://tinyurl.com/ykv6s6o and http://tinyurl.com/lzr3gg; it's good that many people are working hard to manage those risks. My point is that storing a patient's health record in an encrypted data file residing on a local HD (hard drive) is much less complex and costly than securing the Cloud. This is especially true when data has to be shared with people outside an enterprise's firewall.
In response to my comment: "Total cost of ownership is minimized since there is no need to rely on expensive central servers, databases, and server administrators," he wrote:
The cost [of expensive central servers, databases, and server administrators] is amortized, that is the point. And single machine are much more expensive than enterprise (depending on the size of staff).
To which I reply:
I assume you're comparing thin versus thick/fat clients. With full-power PCs continually coming down in price (many just a few hundred dollars), the cost difference can be minimal. And with the PC, you get added benefits [see http://tinyurl.com/yl7tqxw]:
  • Lower server requirements - A thick-client server does not require as high a level of performance as a thin-client server, resulting in drastically cheaper servers.
  • Offline working - A constant connection to the central server is often not required.
  • Better multimedia performance - Thick clients have advantages in multimedia-rich applications that would be bandwidth-intensive if fully served.
  • More flexibility - On some operating systems, software products are designed for PCs that have their own local resources, which are difficult to run in a thin-client environment.
  • Use of existing infrastructure - Since many people now have very fast local PCs, they already have the infrastructure to run thick clients at no extra cost.
  • Higher server capacity - The more work that is carried out by the client, the less the server needs to do, increasing the number of users each server can support.
And when it comes to exchanging data with people beyond an enterprise's firewall (e.g., connecting with an independent clinician's EHR or a patient's local PHR), the thin client is not an option for the remote individual!
In response to my prior comment: "All the information can be accessed anywhere/anytime, even if there is no Internet or other network connections," he wrote:
?? no network, are you talking sneakerNet? You can do the same with servers but....
To which I reply:
No, I'm not talking about handing a disk to someone. What I'm talking about is using an encrypted delimited data file, stored locally on a PC hard drive, which contains all the information needed on a patient (or on multiple patients for an aggregate report). No network, no problem. And when there is even a momentary resumption of the network, an e-mail with an attachment containing a portion of a patient's health dataset can be delivered or received with only a second or two of online connectivity. I'm not talking about a centralized enterprise system, however.
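As an illustration of the local encrypted-file approach, here is a minimal sketch assuming the third-party `cryptography` package (`pip install cryptography`) for the symmetric cipher; the post does not specify a particular encryption tool, so this is just one way it could be done. The data round-trips entirely on the local disk, with no network involved.

```python
import csv
import io

# Assumed third-party dependency: the `cryptography` package.
from cryptography.fernet import Fernet


def save_encrypted_csv(path, rows, key):
    """Serialize rows as delimited text, encrypt, and write to local disk."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    token = Fernet(key).encrypt(buf.getvalue().encode("utf-8"))
    with open(path, "wb") as f:
        f.write(token)


def load_encrypted_csv(path, key):
    """Decrypt a locally stored data file; no network connection required."""
    with open(path, "rb") as f:
        text = Fernet(key).decrypt(f.read()).decode("utf-8")
    return list(csv.reader(io.StringIO(text)))
```

The encrypted file itself is what would travel as an e-mail attachment when a connection briefly becomes available; only holders of the key can read it.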
In response to my prior comment: "Unlike communications requiring continuous connectivity, there is no loss of data when a network connection drops out (i.e., unexpected disconnection)," he wrote:
How about when you lose an HD or power in the building?
To which I reply:
If there's no UPS (uninterruptible power supply) on the PC or the HD fails, then you're right, there is no data to be viewed or shared.
In response to my prior comment: "There is no single point of failure to disrupt an entire network when a central server develops problems," he wrote:
Same argument but desktop have no redundancy.
To which I reply:
In addition to having backups (on- and off-premises) and UPSs, the kind of P2P, pub/sub, desktop-based, mesh node networks I'm discussing mean that no single node (peer) on the network can prevent other nodes from working, since there is no central server (i.e., each desktop node functions like its own server through use of e-mail). If the entire Internet fails (due to a natural disaster, terrorist attack, etc.), the local data files would still be available for local data access. But what about data exchange? In such a situation, the networks I'm proposing would have a communications "auto-failover" process by which the best available alternative methods of data transmission (such as dial-up, radio, or satellite communication) would be used to send the e-mail attachments from anywhere to anywhere, thereby enabling efficient emergency data exchange.
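The "auto-failover" idea can be sketched as a simple priority loop: try each transmission method in order and fall through to the next when a link is down. The transport objects here are hypothetical placeholders (the post names dial-up, radio, and satellite as examples but does not specify an implementation).

```python
def send_with_failover(message, transports):
    """Attempt each transmission method in priority order.

    `transports` is a list of hypothetical transport objects, each
    with a `.name` attribute and a `.send(message)` method that
    raises an exception when its link is unavailable. Returns the
    name of the first method that succeeds.
    """
    for transport in transports:
        try:
            transport.send(message)
            return transport.name  # delivered -- stop trying alternatives
        except Exception:
            continue  # link down -- fall through to the next method
    raise RuntimeError("all transmission methods failed")
```

In an emergency where broadband is gone, the same e-mail attachment would simply go out over whichever slower channel still works.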
Also consider that central servers can never be immune to failure as evidenced by a recent disruption of an online EMR reported at this link.
In response to my prior comment "Since copies of the encrypted data files can be stored in many different locations (widely distributed), data survivability is enhanced," he wrote:
??? more sneakerNet?
To which I reply:
While physically exchanging disks or memory sticks containing the data file is one way to do it, by "data survivability" I'm referring to the type of emergency situation in my previous reply. When that happens, transmitting locally stored data files via e-mail using auto-failover communication methods provides a better solution than systems requiring Internet access to a centralized database.
In response to my prior comment "Maintenance has improved with automated desktop updates. On hosted systems, furthermore, users are at the mercy of the host; so, if an upgrade does not go well, or the individual user doesn't want or need the new features, the upgrade will still go forward," he wrote:
automated desktop updates require a host.
To which I reply:
Yes, but you don't have to install the update/upgrade, and can even reverse it if you installed it but don't like it.
In response to my prior comment: "There is greater security risk when running a web application online over the Internet than when running a desktop application offline," he wrote:
This is incorrect. Only if the system is not been setup correctly
To which I reply:
I realize there are ongoing debates about security, so here are a few quotes from people who argue that web applications are less secure than desktop applications:
"There are always risks involved when dealing with working online, regardless of how secure a host might say a web application is, that fact of the matter stands that the security risk of running an application of the Internet is more significant than when running an application on a standalone desktop computer. Some applications require more security than others, playing Sudoku on a web application would cause little concern, but dealing with sensitive corporate formulas or accounting details in a web environment might be determined risky." [Reference]
"Security - web applications are exposed to more security risks than desktop applications. You can have a total control over the standalone applications and protect it from various vulnerabilities. This may not be the case with web applications as they are open to a large number of users in the Internet community thus widening the threat." [Reference]
"Security: Working online has its own set of risks like hacking and virus threats. The risk is higher compared to a desktop computer, since a malfunction of the desktop can result in loss of partial data. The crash of a web server can result in consequences beyond the control of a business." [Reference]
"Local applications installed on your computer give you better security and do not require a connection to the web. Also, in many cases, local applications provide better integration with the operating system." [Reference]
In response to my prior comment: "Over the life of the software use, web applications are typically significantly more expensive because desktop applications are purchased outright and there are rarely recurring fees for the software use," he wrote:
This depends on a lot of factors but normally subscription based software is cheaper.
To which I ask: On what do you base this conclusion?

In response to my prior comment: "Desktop applications typically operate faster because they are not affected by Internet traffic, server use, and latency," he wrote:
Only if the system was not setup correctly. You need Enterprise engineers to setup enterprise software.
To which I reply:
I guess that by spending enough money and time on maximizing speed, you could reduce server processing time, queue waiting time, and network latency during high traffic. But even then, you'd be hard-pressed to have a sophisticated healthcare web application operate as quickly as its desktop counterpart running on a PC of moderate power.
In response to my prior comment: "When using a web application that is hosted by a third party, privacy policies should be in place to prevent that data from being used by the web host," he wrote:
True, same for desktop privacy policies
To which I agree.

In response to my prior comment: "Multiple desktop applications can be integrated and used, enabling a model's functionality to be enhanced by other software programs. This cannot be done securely using a web-browser," he wrote:
Web apps can talk to each other that is the purpose of EAI.
To which I replied:
The main challenges of enterprise application integration are reported to be:
  • Constant change - The very nature of EAI is dynamic and requires dynamic project managers to manage their implementation.
  • Shortage of EAI experts - EAI requires knowledge of many issues and technical aspects.
  • Competing standards - Within the EAI field, the paradox is that EAI standards themselves are not universal.
  • EAI is a tool paradigm - EAI is not a tool, but rather a system and should be implemented as such.
  • Building interfaces is an art - Engineering the solution is not sufficient. Solutions need to be negotiated with user departments to reach a common consensus on the final outcome. A lack of consensus on interface designs leads to excessive effort to map between various systems data requirements.
  • Loss of detail - Information that seemed unimportant at an earlier stage may become crucial later.
  • Accountability - Since so many departments have many conflicting requirements, there should be clear accountability for the system's final structure.
  • Emerging Requirements - EAI implementations should be extensible and modular to allow for future changes.
  • Protectionism - The applications whose data is being integrated often belong to different departments that have technical, cultural, and political reasons for not wanting to share their data with other departments.
Furthermore, there are high initial development costs, especially for small and mid-sized businesses, and a fair amount of up-front business design is required, which many managers are not able to envision or are not willing to invest in. Most EAI projects start off as point-to-point efforts and very soon become unmanageable as the number of applications increases. [Reference]
Also see http://www.sdtimes.com/content/article.aspx?ArticleID=30776
The bottom line, in my opinion, is that there is a place for centralized web-based enterprise networks residing in the cloud. But they don't even come close to supplanting the need for low-cost, secure, standalone/desktop-based, P2P mesh node networks that can cross firewalls without hassle, work offline using local computer resources, and exchange health information via multiple communication methods using e-mail.

The conversation continues at this link.
