I discussed why we often require large pools of diverse, non-redundant data to generate reliable and valid information that supports good health and healthcare decisions. I made the case that standards for diagnostic, treatment-method, and clinical and financial outcomes data should be defined by determining the specific pools of data we need to guide clinical decisions. These standard data pools should include every piece of data that might affect the reliability (dependability) and validity (accuracy) of a person's decisions. And the data pools should evolve on an ongoing basis through a thorough, evidence-based process of collaborative scientific scrutiny in which data may be prudently added, deleted, or modified.
These standard data pools should be used to obtain information over people's entire lifetimes to improve diagnostic and treatment decisions by depicting important trends, associations and cause-effect relationships of health-related signs (e.g., lab test results and vital signs), symptoms (e.g., self-reported physical and psychological problems), and the factors that influence them (e.g., exposure to disease and psychosocial stressors). Furthermore, any information systems used to gather, analyze, disseminate and report these data should be extremely flexible, convenient, and useful.
The answer to the question above, however, doesn't end here.
My quest for an answer began in 1981 as I started my clinical psychology practice. I asked myself back then: How can I obtain and use every important piece of information, about a person's mind, body, actions, and environment, for the continuous improvement of the care I deliver?
This quest led me on a 25-year journey across a myriad of knowledge domains, including evidence-based medicine, psychology, the mind-body connection, conventional and complementary/alternative care, wellness, practice guidelines and pathways, decision support, knowledge management, health information technology (HIT), RHIOs and HIEs, outcomes research, public health, performance metrics, transparency, health insurance, competition between providers, the business of healthcare, economic models, politics, and so on.
The more I learned, the more I realized that what was needed was a way to define, validate, and manage an enormous variety of data, across all consumer demographics, health problems/diagnoses, treatment methods/procedures, and professional disciplines, for people's entire lifetimes.
We understood that managing these data is a daunting task, one that requires:
- Gathering extensive sets of diagnostic, intervention (both well-care and sick-care processes), and outcomes (clinical and financial) data from both controlled studies and everyday practice
- Sharing these data with research scientists and clinicians to establish and evolve evidence-based guidelines
- Disseminating the guidelines to practitioners and consumers, along with useful educational/instructional materials they understand
- Tracking the use of the guidelines and reasons for variance (i.e., why certain recommendations were not followed)
- Evaluating outcomes data relevant to diagnoses and interventions
- Enabling anyone to participate in the process, even if they have low bandwidth and occasional connectivity
- Using cost-effective HIT and filling in existing gaps
- Providing reliable and valid decision support tools
- Empowering consumers to act responsibly and make wise choices
- Fostering collaboration between providers, researchers, public health agencies, etc.
- Supporting first responders and emergency room staff in disaster situations
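To make one of these requirements concrete, the variance-tracking item above (recording the use of guidelines and the reasons recommendations were not followed) could be modeled as a simple adherence record. This is a minimal sketch under assumed, hypothetical names (`GuidelineUse`, `variance_report`); it is not drawn from any real system:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of one guideline use; field names are illustrative.
@dataclass
class GuidelineUse:
    guideline_id: str
    followed: bool
    variance_reason: Optional[str] = None  # captured when a recommendation is not followed

def variance_report(uses):
    """Summarize adherence and tally the reasons recommendations were not followed."""
    followed = sum(1 for u in uses if u.followed)
    reasons = {}
    for u in uses:
        if not u.followed and u.variance_reason:
            reasons[u.variance_reason] = reasons.get(u.variance_reason, 0) + 1
    return {"adherence_rate": followed / len(uses), "variance_reasons": reasons}
```

Aggregated this way, variance data can flow back to researchers alongside outcomes data, helping them see not just whether a guideline works, but why clinicians depart from it.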
Our strategy focused on building health science knowledgebases and using them with evidence-based decision-support tools by:
- Collecting data about a patient, the patient's problem, the treatments rendered, and the outcomes using a variety of HIT tools. These data include clinical and financial outcomes, variance data, and patient and provider data.
- Sending the data to research databases, stripped of patient identifiers, where scientists and other knowledge workers access, study, and discuss the data collaboratively by (a) using analytic tools to find patterns in the data; (b) challenging one another's interpretations of the data, and the assumptions and predictions they make; (c) building clinical models reflecting diagnostic and associated treatment processes; and (d) sharing and evolving these models.
- Validating or invalidating the intervention-recommendation models. Validated models are supported by scientific evidence showing that particular interventions are safe, effective, and efficient when used to treat particular types of patients with particular health problems in particular situations. Invalidated models are backed by evidence showing when particular interventions are not safe, effective, or efficient in those circumstances, which makes them useful for determining when not to use a certain intervention.
- Storing the validated and invalidated intervention-recommendation models as evidence-based practice guidelines in health science knowledgebases. Each guideline is associated with reference and instructional materials, which are also stored in the knowledgebases.
- Disseminating the evidence-based practice guidelines and related materials to authorized stakeholders, who store them locally and access them through the decision support tools.
- Feeding data about the care process and outcomes from the decision support tools back to the research databases, where they are used to create new practice guidelines and modify existing ones. This ongoing feedback loop leads to continually improving guidelines and outcomes.
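The steps above form a de-identify, analyze, validate, and recommend loop. The following is a minimal sketch of that loop, assuming hypothetical names (`CareRecord`, `deidentify`, `derive_guidelines`, `recommend`) and a toy validation rule based on a mean-outcome threshold; real guideline development would rest on far richer statistical and clinical scrutiny:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of one care episode; field names are illustrative.
@dataclass
class CareRecord:
    patient_id: Optional[str]   # stripped before research use
    diagnosis: str
    intervention: str
    outcome_score: float        # higher = better clinical outcome

def deidentify(record: CareRecord) -> CareRecord:
    """Strip patient identifiers before sending data to the research database."""
    return CareRecord(None, record.diagnosis, record.intervention, record.outcome_score)

@dataclass
class Guideline:
    diagnosis: str
    intervention: str
    validated: bool             # False = evidence says avoid this intervention

def derive_guidelines(records, threshold=0.7, min_n=3):
    """Toy validation step: an intervention is 'validated' for a diagnosis when
    its mean outcome meets the threshold (given enough cases), and marked
    invalidated (do-not-use) otherwise."""
    groups = {}
    for r in records:
        groups.setdefault((r.diagnosis, r.intervention), []).append(r.outcome_score)
    guidelines = []
    for (dx, tx), scores in groups.items():
        if len(scores) >= min_n:
            guidelines.append(Guideline(dx, tx, sum(scores) / len(scores) >= threshold))
    return guidelines

def recommend(guidelines, diagnosis):
    """Decision-support lookup: return validated interventions for a diagnosis."""
    return [g.intervention for g in guidelines if g.diagnosis == diagnosis and g.validated]
```

New care records produced while following `recommend`'s output would be de-identified and fed back into `derive_guidelines`, closing the feedback loop the last bullet describes.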