The application of future technologies to medical informatics
David S. Greenberg

Although this article will discuss the wonderful information technologies that await the health care field, the important first steps are to define our mission and processes. A few premises:
* Health care is an industry similar to others, whose business rules and processes can be well defined.
* The purpose of applying technology to health care is to achieve dramatic improvements in measures of performance, such as cost, quality, service, and speed, through improvements in data acquisition, organization, and access.
* No application of technology will make an unsuccessful organization successful; the most successful implementations will be modeled after business processes that are already successful in the organization.
* Cost-effective system design mandates use of off-the-shelf, open architectures for health care information systems. Medical informatics does not require the application of unique or proprietary technologies.
* In time, technology will allow for new business processes that will permit quantum leaps in knowledge, efficiency, etc. Current system designs must be as flexible as possible to anticipate and accommodate these new technologies.
The health care industry, while using world-class technology for its diagnostic systems, is remarkably slow to adopt standards and implement systems for its information processes. If this were the transportation industry, we'd still be driving Model Ts, unsure about which side of the road to drive on.
One possible reason for this rather slow adoption of technology and automation is the perception that the rules and processes of health care are too numerous or too mystically complex to succumb to algorithmic reduction. This is true only if you are trying to replace the critical human reasoning function performed by the physician, and in fact many medical informatics initiatives bog down trying to do just that. With the myriad computer systems required to fly a Boeing 747, no one suggests taking off without a pilot on board. The computer systems in that aircraft concentrate on automated monitoring systems, with information displays that support intelligent pilot decisions and automated control functions to carry out the repetitive functions resulting from pilot decisions. This multiplies the effectiveness of the pilot, allowing concentration on tasks that require human skills. Similarly, our medical informatics systems should emphasize multiplication of the physician's effectiveness, and not modeling of the neural network inside his or her brain.
Our mission is not merely to automate existing paper systems to reduce clerical staff costs, but also to achieve a much nobler goal of improving quality of care. The elusive measure of quality means different things to different people. To the medical academician, quality is measured in terms of outcomes--how many of our patients recovered from (or avoided) disease X as compared to some yet-to-be-derived national standard? There is a whole class of development efforts dedicated to measuring and improving performance in this area. One outcome may be the emergence of the OSHA doctor--"This is your computer speaking. Don't order that test. It won't tell you anything. Have you considered diagnosis Y? I would if I were you." Some refer to these as "mother-in-law" systems or "doc-on-a-disk."
To patient-consumers, given a reasonable assumption of proficiency on the part of their providers, quality can be a matter of how long they are put on hold when they call to make appointments, how long they have to sit in waiting areas, whether nurses know that they were in last week for back pain, whether their lab results are ready when they call, and whether the nurses answering their calls can get immediate access to those results. A whole host of indicators exist that can be classified as "health care delivery quality."
A reasonable approach for the application of technology to medical informatics, and the approach we've adopted locally, is to build a suite of applications that do what is possible today, namely a patient-centered online clinical information system, but with the "hooks" to support the guidelines-based outcomes management tools of tomorrow. By collecting the information we currently use and codifying it in a reasonable database schema, we will have at our disposal the base data from which and upon which more sophisticated clinical management tools can operate. In the meantime, we attempt to apply the best of today's technology to systems that increase our delivery quality, provide a usable clinical record, assist in the quality assurance/improvement function, and allow us to get a better handle on utilization patterns.
The nature of the clinical data in the current medical record is not particularly complex (unless you're trying to decipher handwritten hieroglyphics). After all, the record is just a collection of data elements organized in some fashion. The typical chart is a study in "nonnormalized," unstructured data. By applying the skills of the data analyst, we really should not have too difficult a time with it. It is really quite amazing that no standards body has yet published a definitive, universally accepted data dictionary for the clinical record, although there are several versions in draft form. Most organizations building a clinical database today still have to "roll their own."
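To make the "roll your own" point concrete, here is a minimal sketch of what a normalized clinical-record schema might look like. The table and column names are illustrative assumptions, not a published standard, and the example uses SQLite through Python purely to keep the sketch self-contained:

```python
import sqlite3

# Illustrative only: a minimal normalized schema for core clinical data.
# Table and column names are assumptions, not a published data dictionary.
SCHEMA = """
CREATE TABLE patient (
    patient_id   INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    birth_date   TEXT,
    home_clinic  TEXT        -- pointer to the clinic holding the full record
);

CREATE TABLE problem (
    problem_id   INTEGER PRIMARY KEY,
    patient_id   INTEGER NOT NULL REFERENCES patient(patient_id),
    description  TEXT NOT NULL,
    is_chronic   INTEGER NOT NULL DEFAULT 0,
    onset_date   TEXT
);

CREATE TABLE lab_result (
    result_id    INTEGER PRIMARY KEY,
    patient_id   INTEGER NOT NULL REFERENCES patient(patient_id),
    test_name    TEXT NOT NULL,
    value        REAL,
    units        TEXT,
    drawn_at     TEXT
);
"""

conn = sqlite3.connect("clinic_a.db")
conn.executescript(SCHEMA)
conn.commit()
```

The point is not the particular columns but the normalization: each chart entry becomes a typed row tied to a patient, rather than free text in a manila folder.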
A key problem is that data in the record come from multiple sources--the lab, the radiologist, the hospital, specialists, etc. Rather than wait for the "big bang" that will magically put all of this in a common, accessible electronic format, an evolutionary approach makes more sense. That is, capture the information we can through the order entry, encounter data entry, and results reporting processes from the systems we control and to which we have access. Deposit these data into a patient-centered repository; provide good front-ends to access the data; and, little by little, we begin to have the key pieces of the record.
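Under that evolutionary approach, each feed we control simply deposits its records into the patient-centered repository as they arrive. Continuing the illustrative schema above (the function name and sample values are assumptions made up for this sketch):

```python
import sqlite3

conn = sqlite3.connect("clinic_a.db")  # the repository created in the sketch above

def deposit_lab_result(conn, patient_id, test_name, value, units, drawn_at):
    # One results-reporting record goes into the patient-centered repository.
    conn.execute(
        "INSERT INTO lab_result (patient_id, test_name, value, units, drawn_at) "
        "VALUES (?, ?, ?, ?, ?)",
        (patient_id, test_name, value, units, drawn_at),
    )
    conn.commit()

# Example: the interface to the lab system deposits a potassium result.
# The patient number and values are placeholders, not real data.
deposit_lab_result(conn, 1001, "Potassium", 4.1, "mmol/L", "1994-03-15T08:30")
```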
Some providers would say that, without the entire record online, a system does not deliver adequate value. However, access to summary record data that include "SOAP" notes, lab results, chronic and acute problems lists, medications, sensitivities, and vital signs can, in many cases, preclude having to send for the full chart. In addition, even this partial record provides the benefits of immediate, multiple, and remote access. This can't be any worse than seeing the patient with a few faxed sheets selected by a medical records technician. New system initiatives should be measured for improvement over existing processes and not necessarily against an ideal that may not happen in our lifetimes. Too many medical informatics systems never get started because of this preoccupation with the ideal.
What technologies should underlie our new clinical systems? To begin with, no new technology stands alone. Today's technologies (and tomorrow's) come attached to architectures. When you decide to build a new health center, the architect is your first stop. It is he who determines the outward appearance, shape, structure, and materials of the new building. If the architectural style is Mediterranean, an asphalt shingle roof probably won't fit. There is a similar process happening with the new technologies of informatics. The architecture you choose will determine to a large extent how easily you can incorporate new technologies and which ones offer an appropriate fit.
You may have heard the rumor that the mainframe is dead. It's true--at least for most developers of future applications--and the same goes for minicomputers, because they are just smaller mainframes. The present and the future belong to multiple, nonproprietary, small computers working together in high-speed networks. We are on the brink of a major transition in the computer industry that has a fine parallel in the transportation industry.
I recently traveled from Orlando to Atlanta to attend a conference. I flew there on a Boeing 767 operated by Delta Airlines. I wondered why it wasn't operated by Burlington Northern or Union Pacific. The railroads at one time owned the transportation business in this country, so why are they completely absent from the airline industry? The fact of the matter is that they "missed the boat." They looked at those crazy flying machines and said, "Nobody will ever carry real freight or real passengers with those things!" So they played the ostrich and ignored them, or, worse, warned their customers about how dangerous they were for their precious freight. When was the last time you rode a train? What percentage of the packages you receive come by train? Most of mine come by overnight air express.
A similar thing has happened in the computer industry. The railroad company counterparts, the makers of "big iron" mainframes and the narrow-gauge minicomputers simply didn't believe anybody would ever do any useful computing with PCs. They warned us that our data wouldn't be safe anywhere but in their "railroad cars." And while they were napping, a whole new computing industry overtook them. The new industry is faster, more sophisticated, and infinitely more flexible and adaptable than the mainframes. The mainframes and minis can only go where the track has been laid for them, but the PCs can fly in three dimensions.
This is where the selection of new systems and vendors turns critical. The railroad barons are back trying to convince us they've seen the light. I, for one, doubt it. There simply aren't any Burlington Northern airplanes. They may wow you, the user, with a graphical front-end interface, but behind it is still that single central data cruncher. They'll try to hook that sleek F-16 on your desktop to a railroad car, and it just won't fly. Just as the airline industry brought about a whole new crop of enterprises that grew up on air travel, so it is with PC-based systems. Look to a company that has no commitment to the legacy architecture. Select an architect committed to the airline business--client/server, distributed computing. That's the unavoidable truth about paradigms--shift happens!
The alternative to a centralized architecture, with a single mini/mainframe at the core, is a fully distributed architecture, where smaller computers are put closer to the actual source and usage points of the data. For example, in a typical multi-clinic HMO, perhaps 90 percent of the clinical data relating to a patient are generated and used at the clinic the patient typically visits. Storing these data anywhere but at the actual clinic means that their accessibility is subject to all sorts of failure conditions external to the clinic itself. It also limits access speeds to the much slower data rates of interfacility telecommunications circuits. Furthermore, failure of the central computer denies data to all clinics. Failure of a clinic-based computer denies access only to the affected clinic, while all other clinics continue to operate normally. This distributed data architecture represents a significant shift in system design that is only now being implemented in a few, very progressive organizations. It is also a key characteristic of client/server systems, an architecture in which a smart user workstation (a PC) pulls data from larger PCs, called "servers," that house the databases at each location.
Distributed databases require development of new methods for ensuring that core data are replicated and synchronized throughout the multiclinic system, or between offices and hospitals. The core data generally consist of a patient demographics record and a pointer to where the rest of the patient's data reside. Thus, a patient who normally visits Clinic A will have a listing at Clinics B and C indicating that Clinic A is the repository for that patient's clinical data. The user (client) workstation at Clinic B or C pulls core data from the local server and extended clinical data from the server at Clinic A. Applications to manipulate data (e.g., scheduling systems, graphing of lab values) reside on users' workstations and are indifferent to the fact that data flow in from various sources.
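In code, the pointer mechanism might look something like the sketch below. The clinic-to-server directory and the fetch_clinical_data helper are hypothetical, invented for illustration; the point is that the workstation reads core data from its local replica and follows the home-clinic pointer for everything else:

```python
# Hypothetical directory of clinic servers; the addresses are placeholders.
CLINIC_SERVERS = {
    "A": "server-a.example.org",
    "B": "server-b.example.org",
    "C": "server-c.example.org",
}

def open_patient_record(local_conn, patient_id, fetch_clinical_data):
    # Core data come from the local replica, whichever clinic the user is in.
    row = local_conn.execute(
        "SELECT name, birth_date, home_clinic FROM patient WHERE patient_id = ?",
        (patient_id,),
    ).fetchone()
    if row is None:
        raise KeyError(f"Unknown patient {patient_id}")
    name, birth_date, home_clinic = row
    # Extended clinical data come from the patient's home-clinic server;
    # fetch_clinical_data stands in for whatever network call does this.
    clinical = fetch_clinical_data(CLINIC_SERVERS[home_clinic], patient_id)
    return {"name": name, "birth_date": birth_date, "clinical": clinical}
```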
Standards are emerging that will enable this distributed architecture to extend beyond local networks, such that it will be possible for this same user workstation to pull data from distant sources. Rather than creating a computerized medical record that resides in the electronic equivalent of a manila folder stored in the back room file server, we can create the "virtual" medical record, which exists only as a query that returns data from the database servers at the clinic, hospital, and lab simultaneously.
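A "virtual" record of this kind is, at bottom, a fan-out query. Here is a minimal sketch, assuming each source exposes some callable query interface; the source names and callables are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def virtual_record(patient_id, sources):
    # `sources` maps a source name ("clinic", "hospital", "lab") to a
    # callable that queries that source's database server.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(query, patient_id)
                   for name, query in sources.items()}
        # The record never exists in one place; it is only the merged
        # answers to these simultaneous queries.
        return {name: future.result() for name, future in futures.items()}
```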
It pretty much goes without saying these days that a graphical user interface (GUI, pronounced GOO-EY) is the essential user interface for new informatics initiatives. The benefits over a character user interface (CHUI, pronounced CHEW-EY) are indisputable--ease of learning, more intuitive presentation of data, common commands and appearance across all applications, and on and on. GUI today means Microsoft Windows, because the battle for the desktop is over, and they won. While the Macintosh line of Apple computers pioneered the friendly GUI and is still being marketed actively, our bet is that the PC and Windows will prevail in new large business applications. Our organization won't even look at a CHUI application anymore.
There are still a few "railroaders" out there saying you don't need GUIs. Just ignore them. We've rolled out patient demographics, scheduling, and encounter processing applications built in Windows to over 300 receptionists, nurses, and medical records staff, and they love it. A recent survey of new information systems initiatives found that 80 percent of failed implementations were due to a lack of user friendliness in the new system. While Windows doesn't guarantee a friendly application, it definitely contributes to user acceptance.
With local area networks (LANs) in each clinic linking each workstation to distributed database servers, and with the clinics all tied together with high-speed internetworks, a good part of the puzzle is assembled. Now we can think about some of the newer technologies that might fit in well in this architecture.
Access to clinical data in the examination room presents an interesting challenge. There's the hospital's bedside terminal approach, but I'm not crazy about having to contend with Johnny playing with the Windows PC while he waits for the pediatrician. Additionally, concerns have been raised about the intrusiveness of workstations in each exam room. Some form of wireless, networked "pen computer" will probably win the day. They won't be 5- to 7-pound machines, but more likely one-pounders currently called Personal Digital Assistants (PDAs). The problem here is not so much one of the technology, which is finally becoming available in a workable solution. The problem is more one of interface design. We've looked at half a dozen implementations of a physician's penpad, and most suffer from "pick-list-itis," in which finding the data of interest requires a heavy dose of selection from endless sequential lists of choices. An alternative is handwriting recognition, but current wisdom indicates that it is too slow and inaccurate to be applied in a production environment. It may well be that the PDA will be used for quick lookups, order entry, and limited clinical data entry, with continued reliance on clerical data entry and dictation for other parts of the clinical record. An extension of the exam room PDA is the cellular-connected PDA, which would allow a physician to access the clinical record while on call outside the clinic.
The ability to embed a voice or image object inside a document or database is another technology with near-term applicability. Note that neither voice nor image is a legitimate data element, in that neither can be sorted or queried in a truly elemental form. However, until the dictation is transcribed, the voice object can be easily accessed, and the availability of scanned images within the database serves as an interim step until each data source is available in true electronic form. Storing voice as an object is a more realistic mid-term approach than holding out for the Holy Grail of "voice recognition"--where the computer turns voice into typed text.
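One way to reconcile "not a data element" with "easily accessed" is to store the object as an opaque blob while keeping its descriptive metadata as ordinary, queryable columns. A minimal sketch in the spirit of the earlier schema; the table and column names are again assumptions:

```python
import sqlite3

conn = sqlite3.connect("clinic_a.db")
# The payload itself is opaque to the database, but the metadata around it
# (patient, kind, date) remain sortable, queryable data elements.
conn.execute("""
CREATE TABLE IF NOT EXISTS attachment (
    attachment_id INTEGER PRIMARY KEY,
    patient_id    INTEGER NOT NULL,
    kind          TEXT NOT NULL,      -- 'voice' or 'image'
    recorded_at   TEXT,
    payload       BLOB NOT NULL       -- raw dictation audio or scanned page
)""")

def attach_dictation(conn, patient_id, recorded_at, audio_bytes):
    # File the untranscribed dictation so it can be played back on demand.
    conn.execute(
        "INSERT INTO attachment (patient_id, kind, recorded_at, payload) "
        "VALUES (?, 'voice', ?, ?)",
        (patient_id, recorded_at, audio_bytes),
    )
    conn.commit()
```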
New technologies follow a well-defined time progression, from early concepts, to market acceptance, and, eventually, to obsolescence. For some technologies, the gestation period can be quite long, with development of a mass market always just around the corner. Voice recognition is a good example of such a technology. Others arrive and take off with a bang, such as the Intel 486 processor. For every new technology that has not yet hit its stride, there are limited-market products available to those willing to take the risk and blaze the trail. However, these early products tend to be incredibly expensive, proprietary, and more likely to become obsolete as soon as the real market for the technology takes off. In addition to the obvious cost implications of jumping into a technology too early, there is the additional problem of limited support for development. Whether it's a new wireless network product or new software development language, if you are early you will have to solve all the problems yourself.
There is nothing inherent in the needs of medical informatics that demands technologies that are not yet ready for prime time. The development and deployment of workable clinical information systems are more a matter of engineering than science and can thus be accomplished by reduction to practice of state-of-the-art technologies that have reached the stage of moderate market acceptance. Unfortunately, finding this "sweet spot" in the technology time line is a bit like playing the stock market--invest too soon, and you may pick a loser; invest too late, and it may be obsolete.
In order for computers to become a ubiquitous part of health care, we need to spend less time telling them how to do what we want them to do, and more time telling them what we want them to do. Fortunately, we're moving in this direction. In this regard, every large health care organization needs to develop internal resources to build or customize its own applications. The more you build yourself, the less you will have to mold your processes to a vendor's preconceived notions of how you ought to work.
A funny thing happens to software vendors. In order to convince you that they have the right product for you, they have to come in as the experts. They forfeit the ability to sit down with you and say, "Gee, I really don't understand what you want to do here. Please explain it to me." To do so would be a sign that they lack the knowledge and experience you expect from your vendor. Granted, there are certain business processes that require standard definitions across all organizations, but the local needs and customs must be acknowledged for software to gain acceptance and offer value.
Your own development staff can ask the "ignorant" questions, get the right answers, and produce the functionality you want. Programming technologies have improved to the point where even smaller organizations can produce in-house applications in reasonable time frames. In-house development with the involvement of your clinical staff reduces the likelihood of failure for new initiatives. Instead of suffering in silence under a system forced on them by management, users should be given an opportunity for "ownership" of the new systems.
For some of you considering new system implementation, the natural choice may be a known vendor. Be sure to demand current architecture, user-friendly interfaces, and the ability to fine-tune your system to meet your peculiar needs. Another approach is to consider developing new medical informatics systems with a solid partnering of physicians and technogurus. Find yourself a technoguru who will learn the nature of your business and who has the right architectural bent. Build your system piece by piece, because the "big bang" is perpetually elusive. Rely on your technoguru to pick the hundreds of embedded technology pieces that make the system work. Involve all elements of your staff in the design process. But, most important, get started!