2006-03-31

XML data models are viable alternatives to relational data models in healthcare systems

I have received many comments on my recent Data Models in Healthcare series of articles, and all of them have been pretty good. One of the more detailed and thoughtful responses came from Daniel Essin, a physician who is the Director of Medical Informatics at Los Angeles County Hospital and the CTO of ChartWare. He wrote about using XML data models for clinical data management, and in general I agree that XML is a viable persistence model for the use case he describes. Here’s what Dan said:

I read your article. It’s hard to object to your general conclusion, although when it comes to the core of healthcare data, the electronic medical record, I have found that people sometimes spend too much time modeling and not enough time following your advice to understand the data and how it will be used.

The most striking characteristic of clinical documentation is that it tends to be composed of nested sections and items. It is extremely common to see data models that attempt to address the content of these documents (e.g., discrete fields for blood pressure, units of measure, etc.) while overlooking the natural structure. Indeed, at least one commercial system can accept and display almost any discrete data element you can imagine, but cannot easily present the data originating from a medical encounter as a single, readable document. Alternatively, the structure itself could form the basis of a relational model, but doing so would require a deep understanding of basic design patterns (such as those in the Design Patterns book by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides). Very few of the people I have met who work in healthcare IT are aware of this material.

After studying this problem for years, I concluded that the information that makes up the medical record is better represented as a simple XML structure based on nested sections than by a traditional relational model. I described the fundamentals of this approach in several papers in the early 1990s and have had an EMR system based on this design in the field for 10+ years in a variety of different settings.
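(A quick aside from me, not Dan: to make the nested-sections idea concrete, here is a minimal sketch of what such a document might look like, built with Python’s standard xml.etree.ElementTree module. The element and attribute names (encounter, section, item) are my own illustration, not the schema of Dan’s system.)

```python
# Illustrative only: a clinical note modeled as nested sections and items.
# All element and attribute names here are assumptions for the example.
import xml.etree.ElementTree as ET

note = ET.Element("encounter", date="2006-03-31", patient="12345")

vitals = ET.SubElement(note, "section", name="Vital Signs")
bp = ET.SubElement(vitals, "item", name="Blood Pressure", units="mmHg")
bp.text = "120/80"

# Sections nest to arbitrary depth, mirroring how clinicians actually
# organize a note (exam -> HEENT -> finding).
exam = ET.SubElement(note, "section", name="Physical Exam")
heent = ET.SubElement(exam, "section", name="HEENT")
finding = ET.SubElement(heent, "item", name="Finding")
finding.text = "Tympanic membranes clear bilaterally."

# The entire encounter serializes as one readable document.
print(ET.tostring(note, encoding="unicode"))
```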

In medicine, change is ever-present. It comes from outside in the form of new medical knowledge, regulations, treatments, etc. It also comes from inside. As the users gain knowledge and experience, they use existing applications differently and develop different and more demanding expectations. It has been my experience that making major changes to a system based on a relational model, once it has been deployed and used for some time, is a costly proposition, as significant changes in the model may require that the software be modified as well.

Using an XML approach based on structure, we can accommodate ANY content from ANY specialty, and it is all interoperable and viewable as a single set of coherent documents. Most importantly, significant changes can be made to the content that is placed into the pre-defined structure without breaking the most important function: the entire medical record remains viewable and queryable without having to rewrite any applications. Once one becomes comfortable with this level of simplification, it is then trivial (if necessary) to mirror portions of the content in traditional relational tables to facilitate various daily activities and workflow. The latest XML capability in SQL Server, Oracle, and DB2 makes the two-way transition from document to database even easier.
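(Another aside from me: here is a hypothetical sketch of that mirroring step, shredding items out of a document into a flat table with Python’s built-in sqlite3 module. The document schema and the table layout are assumptions for illustration, not how any particular product does it.)

```python
# A hypothetical sketch: mirror selected content from an XML clinical
# document into a relational table for day-to-day queries and workflow.
import sqlite3
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<encounter date="2006-03-31" patient="12345">
  <section name="Vital Signs">
    <item name="Blood Pressure" units="mmHg">120/80</item>
    <item name="Pulse" units="bpm">72</item>
  </section>
</encounter>
""")

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE vitals
               (patient TEXT, date TEXT, name TEXT, value TEXT, units TEXT)""")

# Walk every item in the document, regardless of how deeply its section
# is nested, and project it into the flat table.
for item in doc.iter("item"):
    con.execute("INSERT INTO vitals VALUES (?, ?, ?, ?, ?)",
                (doc.get("patient"), doc.get("date"),
                 item.get("name"), item.text, item.get("units")))

for row in con.execute("SELECT name, value, units FROM vitals"):
    print(row)
```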

The XML approach I pioneered has been taken up by several standards groups and forms the basis for HL7’s Clinical Document Architecture standard and the Continuity of Care Record initiative.

There are healthcare people who model (the HL7 organization has developed an extensive OO model), but all too often, people who build healthcare software don’t pay any attention to the prior art, whether it be research or applied work of the type produced by the HL7 group. There is a tendency to use NIH (not invented here) thinking as a reason to ignore the fundamentals of computer science and medical informatics.

If you are interested, you can find my stuff in Methods of Information in Medicine from 1991 and in several of the IEEE New Security Paradigms workshops from the mid-1990s.

Like Dan, I, too, have used XML models for data storage for many years. XML is a great persistence model, and I do recommend it when the need is as specific as Dan describes above. Where you should be careful: XML is not a natural model for aggregation, consolidation, and analytics across various data sources, which is sometimes a pretty big requirement in modern health IT apps. If you do keep your data in XML format from an application’s perspective, be prepared to still do some ETL (extract, transform, load) and EII (enterprise information integration) work to get it into a more “analytics-friendly” format. There are also some modern native XML databases (both open source and commercial) that support querying and aggregation, but they are neither popular nor ubiquitous, which means tooling support is limited.
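To show what I mean, here is a small sketch of that kind of ETL pass: pulling a single measurement out of a pile of per-encounter XML documents into flat rows that an analytics tool can consume. The element names are illustrative assumptions; the point is that an aggregate that would be one GROUP BY against a relational store means walking every document when the data lives in XML.

```python
# A sketch of a simple ETL pass over per-encounter XML documents.
# Element and attribute names are illustrative assumptions.
import xml.etree.ElementTree as ET

documents = [
    '<encounter patient="1"><section name="Vitals">'
    '<item name="Pulse" units="bpm">72</item></section></encounter>',
    '<encounter patient="2"><section name="Vitals">'
    '<item name="Pulse" units="bpm">88</item></section></encounter>',
]

pulses = []
for xml_text in documents:
    doc = ET.fromstring(xml_text)
    # XPath-style search for the item we care about, wherever it is nested.
    for item in doc.findall('.//item[@name="Pulse"]'):
        pulses.append((doc.get("patient"), int(item.text)))

print(pulses)                                    # flat rows, ready to load
print(sum(v for _, v in pulses) / len(pulses))   # e.g., average pulse
```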

I wanted to thank Dan and the others who have helped keep this topic of data modeling in healthcare alive; it’s very important. If you are a data modeler and would like to share your thoughts in public, I invite you to publish a guest article on my blog or elsewhere.

2006-03-26

Healthcare Services Specification Project

A reader (thanks, Ed) just sent me this link to the Healthcare Services Specification Project, which I thought might interest many of you. They describe the project as:

This project is a collaborative effort between Health Level Seven and the Object Management Group to identify and document service specifications, functionality, and conformance supportive and relevant to healthcare IT stakeholders and resulting in real-world implementations. In addition, several other groups have joined the HSSP effort. The Eclipse Foundation Open Healthcare Framework and the Medical Banking Project have each committed support to this project and are participating.

HSSP has some nice subprojects (like a terminology project, decision support, entity ID, etc.), but it looks like a very early effort at this time, with very few practically useful downloads. We should all keep an eye on them and even join in to help (if you can). I certainly plan to.

2006-03-24

Introduction to Dynamic Data Models in Healthcare

Health-IT World.com recently published my Introduction to Dynamic Data Models in Healthcare article. It’s a follow-on to my Healthcare Data Models Matter column from a couple of weeks ago.

2006-03-19

Very nice auto-recognition and auto-login system for healthcare workstations

I met the founders of SensibleVision, creators of the Fast Access computer security and access control software, at HIMSS a few weeks ago. I was fascinated because it’s one of those few security applications that you can understand in under a minute. Fast Access is a program that installs onto Windows workstations and allows users to simply sit in front of their computers and be automatically logged in using facial recognition. It’s a nifty program that uses your webcam to “see” and recognize you. It automatically logs you in when you sit down, and logs you out automatically when you walk away. It doesn’t get any simpler. The folks at SensibleVision sent me a webcam (nothing special about it, just an off-the-shelf $25 camera) and their software, and we tried it out in my office.

Installation was very simple for both Fast Access and the camera. I just popped in the CD, followed the on-screen instructions, and rebooted. I was ready to go in minutes. When the computer rebooted, I was prompted to log in normally so that the camera could get “used to” my face. When I walked away after that, I was automatically logged out. It did take the computer a few logins to fully recognize my face, but the software, in its simplicity and effectiveness, is pretty impressive. Because it doesn’t require a centralized system to manage and works directly off of normal Windows authentication (including Active Directory), it’s fairly easy to start with a few workstations to give it a test drive.

For healthcare workers who have to log into and out of systems holding valuable medical information, this is a very useful application, because the various workstations they use can recognize them automatically by face. And in case the facial recognition ever fails, users can simply fall back on their passwords. It lets them log in as soon as they come in front of the computer, and they don’t have to remember to log back out, because once they leave the workstation Fast Access will log them out.

Login time was fairly quick: after logging out, it took about 3-5 seconds for the camera to recognize me and log me back in.

The only issues I found: at boot time it can take as long as 3-5 minutes for Fast Access to start up. This could have been a problem with my workstation, though I have a pretty beefy system. Since it only happens at boot, it’s not a big problem for me. Also, right after booting, the camera could not recognize my face very well, and after I had been logged out for an hour or more the program took a while to respond. These are probably glitches in the software that will improve over time.

The only process problem I see is that once the system works well for users, they might actually forget their passwords for the systems that don’t have Fast Access. I’d have to do a little process analysis to see how big a problem that might be.

But, all in all, it’s worth checking out.

2006-03-16

Conference on Intellectual Property in the Global Marketplace

I’m a patent holder and I train patent examiners on technology topics, so I often see and sometimes work with the folks at the U.S. Patent and Trademark Office (USPTO). If all you know about the USPTO is what you read in the newspapers, you should attend some of its events and meet the people who work there. I have found most of the staff I’ve encountered to be courteous, hard-working, caring, and genuinely trying to do the right thing. They have a pretty tough job, though, so it’s often hard to know what’s right or wrong (I think they do pretty well).

If you think the patent process is broken, or would just like to learn more about it, here’s an email announcement I received this morning that you may find useful:

The U.S. Patent and Trademark Office (USPTO) is holding a two-day conference to address the intellectual property needs of small and medium sized businesses, entrepreneurs, and independent inventors interested in manufacturing or selling their products abroad.

March 27: Presentations to help conference attendees identify intellectual property assets and discuss the steps needed to protect those assets in the United States and abroad. Major presentations will cover patents, trademarks, copyright, and trade secrets.

March 28: Presentations focusing on enforcement issues that may arise in protecting intellectual property rights in the United States and abroad, including: patent, trademark, and copyright infringement; unfair competition; counterfeiting; and piracy.

This conference will also include one-on-one consultations between the USPTO attorneys and conference attendees on Monday and Tuesday afternoons.

This program is part of the Federal Government’s Strategy Targeting Organized Piracy (STOP) and the USPTO’s continuing commitment to increase public awareness of intellectual property rights and the enforcement of those rights in the global marketplace.

There is no charge to attend this event, but seating is limited and registration is required.

2006-03-16

Interested in doing some freelance writing for a Health IT publication?

An editor from a large health IT publication with both a print and online presence has asked me if I know any folks looking to do any freelance writing on “practical” health IT topics. If you’re interested, drop me a note.
