2005-12-25 00:00:00

Federal Chief Architects Forum establishes Health Information Technology Ontology Project (HITOP) Work Group

From the HITOP mission statement introduction:

The Health Information Technology Ontology Project (HITOP) Work Group is a federal group that will make recommendations for systematically improving healthcare while reducing healthcare cost. Achievement of semantic interoperability through the use of ontology software in high priority health IT projects will both save money and improve the quality of care.

From the local level, to the Regional Health Information Organization level, to the level of states and provinces, to the national and international levels, the importance of developing and/or adopting healthcare standards for electronic digital healthcare information systems cannot be overestimated.

They go on to state the challenges:

The challenge is to provide interoperability between disparate standards, languages and practices as well as disparate database and operating system platforms. Adoption of universal standards for terminologies, descriptions of illnesses, and the vast array of pharmaceuticals, medical supplies and equipment is improbable. Therefore, the ability to systematically map existing and developing standards to each other for the purposes of providing for practical interoperability in a necessarily heterogeneous information environment is of paramount importance.

Therefore, the purpose of HITOP is to provide for semantic interoperability in practical terms by employing Semantic Web technologies of the W3C such as the Web Ontology Language and the Resource Description Framework; and by developing mappings through such means as a Common Upper Ontology; and by validating and adopting standard mappings of the various domain ontologies and taxonomies of the healthcare informatics field.
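The "map standards to each other" idea can be made concrete with a toy sketch. The Python below is purely illustrative: the code systems ("SYS_A", "SYS_B") and their codes are made up, and the shared concept URN stands in for the kind of common upper ontology concept that HITOP proposes as the pivot between domain vocabularies.

```python
# Illustrative sketch only: hypothetical code systems and codes, mapped to a
# shared concept identifier that plays the role of a common upper ontology.
to_concept = {
    ("SYS_A", "A-42"): "urn:concept:myocardial-infarction",
    ("SYS_B", "B-107"): "urn:concept:myocardial-infarction",
}

def translate(code, source, target):
    """Translate a code from one system to another via the shared concept."""
    concept = to_concept.get((source, code))
    if concept is None:
        return None  # code unknown in the source system
    for (system, local_code), c in to_concept.items():
        if system == target and c == concept:
            return local_code
    return None  # no equivalent registered in the target system
```

The point of routing through the shared concept, rather than mapping every pair of systems directly, is that N vocabularies need only N mappings to the common ontology instead of N×(N−1) pairwise mappings.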

I’m not sure exactly what will come out of the effort, but it’s worth tracking nevertheless. Most people outside of health IT believe that networks or data sharing are the real obstacles to communicating information between entities. That’s wrong: the real problem is sharing the meaning of data through common lexicons and dictionaries. If this effort can help build an ontology that we can all agree upon, it can only be a good thing.

Filed under: — @ 2005-12-25 00:00:00
2005-12-24 00:00:00

Unstructured Information Management Architecture SDK for Healthcare IT applications

IBM’s alphaWorks has announced an update to their Unstructured Information Management Architecture SDK. It’s a fascinating tool for use in our industry, though it was not designed specifically for healthcare applications. As we all know, the majority of healthcare information is unstructured in nature, but we try to force it into a particular structure because, as IT people, we like to reduce things to a relational data model where possible. We do this because there are lots of tools and technologies available to store, query, format, report, convert, and maintain relational data. Of course the InterSystems (Caché) folks will always argue that their database has long offered “dynamically structured” data, but that’s not quite the same thing as what I’ll be talking about here.

The UIMA SDK puts a new spin on information management:

Unstructured information management (UIM) applications are software systems that analyze unstructured information (text, audio, video, images, etc.) to discover, organize, and deliver relevant knowledge to the user. In analyzing unstructured information, UIM applications make use of a variety of analysis technologies, including statistical and rule-based Natural Language Processing (NLP), Information Retrieval (IR), machine learning, and ontologies. IBM’s UIMA is an architectural and software framework that supports creation, discovery, composition, and deployment of a broad range of analysis capabilities and the linking of them to structured information services, such as databases or search engines. The UIMA framework provides a run-time environment in which developers can plug in and run their UIMA component implementations, along with other independently-developed components, and with which they can build and deploy UIM applications. The framework is not specific to any IDE or platform.

How it works:

UIMA is an architecture in which basic building blocks called Analysis Engines (AEs) are composed in order to analyze a document. At the heart of AEs are the analysis algorithms that do all the work to analyze documents and record analysis results (for example, detecting person names). These algorithms are packaged within components that are called Annotators. AEs are the stackable containers for annotators and other analysis engines.

How Annotators represent and share their results is an important part of the UIMA architecture. To enable composition and reuse, UIMA defines a Common Analysis Structure (CAS) precisely for these purposes. The CAS is an object-based container that manages and stores typed objects having properties and values. Object types may be related to each other in a single-inheritance hierarchy. Annotators are given a CAS having the subject of analysis (the document), in addition to any previously created objects (from annotators earlier in the pipeline), and they add their own objects to the CAS. The CAS serves as a common data object, shared among the annotators that are assembled for an application.
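The annotator-pipeline-with-shared-CAS pattern described above is easy to sketch. Note that real UIMA is a Java framework; the Python below is a minimal illustration of the pattern, not the actual UIMA API, and the class names and the naive name-detection regex are my own inventions.

```python
# Illustrative sketch of the UIMA pattern: annotators share one CAS-like object.
# Not the real UIMA API; class names and regexes here are invented for the demo.
import re
from dataclasses import dataclass, field

@dataclass
class Annotation:
    type: str    # e.g. "Sentence", "PersonName"
    begin: int   # character offsets into the document
    end: int

@dataclass
class CAS:
    # Common Analysis Structure: the subject of analysis plus accumulated results
    document: str
    annotations: list = field(default_factory=list)

class SentenceAnnotator:
    def process(self, cas):
        # crude sentence splitter: any run of text ending in . ! or ?
        for m in re.finditer(r"[^.!?]+[.!?]", cas.document):
            cas.annotations.append(Annotation("Sentence", m.start(), m.end()))

class TitleNameAnnotator:
    def process(self, cas):
        # naive person-name detector: a title followed by a capitalized word
        for m in re.finditer(r"\b(?:Dr|Mr|Ms)\.\s+[A-Z]\w+", cas.document):
            cas.annotations.append(Annotation("PersonName", m.start(), m.end()))

def run_pipeline(text, annotators):
    # An aggregate engine: each annotator reads and extends the one shared CAS
    cas = CAS(text)
    for annotator in annotators:
        annotator.process(cas)
    return cas

cas = run_pipeline("Dr. Smith reviewed the chart. Follow-up is scheduled.",
                   [SentenceAnnotator(), TitleNameAnnotator()])
```

The design choice worth noticing is that annotators never call each other; they communicate only by adding typed objects to the shared CAS, which is what makes them independently developed and recomposable.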

If you build health IT applications, you owe it to yourself to look beyond structured information and relational databases. By using modern techniques such as natural language processing and text mining, you can build useful applications without forcing your users to click through a myriad of menus and nested combo boxes.

Filed under: — @ 2005-12-24 00:00:00
2005-12-23 00:00:00

Corporate blogging in Healthcare sector

John Cass recently interviewed me about my thoughts on corporate blogging in healthcare. Matthew Holt was also nice enough to pick up on it.

My basic suggestion in the interview was that all healthcare companies (but especially those in IT) should be blogging and should set up interactive forums to discuss their products with their customers. Healthcare is ultimately a “local business,” which makes it very difficult to create “one size fits all” solutions in our industry. Because we see and touch our customers (also known as patients) regularly, it’s even more important for us to be in constant communication with them. Blogging in general, and corporate blogging especially, is a great way to make that happen.

Filed under: — @ 2005-12-23 00:00:00
2005-12-23 00:00:00

File Compression for Remote Diagnosis

A big challenge these days in remote diagnosis technology is getting those big images sent from one place to another. World Changing reported recently about File Compression for Remote Diagnosis. Jamais Cascio said:

Researchers have known for a few years now that applying a mathematical transformation method known as “wavelets” to radiological images can improve the ability of doctors to detect cancer. But Bradley Lucier’s team of mathematicians at Purdue has taken the process to a new level — by using the wavelets method to compress mammogram images by 98%, not only can radiologists still detect cancer better than they can with unmodified images, but the mammograms also become small enough to send easily over the dial-up computer networks common in poorer parts of the world. The work will appear in the next edition of Radiology.

A single uncompressed mammogram can run up to 50 megabytes in size, and diagnosis typically requires four different images. The wavelet process cuts the file size down to approximately one megabyte per image, well within the capabilities of most dial-up or even cell-phone Internet connections. Although other researchers have demonstrated that the use of wavelets can improve radiological diagnosis, Lucier’s group managed to shrink the image files far more than ever before, using an algorithm Lucier himself created a decade earlier.
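The core trick — transform the image into wavelet coefficients, throw away the many near-zero detail coefficients, and reconstruct — can be sketched compactly. Lucier’s actual algorithm is far more sophisticated; this is just a minimal single-level 2D Haar transform, the simplest wavelet, implemented with NumPy for illustration.

```python
# Minimal sketch of wavelet-style compression: a single-level 2D Haar transform.
# This is a toy illustration of the principle, not Lucier's published algorithm.
import numpy as np

def haar2d(img):
    # rows: average and difference of adjacent row pairs
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    # columns: same split, yielding approximation (ll) + three detail subbands
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # exact inverse of haar2d: undo the column split, then the row split
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def compress(img, keep=0.02):
    """Zero all but the largest-magnitude `keep` fraction of detail coefficients."""
    ll, lh, hl, hh = haar2d(img)
    details = np.concatenate([x.ravel() for x in (lh, hl, hh)])
    cutoff = np.quantile(np.abs(details), 1 - keep)
    lh, hl, hh = (np.where(np.abs(x) >= cutoff, x, 0.0) for x in (lh, hl, hh))
    return ihaar2d(ll, lh, hl, hh)
```

Because typical image regions are smooth, most detail coefficients are tiny; discarding them stores only the approximation band plus a sparse set of details, which is where the dramatic file-size reduction comes from.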

Filed under: — @ 2005-12-23 00:00:00
2005-12-23 00:00:00

Software Demonstrations are like watching TV (no, that’s not a good thing)

Will Weider over at Candid CIO posted a great new article: ET and Software Demonstrations. I especially loved what he said about the “Can it…” part of software demonstrations:

I know I am trapped in a bad demonstration when a volley begins between participants and demonstrators. Each question begins with “Can it…” Of course each response is “Yes.” Or my favorite, “Yes, with customization,” which is vendorspeak for NO.

There are so many problems with this approach I don’t know where to begin. Actually, I do. The vendor is lying through his/her teeth until proven otherwise. Any simple question that someone asks can be interpreted in a way to elicit a positive response. Usually the questions are too poorly thought out to really capture the intent. Furthermore, if the vendor is not demonstrating, but just volleying back positive responses, is anyone really learning anything?

Priceless.

Filed under: — @ 2005-12-23 00:00:00
2005-12-23 00:00:00

Inside the Venture Capital process

Although this post is not exactly about healthcare IT, it is about enabling healthcare IT by raising money for your startup. I talk regularly to entrepreneurs who pitch ideas (which is one of my favorite parts of blogging), and many of them aren’t sure how to get their ideas funded. Having been through several fundraising rounds in my previous startups, I recommend not raising money from VCs unless you have absolutely no choice; use angel money, bootstrapping, etc. However, sometimes professionals will be the best choice if you need “smart money” from a good VC.

Here’s a good perspective of the VC process: The Post Money Value: Inside the process.

A great list of characteristics for you health IT startup dreamers was put together by Sam Decker:

  • Defendable and differentiated
  • Competitive cost structure
  • Attractive partnership opportunities
  • Repeat customers
  • Word of mouth opportunity
  • Memorable product and name
  • Potential for PR
  • Attractive to be bought or merged
  • Scaleable staff and systems
  • Scaleable product — build once, sell many times
  • Uncomplicated
  • Focus
  • Niche market or fragmented industry
  • High velocity and large market / industry
  • High perceived value
  • Product can be accessorized: revenue synergies
  • Healthy cash flow: margin x velocity
  • Demonstrable felt need, demand: does it hit a primal chord?
  • Business can be measured for improvement
  • Can claim leadership
  • Sales model is scaleable and predictable
  • Product evokes emotion
  • Can make big wins: big customers
  • Limited exposure to legal issues
  • Own relationship with and information about customers

Filed under: — @ 2005-12-23 00:00:00