i2b2 AUG 2013
Program
NLP Workshop
- UMLS Ontologies and Ontology Resources (Olivier Bodenreider)
- Ontology-based De-identification of Clinical Narratives (Finch and McMurry)
- Ontology-based Discovery of Disease Activity from the Clinical Record (Lin)
- Ontology Normalisation of the Clinical Narrative (Chen)
- Ontology Concept Selection (Yu)
- Active Learning for Ontology-based Phenotyping (Dligach)
- Conclusion
Academic User Group
- Genomic Cell (Shawn Murphy and Lori Philips)
- SMART Apps (Wattanasin)
- i2b2 Roadmap (Shawn Murphy)
- Planning for the future (Kohane)
- From Genetic Variants to i2b2 using NoSQL database (Matteo Gabetta - Pavia)
- Extending i2b2 with the R Statistical Platform
- Integrated Data Repository Toolkit (IDRT) and ETL Tools (Sebastian Mate - Erlangen; Christian Bauer - Goettingen)
i2b2 SHRINE Conference
- SHRINE Clinical Trials (CT) Functionality and Roadmap (Shawn Murphy)
- SHRINE National Pilot Lessons Learned
- SHRINE Ontology Panel
- University of California Research Exchange (UC ReX) (Doug Berman)
- Preparation for Patient-Centred Research (Ken Mandl)
- Case Study: Improve Care Now (Peter Margolis)
NLP Workshop
UMLS Ontologies and Ontology Resources
Presentation showing how UMLS resources can be used with NLP to extract information from free text.
NLP has two stages:
- Entity Recognition - Identifying important terms within text
- Relationship Extraction - linking entities together
Entity Recognition
Three major problems when identifying entities within a text:
- Entities are missed
- Entities are partially matched - part of the term is matched but another part is missed leading to incomplete information or context. For example, in the term 'bilateral vestibular' only the second word may be matched.
- Ambiguous terms - terms that may have two meanings.
Entities are identified by a combination of normalisation and longest term matching.
Normalisation is the process whereby a term is manipulated to produce a form of words that will match as many candidate terms as possible. The process involves removing noise words, standardising inflections and derivatives (e.g., removing plurals), removing punctuation, converting to lower case, and sorting the words into alphabetical order.
In order to extract the most meaning from the text, an attempt is made to match the term against the entry with the greatest number of matching words; for example, 'left atrium' rather than just 'atrium'.
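Below is a minimal sketch of the normalisation and longest-match strategy described above. The stop-word list, the crude plural stripping, and the two-entry dictionary are illustrative assumptions, not anything taken from the UMLS tools.

```python
# Minimal sketch of normalisation plus longest-term matching (illustrative only).
import re

STOP_WORDS = {"the", "of", "a", "an"}  # assumed noise words

def normalise(term):
    """Lower-case, strip punctuation, drop noise words, crudely remove plurals,
    and sort the remaining words alphabetically."""
    words = re.findall(r"[a-z]+", term.lower())
    words = [w.rstrip("s") for w in words if w not in STOP_WORDS]
    return " ".join(sorted(words))

# Toy dictionary keyed by normalised form (assumed data).
DICTIONARY = {
    normalise("left atrium"): "Left atrium",
    normalise("atrium"): "Atrium",
}

def longest_match(tokens, start):
    """Prefer the candidate spanning the most words, e.g. 'left atrium' over 'atrium'."""
    for end in range(len(tokens), start, -1):
        concept = DICTIONARY.get(normalise(" ".join(tokens[start:end])))
        if concept:
            return concept, end
    return None, start + 1

tokens = "dilatation of the left atrium".split()
i = 0
while i < len(tokens):
    concept, i = longest_match(tokens, i)
    if concept:
        print(concept)  # prints 'Left atrium' rather than just 'Atrium'
```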
Types of Resources useful for Entity Recognition
There are several types of resource:
- Lexical resources - lists of terms with variant spellings, derivatives and inflections, associated with the part of speech to which they refer. These can be either general or include specialist medical terms.
- Ontologies - set of entities with relationships between the entities.
- Terminology resources - sets of terms and identifiers used to map a term to an ontology.
- Hybrid - a mixture of lexical resources and ontologies. These are not strictly speaking ontologies, as the relationships may not always be true (e.g., a child may not always be a part of the parent). They are useful for finding terms, but should not be used for aggregation.
Lexical Resources
- UMLS Specialist Lexicon - Medical and general English
- WordNet - General English
- LVG (Lexical Variant Generation) - one of the SPECIALIST lexical tools
- BioLexicon - EU project. Not as general. Mainly focused on genes.
- BioThesaurus - Focused on proteins and genes.
- RxNorm - Drug specific.
Ontological Resources
Terminology Resources
- MetaThesaurus
  - Groups terms from many ontologies.
  - Produces a graph of all the relationships.
  - The graph is not acyclic and contains contradictions because it reproduces its source ontologies exactly.
  - Allows mapping between standards.
- RxNorm
  - Maps between many drug lists.
  - Maps between branded and generic drug names.
- MetaMap
  - Free with a licence agreement.
  - Based on the UMLS MetaThesaurus.
  - Parses text to find terms.
  - Used in IBM's Watson tool.
  - Terms can be translated between various standards, including Snomed.
  - Copes with term negation and disambiguation.
- TerMine
- WhatIzIt
Relationship Extraction
Orbit Project
The Orbit Project is the Online Registry of Biomedical Informatics Tools.
Ontology-based De-identification of Clinical Narratives
Presentation showing a method to remove Protected Health Information (PHI) from free text fields, using the Apache cTakes lexical annotation tool.
The normal method for attempting to de-identify free text is to train software to recognise personal information. However, the number of training examples available is usually quite small. This team attempted to reverse the task by training the software to recognise non-PHI data.
Pipeline:
- cTakes
- Frequency of term in medical journal articles.
- Match terms to ontologies. Diseases (etc) named after people can be a problem, but matching terms with more than one word implies that it is not a name. For example, 'Hodgkins Lymphoma' would not match 'Mr Hodgkins'
- Remove items from known PHI lists - presumably the person's name and address, etc.
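As a sketch of that 'reverse' idea, the snippet below redacts every token it cannot vouch for as non-PHI. The medical vocabulary, common-word list, and known-PHI list are toy stand-ins for the cTakes/ontology lookups, journal-frequency statistics, and known PHI lists mentioned above.

```python
# Toy sketch of the reverse approach: keep only tokens recognised as non-PHI.
MEDICAL_VOCAB = {"hodgkins", "lymphoma", "biopsy"}     # assumed ontology matches
COMMON_ENGLISH = {"mr", "presented", "with", "today"}  # assumed high-frequency words
KNOWN_PHI = {"hodgkins"}                               # e.g. the patient's surname

def deidentify(text):
    tokens = text.split()
    out = []
    for i, raw in enumerate(tokens):
        token = raw.strip(".,").lower()
        nxt = tokens[i + 1].strip(".,").lower() if i + 1 < len(tokens) else ""
        # A multi-word ontology term implies a disease rather than a name,
        # so 'Hodgkins Lymphoma' is kept even though 'Hodgkins' alone is PHI.
        in_multiword_term = token in MEDICAL_VOCAB and nxt in MEDICAL_VOCAB
        safe = in_multiword_term or (
            token in (MEDICAL_VOCAB | COMMON_ENGLISH) and token not in KNOWN_PHI
        )
        out.append(raw if safe else "[REDACTED]")
    return " ".join(out)

print(deidentify("Mr Hodgkins presented with Hodgkins Lymphoma today"))
# -> Mr [REDACTED] presented with Hodgkins Lymphoma today
```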
Ontology-based Discovery of Disease Activity from the Clinical Record
Presentation of a project using NLP to find evidence of disease activity and its temporal relationship to drug events, in order to identify patients as responders or non-responders for genetic analysis.
This talk put forward a method of using 3 data sets when training the software:
- Annotated training set
- Known set - a pre-annotated set that is used repeatedly to test the software, but not to train it.
- Unknown random set - a random sample drawn from a larger set that is used once for testing. The results of the test are manually assessed after the run.
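A rough sketch of that three-set scheme, assuming the documents are held in simple Python lists; the set names and sizes are illustrative, not taken from the presentation.

```python
import random

def split_corpus(annotated_docs, unannotated_pool, known_size=100, random_size=200):
    docs = list(annotated_docs)
    random.shuffle(docs)
    known_set = docs[:known_size]       # pre-annotated, re-tested after every training run
    training_set = docs[known_size:]    # used only to train the model
    # Drawn from the larger pool, scored once, then manually assessed afterwards.
    unknown_random_set = random.sample(unannotated_pool, random_size)
    return training_set, known_set, unknown_random_set
```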
Ontology Normalisation of the Clinical Narrative
Introduction to the Apache cTakes project: a set of tools for parsing text. One of these is a UMLS dictionary component, which can also be used with a custom dictionary.
It can be used to extract UMLS concepts (or other codes), but it also extracts who a finding refers to (for example, the patient), where it is located (for example, the knee), and negation.
Ontology Concept Selection
Presentation of a batch tool for selecting UMLS terms that match local terms. The consensus in the room appeared to be that the UMLS website had similar or better tools.
One topic that was discussed was when terms may safely be aggregated. Three cases were identified where terms may be safely aggregated:
- Terms are the same thing.
- Terms are closely related and do not need to be disambiguated in this specific case.
- True hierarchies, e.g., troponin C, I & T => troponin.
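As a small illustration of the 'true hierarchy' case, aggregation can be a simple roll-up from child concepts to a parent. The concept names below come from the example above; the counts are made up.

```python
# Roll child-level counts up to a parent concept (toy data).
HIERARCHY = {
    "Troponin C": "Troponin",
    "Troponin I": "Troponin",
    "Troponin T": "Troponin",
}

observed_counts = {"Troponin I": 12, "Troponin T": 7, "Troponin C": 1}

aggregated = {}
for concept, count in observed_counts.items():
    parent = HIERARCHY.get(concept, concept)
    aggregated[parent] = aggregated.get(parent, 0) + count

print(aggregated)  # {'Troponin': 20}
```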
Active Learning for Ontology-based Phenotyping
Presentation on a proposed Active Learning method to reduce the number of training examples an algorithm requires for machine learning, as annotation of examples is slow and expensive.
The normal method for training an NLP algorithm is to randomly select a portion of the data to annotate. This method instead proposes annotating a small number of random samples initially, then selecting subsequent samples to annotate where the algorithm's output for a sample has a low prediction margin.
Prediction Margin = confidence for class A - confidence for class B
In other words, a measure of how much more confident the model is in its best answer than in the runner-up.
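A minimal sketch of margin-based selection, assuming a scikit-learn-style classifier that exposes predict_proba; the function and variable names are illustrative.

```python
import numpy as np

def select_for_annotation(classifier, unlabelled_pool, batch_size=10):
    """Return indices of the unlabelled examples with the lowest prediction margin,
    i.e. those the model is least sure about."""
    probs = classifier.predict_proba(unlabelled_pool)  # shape (n_samples, n_classes)
    top_two = np.sort(probs, axis=1)[:, -2:]           # two highest class confidences
    margins = top_two[:, 1] - top_two[:, 0]            # confidence(best) - confidence(runner-up)
    return np.argsort(margins)[:batch_size]            # smallest margins annotated first
```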
The presentation showed that in general (but not for every example) the Active Learning method needed fewer annotated examples to reach a high level of confidence.
Conclusion
These are the conclusions that I (Richard Bramley) drew from the NLP conference.
- There are a lot of tools and resources available. The integration of the cTakes tools with UMLS seems especially useful.
- Access to clinician time to train NLP algorithms is essential.
- The statistical analysis of the results is beyond my current capabilities. I may need some training in this area.
Academic User Group
Genomic Cell
Presentation of the new Genome Cell for i2b2, which uses the following pipeline:
- VCF - Variant Call Format (variations from a reference genome)
- Annovar
- GVF - Genome Variation Format
- i2b2
  - Observation (SNP - no data)
  - Use modifiers to record the SNP information
Uses the NCBO genome ontologies.
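As a toy illustration of the 'observation plus modifiers' idea, the sketch below builds one fact row for the SNP itself and one row per annotation modifier. The column names follow the general i2b2 observation_fact layout, but the concept and modifier codes are made up for illustration rather than taken from the actual Genome Cell.

```python
def variant_rows(patient_num, encounter_num, rs_id, annotations):
    """Build i2b2-style fact rows: one bare observation for the SNP, plus one
    modifier row per annotation (illustrative codes only)."""
    base = {
        "patient_num": patient_num,
        "encounter_num": encounter_num,
        "concept_cd": f"SNP:{rs_id}",     # the observation itself carries no value
    }
    rows = [dict(base, modifier_cd="@")]  # '@' marks the unmodified observation
    for modifier_cd, value in annotations.items():
        rows.append(dict(base, modifier_cd=modifier_cd, tval_char=value))
    return rows

rows = variant_rows(
    patient_num=1001,
    encounter_num=1,
    rs_id="rs699",
    annotations={"GVF:GENE": "AGT", "GVF:EFFECT": "missense", "GVF:BUILD": "GRCh37"},
)
```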
One problem with recording VCF information is that annotations will change over time. This is because:
- The reference human genome changes regularly (~once per year).
- New knowledge changes the way genomes are annotated (this could be much more frequent).
SMART Apps
Presentation of SMART plugin framework for i2b2.
SMART is an API that allows SMART apps