= i2b2 AUG 2013 =

== Program ==

=== NLP Workshop ===
 1. [#NLP1 UMLS Ontologies and Ontology Resources] ''(Olivier Bodenreider)''
 1. [#NLP2 Ontology-based De-identification of Clinical Narratives] ''(Finch and !McMurry)''
 1. [#NLP3 Ontology-based Discovery of Disease Activity from the Clinical Record] ''(Lin)''
 1. [#NLP4 Ontology Normalisation of the Clinical Narrative] ''(Chen)''
 1. [#NLP5 Ontology Concept Selection] ''(Yu)''
 1. [#NLP6 Active Learning for Ontology-based Phenotyping] ''(Dligach)''
 1. [#NLP7 Conclusion]

=== Academic User Group ===
 1. [#AUG1 Genomic Cell] ''(Shawn Murphy and Lori Philips)''
 1. [#AUG2 SMART Apps] ''(Wattanasin)''
 1. [#AUG3 i2b2 Roadmap] ''(Shawn Murphy)''
 1. [#AUG4 Planning for the future] ''(Kohane)''
 1. [#AUG5 AUG Community Projects] ''(Murphy)''
 1. [#AUG6 From Genetic Variants to i2b2 using NoSQL database] ''(Matteo Gabetta - Pavia)''
 1. [#AUG7 Identifying Normal Patients] ''(G Weber)''
 1. [#AUG8 Extending i2b2 with the R Statistical Platform]
 1. [#AUG9 Integrated Data Repository Toolkit (IDRT) and ETL Tools] ''(Sebastian Mate - Erlangen; Christian Bauer - Goettingen)''
 1. [#AUG10 Other Comments and Things Learnt]

=== i2b2 SHRINE Conference ===
 1. [#SHRINE1 SHRINE Clinical Trials (CT) Functionality and Roadmap] ''(Shawn Murphy)''
 1. [#SHRINE2 SHRINE National Pilot Lessons Learned]
 1. [#SHRINE3 SHRINE Ontology Panel]
 1. [#SHRINE4 University of California Research Exchange (UC ReX)] ''(Doug Berman)''
 1. [#SHRINE5 Preparation for Patient-Centred Research] ''(Ken Mandl)''
 1. [#SHRINE6 Case Study: Improve Care Now] ''(Peter Margolis)''

== NLP Workshop ==

=== [=#NLP1 UMLS Ontologies and Ontology Resources] ===
Presentation showing how UMLS resources can be used with NLP to extract information from free text. NLP has two stages:
 1. Entity Recognition - identifying important terms within the text
 1. Relationship Extraction - linking the entities together

==== Entity Recognition ====
Three major problems arise when identifying entities within a text:
 1. Entities are missed.
 1. Entities are partially matched - part of the term is matched but another part is missed, leading to incomplete information or context. For example, in the term 'bilateral vestibular' only the second word may be matched.
 1. Ambiguous terms - terms that may have two meanings.

Entities are identified by a combination of normalisation and longest-term matching. Normalisation is the process whereby a term is manipulated to produce a form of words that will match a large number of potential matches. The process involves removing noise words, standardising inflections and derivatives (e.g., removing plurals), removing punctuation, converting to lower case, and sorting the words into alphabetical order. To extract the most meaning from the text, an attempt is made to match the term with the greatest number of matching words. For example, 'left atrium' as opposed to just 'atrium'.
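To make the normalisation and longest-match steps above concrete for myself, here is a minimal Python sketch of my own (not from the presentation). The stop-word list and the index of normalised terms are invented stand-ins for the kinds of resources described below (Specialist Lexicon, Metathesaurus, etc.).

{{{#!python
import re

# Tiny illustrative resources -- a real system would use the Specialist
# Lexicon / Metathesaurus rather than these hand-made examples.
STOP_WORDS = {"the", "of", "a", "an"}
TERM_INDEX = {
    ("atrium", "left"): "Left atrium",
    ("atrium",): "Atrium",
    ("bilateral", "vestibular"): "Bilateral vestibular",
}

def normalise(phrase):
    """Lower-case, strip punctuation, drop noise words, crudely de-pluralise,
    and sort the remaining words alphabetically."""
    words = re.findall(r"[a-z]+", phrase.lower())
    words = [w[:-1] if w.endswith("s") and len(w) > 3 else w
             for w in words if w not in STOP_WORDS]
    return tuple(sorted(words))

def longest_match(tokens, index, max_len=5):
    """Return the entry for the longest token span whose normalised form is in the index."""
    for length in range(min(max_len, len(tokens)), 0, -1):
        for start in range(len(tokens) - length + 1):
            key = normalise(" ".join(tokens[start:start + length]))
            if key in index:
                return index[key]
    return None

print(longest_match("the left atrium".split(), TERM_INDEX))   # -> Left atrium (not just Atrium)
}}}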
==== Types of Resources useful for Entity Recognition ====
There are several types of resource:
 1. Lexical resources - lists of terms with variant spellings, derivatives and inflections, associated with the part of speech to which they refer. These can be either general or include specialist medical terms.
 1. Ontologies - sets of entities with relationships between the entities.
 1. Terminology resources - sets of terms and identifiers used to map a term to an ontology.
 1. Hybrid - a mixture of 1 and 2. These are not strictly speaking ontologies, as the relationships may not always be true (e.g., a child may not always be a part of the parent). They are useful for finding terms, but should not be used for aggregation.

==== Lexical Resources ====
 1. [[http://lexsrv3.nlm.nih.gov/Specialist/Home/index.html|UMLS Specialist Lexicon]] - medical and general English
 1. [[http://wordnet.princeton.edu/|WordNet]] - general English
 1. [[http://lexsrv2.nlm.nih.gov/LexSysGroup/Projects/lvg/2012/docs/userDoc/tools/lvg.html|LVG Lexical Variant Generation]] - specialist tool
 1. [[http://www.ebi.ac.uk/Rebholz-srv/BioLexicon/biolexicon.html|BioLexicon]] - EU project. Not as general; mainly focused on genes.
 1. [[http://pir.georgetown.edu/pirwww/iprolink/biothesaurus.shtml|BioThesaurus]] - focused on proteins and genes.
 1. [[http://www.nlm.nih.gov/research/umls/rxnorm/|RxNorm]] - drug specific.

==== Ontological Resources ====
 1. [[http://semanticnetwork.nlm.nih.gov/|UMLS Semantic Network]]

==== Terminology Resources ====
 1. [[http://www.nlm.nih.gov/research/umls/knowledge_sources/metathesaurus/index.html|UMLS Metathesaurus]]
   * Groups terms from many ontologies.
   * Produces a graph of all the relationships.
   * The graph is not acyclic and contains contradictions ''because'' it reproduces its source ontologies exactly.
   * Allows mapping between standards.
 1. [[http://www.nlm.nih.gov/research/umls/rxnorm/|RxNorm]]
   * Maps between many drug lists.
   * Maps between branded and generic drug names.
 1. [[http://metamap.nlm.nih.gov/|MetaMap]]
   * Free with a licence agreement.
   * Based on the UMLS Metathesaurus.
   * Parses text to find terms.
   * Used in IBM's Watson tool.
   * Terms can be translated between various standards, including Snomed.
   * Copes with term negation and disambiguation.
 1. [[http://www.nactem.ac.uk/software/termine/|TerMine]]
 1. [[http://www.ebi.ac.uk/webservices/whatizit/info.jsf|WhatIzIt]]

==== Relationship Extraction ====
 1. [[http://skr.nlm.nih.gov/|SemRep]]

==== Orbit Project ====
The [[http://orbit.nlm.nih.gov|Orbit Project]] is the Online Registry of Biomedical Informatics Tools.

=== [=#NLP2 Ontology-based De-identification of Clinical Narratives] ===
Presentation showing a method to remove Protected Health Information (PHI) from free-text fields, using the Apache cTakes lexical annotation tool.

The usual approach to de-identifying free text is to train software to recognise personal information. However, the number of training examples available is usually quite small. This team reversed the task by training the software to recognise non-PHI data instead.

Pipeline:
 1. cTakes.
 1. Frequency of the term in medical journal articles.
 1. Match terms to ontologies. Diseases (etc.) named after people can be a problem, but a match on a term of more than one word implies that it is not a name. For example, 'Hodgkins Lymphoma' would not match 'Mr Hodgkins'.
 1. Remove items from known PHI lists - presumably the person's name and address, etc.
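A rough Python sketch (mine, not the presenters') of the last two heuristics above. The term list and PHI lists here are invented examples; the real pipeline relied on cTakes annotations, journal term frequencies and UMLS ontologies.

{{{#!python
# Toy de-identification filter: keep recognised clinical terms, redact the rest.
ONTOLOGY_TERMS = {"hodgkins lymphoma", "blood pressure", "aspirin"}
KNOWN_PHI_NAMES = {"hodgkins", "smith"}          # e.g. surnames taken from the patient record
KNOWN_PHI_OTHER = {"10 downing street"}          # addresses, dates of birth, ...

def keep_or_redact(candidate):
    text = candidate.lower()
    if text in KNOWN_PHI_OTHER:
        return "REDACT"
    if text in ONTOLOGY_TERMS:
        # An eponymous disease can collide with a surname, but a match on a
        # multi-word term ('Hodgkins Lymphoma') implies it is not a name.
        if len(text.split()) > 1 or text not in KNOWN_PHI_NAMES:
            return "keep"
    if text in KNOWN_PHI_NAMES:
        return "REDACT"                          # e.g. 'Hodgkins' on its own -> surname match
    return "REDACT"                              # unrecognised text is redacted to be safe

for phrase in ["Hodgkins Lymphoma", "Hodgkins", "blood pressure", "bananas"]:
    print(phrase, "->", keep_or_redact(phrase))
}}}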
=== [=#NLP3 Ontology-based Discovery of Disease Activity from the Clinical Record] ===
Presentation of a project to use NLP to find evidence of disease activity and its temporal relationship to drug events, in order to identify patients as responders or non-responders for genetic analysis.

This talk put forward a method of using three data sets when training the software:
 1. Annotated training set.
 1. Known set - a pre-annotated set that is used repeatedly to test the software, but not to train it.
 1. Unknown random set - a random sample of a larger set that is used once for testing. The results of this test are manually assessed after the run.

=== [=#NLP4 Ontology Normalisation of the Clinical Narrative] ===
Introduction to the [[http://ctakes.apache.org/|Apache cTakes]] project: a set of tools for parsing text. One of these is a UMLS component, although a custom dictionary can also be used. It can be used to extract UMLS concepts (or other codes), but it also extracts '''who''' (the patient), '''where''' (for example, the knee) and '''negation'''.

=== [=#NLP5 Ontology Concept Selection] ===
Presentation of a batch tool for selecting UMLS terms that match local terms. The consensus in the room appeared to be that the UMLS website has similar or better tools.

One topic that was discussed was when terms may safely be aggregated. Three cases were identified where terms may safely be aggregated:
 1. The terms are the same thing.
 1. The terms are closely related and do not need to be disambiguated in this specific case.
 1. True hierarchies, e.g., troponin C, I & T => troponin.

=== [=#NLP6 Active Learning for Ontology-based Phenotyping] ===
Presentation on a proposed Active Learning method to reduce the number of training examples an algorithm requires for machine learning, since annotation of examples is slow and expensive.

The normal method for training an NLP algorithm is to randomly select a portion of the data to annotate. This method instead proposes annotating a small number of random samples initially, then selecting subsequent samples to annotate based on the algorithm's output for that sample having a low ''prediction margin''.

'''Prediction Margin = confidence for class A - confidence for class B'''

In other words, how sure the algorithm is that its best answer is correct. The presentation showed that in general (but not for every example) the Active Learning method needed fewer annotated examples to reach a high level of confidence.

=== [=#NLP7 Conclusion] ===
These are the conclusions that I (Richard Bramley) drew from the NLP workshop:
 1. There are a lot of tools and resources available. The integration of the cTakes tools with UMLS seems especially useful.
 1. Access to clinician time to train NLP algorithms is '''essential'''.
 1. The statistical analysis of the results is beyond my current capabilities. I may need some training in this area.

== Academic User Group ==

=== [=#AUG1 Genomic Cell] ===
Presentation of the new Genomic Cell for i2b2, which uses the following pipeline:
 1. VCF - Variant Call Format (variations from a reference genome)
 1. [[http://www.openbioinformatics.org/annovar/|Annovar]]
 1. GVF - Genome Variation Format
 1. i2b2
   * Observation (the SNP itself - no data)
   * Modifiers are used to record the SNP information

It uses the NCBO genome ontologies.

One problem with recording VCF information is that annotations will change over time. This is because:
 * The reference human genome changes regularly (roughly once per year).
 * New knowledge changes the way genomes are annotated (this could change much more frequently).
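As a rough illustration of the 'one observation per variant, annotations as modifiers' idea, here is a small Python sketch of my own. The concept and modifier codes (`GVF:...`, `MOD:...`) are invented, the column names simply follow the standard i2b2 OBSERVATION_FACT layout, and this is only a sketch of the approach, not the actual Genomic Cell loader.

{{{#!python
from datetime import date

def variant_to_facts(patient_num, encounter_num, variant, annotations):
    """Turn one annotated variant into i2b2-style fact rows: a base observation
    for the variant itself plus one modifier row per annotation field.
    Codes such as 'GVF:...' and 'MOD:GENE' are invented for illustration."""
    base = {
        "patient_num": patient_num,
        "encounter_num": encounter_num,
        "concept_cd": f"GVF:{variant['chrom']}:{variant['pos']}:{variant['alt']}",
        "modifier_cd": "@",              # '@' marks the base fact in i2b2
        "valtype_cd": "@",               # no value on the base observation
        "start_date": date.today().isoformat(),
    }
    rows = [base]
    for field, value in annotations.items():
        rows.append({**base,
                     "modifier_cd": f"MOD:{field.upper()}",
                     "valtype_cd": "T",
                     "tval_char": str(value)})
    return rows

variant = {"chrom": "chr7", "pos": 117199644, "alt": "T"}
annotations = {"gene": "CFTR", "effect": "missense", "reference_build": "hg19"}
for row in variant_to_facts(patient_num=1, encounter_num=1,
                            variant=variant, annotations=annotations):
    print(row)
}}}

Recording the reference build (hg19 above) as one of the modifiers would be one possible way of coping with the fact that the reference genome and annotation knowledge change over time.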
=== [=#AUG2 SMART Apps] ===
Presentation of the SMART plugin framework for i2b2, which allows apps that use individual patient-level data to be displayed or utilised within a SMART container, such as i2b2. SMART apps are registered with the SMART container and can then be selected from an App Store within the container. Many SMART apps can be embedded into a single page using panels.

Examples of SMART apps are:
 1. Cardiac Risk Monitor
 1. Patient Details
 1. Diabetes Risk
 1. Blood Pressure Standardiser
 1. Medication List
 1. Procedure List

Another useful SMART app is the Clinical Trial Matcher. It allows you to enter criteria for participation in a clinical trial. You can then view the matcher for particular patients to see whether they meet the criteria.

SMART apps may be useful if working with a project for each trial within i2b2. That is, a cohort of patients is identified for a trial and a project is then created for those patients (see i2b2 CT). Researchers for that trial can then see the individual patient details for the patients within that project.

See:
 * [[http://www.smarti2b2.org]]
 * [[http://www.smartplatforms.org]]

=== [=#AUG3 i2b2 Roadmap] ===
Long-term goals for i2b2:
 1. Supporting Cohort Discovery
 1. Supporting Big Data
 1. Plugin Development: SETL - ETL Cell
 1. Continued Development

==== Supporting Cohort Discovery ====
Development is underway to increase the workflow capabilities of i2b2 for clinical trials. This is called i2b2 CT. The improvements include:
 1. Better visibility of patient details for users with the correct permissions. This requires the population of the patient mapping table. Patient sets can then be dragged onto a patient list tool that shows all their mapped IDs. Individual patients can then be dragged into user-defined sets that can be used in the same way as query-defined patient sets.
 1. Patient sets can be used to create new projects for a clinical trial, where the patients will have a separate project-specific ID.
 1. Multi-site clinical trial projects can be set up using SHRINE.
 1. Better visibility of patient details using SMART apps.
 1. Individual projects may have a subset of the ontology, though they can still use the same observation fact table.
 1. A specific web client is to be written to support the i2b2 CT workflow.

==== Supporting Big Data ====
This will be based on a flag in the Ontology Cell which will inform the CRC that it must query an external system that will return a patient set.

==== Plugin Development: SETL - ETL Cell ====
This new cell allows i2b2 to connect to web services to request specific patient-level data. The cell has two purposes. Its main purpose appears to be to allow SMART apps to request extra data. Additionally, it can be used to start a bespoke SSIS package to load data from a file in the File Repository cell.

==== Continued Development ====

===== Temporal Query UI Improvements =====
The definition of a temporal query has been split into two stages:
 1. Define the population - a normal i2b2 query definition.
 1. Define the temporal aspects of the query.

===== PostgreSQL Support =====
Initial work to support PostgreSQL has been carried out, but further performance improvements need to be made. For this the team would like help from members of the community familiar with PostgreSQL.

===== Other Upgrades =====
 1. Support for JBoss 7.1.
 1. Move to a POJO architecture.
 1. Support for SQL Server 2012.

=== [=#AUG4 Planning for the future] ===

==== Suggestions for community development ====
 1. Geolocation Cell.
 1. Probabilistic merging of patients from different i2b2 instances.
 1. SMART app for simple NLP.

==== The Future of i2b2 ====
Funding for the i2b2 project will end in September 2014. i2b2 will still be partially funded by other projects, but there will not be funding for a team to concentrate solely on i2b2. Suggested alternatives are:
 1. Have commercial and community editions.
 1. Increase the involvement of the community in support and development.
 1. Kickstarter.

=== [=#AUG5 AUG Community Projects] ===
Community projects are hosted through the [[https://community.i2b2.org/wiki/dashboard.action|community website]].
These include:
 1. mi2b2 - for viewing images.
 1. Time Align - a different timeline view.
 1. Trends - (developed by Wake Forest) shows changes in a result set over time.
 1. NCBO Ontology Tools.

They are currently looking for people to help with the development of the web client. '''Mention was made of CRC plugins that I need to investigate.'''

=== [=#AUG6 From Genetic Variants to i2b2 using NoSQL database] ===
Various NoSQL databases are available: MongoDB, Cassandra, Apache CouchDB, Hadoop. For this project they used Apache CouchDB, which is a JSON document store. It uses predefined queries, written in JavaScript, called Design Documents. A Design Document is compiled the first time it is run, so it is usual to run the queries once after deploying them in order to precompile them all.

==== Genomic Data Load Workflow ====
VCF => Annovar => CSV (plus additional data from other systems, such as the patient ID) => Parser => JSON => CouchDB <-> i2b2

==== Running Queries ====
They have created a web client plugin into which a patient set can be dragged. The plugin then returns a count and a list of the patients who have a given variant. The plugin is due to be released by the end of the year. They need to do some additional testing on how well the plugin scales.

=== [=#AUG7 Identifying "Normal" Patients] ===
A presentation on the identification of normal patients for control groups. The process involved identifying criteria that excluded a patient from being considered normal, such as serious illness, age range, missing details, etc.

=== [=#AUG8 Extending i2b2 with the R Statistical Platform] ===
Presentation on a set of i2b2 plugins that allow the R statistical platform to be used from within i2b2.

=== [=#AUG9 Integrated Data Repository Toolkit (IDRT) and ETL Tools] ===

==== i2b2 Wizard ====
Presentation of the [[http://www.imi.med.uni-erlangen.de/~matesn/i2b2wizard/|i2b2 wizard]] tool that aids in the installation of i2b2. Changes for version 2:
 * Different versions of i2b2 can be installed.
 * Works with different DBs (it currently only works with Oracle).

==== ETL Tools ====
Presentation of the ETL tools using the Talend ETL framework.

=== [=#AUG10 Other Comments and Things Learnt] ===

==== Ontology Item Sub Queries ====
Ontology items can be defined as a sub query. This can be used for things such as:
 1. Date calculations
 1. Aggregate values

==== CRC Plugins ====
There was mention made of CRC plugins. I will need to investigate these further.

==== Ontologies and mapping ====
Wisdom provided by Matvey Palchuck (Recombinant):
 * Use actual values and the problems of ranges go away.
 * Get stuff into i2b2 even if the only hierarchy that you can make is splitting things by first letter.
 * Payment data is usually clean data.
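As a throwaway illustration of the 'even a first-letter hierarchy is better than nothing' advice, here is a short Python sketch of my own that turns a flat list of local terms into i2b2-style fullname paths. The root folder name and the terms are invented examples.

{{{#!python
from collections import defaultdict

def first_letter_hierarchy(terms, root="LocalTerms"):
    """Build i2b2-style fullname paths grouping a flat term list by first letter."""
    by_letter = defaultdict(list)
    for term in terms:
        by_letter[term[0].upper()].append(term)
    paths = [f"\\{root}\\"]
    for letter in sorted(by_letter):
        paths.append(f"\\{root}\\{letter}\\")
        for term in sorted(by_letter[letter]):
            paths.append(f"\\{root}\\{letter}\\{term}\\")
    return paths

for path in first_letter_hierarchy(["Aspirin", "Warfarin", "Atenolol"]):
    print(path)
# \LocalTerms\
# \LocalTerms\A\
# \LocalTerms\A\Aspirin\
# \LocalTerms\A\Atenolol\
# \LocalTerms\W\
# \LocalTerms\W\Warfarin\
}}}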
== i2b2 SHRINE Conference ==

=== [=#SHRINE1 SHRINE Clinical Trials (CT) Functionality and Roadmap] ===
SHRINE allows a query to be run across multiple i2b2 instances. It is implemented as an i2b2 cell called the SHRINE adapter. This adapter maps concept codes from their local values to the standard values used on the SHRINE network.

==== SHRINE CT ====
This project uses the i2b2 CT changes that allow individual patient details to be viewed in projects. It extends the idea to allow the projects to span multiple sites.

===== Workflow =====
 1. A user creates a query using SHRINE to create a patient set.
 1. This patient set is dragged into the new Authorization Request Module in the web client.
 1. A project is created using these patients when the authorisation has been received.
 1. Specified users with the correct permissions can then view a limited set of data for these patients and run queries based on this patient set.
 1. Users from the patient's originating site with the correct permission can also view the patient's PII data using SMART apps.

===== Clinical Trial SMART app =====
This SMART app allows the user to enter criteria for entering a clinical trial. The app then tells the user whether the patient is eligible or, if not, why not.

===== Release Stages =====
The release of the changes will be staged as follows:
 1. Allow managers at each site to view queries run from SHRINE on their i2b2 instance, including the patient sets returned.
 1. Improved web client UI.
 1. Allow patient sets to be assigned to multi-site projects, which make a limited data set visible.
 1. Allow selection of individual patients into user-generated patient sets.
 1. Enable patient-centric SMART apps showing PII for multi-site projects.

=== [=#SHRINE2 SHRINE National Pilot Lessons Learned] ===
A couple of presentations and a panel to discuss the problems encountered in running the SHRINE national pilot. These included:
 * Authentication problems. Running queries across multiple sites highlighted potential problems with user authentication:
   * How can we be sure that the request has come from the correct organisation? For this, each node in the SHRINE had to install an SSL certificate for each other node.
   * How can we be sure that the user making the request is authenticated? You have to trust the client's authentication. Shibboleth or OAuth could be used.
 * Connectivity. Network problems meant that quite often one or more of the nodes required for a query was unavailable, causing the query to fail silently. These issues will be addressed in the SHRINE source code.
 * Ontology / mapping / semantic issues. See the ontology panel below.
 * Peer-to-peer issues. Each node connects directly to each other node, making the administration effort grow rapidly (quadratically) with the number of nodes.

=== [=#SHRINE3 SHRINE Ontology Panel] ===
The sites within a SHRINE do not all have to use the same ontology, because there is an intermediate mapping step. Each organisation within the SHRINE therefore has to map its own codes to the standard specified by the SHRINE. It may, however, be more efficient and easier to create a new instance of i2b2 for the SHRINE with the data mapped to the SHRINE common ontology.

Some of the problems encountered with SHRINE mappings were:
 1. Data aggregated at different levels in the SHRINE and node ontologies can cause problems. If the SHRINE ontology is mapped at a higher level you may be OK, but if the SHRINE is mapped at a lower level it is impossible to split the node value into SHRINE values.
 1. If the data is split into ranges (e.g., age ranges), it is possible that the ranges for the SHRINE and those for the node do not match. This can be solved by everyone loading the actual value and not grouping things into ranges (e.g., actual age).
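A quick Python sketch of the range problem: if each node loads the actual age, it can be bucketed into whatever ranges the SHRINE ontology requires at query time, whereas values pre-aggregated into local ranges cannot be split into a different scheme. Both range schemes below are invented examples.

{{{#!python
# Bucketing actual values into whichever range scheme the network requires.
SHRINE_RANGES = [(0, 9), (10, 19), (20, 29), (30, 39), (40, 120)]
LOCAL_RANGES = [(0, 17), (18, 65), (66, 120)]

def bucket(age, ranges):
    for low, high in ranges:
        if low <= age <= high:
            return f"{low}-{high}"
    return "unknown"

ages = [4, 15, 23, 70]
print([bucket(a, SHRINE_RANGES) for a in ages])  # ['0-9', '10-19', '20-29', '40-120']
print([bucket(a, LOCAL_RANGES) for a in ages])   # ['0-17', '0-17', '18-65', '66-120']
# If only the local buckets had been loaded, a patient recorded as '0-17'
# could not be assigned to the SHRINE '0-9' or '10-19' buckets.
}}}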
=== [=#SHRINE4 University of California Research Exchange (UC ReX)] ===
Presentation of the use of SHRINE to join data across two medical schools in California. The data imported is mainly demographic data, but they have recently imported the top 200 LOINC codes.

=== [=#SHRINE5 Preparation for Patient-Centred Research] ===
Social media shows that people are willing to share their data, so there is scope for altruistic patients to share their information. The best way to get things to change is to use Disruptive Technologies. These are usually low-level, basic technologies that are cheap, easy to create and easy to use. SMART apps could be such a Disruptive Technology.

=== [=#SHRINE6 Case Study: Improve Care Now] ===

[[BackLinks]]