i2b2 Background Installation Details

Return to i2b2 component page

The Install Artifacts

My feeling (Jeff) is that the install packages should be kept in a well-known place within the available BRICCS network resources, eg: in a repository on briccs-2. We could then use wget "locally" within the automated procedures to download known versions of the software. Alternatively, could we just script automated sftp transfers? Having known versions of the software in our own location would be reassuring, quick, and easy to script.

This may have to be a private facility to avoid licence implications, so perhaps sftp would be easier for this. Any thoughts?
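If we do go the wget route, a small helper could pin each artifact to a recorded checksum so the automated procedures only ever install known versions. This is a sketch: the host, artifact name and hash in the usage comment are placeholders, and in practice the URL/checksum pairs would come from a manifest we maintain.

```shell
# Verify a downloaded file against a recorded SHA-256 checksum.
verify_checksum() {
    # sha256sum -c expects "<hash>  <file>" (two spaces between them).
    echo "$2  $1" | sha256sum -c --quiet -
}

# Fetch a pinned artifact from our internal repo and verify it before use.
fetch_artifact() {
    wget -q "$1" -O "$2" && verify_checksum "$2" "$3"
}

# Illustrative usage (placeholder host and hash):
# fetch_artifact "http://briccs-2/repo/jboss-4.2.2.GA.zip" \
#                "/tmp/jboss-4.2.2.GA.zip" \
#                "<expected-sha256>"
```

Failing the install on a checksum mismatch also guards against a half-finished download being unpacked by a later step.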

Ownership, Groups and Privileges

Within a Linux environment, the question of owner and group needs thinking through.

I'm not in favour of running software as root. Should we have specific owners and groups (eg: catissue, jboss), or one generic group (eg: briccs)? These could be set up during the install process if they did not already exist. NB: the Oracle-XE install creates the user oracle and the group dba.
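Setting accounts up during install could be made idempotent, so re-running an install script is harmless. A sketch, assuming the illustrative names briccs and jboss (the real names are still to be decided):

```shell
# Create a group only if it does not already exist (safe to re-run).
ensure_group() {
    if getent group "$1" >/dev/null 2>&1; then
        echo "group $1 exists"
    else
        groupadd --system "$1"
    fi
}

# Create a non-login service user only if it does not already exist.
# $1 = user name, $2 = primary group.
ensure_user() {
    if getent passwd "$1" >/dev/null 2>&1; then
        echo "user $1 exists"
    else
        useradd --system --gid "$2" --shell /sbin/nologin "$1"
    fi
}

# Illustrative usage:
# ensure_group briccs
# ensure_user jboss briccs
```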

Start and Restart

  • Should the database and JBoss be started at boot?
  • Should they routinely be restarted each day overnight to reclaim any errant resources?
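If the answer to both questions is yes, one option on a SysV-style host is a chkconfig registration plus a cron.d entry. This is a config sketch only: the service name and restart time are placeholders.

```shell
# Register the init script so the service starts at boot.
chkconfig --add jboss
chkconfig jboss on

# Optional nightly restart to reclaim errant resources (time is illustrative).
cat > /etc/cron.d/jboss-restart <<'EOF'
30 3 * * * root /sbin/service jboss restart
EOF
```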

Order of Installation

  1. JDK
  2. Ant
  3. JBoss
  4. Oracle-XE
  5. Data
  6. PM Cell
  7. Ontology Cell
  8. CRC Cell
  9. Workplace Cell
  10. File Repository Cell


  • The above order should be followed: each step depends on the ones before it.
  • The JDK (from Sun) and Oracle-XE installers require some interactive command-line input.
  • The Data install includes the demo data.
  • The docs for the Data and PM installs are ambiguous about ordering: complete the Data install before starting the PM install.
  • The PM install is the first cell install. As such, it also bundles the i2b2 common package install and the web client install.
Comments from DaveMorris

I have already set up a location to store public files; I have already uploaded the files needed by the CaTissue install scripts.

The data site has a robots.txt file that prevents it from being indexed by search engines.

If we need to we can add a protected area which is only accessible via sftp.

None of our services should be run as root, or using one of our own user accounts. Each service should have its own user account and group associated with it.

The install scripts I have in svn define separate users and groups for each of the services. We may need to negotiate with RCS to reserve user identifiers for these.

Our install scripts should add fully functional system init scripts enabling our services to behave in the same way that the system services do.
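A skeleton of what such an init script might look like for JBoss. The chkconfig priorities, user name, paths and start/stop commands are all placeholders to be replaced with the real values from the install scripts; this is a sketch, not a complete script.

```shell
#!/bin/sh
# /etc/init.d/jboss -- SysV init skeleton for a BRICCS service.
# chkconfig: 345 90 10
# description: JBoss application server for i2b2.
# The user, home directory and commands below are illustrative.

SERVICE_USER="jboss"
SERVICE_HOME="/opt/jboss"

case "$1" in
    start)
        echo "Starting jboss"
        su - "$SERVICE_USER" -c "$SERVICE_HOME/bin/run.sh >/dev/null 2>&1 &"
        ;;
    stop)
        echo "Stopping jboss"
        su - "$SERVICE_USER" -c "$SERVICE_HOME/bin/shutdown.sh -S"
        ;;
    restart)
        "$0" stop
        "$0" start
        ;;
    status)
        pgrep -u "$SERVICE_USER" -f run.sh >/dev/null \
            && echo "jboss is running" || echo "jboss is stopped"
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}" >&2
        exit 1
        ;;
esac
```

Running the service under su as its own account keeps it consistent with the ownership policy above.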

Restarting the services overnight is at best a temporary fix: it just means that resource problems don't show up in development and only begin to show up once the service is under heavy use.

However, we should have test systems that are left running under stress for extended periods of time and we should also have test systems that are repeatedly restarted to identify any resource problems as early as possible.

Last modified on 11/08/10 21:37:00