There are several options as to how we deploy our services, ranging from a single large machine hosting all of our services to a distributed system with one service per host.
My own view is that one service per host is probably the best option in terms of flexibility, reliability and security. However, right now, I don't think we have enough hard evidence to be able to decide which configuration would be best. In reality it will probably turn out that a mixture of services and hosts will provide the optimum solution, based on the performance, reliability and loading of each of our services.
As a result, I would recommend we adopt a flexible approach to service deployment.
By using the following simple set of components, we can develop a flexible deployment system that will enable us to adapt our deployment configurations to meet changes in available resources and service requirements.
The proposed system would also enable us to deploy test services in a range of configurations, so that we can compare them and evaluate the actual requirements of each of our web services.
If we are considering migrating our live CaTissue service from briccs-3 to briccs-1, then we will need to notify all our users of a change to the URL they should use.
In which case, we should take the opportunity to put in place the following components in order to avoid similar problems in the future.
- Use the DNS alias catissue.briccs.org.uk for our live service
- Install an Apache web server as the front end request handler
- Update all our clients to use http://catissue.briccs.org.uk/catissuecore/
Once we have these components in place, we will have a lot more flexibility as to where and how we deploy our services, without causing any side effects for our end users.
We own the briccs.org.uk domain name. This means that we can create as many host names or sub-domains as we need based on that domain.
The first step in making our service deployments portable would be to use DNS aliases for our live services.
A DNS alias is simply a DNS record that points to another host name.
catissue.briccs.org.uk ---> briccs-3.rcs.le.ac.uk
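In BIND zone-file notation, such an alias is a CNAME record in the briccs.org.uk zone. The following is a sketch only; in practice the record is managed through the registrar's control panel:

```
; illustrative CNAME record in the briccs.org.uk zone
catissue   IN   CNAME   briccs-3.rcs.le.ac.uk.
```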
This has already been implemented for the BRICCS infrastructure services:
- Subversion repository
- Maven repository
- Install repository
- Trac issue tracking and wiki
- Google Mail for the BRICCS team
- Google Docs for the BRICCS team
Using human readable DNS aliases for our services not only makes things easier for our end users (catissue.briccs.org.uk is easier to remember and check than briccs-3.rcs.le.ac.uk), it also gives us a degree of flexibility in terms of relocating our services.
If we use DNS aliases for our live web services, we would be able to relocate our live CaTissue service from briccs-3 to briccs-1 without causing any side effects for our end users. All we would need to do is deploy the new service on briccs-1 and update the DNS alias to point to briccs-1 rather than briccs-3.
Anyone who was using the alias name catissue.briccs.org.uk would automatically be directed to the new service, without having to update any of their bookmarks.
If we don't adopt an alias system, then whenever we move our live CaTissue service from one machine to another we will have to verify that all of our users have updated their browser settings, and that no one has any old bookmarks, emails, notes or user documents that point to the old service.
The worst case would be if we migrated our live service to briccs-1, and then re-purposed briccs-3 as a test and development machine. If we later deployed a test version of CaTissue on briccs-3, a user who clicked on a link in an old email or an old copy of a user manual would not realise that they were now using a test deployment of CaTissue rather than the live service.
The DNS alias for catissue.briccs.org.uk is already in place, and we can add new ones whenever we need to. At the moment the live CaTissue service can be reached using either address:
- Using the host name http://briccs-3.rcs.le.ac.uk:8080/catissuecore/
- Using the DNS alias http://catissue.briccs.org.uk:8080/catissuecore/
Registering the domain through a commercial registrar was the quickest and easiest way to get things up and running; it typically takes less than a minute to fill in the form and click 'Ok'.
In the longer term, we may want to transfer ownership of the DNS name to the university IT services. Transferring the ownership of a registered DNS domain is a fairly simple operation. However, the ability to change the DNS aliases for our services forms an important part of our disaster recovery toolkit. If for example, one of our web servers fails, we would need to be able to deploy a new service and update the DNS alias quickly and easily.
If we do decide to transfer the management of briccs.org.uk to university IT services, we need to check that we would still be able to make changes to the DNS information quickly and easily, preferably out of normal office hours (see note on DNS caching).
Due to the way that the DNS system works, it would be inadvisable to change the DNS aliases for a live service during working hours, while people are actively using the system.
DNS information is cached, typically up to 3 hours, both at the client (desktop) and at the network level (e.g. the DNS service at UHL).
This means that if we changed a DNS name for a live service at mid-day, there would be a period of several hours, 12:00 - 16:00, where it would be difficult to predict whether a client would get an old (cached copy) or the new setting. If we rely on DNS aliases alone to migrate services between hosts, this could result in some users being directed to the new service while others are still using the old service.
Best practice would be to change a DNS alias for a service when as few people as possible are using the service, either overnight or at the weekend, allowing at least 6 hours for the DNS changes to propagate through the system before users sign in and start to use the system.
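The length of the caching window is controlled by each record's TTL (time to live). A common mitigation, sketched below in BIND zone-file notation with illustrative values, is to lower the TTL well before a planned migration and restore it afterwards:

```
; normal operation: allow caching for up to 3 hours (10800 seconds)
catissue   10800   IN   CNAME   briccs-3.rcs.le.ac.uk.

; in the days before a planned migration: reduce caching to 5 minutes
catissue   300     IN   CNAME   briccs-3.rcs.le.ac.uk.
```

Once clients have picked up the short TTL, the actual switch-over propagates in minutes rather than hours.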
Both the Apache web server and Tomcat web server support virtual hosts.
A virtual host is simply a way of linking a specific host name to a specific service handler:
Requests for hostname [xxx.yyy.zzz] should be handled by [service]
This allows us to set up specific handlers on our web servers that handle requests differently based on the server name in the request.
For example, we could configure a single web server to handle requests for both a live and a test system, routing them to the appropriate web application based on the hostname in the request.
Requests for catissue.briccs.org.uk should be handled by [live-service]
Requests for testservice.briccs.org.uk should be handled by [test-service]
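In Apache configuration this maps onto name-based virtual hosts, along the lines of the following sketch (the handler contents are placeholders):

```apache
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName catissue.briccs.org.uk
    # live service handler goes here
</VirtualHost>

<VirtualHost *:80>
    ServerName testservice.briccs.org.uk
    # test service handler goes here
</VirtualHost>
```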
Using web server virtual hosts in combination with DNS aliases gives us much greater flexibility and finer grained control over where and how our services are deployed.
The previous section on DNS aliases outlined a problem with DNS caching, where changes to a DNS alias may take up to 3 hours to propagate through the system.
By configuring the web servers to distinguish between requests for live or test systems based on the host name, we can ensure that even if a user gets an out-of-date DNS record, the web server won't let them use a test service by accident.
Tomcat virtual hosts
The Tomcat web server embedded in JBoss supports name based virtual hosts. However, using virtual hosts at this level means that we are still tied to the machine that is hosting the Tomcat/JBoss service, and the non-standard port number normally allocated to Tomcat (port 8080).
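For reference, a name-based virtual host in Tomcat is declared as a Host element in server.xml, along these lines (the appBase values are hypothetical):

```xml
<Engine name="Catalina" defaultHost="catissue.briccs.org.uk">
    <!-- each Host maps a request host name to its own application base -->
    <Host name="catissue.briccs.org.uk"    appBase="webapps-live" />
    <Host name="testservice.briccs.org.uk" appBase="webapps-test" />
</Engine>
```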
Apache virtual hosts
A better option may be to use an Apache web server as the front end request handler, and use virtual hosts within the Apache service to delegate requests to the appropriate back-end server.
Using an Apache web server as the front end to Tomcat/JBoss has a number of advantages:
- Standard port number
- Delegate to local or remote Tomcat/JBoss
- Better control over security of Tomcat/JBoss
Standard port number
The Tomcat server is normally deployed using a non-standard port number (port 8080), which means that all of our users have to add the port number in the address.
Although it is possible to configure Tomcat to use the standard web server port, this does cause issues with the Unix permissions and security controls.
The Apache web server is designed to use the standard web server network port, and the standard deployment tools for Apache handle the Unix permissions correctly.
If we use an Apache web server as the front end request handler, then our services will be available via the standard port number (port 80).
This simplifies the URL that our end users have to deal with, because we no longer need to specify the port number.
The Apache module used to delegate requests can be configured to use a Tomcat/JBoss web service on the local machine, or to pass the request to a Tomcat/JBoss service on a different machine.
This means that the end user is unaware of where the Tomcat/JBoss service is actually running, enabling us to relocate our live Tomcat/JBoss services with no impact on our end users.
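With mod_proxy_ajp, the delegation might look something like this sketch. Note that Tomcat's AJP connector normally listens on port 8009 (its HTTP connector is the one on 8080); the back-end host name here is illustrative:

```apache
<VirtualHost *:80>
    ServerName catissue.briccs.org.uk
    # forward all requests to the back-end Tomcat/JBoss AJP connector,
    # which may be on this machine or on a remote host
    ProxyPass / ajp://briccs-3.rcs.le.ac.uk:8009/
</VirtualHost>
```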
It may be useful to deploy all of the components on the same machine:
DNS alias
[catissue.briccs.org.uk]
  |
  |     +--------------+
  \---> | Apache       |
        | web server   |
        +--------------+
        | virtual host |
        | AJP proxy    |
        +--------------+
           |
           |     +--------------+
           \---> | Tomcat/JBoss |
                 | port 8080    |
                 +--------------+
                 | CaTissue     |
                 | webapp       |
                 +--------------+
If we want to upgrade the version of CaTissue, then we can deploy the new version on a new machine, leaving the old service in place.
Once we have tested the new deployment, we can change the settings in the Apache web server to delegate requests to the new service.
DNS alias
[catissue.briccs.org.uk]
  |
  |     +--------------+
  \---> | Apache       |
        | web server   |
        +--------------+
        | virtual host |
        | AJP proxy    |
        +--------------+
           |
           | old service deployment
           |     +--------------+
           |     | Tomcat/JBoss |
           |     | port 8080    |
           |     +--------------+
           |     | OLD CaTissue |
           |     | webapp       |
           |     +--------------+
           |
           | new service deployment
           |     +--------------+
           \---> | Tomcat/JBoss |
                 | port 8080    |
                 +--------------+
                 | NEW CaTissue |
                 | webapp       |
                 +--------------+
Changing the delegation settings in the Apache web server would take effect immediately, and all of our users would be connected to the new service.
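Switching users over to the new deployment is then a one-line change to the proxy target in the Apache configuration (host names as in the earlier sketch):

```apache
# old target: ProxyPass / ajp://briccs-3.rcs.le.ac.uk:8009/
# new target, pointing at the new service deployment:
ProxyPass / ajp://briccs-1.rcs.le.ac.uk:8009/
```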
Note that in the above scenario, the old Tomcat/JBoss service is left intact. This means that if there are any unexpected problems with the new deployment we can swap back to the old service if we need to. We only delete the old service once we are happy that the new service is running correctly.
It is also worth noting that the Apache web server is the only component in the system that end users interact with directly.
If we deploy the Apache web server on a separate machine, then this is the only component that cannot be moved or re-deployed. All of the other components and services can be re-deployed without affecting our users.
If a machine hosting one of our back-end Tomcat/JBoss services fails, we can re-deploy a new service on a new machine and update the Apache proxy settings. The new service would replace the failed one without having to update the URLs used by our end users.
With this in mind it may be best to keep the Apache web server deployment as simple as possible, with just the web server and AJP proxy module installed on that machine.
If the Apache web server is deployed on a separate machine, we can restrict access to the back-end JBoss/Tomcat web services to only allow connections from a limited set of machines. This will help to reduce the number of security vulnerabilities we are exposed to e.g. ticket:36.
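One way to enforce this, assuming the Apache front end is the only machine that should reach Tomcat, is a remote-address valve in Tomcat's server.xml. The IP address below is hypothetical:

```xml
<Host name="catissue.briccs.org.uk" appBase="webapps">
    <!-- only accept connections from the Apache front-end machine;
         the allow attribute is a regular expression matched against
         the client IP address -->
    <Valve className="org.apache.catalina.valves.RemoteAddrValve"
           allow="143\.210\.10\.20" />
</Host>
```

A host-level firewall rule restricting the AJP port to the same address would provide a second layer of protection.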
At the moment, installing our deployment scripts relies on the target system having ssh access to our svn repository in order to checkout the current version of the scripts.
There are two problems with this. Firstly, we rely on SSH agent forwarding to access the svn repository, which is a potential security issue. Secondly, it is difficult to ensure that production systems don't pick up experimental changes from svn. Although we could control the versioning using tags and branches in svn, that approach would be fragile and it would be easy to make a mistake.
A better solution would be to package our scripts as installable OS packages (rpm packages for Suse and RedHat, deb packages for Debian and Ubuntu). That way the first step in the install process would be to add our own package repository to the OS package installer, and then we install specific versions of our packages using the native OS package install process to handle installation, dependencies and updates.
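On a Red Hat style system the install sequence might then reduce to something like the following sketch; the repository package URL and the package name are hypothetical:

```shell
# add our own package repository to the OS package installer,
# then install a specific, pinned version of the deployment scripts
rpm -Uvh http://install.briccs.org.uk/briccs-repo-1.0.noarch.rpm
yum install briccs-deploy-scripts-1.2
```

Upgrades and dependency handling would then come for free from the native package manager, and production machines would only ever see the versions we have explicitly published to the repository.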