Deployment automation
Revision as of 12:25, 14 January 2016
The problem
Biowikifarm uses a big stack of software: webserver, database, Parsoid service, cron jobs, update scripts, custom scripts, plugins from various sources, etc. Keeping track of all the pieces is becoming increasingly difficult. The problem manifests itself as:
- It is difficult to install a specific version of MediaWiki for a specific wiki.
- Versions are incompatible with each other (e.g. Parsoid with certain MediaWiki versions).
- Configuration is difficult to maintain, as there is no overview of what is installed and where the corresponding files are.
- Thoroughly testing changes before they are deployed is nearly impossible.
- Documenting changes is increasingly work-intensive.
- As configuration is done on the live server, downtime is necessary.
Inadequacy of the current practice
The current practice rests on assumptions that amount to so many "anti-patterns" (Humble & Farley, 2010, chap. 1):
Manual deployment of wikis
- Extensive documentation is produced, which is supposed to record every step necessary for manual installation (an impossible goal, as so much of the knowledge is tacit)
- Manual testing is required
- Upgrades often go wrong, are unpredictable and sometimes have to be reverted or patched
- Upgrades take several hours
No staging environment
- Whenever a new wiki release has to be tested, it is released straight to the production environment, sometimes breaking the production build.
Manual configuration of the server
- Rolling back an unsuccessful server configuration change is very difficult
- Modifying the production system is dangerous, especially for files not under version control.
Potential solution approaches
Automated and repeatable deployment
Releasing wikis and Parsoid should be fully automated. Deploying a wiki should be as simple as choosing a name for it and running the deployment script.
- Errors would occur only once: once a mistake is fixed in the script, it cannot recur.
- Deployment should be repeatable.
- Maintaining extensive documentation is no longer necessary; everything is documented in the scripts.
- Scripts are better than documentation, as everything in them is explicit.
- Deployment should not depend on a single expert; all admins should be able to deploy a wiki using the script.
- Manual deployment is boring and time-consuming.
- Testing is easy as the wiki can be deployed to the staging environment.
- Any version of the wiki should be deployable, particularly new ones.
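As a concrete illustration, here is a minimal shell sketch of such a deployment script. The `deploy_wiki` function name, the default wiki root, and the step list are assumptions for illustration, not the farm's actual layout; the real work of each step is only indicated in comments.

```shell
#!/bin/sh
# Hypothetical sketch of a one-command wiki deployment.
set -e  # abort on the first failing step, never leave a half-deployed wiki

deploy_wiki() {
    wiki="$1"
    root="${WIKI_ROOT:-/tmp/wikifarm}"   # assumed wiki root, overridable for testing
    target="$root/$wiki"
    mkdir -p "$target"
    # The real script would now, step by step:
    #  1. check out the requested MediaWiki version from version control
    #  2. create the database and database user
    #  3. generate LocalSettings.php from a template of sensible defaults
    #  4. run the Selenium test suite against the new wiki
    printf 'deployed %s to %s\n' "$wiki" "$target"
}

deploy_wiki demo_wiki
```

Deploying a wiki then really is a matter of choosing a name and running the script.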
Staging environment
Deploying a new wiki does not, strictly speaking, require a staging server. Yet upgrades of MediaWiki and Parsoid may require corresponding versions of libraries (e.g. PHP) that might break the production server. A staging server would be practical for detecting such problems well in advance.
- As problems would be detected in advance, it should be possible to fix them without additional downtime.
- As we use open-source software developed elsewhere, on other Linux versions etc., it would be better to test it before running it on our server.
- Integration problems with our network environment should be detectable in time.
Configuration (and everything else) under version control
Configuration files, including server, database, wikis, cron etc. should be kept under version control.
- Having configuration files under version control is not only safer, but also makes it possible to deploy any version necessary.
- It helps to keep track of what is in production.
- Last-minute patches typically go undocumented; under version control, however, they are recorded, provided the change has been committed.
- Rollbacks become possible.
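For illustration, a minimal sketch of the rollback this enables, assuming Git as the version control system. The file name `my.cnf`, the setting, and the demo directory are hypothetical.

```shell
# Hypothetical sketch: a configuration file under Git, patched and rolled back.
DEMO="${DEMO:-/tmp/config-under-git}"
mkdir -p "$DEMO" && cd "$DEMO"
git init -q .
git config user.email admin@example.org   # local identity for the demo commits
git config user.name  "Farm Admin"

echo 'max_connections = 100' > my.cnf     # baseline database setting
git add my.cnf
git commit -q -m 'Baseline database configuration'

echo 'max_connections = 500' > my.cnf     # a last-minute patch, but committed
git commit -q -am 'Raise connection limit'

git checkout -q HEAD~1 -- my.cnf          # rollback: restore the previous version
cat my.cnf                                # -> max_connections = 100
```

Because the patch was committed, both the change and its rollback are documented in the history.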
Introducing deployment automation
Deployment automation (aka continuous integration) can be introduced stepwise. The easiest course would be to start by automating the deployment of new wikis.
Prerequisites for a minimal system
- Make sure version control is working on the server and the repository structure is fit-for-purpose
- These things are needed under version control: build file, installation scripts, tests
- Install a build tool, such as Ant (or Maven).
- Make sure all admins agree that this is the way to go. No manual deployment of wikis allowed any more!
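A minimal sketch of a fit-for-purpose repository layout for the artefacts listed above, assuming Git; the directory names (`scripts`, `tests`) and the placeholder files are assumptions.

```shell
# Hypothetical sketch of the deployment repository layout.
REPO="${REPO:-/tmp/deployment-repo}"
mkdir -p "$REPO"/scripts "$REPO"/tests
cd "$REPO"
git init -q .
: > build.xml            # the Ant build descriptor (empty placeholder)
: > scripts/.gitkeep     # installation scripts go here
: > tests/.gitkeep       # the Selenium test suite goes here
ls
```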
Deploy a new wiki automatically
Once the following is in place, it should be possible to deploy a simple wiki by executing the build file.
- Using Ant (or Maven), create a build descriptor with a "target" for each step in the deployment.
- For each "target", write the necessary Bash scripts or MySQL files, where applicable.
- Keep the build file and the scripts under version control.
- Choose some reasonable defaults for common wikis.
- Run a suite of Selenium tests on the new wiki (tests should be kept under version control too).
If tests don't pass, then repeat until they do:
- Change the settings to solve the problem
- Commit the changes
Finally:
- Put the new LocalSettings.php under version control too.
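The procedure above can be sketched as a single entry point with one "target" per step. The following minimal shell sketch stands in for an actual Ant build descriptor; all target names and the echoed steps are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of the build entry point: one function per "target",
# in the spirit of an Ant build file.
set -e

target_checkout() { echo 'checking out MediaWiki sources'; }
target_database() { echo 'creating database and user'; }
target_settings() { echo 'generating LocalSettings.php with defaults'; }
target_test()     { echo 'running the Selenium test suite'; }

run_target() {
    case "$1" in
        checkout) target_checkout ;;
        database) target_database ;;
        settings) target_settings ;;
        test)     target_test ;;
        all)      target_checkout; target_database; target_settings; target_test ;;
        *)        echo "unknown target: $1" >&2; return 1 ;;
    esac
}

run_target all > /tmp/build-demo.log
cat /tmp/build-demo.log
```

In a real Ant build, each function would instead be a `<target>` element whose tasks invoke the corresponding scripts.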
Next steps towards full automation
While even minimal automation should make things more reliable and repeatable, deployment is still a complex procedure. To make everything as simple as clicking a button, we will need a continuous integration (CI) tool or service. Additionally, CI tools provide monitoring functions to keep an eye on the state of the deployment. Here are some possibilities:
- Travis
- A service which works together with GitHub. Web interface, many browser add-ons. http://travis-ci.org
- Hudson
- A tool which has to be installed on our server. It uses our own version control and user authentication, comes with its own web server, and integrates into Eclipse. http://wiki.eclipse.org/Hudson-ci/Meet_Hudson
- Bamboo
- A tool from the Jira suite. Integrates with Bitbucket (Git), Confluence, etc. Requires everything to be in a local or shared Git repository. https://www.atlassian.com/software/bamboo/
References
Humble, J. and Farley, D., 2010. Continuous delivery: reliable software releases through build, test, and deployment automation. Pearson Education. http://proquestcombo.safaribooksonline.com.libezproxy.open.ac.uk/book/software-engineering-and-development/software-testing/9780321670250