- Run `groovy AnsibleSkeleton <projectname> <options>` where:
  - `<projectname>` is the name of the application;
  - `<options>` is a list of dependencies the project requires, in the order you wish them to appear in the scripts. E.g. 'mysql' if the project requires a MySQL database, 'properties' if the project has external configuration file(s).

  Example: `groovy AnsibleSkeleton logger-service mysql properties tomcat` would create a skeleton for the Logger Service, with a MySQL database, configuration files and Apache & Tomcat configuration/deployments.
- This will create the following files/directories:
  - `<projectname>-standalone.yml` (playbook)
  - `inventories/vagrant/<projectname>` (inventory file that can be used for local Vagrant deployments)
  - `roles/<projectname>/vars/main.yml` (variables that do not change between environments, e.g. artifact id, version, etc.) - see the sketch after this list
  - `roles/<projectname>/templates/<projectname>-config.properties` (template configuration file; only created if 'properties' was included)
  - `roles/<projectname>/tasks/main.yml` (task file listing all the steps required to build & deploy the application)
- Edit the relevant files to fill in missing/placeholder variable names, etc.
- Add any additional files or deployment tasks that are not supported by the skeleton
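
As an illustration of the kind of content that ends up in `roles/<projectname>/vars/main.yml`, here is a hedged sketch for the logger-service example. The variable names and values below are placeholders for illustration only, not the exact output of the AnsibleSkeleton script.

    # roles/logger-service/vars/main.yml - illustrative sketch only
    logger_artifact_id: "logger-service"      # artifact id (placeholder)
    logger_version: "1.0"                     # version (placeholder)
    # URL of the WAR in a Maven repository; used later as war_url (placeholder URL)
    logger_artifact_url: "https://nexus.example.org/repository/ala/logger-service-{{ logger_version }}.war"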
If your application requires a WAR file deployment then use the common `tomcat_deploy` and `apache_vhost` roles. This will ensure a consistent approach to Tomcat and Apache configuration. If your application requires a JAR deployment then use the `exec_jar` role.
The easiest way to use these roles is to use parameterised includes in your main task file. E.g.

    - include: ../../apache_vhost/tasks/main.yml context_path='{{ logger_context_path }}' hostname='{{ logger_hostname }}'
      tags:
        - logger
        - apache_vhost
        - deploy

    - include: ../../tomcat_deploy/tasks/main.yml war_url='{{ logger_artifact_url }}' context_path='{{ logger_context_path }}' hostname='{{ logger_hostname }}'
      tags:
        - logger
        - apache_vhost
        - deploy
The following parameters are required:

- `context_path` - the value of the `<projectname>_context_path` variable for your project
- `hostname` - the `<projectname>_hostname` variable for your project
- `war_url` - the URL of the WAR file to be downloaded and deployed (this should be from maven and is usually specified in the vars/main.yml file)
Note: the AnsibleSkeleton script will set all this up for you.
These roles depend heavily on two inventory properties:

- `<projectname>_context_path`
- `<projectname>_hostname`

These variables tell the `tomcat_deploy` and `apache_vhost` roles how to configure your application using the following rules:
- If `<projectname>_hostname` is not empty, not localhost, not the loopback address and does not contain a colon:
  - If `<projectname>_context_path` is blank or "/":
    - Create an Apache Virtual Host for the `<projectname>_hostname` with proxy rules to forward the root context to Tomcat (i.e. `ProxyPass / ajp://localhost:8009/`)
    - Create a Tomcat Virtual Host for the `<projectname>_hostname` with the application deployed as ROOT.war (so it is the default context).
  - If `<projectname>_context_path` is NOT blank or "/":
    - Create an Apache Virtual Host for the `<projectname>_hostname` with proxy rules to forward the `<projectname>_context_path` context to Tomcat (i.e. `ProxyPass /<context> ajp://localhost:8009/<context>`)
    - Create a Tomcat Virtual Host for the `<projectname>_hostname` with the application deployed as `<projectname>_context_path`.war.
- If `<projectname>_hostname` is empty, localhost, the loopback address or contains a colon:
  - Do not create any Apache configuration
  - Do not create a Tomcat virtual host
  - If `<projectname>_context_path` is blank or "/":
    - Deploy the application as ROOT.war to the webapps/ directory on Tomcat
  - If `<projectname>_context_path` is NOT blank or "/":
    - Deploy the application as `<projectname>_context_path`.war to the webapps/ directory on Tomcat
NOTES:

- These steps are non-destructive, so if the Tomcat or Apache vhosts already exist then they will be updated.
- You cannot have multiple applications as the root context, so if your playbook/inventory uses `<projectname>_context_path=` (i.e. a blank context path) for multiple applications then you will have a problem.
There are several other properties that can be specified to perform certain actions (a combined sketch follows this list):

- `proxy_root_context_to` - this property is often used by the 'hub' deployments and allows the root context to be proxied to a different context path on Tomcat. For an example, see the appd.yml playbook.
- `additional_proxy_pass` - this property is used to create additional proxy rules in the Apache virtual host. It is a list property with items in the format `{ src: <from>, dest: <to> }`. For example:

      additional_proxy_pass:
        - { src: "/biocache-media", dest: "!" }

  will create `ProxyPass /biocache-media !`
- `log_filename` - the filename of the application's log file, so that the tomcat_deploy role can back it up. If not provided, the war_filename will be used. Do not include the file extension.
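
Taken together, a hypothetical group_vars/inventory-vars snippet (YAML) using these optional properties might look like the sketch below; the values are placeholders, not taken from a real ALA playbook.

    # Hypothetical values - adjust to your application
    proxy_root_context_to: "/my-hub"        # requests to "/" are proxied to this Tomcat context
    log_filename: "logger-service"          # log file name (no extension) to back up on deploy
    additional_proxy_pass:
      - { src: "/biocache-media", dest: "!" }   # produces "ProxyPass /biocache-media !"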
The existing WAR file and log file will be backed up before the new WAR is deployed.
HTTPS can be enabled for your playbook by specifying `ssl = true` in your inventory.
There are two options for installing HTTPS key/cert/etc files on your server:
- Copy local files to the server; or
- Manage them on the server with a tool like SSL Mate (this is the default).
ALA uses option 2.
Use the following parameters if you need to copy local files to your server:

- `ssl = true` - this enables HTTPS
- `copy_https_certs_from_local = true` - this enables the copy option
- `ssl_certificate_server_dir = /path/to/cert/dir/on/server` - this is the location on the server for your certificate and key files
- `ssl_certificate_local_dir = /LOCAL/path/to/ssl/files` - this is the LOCAL file path to the HTTPS configuration files (key, cert, chain) that need to be deployed to the server
- `ssl_cert_file = filename` - this is the name of the HTTPS certificate file, used to copy the file to the server (into ssl_certificate_server_dir) and to set the `SSLCertificateFile` directive (to ssl_certificate_server_dir/ssl_cert_file).
- `ssl_key_file = filename` - this is the name of the HTTPS key file, used to copy the file to the server (into ssl_certificate_server_dir) and to set the `SSLCertificateKeyFile` directive (to ssl_certificate_server_dir/ssl_key_file).
- `ssl_chain_file = filename` - this is the name of the HTTPS certificate chain file, used to copy the file to the server (into ssl_certificate_server_dir) and to set the `SSLCertificateChainFile` directive (to ssl_certificate_server_dir/ssl_chain_file).
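
Put together, a hypothetical inventory fragment for the copy option might look like this; the directory paths and filenames are placeholders only.

    # Hypothetical inventory values - paths and filenames are placeholders
    ssl = true
    copy_https_certs_from_local = true
    ssl_certificate_server_dir = /etc/ssl/logger
    ssl_certificate_local_dir = /home/me/certs/logger.ala.org.au
    ssl_cert_file = logger.ala.org.au.crt
    ssl_key_file = logger.ala.org.au.key
    ssl_chain_file = logger.ala.org.au.chain.crt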
- To deploy the logger-service to logger.ala.org.au:

      logger_context_path =
      logger_hostname = logger.ala.org.au

- To deploy the logger service to ala.org.au/logger:

      logger_context_path = logger
      logger_hostname = ala.org.au

(These values go in the inventory file to be used when you run the playbook.)
Most ALA applications require an external configuration properties file. This file is typically deployed via an ansible template so that environment-specific values (such as database connections and URLs to other services) can be substituted at deploy time.
When writing a new Ansible script, ensure that your application's configuration properties file has been moved (do NOT leave a copy in the application's GIT repository) to the templates directory for the application role, and replace ALL URLs and other environment-specific values with ansible variables. Note: this must include the CAS server URLs.
This will ensure that your application can be safely deployed to a non-production or non-ALA environment.
The AnsibleSkeleton script will create a skeleton properties file for your application, with appropriate variables for the auth servers (`auth_base_url` and `auth_cas_url`).
Auth URLs MUST be accessed via HTTPS.
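
For illustration, a templated properties file might look like the sketch below. The property keys and the database variables are hypothetical examples; only `auth_base_url` and `auth_cas_url` come from the skeleton.

    # roles/logger-service/templates/logger-service-config.properties - illustrative sketch
    # Property keys below are hypothetical; only auth_base_url / auth_cas_url come from the skeleton
    casServerUrlPrefix={{ auth_cas_url }}
    casServerLoginUrl={{ auth_cas_url }}/login
    userDetailsUrl={{ auth_base_url }}/userdetails/
    # Environment-specific values are substituted at deploy time, e.g. a database connection
    dataSource.url=jdbc:mysql://{{ logger_db_host }}/{{ logger_db_name }}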
The inventories/vagrant directory contains inventories for deploying applications to a Vagrant instance.

For these inventories to work, you will need to have an Ubuntu 14.04 Vagrant VM running, with its IP address mapped to both `vagrant1` AND `vagrant1.ala.org.au` in your /etc/hosts file. The inventories refer to the host as 'vagrant1', and the URLs for the applications will be 'vagrant1.ala.org.au/something'.
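
For example, the /etc/hosts entry might look like the following; the IP address shown is an assumption, so use whatever address your Vagrantfile assigns.

    # /etc/hosts - the IP address below is an assumption, not a requirement
    192.168.33.10   vagrant1 vagrant1.ala.org.au

The playbook can then be run against the Vagrant inventory, e.g. `ansible-playbook -i inventories/vagrant/logger-service logger-service-standalone.yml` (using the logger-service example; substitute your own inventory and playbook names).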
The `roles/db-backup` role can be included in your playbook to back up a database instance during each deployment. This role currently supports MongoDB, MySQL and Postgres. The role defines a db_backup tag.

This role will export the database to a .gz file called db_[timestamp].gz in a directory you specify (defaults to /data).

To use this role, add the following to your playbook before your application deployment (a fuller sketch follows):

    - { role: db-backup, db: "mongo|mysql|postgres" }
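
For instance, a playbook might list the backup role ahead of the application role as in the sketch below; the host pattern, application role name and MySQL choice are illustrative only.

    # Illustrative playbook excerpt - host and role names are placeholders
    - hosts: logger-service
      roles:
        - { role: db-backup, db: "mysql" }
        - logger-service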
You will also need to specify the following variables in your inventory file:

| Database | Option      | Mandatory | Default   | Description                                          |
|----------|-------------|-----------|-----------|------------------------------------------------------|
| All      | backup_dir  | No        | /data     | The directory to save the backup file to             |
| MongoDB  | db_hostname | No        | localhost | The hostname to use when connecting to the database  |
| MongoDB  | db_port     | No        | 27017     | The port to use when connecting to the database      |
| MySQL    | db_user     | Yes       | none      | The user to connect as                               |
| MySQL    | db_password | Yes       | none      | The user's password                                  |
| MySQL    | db_name     | Yes       | none      | The name of the database schema to backup            |
| MySQL    | db_host     | No        | localhost | The hostname to use when connecting to the database  |
| MySQL    | db_port     | No        | 3306      | The port to use when connecting to the database      |
| Postgres | db_user     | Yes       | none      | The user to connect as                               |
| Postgres | db_name     | Yes       | none      | The name of the database schema to backup            |
- NOTE: There are some current limitations with the postgres support due to the way it handles authentication. The current implementation only supports postgres instances running on localhost as the postgres user.
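
As an illustration, inventory values for a MySQL backup might look like the following; the values shown are placeholders.

    # Hypothetical inventory values for the db-backup role (MySQL) - placeholders only
    backup_dir = /data
    db_user = logger_user
    db_password = changeme
    db_name = logger
    db_host = localhost
    db_port = 3306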
To skip the database backup step during your deployment, add the following to your ansible-playbook command: `--skip-tags backup`.