Installing kubernetes and databases on VMs with ansible


In a production environment, some parts of the wire-server infrastructure (such as the cassandra databases) are best configured outside kubernetes. Additionally, kubernetes itself can be set up rapidly with kubespray, via ansible. This section covers installing VMs with ansible. It assumes:


  • A bare-metal setup (no cloud provider)

  • All machines run ubuntu 16.04 or ubuntu 18.04

  • All machines have static IP addresses

  • Time on all machines is being kept in sync

  • You have the following virtual machines:









Role            Memory   Disk
-------------   ------   ------
cassandra       4 GB     80 GB
minio           2 GB     100 GB
elasticsearch   2 GB     10 GB
redis           2 GB     10 GB
kubernetes      8 GB     20 GB
restund         2 GB     10 GB

(It’s up to you how you create these machines - kvm on a bare metal machine, VM on a cloud provider, real physical machines, etc.)
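The assumptions above can be spot-checked on each machine before you continue. A minimal sketch (the timedatectl call assumes a systemd host and is guarded accordingly):

```shell
# Spot-check the OS release and clock synchronisation on a VM.
. /etc/os-release
echo "OS: ${NAME:-unknown} ${VERSION_ID:-unknown}"
case "${VERSION_ID:-}" in
  16.04|18.04) echo "release covered by this guide" ;;
  *)           echo "WARNING: release not covered by this guide" ;;
esac
# On systemd hosts, NTPSynchronized=yes means the clock is in sync.
timedatectl show -p NTPSynchronized 2>/dev/null \
  || echo "timedatectl unavailable; verify NTP manually"
```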

Preparing to run ansible

Dependencies on operator’s machine

You need python2, some python dependencies, a specific version of ansible, and GNU make. Then you need to download specific ansible roles using ansible-galaxy, and the kubectl and helm binaries. You have two options to achieve this:

(Option 1) How to install the necessary components locally when using Debian or Ubuntu as your operating system

Install ‘poetry’ (python dependency management). See also the poetry documentation.

This assumes you’re using python 2.7 (if you only have python3 available, you may need to find some workarounds):

sudo apt install -y python2.7 python-pip
curl -sSL <poetry installer URL> | python2.7 - --yes  # see the poetry documentation for the installer URL
source $HOME/.poetry/env
ln -s /usr/bin/python2.7 $HOME/.poetry/bin/python

Install the python dependencies to run ansible.

git clone https://github.com/wireapp/wire-server-deploy.git
cd wire-server-deploy/ansible
## (optional) if you need ca certificates other than the default ones:
# export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
poetry install


The ‘make download-cli-binaries’ part of ‘make download’ requires either that you run all of this as root, or that the user running these scripts can ‘sudo’ without being prompted for a password. One way to cache sudo credentials is to run ‘sudo ls’, enter the password when prompted, and THEN run ‘make download’.

Download the ansible roles necessary to install databases and kubernetes:

make download

(Option 2) How to use docker on the local host with a docker image that contains all the dependencies

On your machine you need to have the docker binary available. See how to install docker. Then:

docker pull quay.io/wire/networkless-admin

# cd to a fresh, empty directory and create some sub directories
cd ...  # you pick a good location!
mkdir ./admin_work_dir ./dot_kube ./dot_ssh && cd ./admin_work_dir
# copy ssh key (the easy way, if you want to use your main ssh key pair)
cp ~/.ssh/id_rsa ../dot_ssh/
# alternatively: create a key pair exclusively for this installation
ssh-keygen -t ed25519 -a 100 -f ../dot_ssh/id_ed25519
ssh-add ../dot_ssh/id_ed25519
# make sure the server accepts your ssh key for user root
ssh-copy-id -i ../dot_ssh/id_ed25519.pub root@<server>

docker run -it --network=host -v $(pwd):/mnt -v $(pwd)/../dot_ssh:/root/.ssh -v $(pwd)/../dot_kube:/root/.kube quay.io/wire/networkless-admin
# inside the container, copy everything to the mounted host file system:
cp -a /src/* /mnt
# and make sure the git repos are up to date:
cd /mnt/wire-server && git pull
cd /mnt/wire-server-deploy && git pull
cd /mnt/wire-server-deploy-networkless && git pull

(The name of the docker image contains networkless because it was originally constructed for high-security installations without connection to the public internet. Since then it has grown to be our recommended general-purpose installation platform.)

Now exit the docker container. On subsequent times:

cd admin_work_dir
docker run -it --network=host -v $(pwd):/mnt -v $(pwd)/../dot_ssh:/root/.ssh -v $(pwd)/../dot_kube:/root/.kube quay.io/wire/networkless-admin
cd wire-server-deploy/ansible
# do work.

Any changes inside the container under the mount points listed in the above command will persist (albeit as user root); everything else will not, so be careful when creating other files.

To connect to a running container for a second shell:

docker exec -it `docker ps -q --filter="ancestor=quay.io/wire/networkless-admin"` /bin/bash

Adding IPs to hosts.ini

Go to your checked-out wire-server-deploy/ansible folder:

cd wire-server-deploy/ansible

Copy the example hosts file:

cp hosts.example.ini hosts.ini
  • Edit hosts.ini, setting the permanent IPs of the hosts you are setting up wire on.

  • Replace the ansible_host values (X.X.X.X) with the IPs that you can reach by SSH; these are the ‘internal’ addresses of the machines, not what a client will connect to.

  • Replace the ip values (Y.Y.Y.Y) with the IPs on which you wish kubernetes to provide services to clients.

There are more settings in this file that we will set in later steps.
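For illustration, a fragment of an edited hosts.ini might look like the following (host names and addresses are hypothetical; keep the group structure from hosts.example.ini):

```ini
# Hypothetical addresses -- substitute your own.
[all]
kubenode01 ansible_host=192.168.122.21 ip=10.10.1.21
kubenode02 ansible_host=192.168.122.22 ip=10.10.1.22
kubenode03 ansible_host=192.168.122.23 ip=10.10.1.23
cassandra01 ansible_host=192.168.122.31
```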


Some of these playbooks mess with the hostnames of their targets. You MUST pick different hosts for playbooks that rename the host. If you e.g. attempt to run Cassandra and k8s on the same 3 machines, the hostnames will be overwritten by the second installation playbook, breaking the first.

At the least, we know that the cassandra and kubernetes playbooks are both guilty of hostname manipulation.



If you use ssh keys, and the user you login with is either root or can elevate to root without a password, you don’t need to do anything further to use ansible. If, however, you use password authentication for ssh access, and/or your login user needs a password to become root, see Manage ansible authentication settings.
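Before running full playbooks, you can check that ansible reaches every host in the inventory (a sketch; run it from wire-server-deploy/ansible, guarded in case the poetry environment is not set up yet):

```shell
# Every host in hosts.ini should answer with "pong".
if command -v poetry >/dev/null 2>&1 && [ -f pyproject.toml ]; then
  poetry run ansible all -i hosts.ini -m ping
else
  echo "run this from wire-server-deploy/ansible after 'poetry install'"
fi
```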

Running ansible to install software on your machines

You can install kubernetes, cassandra, restund, etc in any order.


In case you only have a single network interface with public IPs but wish to protect inter-database communication, you may use the tinc.yml playbook to create a private network interface. In this case, ensure tinc is set up BEFORE running any other playbook. See tinc.

Installing kubernetes

Kubernetes is installed via ansible. To install kubernetes:

From wire-server-deploy/ansible:

poetry run ansible-playbook -i hosts.ini kubernetes.yml -vv

When the playbook finishes correctly (which can take up to 20 minutes), you should have a folder artifacts containing a file admin.conf. Copy this file:

mkdir -p ~/.kube
cp artifacts/admin.conf ~/.kube/config

Make sure you can reach the server:

kubectl version

should give output similar to this:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
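As an additional smoke test, you can list the cluster nodes; each should report a Ready status. (A sketch, guarded in case kubectl or the copied admin.conf are not in place yet.)

```shell
# All kubernetes nodes should be listed in the "Ready" state.
if command -v kubectl >/dev/null 2>&1 && [ -f "$HOME/.kube/config" ]; then
  kubectl get nodes -o wide
else
  echo "kubectl or ~/.kube/config missing; see the steps above"
fi
```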


Installing cassandra

  • Set variables in the hosts.ini file under [cassandra:vars]. Most defaults should be fine, except maybe for the cluster name and the network interface to use:

## set to True if using AWS
is_aws_environment = False
# cassandra_clustername: default

## Set the network interface name for cassandra to bind to if you have more than one network interface
# cassandra_network_interface = eth0

(see defaults/main.yml for a full list of variables to change if necessary)

  • Use poetry to run ansible, and deploy Cassandra:

poetry run ansible-playbook -i hosts.ini cassandra.yml -vv
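Afterwards, you can check on one of the cassandra machines that the cluster has formed; every node should appear with status UN (Up/Normal). This sketch assumes nodetool is on the PATH of the cassandra VM:

```shell
# Show the cassandra cluster overview; run on a cassandra VM.
if command -v nodetool >/dev/null 2>&1; then
  nodetool status
else
  echo "nodetool not found; run this on one of the cassandra machines"
fi
```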


Installing elasticsearch

  • In your ‘hosts.ini’ file, in the [all:vars] section, set ‘elasticsearch_network_interface’ to the name of the interface you want elasticsearch nodes to talk to each other on. For example:

# default first interface on ubuntu on kvm:
elasticsearch_network_interface = ens3

  • If you are performing an offline install, or are otherwise retrieving elasticsearch-oss packages from an APT mirror, you need to specify that mirror. In the ‘ELASTICSEARCH’ section of hosts.ini, add two lines forcing elasticsearch to use a given APT mirror, with a given GPG key:

es_apt_key = "https://<mymirror>/linux/ubuntu/gpg"
es_apt_url = "deb [trusted=yes] https://<mymirror>/apt bionic stable"
  • Use poetry to run ansible, and deploy ElasticSearch:

poetry run ansible-playbook -i hosts.ini elasticsearch.yml -vv
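To verify the cluster, query its health endpoint from one of the elasticsearch machines (assuming the node also listens on localhost; otherwise substitute the IP of the configured interface). A healthy 3-node cluster reports "number_of_nodes" : 3 and a green status:

```shell
# Query cluster health; falls back to a notice if nothing listens on 9200.
if command -v curl >/dev/null 2>&1; then
  curl -sSf 'http://localhost:9200/_cluster/health?pretty' \
    || echo "no elasticsearch reachable on localhost:9200"
fi
```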


Installing minio

  • In your ‘hosts.ini’ file, in the [all:vars] section, make sure you set ‘minio_network_interface’ to the name of the interface you want minio nodes to talk to each other on. The default from the playbook will not be correct for your machine. For example:

# default first interface on ubuntu on kvm:
minio_network_interface = ens3

  • In your ‘hosts.ini’ file, in the [minio:vars] section, ensure you set minio_access_key and minio_secret_key.

  • Use poetry to run ansible, and deploy Minio:

poetry run ansible-playbook -i hosts.ini minio.yml -vv


Installing restund

Set variables in the hosts.ini file under [restund:vars]. Most defaults should be fine, except for the network interfaces to use:

  • set ansible_host=X.X.X.X under the [all] section to the IP for SSH access.

  • (recommended) set restund_network_interface under the [restund:vars] section to the name of the interface you wish the process to use. Defaults to the default_ipv4_address, with a fallback to eth0.

  • (optional) restund_peer_udp_advertise_addr=Y.Y.Y.Y: set this to the IP to advertise to other restund servers if it differs from the IP on the ‘restund_network_interface’. If using ‘restund_peer_udp_advertise_addr’, make sure that UDP (!) traffic from any restund server (including itself) can reach that IP (for restund <-> restund communication). This should only be necessary if you’re installing restund on a VM that is reachable on a public IP address but where the process cannot bind to that public IP directly (e.g. on an AWS VPC VM). If unset, restund <-> restund UDP traffic will default to the IP of the restund_network_interface.

restund01         ansible_host=X.X.X.X


## Set the network interface name for restund to bind to if you have more than one network interface
## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0
restund_network_interface = eth0

(see defaults/main.yml for a full list of variables to change if necessary)

Install restund:

poetry run ansible-playbook -i hosts.ini restund.yml -vv


After running the above playbooks, it is important to ensure that everything is set up correctly. Please have a look at the post-install checks in the section What sort of checks should I run after a successful installation?

For example, to verify that NTP is correctly set up on the cassandra nodes:

poetry run ansible-playbook -i hosts.ini cassandra-verify-ntp.yml -vv

Installing helm charts - prerequisites

The helm_external.yml playbook can be used locally to write or update the IPs of the databases into the values/cassandra-external/values.yaml file, and thus make them available for helm and the ...-external charts (e.g. cassandra-external).

Ensure to define the following in your hosts.ini under [all:vars]:

minio_network_interface = ...
cassandra_network_interface = ...
elasticsearch_network_interface = ...
redis_network_interface = ...

Then run the playbook:

poetry run ansible-playbook -i hosts.ini -vv --diff helm_external.yml
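To double-check what was written before installing any charts, you can print the generated values files (a sketch, assuming the values/ directory layout referenced above, relative to your wire-server-deploy checkout):

```shell
# Print every generated *-external values file so the database IPs
# can be reviewed before running helm.
for f in values/*-external/values.yaml; do
  if [ -f "$f" ]; then
    echo "== $f"
    cat "$f"
  fi
done
```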

Now you can install the helm charts.

Next steps for a highly available production installation

Your next step will be Installing wire-server (production) components using helm