Installing kubernetes and databases on VMs with ansible

Introduction

In a production environment, some parts of the wire-server infrastructure (such as the cassandra databases) are best configured outside kubernetes. Additionally, kubernetes itself can be set up rapidly with kubespray, via ansible. This section covers installing software on these VMs with ansible.

Assumptions

  • A bare-metal setup (no cloud provider)

  • All machines run Ubuntu 16.04 or Ubuntu 18.04

  • All machines have static IP addresses

  • Time on all machines is being kept in sync

  • You have the following virtual machines:

Name           Amount   CPU   Memory   Disk
cassandra      3        2     4 GB     80 GB
minio          3        1     2 GB     100 GB
elasticsearch  3        1     2 GB     10 GB
redis          3        1     2 GB     10 GB
kubernetes     3        4     8 GB     20 GB
restund        2        1     2 GB     10 GB

(It’s up to you how you create these machines: KVM on a bare-metal host, VMs on a cloud provider, real physical machines, etc.)

Preparing to run ansible

Dependencies on operator’s machine

You need python2, some python dependencies, a specific version of ansible, and GNU make. Then, you need to download specific ansible roles using ansible-galaxy, as well as the kubectl and helm binaries. You have two options to achieve this:

(Option 1) How to install the necessary components locally when using Debian or Ubuntu as your operating system

First, we’re going to install Poetry. We’ll be using it to run ansible playbooks, and manage their python dependencies. See also the poetry documentation.

This assumes you’re using python 2.7 (if you only have python3 available, you may need to find some workarounds):

sudo apt install -y python2.7 python-pip
curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py > get-poetry.py
python2.7 get-poetry.py --yes
source $HOME/.poetry/env
ln -s /usr/bin/python2.7 $HOME/.poetry/bin/python
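
To confirm the toolchain is in place, you can check that both binaries resolve (a quick sanity check; the exact version numbers will differ):

python2.7 --version
poetry --version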

Install the python dependencies to run ansible:

git clone https://github.com/wireapp/wire-server-deploy.git
cd wire-server-deploy/ansible
## (optional) if you need ca certificates other than the default ones:
# export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
poetry install

Note

The ‘make download-cli-binaries’ part of ‘make download’ requires either that you have run all of this as root, or that the user running these scripts can ‘sudo’ without being prompted for a password. To preemptively work around this, you can run ‘sudo ls’, enter your password when prompted, and THEN run ‘make download’.

Download the ansible roles necessary to install databases and kubernetes:

make download
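
Afterwards, you can confirm that the downloaded binaries ended up on your PATH (a quick sanity check; the exact install location may vary):

command -v kubectl helm
kubectl version --client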

(Option 2) How to use docker on the local host with a docker image that contains all the dependencies

On your machine you need to have the docker binary available. See how to install docker. Then:

docker pull quay.io/wire/networkless-admin

# cd to a fresh, empty directory and create some sub directories
cd ...  # you pick a good location!
mkdir ./admin_work_dir ./dot_kube ./dot_ssh && cd ./admin_work_dir
# copy ssh key (the easy way, if you want to use your main ssh key pair)
cp ~/.ssh/id_rsa ../dot_ssh/
# alternatively: create a key pair exclusively for this installation
ssh-keygen -t ed25519 -a 100 -f ../dot_ssh/id_ed25519
ssh-add ../dot_ssh/id_ed25519
# make sure the server accepts your ssh key for user root
ssh-copy-id -i ../dot_ssh/id_ed25519.pub root@<server>
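
Before starting the container, it is worth confirming that key-based login to your server works (substitute your server's address for <server>, as above):

ssh -i ../dot_ssh/id_ed25519 root@<server> 'echo ssh works'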

docker run -it --network=host -v $(pwd):/mnt -v $(pwd)/../dot_ssh:/root/.ssh -v $(pwd)/../dot_kube:/root/.kube quay.io/wire/networkless-admin
# inside the container, copy everything to the mounted host file system:
cp -a /src/* /mnt
# and make sure the git repos are up to date:
cd /mnt/wire-server && git pull
cd /mnt/wire-server-deploy && git pull
cd /mnt/wire-server-deploy-networkless && git pull

(The name of the docker image contains networkless because it was originally constructed for high-security installations without connection to the public internet. Since then it has grown to be our recommended general-purpose installation platform.)

Now exit the docker container. On subsequent times:

cd admin_work_dir
docker run -it --network=host -v $(pwd):/mnt -v $(pwd)/../dot_ssh:/root/.ssh -v $(pwd)/../dot_kube:/root/.kube quay.io/wire/networkless-admin
cd wire-server-deploy/ansible
# do work.

Any changes inside the container under the mount points listed in the above command will persist (albeit as user root); everything else will not, so be careful when creating other files.

To connect to a running container for a second shell:

docker exec -it `docker ps -q --filter="ancestor=quay.io/wire/networkless-admin"` /bin/bash

Adding IPs to hosts.ini

Go to your checked-out wire-server-deploy/ansible folder:

cd wire-server-deploy/ansible

Copy the example hosts file:

cp hosts.example.ini hosts.ini
  • Edit the hosts.ini, setting the permanent IPs of the hosts you are setting up wire on. (An illustrative snippet follows this list.)

  • On each of the lines declaring a database service node (lines in the [all] section beginning with cassandra, elasticsearch, or minio), replace the ansible_host values (X.X.X.X) with the IPs of the nodes that you can connect to via SSH. These are the ‘internal’ addresses of the machines, not what a client will be connecting to.

  • On each of the lines declaring a kubernetes node (lines in the [all] section starting with ‘kubenode’) replace the ip values (Y.Y.Y.Y) with the IPs which you wish kubernetes to provide services to clients on, and replace the ansible_host values (X.X.X.X) with the IPs of the nodes that you can connect to via SSH. If the IP you want to provide services on is the same IP that you use to connect, remove the ip=Y.Y.Y.Y completely.

  • On each of the lines declaring an etcd node (lines in the [all] section starting with etcd), use the same values as you used on the corresponding kubenode lines in the prior step.

  • If you are deploying Restund for voice/video services then on each of the lines declaring a restund node (lines in the [all] section beginning with restund), replace the ansible_host values (X.X.X.X) with the IPs of the nodes that you can connect to via SSH.
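
As an illustration, after these edits the relevant lines of the [all] section might look like the following (all addresses here are made up; your hosts.ini will contain more entries and comments):

[all]
cassandra01      ansible_host=192.168.122.11
elasticsearch01  ansible_host=192.168.122.14
minio01          ansible_host=192.168.122.17
kubenode01       ansible_host=192.168.122.21  ip=10.10.1.21
etcd01           ansible_host=192.168.122.21  ip=10.10.1.21
restund01        ansible_host=192.168.122.31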

There are more settings in this file that we will set in later steps.

Warning

Some of these playbooks manipulate the hostnames of their targets. You MUST pick different hosts for playbooks that rename the host. If you, for example, attempt to run cassandra and kubernetes on the same 3 machines, the hostnames will be overwritten by the second installation playbook, breaking the first.

At a minimum, we know that the cassandra, kubernetes and restund playbooks rename their target hosts.

Authentication

Note

If you use ssh keys, and the user you login with is either root or can elevate to root without a password, you don’t need to do anything further to use ansible. If, however, you use password authentication for ssh access, and/or your login user needs a password to become root, see Manage ansible authentication settings.
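
Whichever authentication method you use, you can verify that ansible is able to reach and authenticate against every host in your inventory with a standard ad-hoc ping (run from wire-server-deploy/ansible):

poetry run ansible all -i hosts.ini -m ping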

Running ansible to install software on your machines

You can install kubernetes, cassandra, restund, etc. in any order.

Note

In case you only have a single network interface with public IPs but wish to protect inter-database communication, you may use the tinc.yml playbook to create a private network interface. In this case, ensure tinc is set up BEFORE running any other playbook. See tinc.

Installing kubernetes

Kubernetes is installed via ansible.

To install kubernetes:

From wire-server-deploy/ansible:

poetry run ansible-playbook -i hosts.ini kubernetes.yml -vv

When the playbook finishes correctly (which can take up to 20 minutes), you should have a folder artifacts containing a file admin.conf. Copy this file:

mkdir -p ~/.kube
cp artifacts/admin.conf ~/.kube/config

Make sure you can reach the server:

kubectl version

should give output similar to this:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
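
As an additional check, you can list the cluster nodes; all of them should eventually report a Ready status:

kubectl get nodes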

Cassandra

  • If you would like to change the name of the cluster, in your ‘hosts.ini’ file, in the [cassandra:vars] section, uncomment the line that changes ‘cassandra_clustername’, and change default to be the name you want the cluster to have.

  • If you want cassandra nodes to talk to each other on a specific network interface, rather than the one you use to connect via SSH, then in your ‘hosts.ini’ file, in the [all:vars] section, uncomment and set ‘cassandra_network_interface’ to the name of the ethernet interface you want cassandra nodes to talk to each other on. For example:

[cassandra:vars]
# cassandra_clustername = default

[all:vars]
## set to True if using AWS
is_aws_environment = False
## Set the network interface name for cassandra to bind to if you have more than one network interface
cassandra_network_interface = eth0

(see defaults/main.yml for a full list of variables to change if necessary)

  • Use poetry to run ansible, and deploy Cassandra:

poetry run ansible-playbook -i hosts.ini cassandra.yml -vv
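
Once the playbook finishes, one way to verify that the cluster formed correctly is to run nodetool on any of the cassandra nodes (<cassandra-node> is a placeholder for one of your hosts); all three nodes should be listed with status UN (Up/Normal):

ssh <cassandra-node> nodetool status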

ElasticSearch

  • In your ‘hosts.ini’ file, in the [all:vars] section, uncomment and set ‘elasticsearch_network_interface’ to the name of the interface you want elasticsearch nodes to talk to each other on.

  • If you are performing an offline install, or for some other reason are using an APT mirror other than the default to retrieve elasticsearch-oss packages from, you need to specify that mirror by setting ‘es_apt_key’ and ‘es_apt_url’ in the [all:vars] section of your hosts.ini file.

[all:vars]
# default first interface on ubuntu on kvm:
elasticsearch_network_interface=ens3

## Set these in order to use an APT mirror other than the default.
# es_apt_key = "https://<mymirror>/linux/ubuntu/gpg"
# es_apt_url = "deb [trusted=yes] https://<mymirror>/apt bionic stable"
  • Use poetry to run ansible, and deploy ElasticSearch:

poetry run ansible-playbook -i hosts.ini elasticsearch.yml -vv
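
After the playbook completes, you can query the standard elasticsearch health API from one of the nodes; expect number_of_nodes to be 3 and a green or yellow status:

ssh <elasticsearch-node> "curl -s 'http://localhost:9200/_cluster/health?pretty'"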

Minio

Minio is used for asset storage when you are not running on AWS infrastructure, or when you prefer not to store assets in S3 (even in encrypted form). If you are using S3 instead of Minio, skip this step.

  • In your ‘hosts.ini’ file, in the [all:vars] section, make sure you set ‘minio_network_interface’ to the name of the interface you want minio nodes to talk to each other on. The default from the playbook is not going to be correct for your machine.

  • In your ‘hosts.ini’ file, in the [minio:vars] section, ensure you set minio_access_key and minio_secret_key.

  • If you intend to use a deep link to configure your clients to talk to the backend, you need to specify your domain (and optionally your prefix), so that links to your deep link json file are generated correctly. By configuring these values, you fill in the blanks of https://{{ prefix }}assets.{{ domain }}. For example:

[minio:vars]
minio_access_key = "REPLACE_THIS_WITH_THE_DESIRED_ACCESS_KEY"
minio_secret_key = "REPLACE_THIS_WITH_THE_DESIRED_SECRET_KEY"
# if you want to use deep links for client configuration:
#minio_deeplink_prefix = ""
#minio_deeplink_domain = "example.com"

[all:vars]
# Default first interface on ubuntu on kvm:
minio_network_interface=ens3
  • Use poetry to run ansible, and deploy Minio:

poetry run ansible-playbook -i hosts.ini minio.yml -vv
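
Once deployed, you can check from one of the minio nodes that the server responds on its default port (9000). Note that the liveness endpoint used here depends on your minio version and is an assumption, not something the playbook guarantees:

ssh <minio-node> curl -sI http://localhost:9000/minio/health/live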

Restund

  • In your hosts.ini file, in the [restund:vars] section, set the restund_network_interface to the name of the interface you want restund to talk to clients on. This value defaults to the default_ipv4_address, with a fallback to eth0.

  • (optional) restund_peer_udp_advertise_addr=Y.Y.Y.Y: set this to the IP to advertise to other restund servers, if it differs from the IP on the ‘restund_network_interface’. If using ‘restund_peer_udp_advertise_addr’, make sure that UDP (!) traffic from any restund server (including itself) can reach that IP (for restund <-> restund communication). This should only be necessary if you’re installing restund on a VM that is reachable on a public IP address but whose process cannot bind to that public IP address directly (e.g. on an AWS VPC VM). If unset, restund <-> restund UDP traffic will default to the IP of the restund_network_interface.

[all]
(...)
restund01         ansible_host=X.X.X.X

(...)

[all:vars]
## Set the network interface name for restund to bind to if you have more than one network interface
## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0
restund_network_interface = eth0

(see defaults/main.yml for a full list of variables to change if necessary)

  • Place a copy of the PEM-formatted certificate and key you are going to use for TLS communication with the restund server in /tmp/tls_cert_and_priv_key.pem. Remove it after you have finished deploying restund with ansible (a sketch of this follows below).
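
A minimal sketch of that step, assuming your certificate and private key live in the hypothetical files server.crt and server.key:

# concatenate certificate and private key into a single PEM file
cat server.crt server.key > /tmp/tls_cert_and_priv_key.pem
# ... run the restund playbook (below), then remove the file again:
rm /tmp/tls_cert_and_priv_key.pem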

Install restund:

poetry run ansible-playbook -i hosts.ini restund.yml -vv
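
To check the result, you can inspect the service on the restund hosts (this assumes the playbook installs a systemd unit named restund):

ssh <restund-node> systemctl status restund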

IMPORTANT checks

After running the above playbooks, it is important to ensure that everything is set up correctly. Please have a look at the post-install checks in the section Verifying your wire-server installation. In particular, you can verify that the clocks of the cassandra nodes are kept in sync:

poetry run ansible-playbook -i hosts.ini cassandra-verify-ntp.yml -vv

Installing helm charts - prerequisites

The helm_external.yml playbook is used to write or update the IPs of the database servers in the values/<database>-external/values.yaml files, and thus make them available to helm and the <database>-external charts (e.g. cassandra-external, elasticsearch-external, etc.).

Due to limitations in the playbook, make sure that you have defined the network interfaces for each of the database services in your hosts.ini, even if they are running on the same interface that you connect to via SSH. In your hosts.ini under [all:vars]:

[all:vars]
minio_network_interface = ...
cassandra_network_interface = ...
elasticsearch_network_interface = ...
# if you're using redis external...
redis_network_interface = ...
  • Now run the helm_external.yml playbook to populate the network values for helm:

poetry run ansible-playbook -i hosts.ini -vv --diff helm_external.yml
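
To confirm that the playbook wrote the expected values, you can inspect the generated files (following the values/<database>-external layout described above):

cat values/cassandra-external/values.yaml
cat values/elasticsearch-external/values.yaml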

You can now install the helm charts.

Next steps for a highly available production installation

Your next step will be Installing wire-server (production) components using Helm