Installing kubernetes and databases on VMs with ansible

Introduction

In a production environment, some parts of the wire-server infrastructure (such as the cassandra databases) are best configured outside kubernetes. Additionally, kubernetes itself can be set up rapidly with kubespray, via ansible. This section covers installing the required software on these VMs with ansible.

Assumptions

  • A bare-metal setup (no cloud provider)

  • All machines run Ubuntu 16.04 or Ubuntu 18.04

  • All machines have static IP addresses

  • You have the following virtual machines:

Name            Amount   CPU   Memory   Disk
cassandra       3        2     4 GB     80 GB
minio           3        1     2 GB     100 GB
elasticsearch   3        1     2 GB     10 GB
redis           3        1     2 GB     10 GB
kubernetes      3        4     8 GB     20 GB
restund         2        1     2 GB     10 GB

(It’s up to you how you create these machines: KVM on a bare-metal machine, VMs on a cloud provider, real physical machines, etc.)

Preparing to run ansible

Dependencies on operator’s machine

You need Python 2, some python dependencies, a specific version of ansible, and GNU make. Then, you need to download specific ansible roles using ansible-galaxy, as well as the kubectl and helm binaries. You have two options to achieve this:

(Option 1) How to install the necessary components locally when using Debian or Ubuntu as your operating system

Install ‘poetry’ (python dependency management). See also the poetry documentation.

This assumes you’re using python 2.7 (if you only have python3 available, you may need to find some workarounds):

sudo apt install -y python2.7 python-pip
curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py > get-poetry.py
python2.7 get-poetry.py
source $HOME/.poetry/env
ln -s /usr/bin/python2.7 $HOME/.poetry/bin/python

Install the python dependencies to run ansible.

git clone https://github.com/wireapp/wire-server-deploy.git
cd wire-server-deploy/ansible
## (optional) if you need ca certificates other than the default ones:
# export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
poetry install
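To confirm the resulting environment works, you can check that ansible is available inside the poetry-managed virtualenv (the exact versions printed depend on the pinned dependencies):

# both commands should print a version banner without errors
poetry run ansible --version
poetry run ansible-playbook --version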

Note

The ‘make download-cli-binaries’ part of ‘make download’ requires either that you run all of this as root, or that the user running these scripts can ‘sudo’ without being prompted for a password. One workaround is to run ‘sudo ls’ first, enter the password when prompted, and then run ‘make download’ while the sudo credentials are still cached.

Download the ansible roles necessary to install databases and kubernetes:

make download

(Option 2) How to use docker on the local host with a docker image that contains all the dependencies

On your machine you need to have the docker binary available. See how to install docker. Then:

docker pull quay.io/wire/networkless-admin

# cd to a fresh, empty directory and create some sub directories
cd ...  # you pick a good location!
mkdir ./admin_work_dir ./dot_kube ./dot_ssh && cd ./admin_work_dir
# copy ssh key (the easy way, if you want to use your main ssh key pair)
cp ~/.ssh/id_rsa ../dot_ssh/
# alternatively: create a key pair exclusively for this installation
ssh-keygen -t ed25519 -a 100 -f ../dot_ssh/id_ed25519
ssh-add ../dot_ssh/id_ed25519
# make sure the server accepts your ssh key for user root
ssh-copy-id -i ../dot_ssh/id_ed25519.pub root@<server>

docker run -it --network=host -v $(pwd):/mnt -v $(pwd)/../dot_ssh:/root/.ssh -v $(pwd)/../dot_kube:/root/.kube quay.io/wire/networkless-admin
# inside the container, copy everything to the mounted host file system:
cp -a /src/* /mnt
# and make sure the git repos are up to date:
cd /mnt/wire-server && git pull
cd /mnt/wire-server-deploy && git pull
cd /mnt/wire-server-deploy-networkless && git pull

(The name of the docker image contains networkless because it was originally constructed for high-security installations without connection to the public internet. Since then it has grown to be our recommended general-purpose installation platform.)

Now exit the docker container. For subsequent sessions:

cd admin_work_dir
docker run -it --network=host -v $(pwd):/mnt -v $(pwd)/../dot_ssh:/root/.ssh -v $(pwd)/../dot_kube:/root/.kube quay.io/wire/networkless-admin
cd wire-server-deploy/ansible
# do work.

Any changes made inside the container under the mount points listed in the above command will persist (albeit owned by root); everything else will not, so be careful when creating other files.

To connect to a running container for a second shell:

docker exec -it `docker ps -q --filter="ancestor=quay.io/wire/networkless-admin"` /bin/bash

Adding IPs to hosts.ini

Go to your checked-out wire-server-deploy/ansible folder:

cd wire-server-deploy/ansible

Copy the example hosts file:

cp hosts.example.ini hosts.ini

  • Edit hosts.ini, setting the permanent IPs of the hosts you are setting up wire on:

  • Replace the ansible_host values (X.X.X.X) with the IPs that you can reach by SSH. These are the ‘internal’ addresses of the machines, not what a client will connect to.

  • Replace the ip values (Y.Y.Y.Y) with the IPs on which you wish kubernetes to provide services to clients.

There are more settings in this file that we will set in later steps.
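For illustration only, a filled-in excerpt of hosts.ini could look roughly like this (the host names and all addresses below are placeholders; keep the group and host names from your own hosts.example.ini):

[all]
## illustrative placeholders only -- keep the host names from hosts.example.ini
## ansible_host = address you SSH to, ip = address kubernetes serves clients on
kubenode01   ansible_host=10.10.1.10   ip=192.0.2.10
kubenode02   ansible_host=10.10.1.11   ip=192.0.2.11
cassandra01  ansible_host=10.10.1.20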

Warning

Some of these playbooks mess with the hostnames of their targets. You MUST pick different hosts for playbooks that rename the host. If you e.g. attempt to run Cassandra and k8s on the same 3 machines, the hostnames will be overwritten by the second installation playbook, breaking the first.

At the least, we know that the cassandra and kubernetes playbooks are both guilty of hostname manipulation.

Authentication

Note

If you use ssh keys, and the user you login with is either root or can elevate to root without a password, you don’t need to do anything further to use ansible. If, however, you use password authentication for ssh access, and/or your login user needs a password to become root, see Manage ansible authentication settings.
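As a minimal sketch (assuming a non-root login user and standard ansible inventory variables; see the linked page for the full options), you can set the login user in hosts.ini and have ansible prompt for any required passwords at playbook time:

[all:vars]
## illustrative assumption: replace 'ubuntu' with the login user configured on your VMs
ansible_user = ubuntu

# --ask-pass prompts for the SSH password (requires sshpass on the operator machine),
# --ask-become-pass prompts for the sudo password
poetry run ansible-playbook -i hosts.ini kubernetes.yml -vv --ask-pass --ask-become-pass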

Running ansible to install software on your machines

You can install kubernetes, cassandra, restund, etc. in any order.

Note

In case you only have a single network interface with public IPs but wish to protect inter-database communication, you may use the tinc.yml playbook to create a private network interface. In this case, ensure tinc is set up BEFORE running any other playbook. See tinc.
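If you do use it, the invocation follows the same pattern as the other playbooks (a sketch, assuming your hosts.ini already contains the tinc-related groups and variables described in the tinc documentation):

poetry run ansible-playbook -i hosts.ini tinc.yml -vv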

Installing kubernetes

Kubernetes is installed via ansible. To install kubernetes:

From wire-server-deploy/ansible:

poetry run ansible-playbook -i hosts.ini kubernetes.yml -vv

When the playbook finishes correctly (which can take up to 20 minutes), you should have a folder artifacts containing a file admin.conf. Copy this file:

mkdir -p ~/.kube
cp artifacts/admin.conf ~/.kube/config

Make sure you can reach the server:

kubectl version

should give output similar to this:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
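You can additionally check that all nodes joined the cluster and report a Ready status:

kubectl get nodes -o wide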

Cassandra

  • Set variables in the hosts.ini file under [cassandra:vars]. Most defaults should be fine, except maybe for the cluster name and the network interface to use:

[cassandra:vars]
## set to True if using AWS
is_aws_environment = False
# cassandra_clustername: default

[all:vars]
## Set the network interface name for cassandra to bind to if you have more than one network interface
# cassandra_network_interface = eth0

(see defaults/main.yml for a full list of variables to change if necessary)

Install cassandra:

poetry run ansible-playbook -i hosts.ini cassandra.yml -vv
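To check cluster health afterwards, you can run nodetool on the cassandra hosts via an ansible ad-hoc command (a sketch, assuming your inventory group is called cassandra as in hosts.example.ini); every node should be listed with status ‘UN’ (up/normal):

poetry run ansible cassandra -i hosts.ini -m shell -a "nodetool status"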

ElasticSearch

  • In your ‘hosts.ini’ file, in the [all:vars] section, set ‘elasticsearch_network_interface’ to the name of the interface you want elasticsearch nodes to talk to each other on. For example:

[all:vars]
# default first interface on ubuntu on kvm:
elasticsearch_network_interface=ens3

  • Use poetry to run ansible, and deploy ElasticSearch:

poetry run ansible-playbook -i hosts.ini elasticsearch.yml -vv
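To verify the cluster formed, you can query the standard elasticsearch health endpoint on any node (assuming the default port 9200 and the address bound to your elasticsearch_network_interface); the reported status should be green:

# replace <elasticsearch-ip> with the IP of one of your elasticsearch nodes
curl 'http://<elasticsearch-ip>:9200/_cluster/health?pretty'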

Minio

  • In your ‘hosts.ini’ file, in the [all:vars] section, make sure you set the ‘minio_network_interface’ to the name of the interface you want minio nodes to talk to each other on. The default from the playbook is not going to be correct for your machine. For example:

[all:vars]
# Default first interface on ubuntu on kvm:
minio_network_interface=ens3

  • In your ‘hosts.ini’ file, in the [minio:vars] section, ensure you set minio_access_key and minio_secret_key.

  • Use poetry to run ansible, and deploy Minio:

poetry run ansible-playbook -i hosts.ini minio.yml -vv
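As a quick liveness check (assuming minio listens on its default port 9000), you can query minio’s health endpoint on each node; an HTTP 200 response means the server is up:

# replace <minio-ip> with the IP of one of your minio nodes
curl -i http://<minio-ip>:9000/minio/health/live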

Restund

Set other variables in the hosts.ini file under [restund:vars]. Most defaults should be fine, except for the network interfaces to use:

  • set ansible_host=X.X.X.X under the [all] section to the IP for SSH access.

  • (recommended) set restund_network_interface under the [restund:vars] section to the name of the interface you wish the process to use. Defaults to the default_ipv4_address, with a fallback to eth0.

  • (optional) restund_peer_udp_advertise_addr=Y.Y.Y.Y: set this to the IP to advertise to other restund servers if it differs from the IP on the ‘restund_network_interface’. If using ‘restund_peer_udp_advertise_addr’, make sure that UDP (!) traffic from any restund server (including itself) can reach that IP (for restund <-> restund communication). This should only be necessary if you’re installing restund on a VM that is reachable on a public IP address but the process cannot bind to that public IP address directly (e.g. on an AWS VPC VM). If unset, restund <-> restund UDP traffic will default to the IP of the restund_network_interface.

[all]
(...)
restund01         ansible_host=X.X.X.X

(...)

[all:vars]
## Set the network interface name for restund to bind to if you have more than one network interface
## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0
restund_network_interface = eth0

(see defaults/main.yml for a full list of variables to change if necessary)

Install restund:

poetry run ansible-playbook -i hosts.ini restund.yml -vv
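As a rough check (an assumption: restund listening on the standard STUN/TURN UDP port 3478; adjust if your configuration differs), you can verify the listener on each restund host with an ansible ad-hoc command:

# the command fails on hosts where nothing listens on UDP 3478
poetry run ansible restund -i hosts.ini -m shell -a "ss -lun | grep 3478"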

Installing helm charts - prerequisites

The helm_external.yml playbook can be used locally to write or update the IPs of the databases into the values/cassandra-external/values.yaml file, and thus make them available for helm and the ...-external charts (e.g. cassandra-external).

Make sure the following variables are defined in your hosts.ini under [all:vars]:

[all:vars]
minio_network_interface = ...
cassandra_network_interface = ...
elasticsearch_network_interface = ...
redis_network_interface = ...

Then run the playbook:

poetry run ansible-playbook -i hosts.ini -vv --diff helm_external.yml
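After the playbook has run, you can inspect the generated values to confirm the database IPs were written as expected (the cassandra-external file is the one mentioned above; the other ...-external charts are handled similarly):

cat values/cassandra-external/values.yaml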

Now you can install the helm charts.

Next steps for a highly available production installation

Your next step will be Installing wire-server (production) components using helm