This procedure is optimized to get Twistlock installed and set up in your Docker Swarm cluster as quickly as possible, while minimizing the likelihood of running into snags, and insulating you from dependencies on your cloud provider and cluster configuration. There are many ways to install Twistlock, but we recommend that you start with this procedure first. You can tweak the install procedure after you have validated that this install method works.
The Twistlock install supports Docker Swarm using Swarm-native constructs. Deploying Console as a service lets you rely on Swarm to ensure Twistlock Console is always available. Deploying Defender as a global service guarantees that Defender is automatically deployed to every worker node with a simple one-time configuration.
After completing this procedure, both Twistlock Console and Twistlock Defenders will run in your Swarm cluster. This setup uses a load balancer (HAProxy) and external persistent storage so that Console can fail over and restart on any Swarm worker node.
If you don’t have external persistent storage, you can configure Console to use local storage, but you must pin Console to the node with the local storage. Console with local storage is not recommended for production-grade setups.
In this procedure, Twistlock images are pulled from Twistlock’s cloud registry.
Twistlock doesn’t support deploying Defender as a global service when SELinux is enabled on your underlying hosts. Defender requires access to the Docker socket to monitor your environment and enforce your policies. SELinux blocks access to the Docker socket because it can be a serious security issue. Unfortunately, Swarm doesn’t provide a way for legitimate services to run with elevated privileges. None of the --security-opt, --privileged, or --cap-add flags are supported for Swarm services. As a workaround, install a single Container Defender on each individual node in your cluster.
Swarm uses a routing mesh inside the cluster. When you deploy Twistlock Console as a replicated service, Swarm’s routing mesh publishes Console’s ports on every node.
A load balancer is required to facilitate Defender-to-Console communication. Console is deployed on an overlay network, and Defenders are deployed in the host network namespace. Because Defenders aren’t connected to the overlay network, they cannot connect to the Virtual IP (VIP) address of the Twistlock Console service. Prepare your load balancer so that traffic is distributed to all available Swarm worker nodes. The nodes use Swarm’s routing mesh to forward traffic to the worker node that runs Console. The following diagram shows the setup:
The following example HAProxy configuration has been tested in our labs. Use it as a starting point for your own configuration.
Whichever load balancer you use, be sure it supports TCP passthrough. Otherwise, Defenders might not be able to connect to Console.
global
    ...
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ...
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3
    maxsslconn 256
    tune.ssl.default-dh-param 2048

defaults
    ...

frontend http_front
    bind *:8081
    stats uri /haproxy?stats
    default_backend http_back

backend http_back
    balance roundrobin
    server node1 IP-OF-YOUR-SWARMWORKER:8081 check (1)
    server node2 IP-OF-YOUR-SWARMWORKER:8081 check
    server node3 IP-OF-YOUR-SWARMWORKER:8081 check

frontend https_front
    bind *:8083 ssl crt /etc/ssl/private/haproxy.pem (2)
    stats uri /haproxy?stats
    default_backend https_back

backend https_back
    balance roundrobin
    server node1 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none (1)
    server node2 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none
    server node3 IP-OF-YOUR-SWARMWORKER:8083 weight 1 maxconn 100 check ssl verify none

frontend defender_front
    bind *:8084
    mode tcp
    option tcplog
    stats uri /haproxy?stats
    default_backend defender_back

backend defender_back
    balance roundrobin
    mode tcp
    option tcp-check
    server node1 IP-OF-YOUR-SWARMWORKER:8084 check (1)
    server node2 IP-OF-YOUR-SWARMWORKER:8084 check
    server node3 IP-OF-YOUR-SWARMWORKER:8084 check
A couple of notes about the config file:
(1) Traffic is balanced across three Swarm worker nodes. Specify as many Swarm nodes as needed.
(2) The port binding for 8083 uses HTTPS, so you must create a certificate in PEM format before applying the configuration. The cert in this configuration is stored in /etc/ssl/private/haproxy.pem. Use the linked instructions to create a certificate. We recommend creating a certificate that is signed by your trusted CA; a minimal self-signed example is sketched below.
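For instance, the following commands generate a self-signed certificate and bundle it into the PEM file referenced by the bind line above. The subject name and output paths are illustrative; substitute your load balancer's DNS name and preferred locations, and prefer a CA-signed certificate for production.
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=<LOAD-BALANCER-DNS-NAME>" \
    -keyout haproxy.key -out haproxy.crt
$ cat haproxy.crt haproxy.key | sudo tee /etc/ssl/private/haproxy.pem
Before reloading HAProxy, you can validate the full configuration with haproxy -c -f /etc/haproxy/haproxy.cfg (assuming the standard configuration path).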
Simplify the configuration of your environment by setting up a DNS A Record that points to your load balancer. Then use the load balancer’s domain name to:
Connect to Console’s HTTP or HTTPS web interface,
Interface with Console’s API,
Configure how Defender connects to Console.
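For example, if the record you created is console.example.com (a hypothetical name), you can confirm it resolves to your load balancer's IP address before continuing:
$ dig +short console.example.com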
Install a volume driver that can create persistent volumes that can be accessed from any node in the cluster. Because Console can be scheduled on any node, it must be able to access its data and backup folders from wherever it runs.
You can use any available volume plugin, then specify the plugin driver with the --volume-driver option when installing Twistlock Console with twistcli.
Every node in your cluster must have the proper permissions to create persistent volumes.
This procedure describes how to use the Google Cloud Platform and NFSv4 volume drivers, but you can use any supported volume plugin.
Set up the gce-docker volume plugin on each cluster node, then create data and backup volumes for Console.
Verify that Swarm is enabled on all nodes, and that they are connected to a healthy master.
Install the GCP volume plugin. Run the following command on each node.
$ docker run -d \
    -v /:/rootfs \
    -v /run/docker/plugins:/run/docker/plugins \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --privileged \
    mcuadros/gce-docker
Create persistent volumes to hold Console’s data and backups.
$ docker volume create \
    --driver=gce \
    --name twistlock-console \
    -o SizeGb=90

$ docker volume create \
    --driver=gce \
    --name twistlock-backup \
    -o SizeGb=90
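To confirm that both volumes were created with the gce driver, you can list them (the filter simply narrows the output to the Twistlock volumes):
$ docker volume ls --filter name=twistlock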
Set up an NFS server, then create data and backup volumes for Console. The NFS server should run on a dedicated host that runs outside of the Swarm cluster.
Twistlock Console uses MongoDB to store data. There are some mount options required when accessing a MongoDB database from an NFSv4 volume.
nolock
— Disables the NLM sideband protocol to lock files on the server.
noatime
— Stops the NFS server from updating the inode access time.
bg
— Backgrounds a mount command so that it doesn’t hang forever in the event that there is a problem connecting to the server.
Install an NFSv4 server:
$ sudo apt install nfs-kernel-server
Configure the server.
Open /etc/exports for editing.
$ sudo vim /etc/exports
Append the following line to the file.
/srv/home *(rw,sync,no_root_squash)
Start the server.
$ sudo systemctl start nfs-kernel-server.service
Mount the export on all other nodes.
$ sudo mount -o nolock,bg,noatime <server-ip>:/srv/home /<local>/srv/home
Ensure that the twistlock user (UID 2674) has full permissions on the exported directory.
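For example, on the NFS server you might set the export's owner to that UID (the path matches the export configured above; adjust if yours differs):
$ sudo chown -R 2674 /srv/home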
Create NFS volumes to hold Console’s data and backups.
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
    --opt device=:/srv/home \
    twistlock-console

$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=<IP address of the NFS Server>,rw,nolock,noatime,bg \
    --opt device=:/srv/home \
    twistlock-backup
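If you want to double-check that the NFS mount options were recorded on the volume, inspect it and review the Options field in the output:
$ docker volume inspect twistlock-console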
Install Console as a Docker Swarm service.
All the components in your environment (nodes, host operating systems, orchestrator, etc.) meet the hardware and version specs in System requirements.
Your Swarm cluster is up and running.
Your persistent storage is configured correctly.
Your load balancer is configured correctly for ports 8081 (HTTP), 8083 (HTTPS) and 8084 (TCP).
You created a DNS record that points to your load balancer.
Go to Releases, and copy the link to the current recommended release.
Connect to your master node.
$ ssh <SWARM-MASTER>
Retrieve the release tarball.
$ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>
Unpack the Twistlock release tarball.
$ mkdir twistlock
$ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
$ cd twistlock
Install Console into your Swarm cluster using the twistcli utility.
If you are using GCP:
$ ./linux/twistcli console install swarm --volume-driver "gce"
If you are using NFSv4:
$ ./linux/twistcli console install swarm --volume-driver "local"
If you are using local storage (not recommended for production environments):
$ ./linux/twistcli console install swarm --volume-driver "local"
At the prompt, enter your Twistlock access token. The access token is required to retrieve the Twistlock container images from the cloud repository.
Validate that Console is running. It takes a few moments for the replica count to go from 0/1 to 1/1.
$ docker service ls
ID            NAME               MODE        REPLICAS  IMAGE
pctny1pymjg8  twistlock-console  replicated  1/1       registry.twistlock.com/...
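If the replica count stays at 0/1, a standard Swarm troubleshooting step is to inspect the task history for scheduling or startup errors:
$ docker service ps --no-trunc twistlock-console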
Open Console’s dashboard in a web browser.
Console’s published ports use Swarm’s routing mesh (ingress network), so the Console service is accessible at the target port on every node, not just the host it runs on.
Open Twistlock Console’s web interface. The web interface is available via HTTP (port 8081) and via HTTPS (port 8083). Go to https://<LOAD-BALANCER>:8083 for HTTPS or http://<LOAD-BALANCER>:8081 for HTTP.
If you did not configure a load balancer, Console is reachable via HTTPS at https://<ANY-SWARM-NODE-IPADDR>:8083 and via HTTP at http://<ANY-SWARM-NODE-IPADDR>:8081.
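If you want a quick reachability check before logging in, query the HTTPS port through the load balancer or any Swarm node; the routing mesh forwards the request to wherever Console is running. The -k flag skips certificate verification in case Console's certificate is not trusted by your workstation. An HTTP status code in the output indicates Console is reachable.
$ curl -sk -o /dev/null -w "%{http_code}\n" https://<LOAD-BALANCER>:8083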
Create your first admin user.
Enter your license key, and click OK.
You are redirected to the Console dashboard.
Defender is installed as a global service, which ensures it runs on every node in the cluster. Console provides a GUI to configure all the options required to deploy Defender into your environment.
Open Console.
Go to Manage > Defenders > Names.
Click Add SAN, and add the DNS name of your load balancer.
Go to Manage > Defenders > Deploy Swarm.
Work through each of the configuration options:
Choose the DNS name of your load balancer. Defenders use this address to communicate with Console.
Choose the registry that hosts the Defender image. Select Twistlock’s registry.
Set Deploy Defenders with SELinux Policy to Off.
Copy the generated curl-bash command.
Connect to your Swarm master.
$ ssh <SWARM-MASTER>
Paste the curl-bash command into your shell, then run it. You need sudo privileges to run this command.
$ curl -sSL -k --header "authorization: Bearer <TOKEN>" ...
Validate that the Defender global service is running.
Open Console, then go to Manage > Defenders > Manage. The table lists all Defenders deployed to your environment (one per node).
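You can also confirm from the Swarm master that Defender was created as a global service and that its replica count matches the number of nodes in the cluster:
$ docker service ls --filter mode=global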
To uninstall Twistlock, reverse the install steps. Delete the Defender global service first, followed by the Console service.
Delete the Defender global service.
Open Console, then go to Manage > Defenders > Deploy Swarm.
Scroll to the bottom of the page, then copy the last curl-bash command, where it says The script below uninstalls the Swarm Defenders from the cluster.
Connect to your Swarm master.
$ ssh <SWARM-MASTER>
Paste the curl-bash command into your shell, then run it.
$ curl -sSL -k --header "authorization: Bearer <TOKEN>" ...
Delete the Console service.
SSH to the node where you downloaded and unpacked the Twistlock release tarball.
Run twistcli with the uninstall subcommand.
$ ./linux/twistcli console uninstall swarm
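To confirm the uninstall, check that no Twistlock services remain. The data and backup volumes you created earlier are separate objects; review whether they still exist and remove them only if you no longer need the data.
$ docker service ls
$ docker volume ls --filter name=twistlock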
For maximum control over your environment, you might want to store the Twistlock container images in your own private registry, and then install Twistlock from your private registry. When you deploy Twistlock as a service, Docker Swarm pulls the Console image from the specified registry, and then schedules it to run on a node in the cluster.
Twistlock currently only supports Docker Hub and Docker Trusted Registry for Swarm deployments.
The key steps in the deployment workflow are:
Log into your registry with docker login.
Push the Console image to your registry.
Install Console using twistcli:
Set the --registry-address option to your registry and repository.
Set the --skip-push option so that twistcli doesn’t try to automatically push the Console image to your registry for you.
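Put together (and assuming you have already pushed the Console image), the install command might look like the following sketch; <REGISTRY>/<REPOSITORY> and <VOLUME-DRIVER> are placeholders for your own values.
$ ./linux/twistcli console install swarm \
    --volume-driver "<VOLUME-DRIVER>" \
    --registry-address "<REGISTRY>/<REPOSITORY>" \
    --skip-push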
If you are using an unsupported registry, you must manually make the Console image available on each node in your cluster. Unsupported registries include Quay.io, Artifactory, and Amazon EC2 Container Registry.
The method documented here supports any registry. The key steps in this deployment workflow are:
Manually push the Console image to your registry.
The twistcli tool is not capable of doing it for you.
Manually pull the Console image to each node in your cluster.
Run twistcli to deploy Console, bypassing any options that interact with the registry. In particular, use the --skip-push option because twistcli does not know how to authenticate and push to unsupported registries.
The commands in this procedure assume you are using Quay.io, but the same method can be applied to any registry. Adjust the commands for your specific registry.
Download the Twistlock current release from Releases, and copy it to your master node.
Unpack the Twistlock release tarball.
$ mkdir twistlock
$ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
Log in to your registry.
$ docker login quay.io
Username:
Password:
Email:
Load the Console image shipped in the release tarball.
$ docker load < twistlock_console.tar.gz
Tag the Console image according to the format required by your registry.
$ docker tag twistlock/private:console_<VERSION> quay.io/<USERNAME>/twistlock:console
Push the Console image to your registry.
$ docker push quay.io/<USERNAME>/twistlock:console
Connect to each node in your cluster, and pull the Console image.
$ docker pull quay.io/<USERNAME>/twistlock:console
On your Swarm master, run twistcli to deploy Console.
$ ./linux/twistcli console install swarm \
    --volume-driver "<VOLUME-DRIVER>" \
    --registry-address "quay.io/<USERNAME>" \
    --skip-push