Twistlock supports OpenShift v3.9 and later. Twistlock Console is deployed as a ReplicationController, which ensures it’s always running. Twistlock Defenders are deployed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. You can run Defenders on OpenShift master and infrastructure nodes using node selectors.
The Twistlock Console and Defender container images can be stored either in the internal OpenShift registry or your own Docker v2 compliant registry. Alternatively, you can configure your deployments to pull the Twistlock images from Twistlock’s cloud registry.
This guide shows you how to generate deployment YAML files for both Console and Defender, and then deploy them to your OpenShift cluster with the oc client.
To ensure that your installation goes smoothly, work through the following checklist and validate that all requirements are met.
Validate that the components in your environment (nodes, host operating systems, orchestrator) meet the specs in System requirements.
For OpenShift installs, we recommend using the overlay or overlay2 storage drivers due to a known issue in RHEL. For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1518519.
Validate that you have permission to:
Push to a private docker registry. For most OpenShift setups, the registry runs inside the cluster as a service. You must be able to authenticate with your registry with docker login.
Pull images from your registry. This might require the creation of a docker-registry secret (a sample command sequence follows this list).
Have the correct role bindings to pull and push to the registry. For more information, see Accessing the Registry.
Create and delete projects in your cluster. For OpenShift installations, a project is created when you run oc new-project.
Run oc create commands.
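For example, on a typical OpenShift 3.x cluster you can authenticate to the internal registry with your OpenShift token and create a pull secret in the project that pulls the images (the twistlock project created later in this guide). This is a sketch; the registry endpoint, secret name, and email value below are illustrative, so substitute the values for your environment.

$ docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.163.181:5000
# Some oc versions require --docker-email; OpenShift does not use the value.
$ oc create secret docker-registry twistlock-pull-secret \
    --docker-server=172.30.163.181:5000 \
    --docker-username=$(oc whoami) \
    --docker-password=$(oc whoami -t) \
    --docker-email=unused@example.com \
    -n twistlock
$ oc secrets link default twistlock-pull-secret --for=pull -n twistlock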
Validate that the following network ports are open:
TCP: 8081, 8083, 8084
TCP: 443
The Twistlock Console connects to the Twistlock Intelligence Stream (https://intelligence.twistlock.com) on TCP port 443 for vulnerability updates. If your Console is unable to contact the Twistlock Intelligence Stream, follow the guidance for offline environments.
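As a quick connectivity check (an optional sketch; the exact HTTP response code may vary), verify that TLS connections to the Intelligence Stream succeed from the host where Console will run:

$ curl -s -o /dev/null -w '%{http_code}\n' https://intelligence.twistlock.com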
Use twistcli to install the Twistlock Console and Defenders. The twistcli utility is included with every release. After completing this procedure, both Twistlock Console and Twistlock Defenders will be running in your OpenShift cluster.
Download the latest Twistlock release to any system where the OpenShift oc client is installed.
Go to Releases, and copy the link to current recommended release.
Download the release tarball to your cluster controller.
$ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>
Unpack the release tarball.
$ mkdir twistlock
$ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/
Create a project named twistlock.
Login to the OpenShift cluster and create the twistlock project:
$ oc new-project twistlock
When Twistlock is deployed to your cluster, the images are retrieved from a registry. You have a number of options for storing the Twistlock Console and Defender images:
OpenShift internal registry.
Private Docker v2 registry. You must create a docker-registry secret to authenticate with the registry.
Alternatively, you can pull the images from the Twistlock cloud registry at deployment time. Your cluster nodes must be able to connect to the Twistlock cloud registry (registry-auth.twistlock.com) with TLS on TCP port 443.
This guide covers both the OpenShift internal registry and the Twistlock cloud registry. If you’re going to use the Twistlock cloud registry, you can skip this section. Otherwise, the following procedure shows you how to pull, tag, and upload the Twistlock images to the OpenShift internal registry’s twistlock imageStream.
Determine the endpoint for your OpenShift internal registry. Use either the internal registry’s service name or cluster IP.
$ oc get svc -n default
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
docker-registry   ClusterIP   172.30.163.181   <none>        5000/TCP   88d
Pull the Twistlock images from the Twistlock cloud registry using your access token. The major, minor, and patch numerals in the <VERSION> string are separated with an underscore. For example, 18.11.128 would be 18_11_128.
$ docker pull \
registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION>
$ docker pull \
registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/console:console_<VERSION>
Tag the images for the OpenShift internal registry.
$ docker tag \
registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION> \
172.30.163.181:5000/twistlock/private:defender_<VERSION>
$ docker tag \
registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/console:console_<VERSION> \
172.30.163.181:5000/twistlock/private:console_<VERSION>
Push the images to the twistlock project’s imageStream.
$ docker push 172.30.163.181:5000/twistlock/private:defender_<VERSION>
$ docker push 172.30.163.181:5000/twistlock/private:console_<VERSION>
Use the twistcli tool to generate the Twistlock Console deployment YAML. The twistcli tool is bundled with the release tarball. There are versions for Linux, macOS, and Windows.
The twistcli tool generates YAML for a ReplicationController and other service configurations, such as a PersistentVolumeClaim, SecurityContextConstraints, and so on. Run the twistcli command with the --help flag for additional details about the command and supported flags.
You can optionally customize twistlock.cfg to enable additional features, such as custom compliance SCAP scanning. Then run twistcli from the root of the extracted release tarball.
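For example, to list the flags supported by the Console export subcommand, replace <PLATFORM> with the directory that matches your operating system (e.g. linux) and run:

$ <PLATFORM>/twistcli console export openshift --help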
Twistlock Console uses a PersistentVolumeClaim to store data. There are two ways to provision storage for Console:
Dynamic provisioning: Allocate storage for Console on-demand at deployment time. When generating the Console deployment YAML files with twistcli, specify the name of the storage class with the --storage-class flag. Most customers use dynamic provisioning.
Manual provisioning: Pre-provision a persistent volume for Console, then specify its label when generating the Console deployment YAML files. OpenShift uses NFS mounts for the backend infrastructure components (e.g. registry, logging, etc.). The NFS server is typically one of the master nodes. Guidance for creating an NFS backed PersistentVolume can be found here. Also see Appendix: NFS PersistentVolume example.
Generate a deployment YAML file for Console. A number of command variations are provided. Use them as a basis for constructing your own working command.
Twistlock Console + dynamically provisioned PersistentVolume + image pulled from the OpenShift internal registry.
$ <PLATFORM>/twistcli console export openshift \
  --storage-class "<STORAGE-CLASS-NAME>" \
  --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
  --service-type "ClusterIP"
Twistlock Console + manually provisioned PersistentVolume + image pulled from the OpenShift internal registry. Using the NFS backed PersistentVolume described in Appendix: NFS PersistentVolume example, pass the label to the --persistent-volume-labels flag to specify the PersistentVolume to which the PersistentVolumeClaim will bind.
$ <PLATFORM>/twistcli console export openshift \
  --persistent-volume-labels "app-volume=twistlock-console" \
  --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
  --service-type "ClusterIP"
Twistlock Console + manually provisioned PersistentVolume + image pulled from the Twistlock cloud registry. If you omit the --image-name flag, the Twistlock cloud registry is used by default, and you are prompted for your access token.
$ <PLATFORM>/twistcli console export openshift \
  --persistent-volume-labels "app-volume=twistlock-console" \
  --service-type "ClusterIP"
Deploy Console.
$ oc create -f ./twistlock_console.yaml
You can safely ignore the error that says the twistlock project already exists.
Create an external route to Console so that you can access the web UI and API.
From the OpenShift web interface, go to the twistlock project.
Go to Application > Routes.
Select Create Route.
Enter a name for the route, such as twistlock-console.
Hostname = URL used to access the Console, e.g. twistlock-console.apps.ose.example.com
Path = /
Service = twistlock-console
Target Port = 8081 → 8081 or 8083 → 8083
Select the Security > Secure Route radio button.
TLS Termination = Edge (if using 8081) or Passthrough (if using 8083)
If your workstation already trusts the OpenShift x.509 certificate, select Edge TLS Termination for TCP port 8081.
If you plan to issue Console a custom certificate that your workstation trusts, allowing TLS to be established directly with the Twistlock Console, select Passthrough TLS Termination for TCP port 8083.
Insecure Traffic = Redirect
Click Create.
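Alternatively, you can create the route from the command line with the oc client. The following is a sketch of the edge-terminated variant on port 8081; the hostname is the example value used elsewhere in this guide, so substitute your own, and use a passthrough route instead if you terminate TLS on port 8083.

$ oc create route edge twistlock-console \
    --service=twistlock-console \
    --port=8081 \
    --hostname=twistlock-console.apps.ose.example.com \
    --insecure-policy=Redirect \
    -n twistlock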
Create your first admin user, enter your license key, and configure Console’s certificate so that Defenders can establish a secure connection to it.
In a web browser, navigate to the external route you configured for Console, e.g. https://twistlock-console.apps.ose.example.com.
Create your first admin account.
Enter your license key.
Add a SubjectAlternativeName to Console’s certificate to allow Defenders to establish a secure connection with Console.
Use either Console’s service name, twistlock-console or twistlock-console.twistlock.svc, or Console’s cluster IP.
$ oc get svc -n twistlock
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP                 PORT(S)
twistlock-console   LoadBalancer   172.30.41.62   172.29.61.32,172.29.61.32   8084:3184...
Go to Manage > Defenders > Names.
Click Add SAN and enter Console’s service name.
Click Add SAN and enter Console’s cluster IP.
Twistlock Defenders run as containers on the nodes in your OpenShift cluster. They are deployed as a DaemonSet. Use the twistcli tool to generate the DaemonSet deployment YAML. The command has the following basic structure. It creates a YAML file named defender.yaml in the working directory.
$ <PLATFORM>/twistcli defender export openshift \
  --address <ADDRESS> \
  --cluster-address <CLUSTER-ADDRESS>
The command connects to Console’s API, specified in --address, to generate the Defender DaemonSet YAML config file. The location where you run twistcli (inside or outside the cluster) dictates which Console address should be supplied.
The --cluster-address flag specifies the address Defender uses to connect to Console. For Defenders deployed inside the cluster, specify Twistlock Console’s service name, twistlock-console or twistlock-console.twistlock.svc, or cluster IP address. For Defenders deployed outside the cluster, specify Console’s external address, which is exposed by your external route.
If SELinux is enabled on the OpenShift nodes, pass the --selinux-enabled argument to twistcli.
Generate the Defender DaemonSet YAML. A number of command variations are provided. Use them as a basis for constructing your own working command.
Outside the OpenShift cluster + pull the Defender image from the Twistlock cloud registry. Use the OpenShift external route for your Twistlock Console, --address https://twistlock-console.apps.ose.example.com. Designate Twistlock’s cloud registry by omitting the --image-name flag.
$ <PLATFORM>/twistcli defender export openshift \
  --address https://twistlock-console.apps.ose.example.com \
  --cluster-address 172.30.41.62 \
  --selinux-enabled
Outside the OpenShift cluster + pull the Defender image from the OpenShift internal registry. Use the --image-name flag to designate an image from the OpenShift internal registry.
$ <PLATFORM>/twistcli defender export openshift \
  --address https://twistlock-console.apps.ose.example.com \
  --cluster-address 172.30.41.62 \
  --selinux-enabled \
  --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION>
Inside the OpenShift cluster + pull the Defender image from the Twistlock cloud registry. When generating the Defender DaemonSet YAML with twistcli from a node inside the cluster, use Console’s service name (twistlock-console) or cluster IP in the --address flag. This flag specifies the endpoint for the Twistlock API and must include the port number.
$ <PLATFORM>/twistcli defender export openshift \
  --address https://172.30.41.62:8083 \
  --cluster-address 172.30.41.62 \
  --selinux-enabled
Inside the OpenShift cluster + pull the Defender image from the OpenShift internal registry. Use the --image-name flag to designate an image in the OpenShift internal registry.
$ <PLATFORM>/twistcli defender export openshift \
  --address https://172.30.41.62:8083 \
  --cluster-address 172.30.41.62 \
  --selinux-enabled \
  --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION>
Deploy the Defender DaemonSet.
$ oc create -f ./defender.yaml
Confirm the Defenders were deployed.
In Twistlock Console, go to Manage > Defenders > Manage to see a list of deployed Defenders.
In the OpenShift Web Console, go to the Twistlock project’s monitoring window to see which pods are running.
Use the OpenShift CLI to see the DaemonSet pod count.
$ oc get ds -n twistlock
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
twistlock-defender-ds   4         3         3       3            3           <none>          29m
The desired and current pod counts do not match. This is a job for the nodeSelector.
You can deploy Defenders to all nodes in an OpenShift cluster (master, infra, compute). Depending upon the nodeSelector configuration, Twistlock Defenders may not get deployed to all nodes. Adjust the guidance in the following procedure according to your organization’s deployment strategy.
Review the following OpenShift configuration settings.
The OpenShift master nodeSelector configuration can be found in /etc/origin/master/master-config.yaml. Look for any nodeSelector and nodeSelectorLabelBlacklist settings.
defaultNodeSelector: compute=true
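To quickly check whether either setting is present on a master node, you can grep the configuration file (a simple sketch; run it on each master):

$ grep -i nodeselector /etc/origin/master/master-config.yaml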
Twistlock project - The nodeSelector can be defined at the project level.
$ oc describe project twistlock
Name:             twistlock
Created:          10 days ago
Labels:           <none>
Annotations:      openshift.io/description=
                  openshift.io/display-name=
                  openshift.io/node-selector=node-role.kubernetes.io/compute=true
                  openshift.io/sa.scc.mcs=s0:c17,c9
                  openshift.io/sa.scc.supplemental-groups=1000290000/10000
                  openshift.io/sa.scc.uid-range=1000290000/10000
Display Name:     <none>
Description:      <none>
Status:           Active
Node Selector:    node-role.kubernetes.io/compute=true
Quota:            <none>
Resource limits:  <none>
In this example, the twistlock project’s node selector instructs OpenShift to deploy Defenders only to nodes labeled node-role.kubernetes.io/compute=true.
The following command removes the Node Selector value from the Twistlock project.
$ oc annotate namespace twistlock openshift.io/node-selector=""
Add a Deploy_Twistlock : true label to all nodes to which Defender should be deployed.
$ oc label node ip-172-31-0-55.ec2.internal Deploy_Twistlock=true
$ oc describe node ip-172-31-0-55.ec2.internal
Name: ip-172-31-0-55.ec2.internal
Roles: compute
Labels: Deploy_Twistlock=true
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=ip-172-31-0-55.ec2.internal
logging-infra-fluentd=true
node-role.kubernetes.io/compute=true
region=primary
Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Sun, 05 Aug 2018 05:40:10 +0000
Set the nodeSelector in the Defender DaemonSet deployment YAML.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: twistlock-defender-ds
  namespace: twistlock
spec:
  template:
    metadata:
      labels:
        app: twistlock-defender
    spec:
      serviceAccountName: twistlock-service
      nodeSelector:
        Deploy_Twistlock: "true"
      restartPolicy: Always
      containers:
      - name: twistlock-defender-2-5-127
      ...
Check the desired and current count for the Defender DaemonSet deployment.
$ oc get ds -n twistlock
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR
twistlock-defender-ds   4         4         4       4            4           Deploy_Twistlock=true
You can use twistcli to create Helm charts for Twistlock Console and Defender. Helm is a package manager for Kubernetes, and a chart is the term for a Helm package.
Follow the main install flow, except:
Pass the --helm option to twistcli to generate a Helm chart. Other options passed to twistcli configure the chart.
Deploy Console and Defender with helm install rather than oc create.
To create and install a Console Helm chart that dynamically provisions its persistent volume and pulls the container image from the OpenShift internal registry:
$ <PLATFORM>/twistcli console export openshift \
--storage-class "<STORAGE-CLASS-NAME>" \
--image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
--service-type "ClusterIP" \
--helm
$ helm install --namespace=twistlock twistlock-console-helm.tar.gz
To create and install a Defender DaemonSet Helm chart that pulls the Defender image from the OpenShift internal registry:
$ <PLATFORM>/twistcli defender export openshift \
--address https://twistlock-console.apps.ose.example.com \
--cluster-address 172.30.41.62 \
--selinux-enabled \
--image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
--helm
$ helm install --namespace=twistlock twistlock-defender-helm.tar.gz
To uninstall Twistlock, delete the twistlock project, then delete the Twistlock PersistentVolume.
Delete the twistlock Project
$ oc delete project twistlock
Delete the twistlock PersistentVolume
$ oc delete pv twistlock
Create an NFS mount for the Twistlock Console’s PV on the host that serves the NFS mounts.
mkdir /opt/twistlock_console
Check whether SELinux is enforcing: sestatus
chcon -R -t svirt_sandbox_file_t -l s0 /opt/twistlock_console
sudo chown nfsnobody /opt/twistlock_console
sudo chgrp nfsnobody /opt/twistlock_console
Check permissions with: ls -lZ /opt/twistlock_console. The output should look like: drwxr-xr-x. nfsnobody nfsnobody system_u:object_r:svirt_sandbox_file_t:s0
Create /etc/exports.d/twistlock.exports
In /etc/exports.d/twistlock.exports, add the line: /opt/twistlock_console *(rw,root_squash)
Re-export the NFS shares: sudo exportfs -ra
Confirm the export with: showmount -e
Get the IP address of the master node that will be used in the PersistentVolume definition (eth0; OpenShift uses the 172.x address space for node-to-node communication). Make sure TCP port 2049 (NFS) is allowed between nodes.
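If the NFS host runs firewalld, a minimal sketch for opening the NFS service looks like the following; adapt it to whatever firewall tooling your environment uses.

$ sudo firewall-cmd --permanent --add-service=nfs
$ sudo firewall-cmd --reload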
Create a PersistentVolume for Twistlock Console.
The following example uses a label for the PersistentVolume and the volume and claim pre-binding features. The PersistentVolumeClaim uses the app-volume: twistlock-console label to bind to the PV. The claimRef pre-binding ensures that the PersistentVolume is not claimed by another PersistentVolumeClaim before Twistlock Console is deployed.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: twistlock
  labels:
    app-volume: twistlock-console
spec:
  storageClassName: standard
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /opt/twistlock_console
    server: 172.31.4.59
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: twistlock-console
    namespace: twistlock
When federating a Twistlock Console that is accessed through an OpenShift external route with a SAML v2.0 Identity Provider (IdP), the AssertionConsumerServiceURL value sent in the SAML authentication request must be overridden. Twistlock automatically generates this value based on Console’s configuration, but when Console is accessed through an OpenShift external route, the URL for Console’s API endpoint is most likely not the same as the automatically generated value. Therefore, you must explicitly configure the AssertionConsumerServiceURL that Twistlock sends in the SAML authentication request.
Log into Twistlock Console.
Go to Manage > Authentication > SAML.
In Console URL, define the AssertionConsumerServiceURL.
In this example, enter https://twistlock-console.apps.ose.example.com