
Kasten K10 Blog

All Things Kubernetes and Data Management


How to Install Kasten K10 on OpenShift

by Michael Courcy

The Kasten K10 by Veeam Data Management Platform integrates tightly with OpenShift in many respects, among them container security, authentication, multi-tenancy/RBAC, and support for different storage providers.

OpenShift and Kasten K10 are both very adaptable products that can be installed in various conditions ranging from air-gapped on-premise infrastructure to full-public cloud deployments. With several different combinations available, this article provides best practices and our recommendations for installing Kasten K10 on OpenShift clusters.

Assumptions on OpenShift

Below is a list of assumptions based on our experiences: 

  • The OpenShift Administrator will also be the Kasten K10 Administrator, responsible for setting up location profiles, infrastructure profiles, and policies.
  • Alternatively, the OpenShift Administrator may want to delegate to a dedicated Kasten K10 Administrator who is not a Cluster Admin.
  • Namespace admins should be able to run backup and restore on their projects, and only on their projects, but cannot edit policies or profiles.
  • We don’t want to use another authentication system for Kasten K10; people connecting to Kasten K10 must use OpenShift Authentication.
  • The storage class we use on OpenShift is CSI compliant and supports the Snapshot API (like OCS 4.6, NetApp Trident, Nutanix CSI, …). We assume that OCS 4.6 is installed and that two VolumeSnapshotClasses therefore already exist: ocs-storagecluster-cephfsplugin-snapclass and ocs-storagecluster-rbdplugin-snapclass.

The assumptions we made about user delegation are not absolute requirements but rather recommendations. You may have a completely different delegation scheme depending on your organisation.

Run the Preflight Script

Assuming that Helm 3 is installed, add the Kasten repo, create the kasten-io namespace, and annotate your VolumeSnapshotClasses:

$ helm repo add kasten https://charts.kasten.io/

$ oc create ns kasten-io

$ kubectl annotate volumesnapshotclass \
     ocs-storagecluster-cephfsplugin-snapclass \
     k10.kasten.io/is-snapshot-class=true

$ kubectl annotate volumesnapshotclass \
     ocs-storagecluster-rbdplugin-snapclass \
     k10.kasten.io/is-snapshot-class=true
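The two annotate commands follow the same pattern, so they can also be expressed as a loop. The sketch below is a dry run that only echoes each command; remove the echo to apply the annotations for real:

```shell
# Dry run: echo each annotate command instead of executing it.
# Drop the echo to actually apply the annotation on the cluster.
for sc in ocs-storagecluster-cephfsplugin-snapclass \
          ocs-storagecluster-rbdplugin-snapclass; do
  echo kubectl annotate volumesnapshotclass "$sc" \
    k10.kasten.io/is-snapshot-class=true
done
```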

Now you can run the preflight check. Below is the output of the preflight script under the previous assumptions.

$ curl https://docs.kasten.io/tools/k10_primer.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6226  100  6226    0     0  18585      0 --:--:-- --:--:-- --:--:-- 18529

Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm Tiller version (>= v2.16.0)
 --> No Tiller needed with Helm v3.0.2
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10primer:3.0.5) to run test
Checking access to the Kubernetes context tars-apic/api-mic-oc6-aws-kasten-io:6443/kube:admin
 --> Able to access the default Kubernetes namespace

Running K10Primer Job in cluster with command- 
     ./k10primer 
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Waiting for pod k10primer-pmthz to be ready - ContainerCreating
Waiting for pod k10primer-pmthz to be ready - ContainerCreating
Waiting for pod k10primer-pmthz to be ready - ContainerCreating
Pod Ready!

WARNING: Package "github.com/golang/protobuf/protoc-gen-go/generator" is deprecated.
        A future release of golang/protobuf will delete this package,
        which has long been excluded from the compatibility promise.

I0125 13:58:23.212664       6 request.go:645] Throttling request took 1.038342751s, request:

GET:https://172.30.0.1:443/apis/template.openshift.io/v1?timeout=32s

Kubernetes Version Check:
  Valid kubernetes version (v1.18.3+3415b61)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

CSI Capabilities Check:
  Using CSI GroupVersion snapshot.storage.k8s.io/v1beta1  -  OK

Validating Provisioners: 
kubernetes.io/aws-ebs:
  Storage Classes:
    gp2
      Valid Storage Class  -  OK

openshift-storage.rbd.csi.ceph.com:
  Is a CSI Provisioner  -  OK
  Storage Classes:
    ocs-storagecluster-ceph-rbd
      Valid Storage Class  -  OK
  Volume Snapshot Classes:
    k10-clone-ocs-storagecluster-rbdplugin-snapclass
    ocs-storagecluster-rbdplugin-snapclass
      Has k10.kasten.io/is-snapshot-class annotation set to true  -  OK
      Has deletionPolicy 'Delete'  -  OK

openshift-storage.cephfs.csi.ceph.com:
  Is a CSI Provisioner  -  OK
  Storage Classes:
    ocs-storagecluster-cephfs
      Valid Storage Class  -  OK
  Volume Snapshot Classes:
    ocs-storagecluster-cephfsplugin-snapclass
      Has k10.kasten.io/is-snapshot-class annotation set to true  -  OK
      Has deletionPolicy 'Delete'  -  OK

openshift-storage.noobaa.io/obc:
  Storage Classes:
    openshift-storage.noobaa.io
      Supported via K10 Generic Volume Backup. See https://docs.kasten.io/latest/install/generic.html.

serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted

Kasten K10 supports both CSI and native storage drivers, so the gp2 storage class is supported. Any remaining drivers can be handled via a generic backup method.

In the OpenShift cluster, we look for all pre-flight checks to pass.

To ensure the VolumeSnapshotClass is working properly, the storage class name can be passed to the preflight script as an option, as shown below.

This invokes specific data protection operations: creating a sample application with a PVC attached, taking a snapshot of it, deleting the sample application, and restoring it completely.

$ curl https://docs.kasten.io/tools/k10_primer.sh | bash /dev/stdin -s ocs-storagecluster-cephfs
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6226  100  6226    0     0  18924      0 --:--:-- --:--:-- --:--:-- 18924
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm Tiller version (>= v2.16.0)
 --> No Tiller needed with Helm v3.0.2
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10primer:3.0.5) to run test
Checking access to the Kubernetes context tars-apic/api-mic-oc6-aws-kasten-io:6443/kube:admin
 --> Able to access the default Kubernetes namespace

Running K10Primer Job in cluster with command- 
     ./k10primer storage csi-checker -s ocs-storagecluster-cephfs
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Waiting for pod k10primer-2t88w to be ready - ContainerCreating
Pod Ready!

WARNING: Package "github.com/golang/protobuf/protoc-gen-go/generator" is deprecated.
        A future release of golang/protobuf will delete this package,
        which has long been excluded from the compatibility promise.

Starting CSI Checker. Could take up to 5 minutes
I0125 14:11:57.315871       6 request.go:645] Throttling request took 1.010277409s, request: GET:https://172.30.0.1:443/apis/coordination.k8s.io/v1beta1?timeout=32s
Creating application
  -> Created pod (kubestr-csi-original-pod7ncfw) and pvc (kubestr-csi-original-pvch6gnm)
Taking a snapshot
  -> Created snapshot (kubestr-snapshot-20210125141202)
Restoring application
  -> Restored pod (kubestr-csi-cloned-pod2xmt6) and pvc (kubestr-csi-cloned-pvcfcl26)
Cleaning up resources
CSI Snapshot Walkthrough:

Using annotated VolumeSnapshotClass
(ocs-storagecluster-cephfsplugin-snapclass)
  Successfully tested snapshot restore functionality.  -  OK

serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted

Install with OpenShift Authentication 

Kasten K10 supports multiple modes of authentication, described here. In this section, we will describe how to set up Kasten K10 with OpenShift-based authentication.

Kasten K10 integrates with Dex, an identity service that uses OpenID Connect to drive authentication for other apps. It acts as a portal to other identity providers through “connectors”. This lets Dex defer authentication to LDAP servers, SAML providers, or other identity providers like GitHub, Google, and Active Directory. Among those connectors is the OpenShift OAuth connector: Dex acts as an OAuth client on behalf of Kasten K10. The picture below depicts the relationship between Kasten K10, Dex, and the OpenShift OAuth server.

Kasten-DEX-OpenShift-01

Sounds complex? Don’t worry: to make your life easier, our Helm chart takes care of the entire setup and configuration. You just have to provide the proper configuration for the OAuth client:

  • Client name 
  • Client secret 
  • Client redirect url 

In OpenShift you can use a Service Account as an OAuth client. The token of this service account is the client secret.  

Configure the OAuth Client

For the rest of this tutorial, we’ll use two variables that you must set according to your OpenShift cluster:

  • APPS_BASE_DOMAIN, which is apps plus the base domain portion appended to each OpenShift route when you don’t explicitly specify a host, for instance apps.myopenshiftcluster.com
  • API_BASE_DOMAIN, which is the OpenShift API FQDN, for instance api.myopenshiftcluster.com

APPS_BASE_DOMAIN=apps.myopenshiftcluster.com

API_BASE_DOMAIN=api.myopenshiftcluster.com
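A quick sanity check can catch the most common mistake here: setting these variables with a scheme or trailing slash, which would later produce a malformed OAuth redirect URI. A minimal sketch, using the example domains above:

```shell
APPS_BASE_DOMAIN=apps.myopenshiftcluster.com
API_BASE_DOMAIN=api.myopenshiftcluster.com

# Both values must be bare hostnames: no http(s):// prefix, no trailing slash.
for v in "$APPS_BASE_DOMAIN" "$API_BASE_DOMAIN"; do
  case "$v" in
    http://*|https://*|*/) echo "invalid value: $v" >&2; exit 1 ;;
  esac
done

# The Dex callback URL that will be registered as the OAuth redirect URI:
echo "https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/dex/callback"
```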

Let’s create the Service Account now:

$ cat > oauth-sa.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k10-dex-sa
  namespace: kasten-io
  annotations:
    serviceaccounts.openshift.io/oauth-redirecturi.dex: https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/dex/callback
EOF
$ oc create -f oauth-sa.yaml

An OAuth client has now been registered with OpenShift.

Now let’s grab the name of the Secret containing the token for this Service Account. After the creation of the Service Account, two secrets are automatically added.

$ oc get sa -n kasten-io -o yaml k10-dex-sa
apiVersion: v1
kind: ServiceAccount
...
secrets:
- name: k10-dex-sa-token-9mw2f
- name: k10-dex-sa-dockercfg-lbxdw

If the Secret containing the token is the first one in the Service Account’s “secrets” list, use the command below to fetch the token. If it is the second one in the list, use index 1 instead of 0:

DEX_TOKEN=$(oc -n kasten-io get secret $(oc -n kasten-io get sa k10-dex-sa -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
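Instead of guessing the index, the token Secret can be selected by name, since OpenShift names it <serviceaccount>-token-<suffix>. The sketch below runs on a simulated list of secret names (the suffixes are illustrative); on a real cluster the list would come from `oc -n kasten-io get sa k10-dex-sa -o jsonpath='{.secrets[*].name}'`:

```shell
# Simulated output of:
#   oc -n kasten-io get sa k10-dex-sa -o jsonpath='{.secrets[*].name}'
# (the random suffixes are illustrative)
secret_names="k10-dex-sa-dockercfg-lbxdw k10-dex-sa-token-9mw2f"

# Pick the token secret by its name pattern instead of relying on list order.
token_secret=""
for name in $secret_names; do
  case "$name" in
    *-token-*) token_secret="$name" ;;
  esac
done
echo "$token_secret"
```

With the Secret name in hand, the token itself is still fetched as shown above, with `oc -n kasten-io get secret <name> -o jsonpath='{.data.token}' | base64 -d`.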

Install Kasten K10 with Helm Options

We now have all the information required to provide the options to the Helm installer:

helm install k10 kasten/k10 --namespace=kasten-io \
  --set scc.create=true \
  --set route.enabled=true \
  --set route.tls.enabled=true \
  --set auth.openshift.enabled=true \
  --set auth.openshift.serviceAccount=k10-dex-sa \
  --set auth.openshift.clientSecret=${DEX_TOKEN} \
  --set auth.openshift.dashboardURL=https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/ \
  --set auth.openshift.openshiftURL=https://${API_BASE_DOMAIN}:6443 \
  --set auth.openshift.insecureCA=true

All these options are documented on the advanced options page, but here is a brief description of some of them:

  • scc.create=true creates a dedicated security context constraint (SCC) to which some Kasten service accounts will be attached.
  • auth.openshift.insecureCA=true is mandatory most of the time, because Dex needs to talk to the OAuth service, which is exposed through a passthrough route, and the TLS certificate on the OAuth server is signed by the internal OpenShift PKI and therefore not trusted by Dex.

What if the route to Dex is not signed by a valid certificate? 

This situation can easily happen if you don’t use a valid wildcard certificate for the OpenShift router. If your router is already configured with a valid wildcard certificate, you may skip this part.

Depending on your OpenShift installation, you may not have installed a valid default certificate for the router; in other words, the certificate exposed by the router has been signed by the CA generated by the openshift-ingress operator rather than a public CA such as Let’s Encrypt or Verisign.

In this case, the auth container will fail when trying to reach https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/dex/.well-known/openid-configuration, complaining that it doesn’t know the authority that signed this certificate.

This CA can be found in a secret named router-ca in the openshift-ingress-operator namespace and must be added to the trust-store of the Kasten pods.

The helm chart has a special option for that:

--set cacertconfigmap.name=<configmap-with-ca-certificate>

Let's first get this certificate into a PEM file. Kasten K10 requires that this PEM file be named custom-ca-bundle.pem:

oc get secret router-ca -n openshift-ingress-operator -o jsonpath='{.data.tls\.crt}' | base64 --decode > custom-ca-bundle.pem

And create the configmap:

oc --namespace kasten-io create configmap custom-ca-bundle-store --from-file=custom-ca-bundle.pem
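Before wiring the ConfigMap into the Helm install, it can be worth sanity-checking that the decoded file really is a PEM certificate. The sketch below fabricates a stand-in file so it runs anywhere; on a real cluster, custom-ca-bundle.pem would come from the router-ca secret as shown above:

```shell
# Stand-in PEM file (the body is a placeholder, not a real certificate);
# on the cluster this file is produced by decoding the router-ca secret.
printf '%s\n' \
  '-----BEGIN CERTIFICATE-----' \
  'MIIB...placeholder...' \
  '-----END CERTIFICATE-----' > custom-ca-bundle.pem

# A usable CA bundle must contain at least one PEM certificate block.
if grep -q -- '-----BEGIN CERTIFICATE-----' custom-ca-bundle.pem; then
  echo "custom-ca-bundle.pem contains a PEM certificate block"
else
  echo "custom-ca-bundle.pem is not a PEM certificate" >&2
fi
```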

You can install Kasten K10 with these extra helm options:

helm install k10 kasten/k10 --namespace=kasten-io \
  --set scc.create=true \
  --set route.enabled=true \
  --set route.tls.enabled=true \
  --set auth.openshift.enabled=true \
  --set auth.openshift.serviceAccount=k10-dex-sa \
  --set auth.openshift.clientSecret=${DEX_TOKEN} \
  --set auth.openshift.dashboardURL=https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/ \
  --set auth.openshift.openshiftURL=https://${API_BASE_DOMAIN}:6443 \
  --set auth.openshift.insecureCA=true \
  --set cacertconfigmap.name=custom-ca-bundle-store

Testing  

You can check that your installation is working properly by opening the Kasten K10 dashboard at this URL:

https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/#

You’ll have to authenticate against the identity provider that you set up on OpenShift. In the OAuth portal, you’ll have to allow Kasten K10 to get basic information about you.

image6

To verify that the authentication service is correctly set up, you can use our k10tools utility. Download the latest version of the tool here:

./k10tools debug auth

Dex:
  OIDC Redirect URL: https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/dex/callback
  Release name: k10
  Dex well known URL: https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/.well-known/openid-configuration
  Trying to connect to Dex without TLS (insecureSkipVerify=false)
  Connection succeeded  -  OK

Testing Multi-Tenancy

We assume here that dev-user, test-user, app-admin-user, and k10admin-user are four users in your cluster.

Let’s make three of them k10-basic users and one of them a k10-admin user.

This table describes the different roles and their binding type:

Role name   Role description                                                                          Binding
admin       OpenShift cluster-role given to any user that creates a new project                       Namespace
k10-basic   K10 cluster-role giving operational K10 access to users in specific namespaces            Namespace
k10-admin   K10 cluster-role for administrators who want uninterrupted access to all K10 operations   Cluster
Three projects named dev, test and prod will be created using these commands:

# create the ns 
oc create ns dev
oc create ns test
oc create ns prod

 

This table shows the projects that each user will be allowed to access with different roles:

User name        Namespaces           Namespace binding     Cluster binding
dev-user         dev                  admin and k10-basic   -
test-user        test                 admin and k10-basic   -
app-admin-user   dev, test and prod   admin and k10-basic   -
k10admin-user    kasten-io            admin                 k10-admin

 

In this configuration k10admin-user will have k10-admin role without being a cluster-admin:

# add the role for dev and test
oc policy add-role-to-user admin dev-user -n dev 
oc policy add-role-to-user k10-basic dev-user -n dev 
oc policy add-role-to-user admin test-user -n test 
oc policy add-role-to-user k10-basic test-user -n test 

# add the role for admin 
oc policy add-role-to-user admin app-admin-user -n dev
oc policy add-role-to-user k10-basic app-admin-user -n dev
oc policy add-role-to-user admin app-admin-user -n test
oc policy add-role-to-user k10-basic app-admin-user -n test
oc policy add-role-to-user admin app-admin-user -n prod 
oc policy add-role-to-user k10-basic app-admin-user -n prod

# add the cluster-role for k10admin-user, 
oc adm policy add-cluster-role-to-user k10-admin k10admin-user 

# k10admin-user must be able to read the content of kasten-io 
oc policy add-role-to-user admin k10admin-user -n kasten-io
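Since app-admin-user gets the same two roles in all three namespaces, those six grants can also be generated with a loop. The sketch below only echoes the commands (a dry run); remove the echo to execute them:

```shell
# Dry run: print the oc commands for app-admin-user instead of executing them.
for ns in dev test prod; do
  for role in admin k10-basic; do
    echo oc policy add-role-to-user "$role" app-admin-user -n "$ns"
  done
done
```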

 

Now connect to the Kasten K10 dashboard; you’ll be prompted to allow access to your user information:

image4

 

With dev-user or test-user, you should only be able to see the dev or test project, respectively:

 

image5

 

If you connect with the app-admin-user login, you can see only the three namespaces dev, test, and prod:

 

image1

 

And finally, if you connect with the k10admin-user login, you can see the dashboard as a cluster-admin would see it:

 

image7

Testing an Application 

It’s now time to create an application with the OpenShift tools and check that we are able to perform a backup and restore. 

Creating the Application 

We’re going to create a WordPress application in the test namespace:

# set test as the default project
oc project test
# create mariadb and wordpress pods using a template and source-to-image
oc new-app mariadb-persistent
oc new-app php~https://github.com/wordpress/wordpress
# create a route to wordpress
oc expose svc/wordpress

Check all pods are up and running:

oc get pods 
NAME                         READY   STATUS      RESTARTS   AGE
mariadb-1-deploy             0/1     Completed   0          3m34s
mariadb-1-svkzh              1/1     Running     0          3m29s
wordpress-1-build            0/1     Completed   0          2m21s
wordpress-754b56ffc7-rjz2g   1/1     Running     0          67s

With the route you created, navigate to http://wordpress-test.${APPS_BASE_DOMAIN} and provide the database information that you can find in the mariadb secret:

oc get -o jsonpath='{.data.database-name}' secret mariadb | base64 -d 
oc get -o jsonpath='{.data.database-password}' secret mariadb | base64 -d 
oc get -o jsonpath='{.data.database-user}' secret mariadb | base64 -d
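The three lookups above share one pattern: read a key from the secret’s .data map and base64-decode it. The sketch below simulates the secret data locally so it runs without a cluster (the values are made up); on the cluster, each encoded value comes from `oc get secret mariadb -o jsonpath='{.data.<key>}'`:

```shell
# Simulated .data entries of the mariadb secret (values are illustrative);
# Kubernetes stores secret values base64 encoded.
database_name=$(printf '%s' 'sampledb' | base64)
database_user=$(printf '%s' 'user4A2' | base64)
database_password=$(printf '%s' 's3cret' | base64)

# Decode each value the same way the oc one-liners above do.
for val in "$database_name" "$database_user" "$database_password"; do
  printf '%s' "$val" | base64 -d
  echo
done
```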

 

For the database host, you just need to provide “mariadb”, which is the name of the service inside the test namespace.

image8

Finish the installation and note your login and password. Update the first post, for instance change the title.

Make sure the file wp-config.php is not transient (it would otherwise be lost on pod restart or restore) by mounting it as a ConfigMap in the WordPress deployment:

oc exec -i -t $(oc get pod -l deployment=wordpress -o name) -- cat wp-config.php > wp-config.php

oc create configmap wp-config --from-file=wp-config.php

oc set volume deploy/wordpress --add --name=wp-config -t configmap --configmap-name=wp-config --mount-path=/opt/app-root/src/wp-config.php --sub-path=wp-config.php

Backup and Restore the Application

Connect as test-user to the Kasten K10 dashboard and back up the test application.

 

image2

When the backup is finished, delete everything inside the namespace:

oc delete deploy,deploymentconfig,pvc,svc,route,configmap --all

 

From the dashboard, restore from the restore point:

image3-1
image2-1

Then, check that your WordPress is back with your change:

image8-1

Conclusion 

Congratulations, we have now successfully completed a Kasten K10 installation on OpenShift using OpenShift authentication and OCS 4.6 with CSI snapshot capabilities. We also tested multi-tenancy and the ability to back up and restore a typical OpenShift application based on an OpenShift template and a source-to-image build config.

Now, try it out for yourself.

We encourage you to give Kasten K10 a try for free, no sign-up needed, and let us know how we can help. We look forward to hearing from you!

 

This post was co-authored with Onkar Bhat, Engineering Manager at Kasten and Praveen Vanka, Lead System Engineer at Kasten.

Michael Courcy

I started my career as a solutions architect, focused mainly on JAVA/JEE for government projects. Now I work as a DevOps architect, building cloud native solutions based on Kubernetes and the main cloud providers like AWS, Azure, and many more.

