
Today, we are seeing a large number of customers either already adopting or expressing interest in the use of multiple Kubernetes clusters in every one of their environments (production, staging, dev, etc.). These clusters are often split across application, security, or team boundaries, and the architecture is further complicated by the fact that they are frequently deployed across multiple availability zones, regions, clouds, and on-premises data centers. Our observations are also backed up by this recent CNCF survey, which showed that the vast majority of Kubernetes users have deployed multiple production clusters.

Here, at Kasten, we are focused on making critical Day 2 operations such as backup, disaster recovery, and mobility for all your cloud-native applications dead simple. While our K10 data management platform already works across all the above environments, we are also actively exploring how to make it easy to propagate things like global policies, mobility profiles, and more across all these clusters. 

Given that our requirements are not unique, we started looking at a number of interesting community projects in the multi-cluster resource management space. This blog post describes the use of Razee, one such multi-cluster project driven by IBM, together with K10, the market leader for application backup, disaster recovery, and mobility for Kubernetes. We will use Razee to distribute a global policy that, in this example use case, will protect all Helm-deployed applications running in your Kubernetes clusters.


I. Installing Razee


As documented on its GitHub page, “Razee is an open-source project that was developed by IBM to automate and manage the deployment of Kubernetes resources across clusters, environments, and cloud providers.” While our scale isn’t the tens of thousands of Kubernetes clusters IBM deploys, the project’s tagline does fit our use case.

To get started, I used the command visible on the Razee dashboard to deploy the Razee agent into my cluster:

$ kubectl apply -f "https://app.razee.io/api/install/razeedeploy-job?orgKey=..."

I verified that things were working as expected by running the following command:

$ kubectl --namespace=razeedeploy get pods
NAME                                                 READY   STATUS      RESTARTS   AGE
featureflagsetld-controller-8565b84c74-xbn2p         1/1     Running     0          29m
managedset-controller-795445978d-mz7bc               1/1     Running     0          29m
mustachetemplate-controller-56b77f489b-n2rtn         1/1     Running     0          29m
razeedeploy-job-pvwfd                                0/1     Completed   0          30m
remoteresource-controller-f4465cf4d-lkwwv            1/1     Running     0          29m
remoteresources3-controller-7cbc7d7db9-qwzts         1/1     Running     1          29m
remoteresources3decrypt-controller-6b4d999b6-mqdb7   1/1     Running     0          29m
watch-keeper-58cbcfbdcb-2r78p                        1/1     Running     0          29m
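
If you want an extra sanity check beyond the pod listing (this step is my own addition and not part of the documented install), you can also confirm that the Razee CRDs, including RemoteResource, which we will use below, were registered. The exact CRD names may differ slightly between Razee versions.

# Optional: list the CustomResourceDefinitions registered by the Razee install.
$ kubectl get crds | grep "razee.io"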

II. Installing K10, An Enterprise-Grade Data Management Platform


After installing Razee, I added the free and fully-featured edition of K10, our data management platform that is deeply integrated into Kubernetes. If you aren't already familiar with it, K10 provides an easy-to-use and secure system for backup/restore and mobility of your entire Kubernetes application. This includes use cases such as:

  • Backup/restore for your entire application stack to make it easy to “reset” your application to a known good state
  • Cloning your application to a different namespace for debugging
  • Disaster recovery of your applications in another cluster, region, or cloud

Installing K10 is quite simple and you should be up and running in 5 minutes or less! It is usually a one-line Helm command or a single-click install in cloud marketplaces. Install documentation for your environment can be found here but, for my cluster, I deployed K10 using the following commands:

$ helm repo add kasten https://charts.kasten.io/
$ kubectl create namespace kasten-io

# Helm 3 install command. Tweak for Helm 2.
$ helm --namespace kasten-io install k10 kasten/k10
NAME: k10
LAST DEPLOYED: Thu Mar 5 11:52:12 2020
NAMESPACE: kasten-io
STATUS: DEPLOYED
...
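
Once the Helm install completes, one quick sanity check (my own habit, not a required step from the install docs) is to watch the K10 pods in the kasten-io namespace until they are all Running:

# Optional check: all K10 components should eventually reach the Running state.
$ kubectl --namespace kasten-io get pods --watch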

I then manually created the below K10 policy as a text file and, instead of adding it to my K10 deployment, I uploaded it to GitHub as a gist:

apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: backup-helm-apps
  namespace: kasten-io
spec:
  comment: Backup all apps deployed via Helm
  frequency: "@hourly"
  actions:
    - action: backup
  retention:
    hourly: 24
    daily: 7
    weekly: 4
    monthly: 12
    yearly: 7
  selector:
    matchExpressions:
      - key: heritage
        operator: In
        values:
          - Helm

This policy:

  • Is defined as a Kubernetes-native resource (a CustomResource)
  • On an hourly frequency, will protect all applications that are deployed via Helm. It has a selector that detects Helm applications via the label heritage: Helm (a quick way to see which workloads this matches is shown after this list).
  • Has a flexible retention scheme for backups and snapshots to both manage costs and meet compliance requirements. In particular, it uses a GFS-based retention scheme to allow you to decide on the number of total backups stored and the rolloff from one tier to the next (e.g., one hourly backup rolls off into a daily backup once a day). This way, you can keep an hourly granularity for fast restores but still have the ability to go back days and weeks without having to keep every backup around.
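
As a side note, if you want to preview which workloads such a selector would match in your own cluster, you can query for the label directly. This is just an illustrative check of mine; note that some newer charts label resources with app.kubernetes.io/managed-by: Helm instead of heritage: Helm, in which case the policy selector would need to be adjusted accordingly.

# List Deployments carrying the label the policy selector matches on.
$ kubectl get deployments --all-namespaces --selector heritage=Helm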

III. Integrating K10 and Razee


Finally, to get Razee in my cluster to start pulling in the above policy (and all subsequent edits to the policy), I created a file with the following content:

apiVersion: "deploy.razee.io/v1alpha1"
kind: RemoteResource
metadata:
name: distribute-k10-policies
namespace: razeedeploy
spec:
requests:
- options:
url: https://gist.githubusercontent.com/ntolia/d594a574e475b5bc11e7ff359ab28e1f/raw/0a0f3a36425272ee901bde549a920c326bcab4ca/razee-k10-test.yaml

The above Kubernetes resource is a Razee RemoteResource. It is used to automatically deploy the Kubernetes resources stored at the URL specified within it (multiple URLs and S3-based remote resources are also possible). I applied the RemoteResource to my cluster using:

$ kubectl apply -f distribute-k10-policies.yaml        
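
To confirm that Razee fetched the gist and that K10 picked up the policy (again, an optional check of my own; exact resource names may vary with the Razee and K10 versions installed), you can inspect both objects:

# Check that the RemoteResource was created and is being reconciled by Razee.
$ kubectl --namespace razeedeploy get remoteresources

# Confirm that the K10 policy from the gist now exists in the cluster.
$ kubectl --namespace kasten-io get policies.config.kio.kasten.io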

IV. Demo

For the demo recording below, I had preinstalled MySQL via Helm 3 before I installed either Razee or K10.

$ kubectl create namespace mysql
$ helm --namespace mysql install mysql stable/mysql
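
To double-check that the MySQL release actually carries the label the backup-helm-apps policy selects on (a sanity check of mine, assuming the stable/mysql chart still sets heritage: Helm on its resources), you can look at the Deployment's labels:

# The Deployment should show a heritage=Helm label, so the policy will pick it up.
$ kubectl --namespace mysql get deployments --show-labels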

As you can see in the video below, the magic happens: the K10 policy defined above gets pulled into the cluster by Razee, K10 automatically discovers it, and it then starts protecting all Helm-deployed applications.

And, all of this happens in less than 45 seconds!

 


V. Next Steps, Related Projects, and More...

Exploring integrations with projects like Razee is still in its initial stages for us, but a number of things already stand out, such as its scalable pull-based model and ease of use. There are also several other Razee features I want to dig deeper into, including how templating works, using RemoteResources to bootstrap other RemoteResources, and whether there is a secure way to handle secret distribution.

Finally, one should note that Razee shares goals with other projects. While there are a number of projects in the multi-cluster networking space, the biggest related effort on the configuration side is the Kubernetes community-driven kubefed project. Even though the project is on its second incarnation, progress there seems to have stalled in the alpha stage. I am not close enough to the project to identify why, but it does look like it tried to solve too much at the same time, conflating cross-cluster application deployment with multi-cluster management and attempting to address both in the same solution. We also ran into the kubed project, but it didn't fit our customer requirements as it only synchronizes configuration and secrets.

Overall, given the increasingly common multi-cluster deployment patterns seen with Kubernetes and our goals of delivering simplicity, ease of use, and reduced operational burden for DevOps teams, the ability to seamlessly handle multi-cluster operations via a unified control plane in our K10 product is very important to us. The configuration and resource distribution demonstrated in this article is just one aspect of multi-cluster management. We have very ambitious plans, so stay tuned, reach out if you would like early previews, and feel free to send us input on what you might be looking for!

Finally, I would love to hear from you. Find me as @nirajtolia on Twitter, drop us an email, or swing by our website!
