
Kasten K10 Blog

All Things Kubernetes and Data Management


So you want to learn DevOps?

I’m a few months into my journey learning DevOps. While I still have a ways to go before considering myself an expert, I wanted to share my advice in a few key areas that I’ve been focusing on.

Learn a Programming Language

If you’re like me, with no programming experience, you may be overwhelmed by the sheer number of programming languages out there. For DevOps, my advice would be to pick one of these: Python, JavaScript (Node.js), or Ruby. While it helps to be familiar with all three, you’ll need to be comfortable with at least one in order to progress your understanding of DevOps.

While there is no right or wrong choice when choosing a language, Python is easily the most popular. My recommendation to anyone just beginning to dip their toe into the ocean of programming is to first find a few good videos or written resources that you’ll be able to reference. After that, jump right in. From my experience, the best way to learn programming is to get hands-on and start creating.
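To make that concrete, here is the kind of small, practical script a beginner might write as a first project. It totals the disk usage of a directory tree using only Python’s standard library (the function names are my own illustration, not from any particular tutorial):

```python
# A first hands-on script: summarize disk usage of a directory tree.
# Everything here is standard-library Python; no external packages needed.
import os

def directory_size(path: str) -> int:
    """Return the total size in bytes of all files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):          # skip broken symlinks
                total += os.path.getsize(full)
    return total

def human_readable(num_bytes: float) -> str:
    """Format a byte count as B/KB/MB/GB for easier reading."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024:
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:.1f} PB"

if __name__ == "__main__":
    print(human_readable(directory_size(".")))
```

Small utilities like this are a good fit for DevOps learning because they touch the filesystem, functions, and loops all at once, and you can immediately verify the result against `du -sh`.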

Know Linux Basics

When it comes to Linux, the guidance remains much the same as with programming. Get into it, get hands-on, and if possible convert your everyday system to Linux. To properly grasp the fundamentals, you’ll need to use it daily. Some key areas to focus on are shell commands, the Linux directory structure, and SSH key management.
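As a starting point, here is a short, safe practice session covering a few of those shell basics (the file and directory names are arbitrary):

```shell
# Practice core commands in a scratch directory (safe to run on any Linux system).
mkdir -p /tmp/devops-practice && cd /tmp/devops-practice
echo "learning devops" > notes.txt      # create a file with shell redirection
cat notes.txt                           # print its contents
grep -c "devops" notes.txt              # count lines matching a pattern
chmod 600 notes.txt                     # owner-only read/write permissions
ls -l notes.txt                         # verify ownership and permission bits
# For SSH key management, you would generate a key pair with, for example:
#   ssh-keygen -t ed25519 -C "you@example.com"
```

Working through commands like these daily, rather than memorizing them from a list, is what makes them stick.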

It can be challenging to use “legacy” technology when the operating systems we’re accustomed to using have become so advanced; however, a foundational understanding of Linux is essential for someone learning DevOps. Once you’re comfortable with Linux, you’ll be much more prepared for what’s to come.

Understand Networking 

Much like the Linux basics, this is another foundational bit of knowledge you just need to know before jumping into the shiny stuff. Networking is the cornerstone of DevOps and the link that enables communication between our applications and users. You’ll need to have a basic understanding of several networking fundamentals: DNS, subnetting, gateways, DHCP, NAT, the OSI model, firewalls, load balancers, proxy servers, and HTTP/HTTPS.
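Subnetting in particular is easy to experiment with from code. As a small sketch, Python’s standard-library `ipaddress` module lets you carve a network into subnets and test address membership (the addresses below are just example private ranges):

```python
# Exploring subnetting with Python's standard-library `ipaddress` module.
import ipaddress

# A /24 network: 256 addresses, 254 usable hosts.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.num_addresses)    # 256
print(net.netmask)          # 255.255.255.0

# Split it into four /26 subnets of 64 addresses each.
subnets = list(net.subnets(new_prefix=26))
for s in subnets:
    print(s)

# Check whether a host address falls inside the network.
print(ipaddress.ip_address("192.168.1.42") in net)   # True
```

Playing with prefixes like this makes CIDR notation far less abstract than reading about it ever will.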

Stick to One Cloud Provider 

Since we’ve covered the basics, we can start looking at the nice shiny tools that are a staple of DevOps, such as cloud providers. To gain more than just foundational knowledge, you’ll want to pick one cloud provider to concentrate on and stick to it. The skills you learn here will make your transition much easier when exploring other provider options.

A personal example: a few years back I began learning AWS, which offers a seemingly endless catalog of cloud services. I focused on the most common IaaS and PaaS offerings, then took those skills with me into Microsoft Azure and Google Cloud Platform.

The great thing here is that each of the big hyperscalers offers a free tier, which enables you to get hands-on experience without financially committing to one provider.

Use Git Effectively

Source control is the single most important aspect of DevOps. Understanding Git and being able to use it effectively is essential to your workflow. If you already have a foundational understanding of the topics covered earlier in this post, source control is where you’ll want to pick up your journey.

Every script you create should be managed through source control, so that you can track changes and collaborate with other developers. You’ll need to understand the concepts of branching, pull requests, and code repositories, as well as how they fit into the overall development process.
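As a sketch of those concepts in practice, here is a minimal command-line session, assuming Git is installed, that initializes a repository, makes a first commit, and creates a feature branch (the names used are placeholders):

```shell
# A minimal Git workflow: init, commit, branch.
mkdir -p /tmp/git-practice && cd /tmp/git-practice
git init -q .
git config user.name "Practice User"        # identity for this repo only
git config user.email "practice@example.com"
echo "# my scripts" > README.md
git add README.md
git commit -q -m "Initial commit"           # first snapshot on the default branch
git checkout -q -b feature/backup-script    # branch off for a new change
git log --oneline                           # one line per commit in the history
```

From here the usual flow is to push the branch to a remote such as GitHub and open a pull request so others can review the change before it is merged.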

GitHub is easily the most popular code repository platform, and it’s where I personally store much of what I’ve learned.

Containers? Start with Docker 

When you think of containers, you think of Docker, and that’s where I’d recommend you start your container learning.

Containers provide a consistent and portable environment for applications to run in. When it comes to learning Docker, make sure to get hands-on experience. This is super easy to do, as pretty much anything can run Docker.

You’ll want to be able to create, run, download, change, and inspect containers. It’s also important that you understand the networking stack, storage management, and the creation of your own Dockerfiles.
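As a sketch of what creating your own Dockerfile looks like, here is a minimal example that packages a hypothetical Python script (`app.py` is an assumed filename, not something from this post):

```dockerfile
# Minimal image for a hypothetical Python script (app.py is an assumed name).
FROM python:3.12-slim        # small official Python base image
WORKDIR /app                 # working directory inside the container
COPY app.py .                # copy the script into the image
CMD ["python", "app.py"]     # command run when the container starts
```

You would build and run it with `docker build -t my-app .` followed by `docker run --rm my-app`, then use commands like `docker ps` and `docker inspect` to examine the result.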

Orchestrate with Kubernetes 

Now that we’re comfortable with containers, we’ll need to understand how to orchestrate them. Kubernetes is a container orchestration engine that allows you to manage large numbers of containers across multiple nodes. There are a variety of resources out there that will help you learn about Kubernetes in depth, both in theory and hands-on.
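To give a taste of what Kubernetes configuration looks like, here is a minimal Deployment manifest that runs three replicas of an nginx container (the names and image are illustrative choices, not from the original post):

```yaml
# A minimal Deployment: three replicas of an nginx container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
```

You would apply it with `kubectl apply -f deployment.yaml` and watch the pods come up with `kubectl get pods`; if a pod dies, Kubernetes replaces it to maintain the declared replica count.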

Learn Infrastructure as Code

Code is not just for applications. You should also learn how to use code to create your infrastructure, whether in the public cloud, on premises in your virtualization environments, or in your home lab. Infrastructure as code (IaC) is the way to build infrastructure in a safe and repeatable way.

Terraform is a great tool for learning infrastructure as code. It lets you write, plan, and apply desired-state changes to your infrastructure across multiple providers, and it tracks the current state of your deployment for you.

This lends itself nicely to repeatability. We’ve all been there: you find yourself needing to deploy virtual machines multiple times in different locations. Terraform and other IaC tools make this a breeze. 
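As a sketch of what that looks like, here is a minimal Terraform configuration that declares a single AWS virtual machine (the region, AMI ID, and names are placeholder assumptions, not values from the original post):

```hcl
# A minimal Terraform sketch: one AWS EC2 instance.
provider "aws" {
  region = "us-east-1"                      # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"   # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "devops-practice"
  }
}
```

The workflow is `terraform init`, then `terraform plan` to preview the changes, then `terraform apply`; running the same configuration again in another account or region is what gives you the repeatability described above.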

Automate Configuration Management 

Our next step towards DevOps automation is to ensure that the desired state of our deployed applications is retained. While IaC enables you to build a platform in a repeatable way, configuration management provides similar benefits on the application side of things. Tools such as Ansible, Puppet, and Chef provide a simple way to automate your configuration management.
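To make that concrete, here is a minimal Ansible playbook that keeps nginx installed and running on a group of Debian/Ubuntu hosts (the `webservers` inventory group is an assumed name):

```yaml
# A minimal Ansible playbook: keep nginx installed and running.
- name: Configure web servers
  hosts: webservers          # assumed inventory group name
  become: true               # run tasks with elevated privileges
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory.ini site.yml` repeatedly is safe: tasks describe desired state, so Ansible only changes hosts that have drifted from it.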

Create CI/CD Pipelines 

So we know how to maintain the state of our application, but how do we actually get it to that desired state? Time to automate the process of creating applications.

There are two aspects to consider when creating your pipeline: Continuous Integration and Continuous Deployment (CI/CD): 

  • Continuous Integration: If you have been scratching the surface of DevOps, you’ll most likely have come across the sequence Code > Build > Test. This process is the foundation of getting application code tested and ready to release to your audience.

  • Continuous Deployment/Delivery: Continuous Deployment/Delivery enables you to update your application in an automated fashion. If the code passes its tests, use continuous deployment to push it into the next environment, be it QA, staging, or production.

Some commonly used tools for creating CI/CD pipelines are GitHub Actions, Jenkins, Travis CI, and GitLab CI/CD.
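As a sketch, here is a minimal GitHub Actions workflow, stored at `.github/workflows/ci.yml`, that runs the Code > Build > Test loop on every push (it assumes a Python project with a pytest test suite; the file and job names are my own choices):

```yaml
# A minimal CI workflow: check out code, set up Python, run the tests.
name: CI
on: [push, pull_request]     # run on every push and pull request

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # fetch the repository contents
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Run tests
        run: |
          pip install pytest
          pytest
```

A green run here is the gate that a continuous deployment job would use before promoting the build to QA, staging, or production.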

Monitoring, Log Management, and Data Visualization 

Keep in mind that, for the sake of brevity, this post has skipped over many important security and logging practices. Many of the tools discussed here are built with security in mind, but the integrity of your applications is your responsibility: make certain your application is secure before pushing it to production.

It’s also important to consider how your systems collect and aggregate data. Finding a way to visualize the information you collect can be extremely useful when analyzing trends and mitigating risk. Tools such as Prometheus, Grafana, and the ELK stack (Elasticsearch, Logstash, and Kibana) should be on your radar while you’re trying to understand this area of DevOps.
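For a flavor of what this tooling looks like, here is a minimal `prometheus.yml` that tells Prometheus to scrape one application’s metrics endpoint every 15 seconds (the target host and port are placeholder assumptions):

```yaml
# A minimal Prometheus configuration: scrape one application endpoint.
global:
  scrape_interval: 15s          # how often to collect metrics

scrape_configs:
  - job_name: "my-app"          # assumed job name
    static_configs:
      - targets: ["localhost:8080"]   # placeholder metrics endpoint
```

Once metrics are flowing, a tool like Grafana can sit on top of Prometheus to turn the raw time series into dashboards.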

Store and Protect Your Data

At some point in your journey, you are going to need stateful data, whether that’s persistent storage for logs or a full database. This is where data management comes in. You’ll need to know where to store the dataset, as well as how best to protect it.

There are a host of different database options to explore. I’d advise you to learn the fundamentals of backing up data, managing application mobility between environments, and disaster recovery. 

It should be noted that no platform is completely safe from data loss, accidental deletion, or malicious activity. However, by automating most of the workflow, we can eliminate the majority of human error from our development pipeline. At the end of the day, the integrity and management of application data will always fall on the shoulders of DevOps.

Good Luck on Your DevOps Journey!

Now that I’ve shared my experience, I hope you can apply some of my lessons learned to smooth out and accelerate your own journey to DevOps excellence. Subscribe to the Kasten Blog to receive notifications for new articles covering more in-depth discussions on these topics.

Michael Cade

A community-first technologist for Kasten by Veeam Software, based in the UK with over 16 years of industry experience and a key focus on cloud native technologies, automation, and data management. His role at Kasten is to act as a technical thought leader, community champion, and project owner, engaging with the community to help influencers and customers overcome the challenges of cloud native data management. He speaks at events to share the technical vision and corporate strategy, while feeding ongoing insight from the field back into product management to shape future success.

