Jenkins and Kubernetes: Secret Agents in the Cloud

This post is a transcript of ShipEngine Software Engineer Mandy Hubbard’s presentation at DevOpsDays Austin in May 2019.


Good morning—I think it’s still morning. I am Mandy Hubbard. I am a Software Engineer and a QA Architect from Austin, Texas.

I have been leading quality efforts for companies in Austin for almost 20 years. I’ve worked for different-sized companies in industries ranging from FinTech and big data to network management solutions. I’m currently a software engineer at ShipEngine, where we integrate with hundreds of shipping carriers and marketplaces such as Shopify and Magento through our shipping APIs.

I have always been very focused on quality. I can remember back in college when I put data validation in my C++ programs and was mad when I didn’t get any extra points for it, because my teacher didn’t really think quality was all that important. But I’ve always been very focused on quality, and that’s how I got into CI/CD.

So you see here on my speaker slides a little bit about myself. I am a developer advocate, not in the professional-title sense but in philosophy. I really like doing things that make developers’ jobs easier. That means having automated tests that run so they have some confidence in their code before they push it out, and that’s also why I like building pipelines. So I’m not here pushing any product or service. I’m simply a geek girl who loves implementing solutions and sharing what I’ve learned with other people. I hope you find this useful.

So today we’re going to talk about the way the software development landscape has changed during my career, culminating in companies moving to microservices. We’ll talk about the promises of microservices that companies seek to achieve, and what continuous integration, continuous delivery, and continuous deployment look like in a microservices world, because it’s a bit different. I’m going to walk you through all of the things I have done to try to scale agents and keep up with building microservices, which eventually culminated in trying out Kubernetes. So if you only remember a few things today, it’s that Jenkins is keeping pace with new technology so that it can do all of the things you’re doing in your own software applications, and that you need to take advantage of those capabilities. But mostly I want you to walk away knowing that you too can spin up a Jenkins environment in Kubernetes. It is really very simple, and I hope after this talk you will be inspired to go out and try it.

“I’m going to walk you through all of the things I have done to try to scale agents and keep up with building microservices, which eventually culminated in trying out Kubernetes.”

So how many people are already using Kubernetes? Awesome. And how many of you are running Jenkins? And who’s running Jenkins on Kubernetes? All right. So there’s some interest here.

You may have already done this, but I’m hoping to bring on board people who aren’t familiar with any of these technologies. So let’s get started. Back in the day, when I was first getting involved in QA, we usually had a monolithic application. I mean a true monolith, where everything ran on one server, either in a data center or somewhere in the back of your office or what have you. It was pretty simple to build, test, and deploy software because it’s one system: usually everything is written in the same language and there’s only one place to deploy and test, which really simplified building, deploying, and testing software.

Moving forward, we started working on distributed applications. In this case we took some of the components, very large components like your identity management or your backend database, and put them on separate physical or virtual servers, but you still had very few pieces of infrastructure. That complicated things a bit, but it was still pretty simple compared to where we’re moving today. With a microservices architecture, we’re now talking about a lot of moving parts: very small, discrete services being run and scaled, so we have lots of infrastructure to keep up with and lots of things to deploy to, and it has changed the way we build, test, and deploy software.

How many of you have seen a slide like this before? Wow, not as many people as I thought. Most of the time when you go to these types of talks, there are people standing up here talking about all of the promises of microservices. We want to move to that architecture because we want to be able to deploy smaller changes independently of other pieces and make frequent changes without affecting the entire application. It allows us to scale horizontally, and we don’t have to use the same language and tool set for all of our services because they are discrete services. These are all of the benefits that people who tout microservices will claim, and they ultimately reduce cost and risk for your business. I don’t really know many businesses that aren’t interested in achieving those things. However, in order to achieve them, we need to take a look at our CI/CD pipeline. So I’m going to show you a diagram of continuous integration. In this example I’m using Jenkins as my build and test platform (I’m a huge Jenkins fan) and GitHub as our source control management system.

So in the continuous integration environment, when a developer needs to make a code change—whether it’s to fix a bug or write a new feature—he or she is going to create a branch locally. When that developer has code that’s ready to go to production, they’re going to open a pull request, or PR as it’s known in GitHub terminology. That is an indication that they would like that code to be merged into the master branch, and the master branch is the pristine branch from which we always deploy.

“In the continuous integration environment, when a developer needs to make a code change—whether it’s to fix a bug or write a new feature—he or she is going to create a branch locally.”

So in a really solid continuous integration environment, we’re going to have end-to-end communication between our source control management system, in this example GitHub, and our build and deployment platform, which in this example is Jenkins.

This is achieved by using webhooks, so that when something happens in our GitHub repository, such as a pull request being opened, GitHub sends a notification to Jenkins via the webhook. Jenkins is then going to take action: it’s going to check out the pull request, build it, test it, and send a status back to GitHub. And this is where it gets into why CI/CD is important to me as a quality aficionado. While this is occurring, the status check on your pull request in the GitHub UI is going to be yellow, and no one can merge that code to master until these quality checks have completed. No skirting around it, no pushing it anyway, no late-night commits. It’s not happening until the quality checks have passed. This is why I love CI/CD.

So let’s say all the tests pass. The next thing that happens is your squash and merge button is now green. Once you click that button, that’s going to send another notification to Jenkins that a push was done to the master branch.

Now, about the way we construct our pipeline scripts: a Jenkins pipeline is just a build script, written in Groovy, that gives a set of instructions on how to build, test, and deploy your software, and we can write these scripts so that they are conditional and take different actions depending on whether we’re testing a pull request or a push to master. If all of the checks are successful, then we deploy, and that gets into the difference between continuous delivery and continuous deployment, which is a whole separate topic we could get into.
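
As a reference point, here is a minimal sketch of what such a conditional pipeline might look like in a declarative Jenkinsfile. This isn’t the exact script from the talk; the stage names, make targets, and deploy script are placeholders.

```groovy
pipeline {
    agent any

    stages {
        stage('Build & Test') {
            // Runs for every build: pull request builds and pushes to master alike.
            steps {
                sh 'make build'
                sh 'make test'
            }
        }
        stage('Deploy') {
            // Only runs once the code lands on the master branch,
            // so pull request builds stop after the quality checks.
            when { branch 'master' }
            steps {
                sh './deploy.sh'
            }
        }
    }
}
```

In a multibranch pipeline, the branch condition is evaluated against the branch being built, which is what lets a single script serve both the pull request build and the merge-to-master build.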

Continuous delivery simply means that we keep our master branch in a state that is ready to deploy at any time. Continuous deployment means we actually deploy every push to master immediately to production, and there are reasons why you might do one over the other. It really comes down to business rules, interdependence between different components, and just risk aversion. The reason I want to go over this is that I really believe that in order to deploy small changes regularly in a microservices architecture, we must have continuous integration and continuous delivery in place at a minimum. But think about what this looks like now: we’ve got all of those services from the first diagram, and with this setup there’s a minimum of two builds per pull request, the first when the pull request is opened and the second when it’s merged to master.

So what I wanted to convey is that once we move to a microservices architecture and we’re doing CI/CD, that really turns into a whole lot of builds, and we’ve got to manage our build and deployment platform infrastructure to keep up with it. So now think about CI/CD and the evolution of the way we build and deploy software.

With the monolith, we could get away with building everything on one master agent, and pretty much everything uses the same tools and the same languages because we’re deploying to one environment. CI/CD is great there, but it’s not really necessary. As we evolve into distributed applications, things get a little more complicated: we’re potentially using different languages and tools, so we might have different needs for the different pieces of our application. Now CI is getting very important, and continuous delivery is still kind of optional, but it’s the right way to do things from a quality perspective.

But once you move to microservices, it’s no longer optional. You really must have continuous integration and continuous delivery, otherwise you’re not going to be able to achieve all of the promises of microservices. You’re not going to have the infrastructure capacity to release all those small changes regularly and keep up with that load. So at this point we need to change things up. How do we build, test, and deploy all of these small, frequent changes in a scalable way? The only way is to adopt a CI/CD pipeline that’s tuned for building these microservices.

“You really must have continuous integration and continuous delivery, otherwise you’re not going to be able to achieve all of the promises of microservices.”

So that gets me into all the ways I’ve run Jenkins. I want to walk you through how I got started and some of the things I tried that finally convinced me to run it on Kubernetes. The first thing I did was execute all my builds on master, even though the docs say don’t do that.

We all do that, right? You set up a Jenkins system, you run all your builds, and everything’s fine. You’re building maybe one or more services, but then things get complicated once you start building more services, and you realize you really do need a build agent. That usually looks like setting up one agent, maybe an EC2 instance or a physical server if you’ve got one, and installing all the tools you need: all the languages and all the tool sets, so you can build any of your services. Whenever you want to scale, you just stamp out another one until you’ve got enough agents to keep up with your load.

But of course sometimes you’re using all your agents and sometimes you’re using only a few of them, so you’re not really maximizing efficiency and cost, and management really doesn’t like that. You’ve also got to keep them all updated with all your different software packages, and it’s a lot to maintain. So then I thought, “Well, why don’t I just create a different agent for each of the things I’m building?” Let’s say I had a Node application and a Go application, so I had one agent that just did the Go builds and one that just did the Node builds. That simplified it: I’ve got one agent per platform, and I don’t have to worry about updating Go on both of the agents or updating Node on both of the agents. It’s super simple. But now you’re really dealing with idle agents, because your agents are specialized and can only build for the language and tool set that’s installed.

That doesn’t really work for optimizing cost and efficiency. So the next thing I did was try Dockerizing agents, and I tried several ways. The first was Docker-in-Docker: running a Docker daemon inside the ephemeral agents. You’ve got an agent running in a Docker container, and then you want to build and push other Docker images inside of that container, so you install Docker in Docker. Then I read about the security risks of that, so I did it again and mounted the Jenkins master host’s Docker socket inside my agent containers so that I could build sidecar containers for my actual application. Then I tried running with an external Docker host: I just needed a Docker client on my agent, I sent all my Docker workload there, and all the work was handled on the external Docker host. But you’ve still got a lot of idle resources.
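
To make the socket-mounting variant concrete, the idea looks roughly like this. It’s a sketch rather than the exact setup from the talk; the agent image name and whatever arguments it needs to connect back to the Jenkins master are placeholders.

```sh
# Run a containerized build agent, but hand it the host's Docker socket so
# that "docker build" and "docker push" inside the agent talk to the host's
# Docker daemon instead of a nested Docker-in-Docker engine.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-agent-image   # placeholder image with a Docker client installed
```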

Then finally I thought, “Well, what if I just spin up a Docker host when I need it?” So I had the EC2 plugin spin up a Docker host on an EC2 instance, launch Dockerized agents on top of that EC2 host, and then build Docker containers there. You can see I’ve tried a lot of different things. So finally I said, “Sure, fine. Let’s try Kubernetes.”

I wanted to learn Kubernetes, and I was already familiar with Jenkins. I think if you’re trying to learn a new technology, it really helps if you do it in the context of something you’re already familiar with. So I thought, “This is a great way for me to get familiar with Kubernetes concepts.” I tried reading the docs and found them really dry and boring, but once I was trying to install my very favorite CI/CD platform in Kubernetes, things got really interesting, and that is how I approached getting more familiar with Kubernetes.

“I think if you’re trying to learn a new technology it really helps if you do it in the context of something you’re already familiar with.”

For those of you who aren’t familiar, as I was when I started, I just want to hit a few key concepts that put it all together for me. The first is the concept of a node, which is a physical or virtual server that contains all of the discrete application components that provide the Kubernetes platform. The next is a cluster: one or more nodes, so that you can spread the workload across those nodes. Then your application is going to be defined in a pod. That’s the building block, the smallest unit of work in Kubernetes. It usually consists of one or more containers that live and die together and share resources such as networking and storage. Most of the time you’re going to have one container per pod, because that’s how we architect our services for scalability, but you might have two if they’re tightly coupled.

Then we need to expose that pod to the world. We need a publicly accessible IP address for our service, and that’s defined in the service definition. The final piece is the idea of the Helm chart. Helm is like a package manager for Kubernetes. Think about running apk or Chocolatey or Brew, or any of your favorite package managers on your laptop; it’s the same kind of thing. It’s a way to easily install applications written for deployment into Kubernetes, and a chart is the packaging format for Helm.
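
For anyone who hasn’t seen these objects written down, here is a minimal, illustrative sketch of a pod and the service that exposes it. This isn’t from the talk; the names, labels, image, and ports are arbitrary.

```yaml
# A single-container pod...
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.17
      ports:
        - containerPort: 80
---
# ...and a service that gives it a stable, publicly reachable address.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
```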

Okay, so with that background, I’m going to take you through, just briefly, the official Helm chart for Jenkins, and you’ve got a bunch of templates here. If you’re already familiar with Kubernetes, you’re probably familiar with defining things with YAML and variables. They’re all parameterized and templatized for the various aspects, and then you simply have a values.yaml where you can override various things. I wanted to show you just how configurable and extensible your Jenkins instance can be. Is this exciting or overwhelming to see all of this? For me, it was completely overwhelming, because I didn’t know what a lot of these things meant and I just wanted to get up and running. And I’m still going.

Okay, so there are a lot of things you can customize, but if you go to the chart, see all of that, and think, “Wow, I don’t really want to deal with all that,” don’t freak out. Typically it comes with a set of sane defaults, and this is the values.yaml that I used; it came from a tutorial example I used to get things up and running. So even though you can extend and customize to your heart’s content, you don’t have to understand all of that just to get started. This is an example where I use that YAML just to override the things I care about when I do my installation.
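
As an illustration of the kind of overrides involved (not the exact file from the talk, and key names vary between versions of the stable/jenkins chart), a trimmed-down values.yaml might look something like this:

```yaml
master:
  serviceType: ClusterIP        # how the Jenkins master is exposed
  installPlugins:               # plugins baked in at install time
    - kubernetes:1.14.0         # the plugin that launches agents as pods
    - workflow-aggregator:2.6   # the pipeline suite
    - git:3.9.3
persistence:
  enabled: true                 # keep the Jenkins home on a persistent volume
  size: 20Gi
rbac:
  create: true                  # service account and role used to launch agents
```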

Before I installed Jenkins using Helm, there were a few things I needed to do. I did this in GKE. I’d never used GKE before, but I started looking around and their tutorials are amazing. I met Victor Iglesias at Jenkins World, where he gave a similar talk on running Jenkins on Kubernetes, and I found all of his tutorials, which are super easy to consume. I used those to do all the prerequisites for installing Jenkins, and I’ve got a link to one here that you can visit to get set up.

It looks like a lot of you probably are already at this point, and that’s awesome. But once you have your environment set up, you’ve got your cluster and you’ve got your access configured, and Helm is all set up to go, then all you have to do is run a Helm install. And you can see here that I have indicated that I’d like to use the stable Jenkins, which is the repo we just looked at.
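
The command itself is essentially a one-liner. Here is a sketch of what it might look like, using Helm 2 syntax, which was current at the time of this talk (Helm 3 drops the --name flag); the release name and namespace are placeholders:

```sh
# Release name, namespace, and values file are placeholders; --wait blocks
# until the Jenkins master pod is up and ready.
helm install stable/jenkins \
  --name my-jenkins \
  --namespace jenkins \
  -f values.yaml \
  --wait
```

The chart’s install notes also print instructions for retrieving the generated admin password from a Kubernetes secret, if you’d rather not pull it from the console.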

I want to override with the values.yaml that we just looked at, there are a couple of other options you can give, and then you kick that off and you end up with a working Jenkins master. You can obtain the admin password from the Google Cloud console really easily, so by the time this is all done you’re ready to log in as admin and start using Jenkins.

So now I want to walk you through the specific configuration required in Jenkins once you get it up and running. You’re going to need a credential for your cluster so that the Jenkins master has permission to launch agents inside that cluster. You’re going to need to create a configuration for that cluster so the Jenkins master knows where to launch your agents. We’re then going to create a pod template for the agent, which defines which containers are available inside that agent, and then we’ll create a container template for each of those.

So I’m just going to show you some screenshots of the configuration. The account that Jenkins uses to authorize creating agents in Kubernetes is a Kubernetes service account. I’m going to show you all the things you have to configure, but if you use the Helm chart with that default values.yaml override, you really don’t have to configure all of this; it’s already pre-configured. If you’re starting from scratch, you would have to go and do this. So now we’ve got an account that Jenkins can use to create agents in the Kubernetes cluster. Next we configure the Kubernetes cloud so the master knows where to launch the new agents. You can see that there are different things you can configure here, but you don’t really have to configure all of them.
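
If you do end up wiring that service account by hand rather than letting the chart create it, the shape of it is roughly the following. The names and namespace are placeholders, and you would scope the role binding to whatever your cluster policy actually requires:

```sh
# Create a service account for Jenkins and give it permission to manage
# pods (agents) in its own namespace.
kubectl create namespace jenkins-agents
kubectl create serviceaccount jenkins --namespace jenkins-agents
kubectl create rolebinding jenkins-agents-admin \
  --clusterrole=admin \
  --serviceaccount=jenkins-agents:jenkins \
  --namespace jenkins-agents
```

The credential you configure in Jenkins then points at that service account, and the Kubernetes cloud configuration points at the cluster and namespace where the agents should run.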

It gets interesting when we configure our pod template. The pod template defines what is inside the pod that acts as your agent. I’ve given it a name and a label; if you’ve used external Jenkins agents before, you’re familiar with specifying a label so that you can refer to it later. Then we can set environment variables, mount volumes, set the retention, and control all of the behavior of that agent right here in the UI. If you click the Add Container button, you’ll get this screen. This is where you add the containers that you want available inside that pod, so that once you decide to build on that pod, it launches an instance and you can say, “Do this action inside my golang container; do this action inside my Docker client container.”

This is all well and good, but how do we use it? From your Jenkinsfile, of course. Just like with your external agents, you specify the node by label, and you can use all of your regular plugin syntax. This is using one of the plugins; you just say “checkout scm.” Then, once you’re inside your build steps, you indicate which container to run each step in. When you run a build (I did this twice on purpose to show you), it launches a brand new pod every time, so it’s a brand new, pristine environment, and if you’re not running builds, you’re not paying the cost of having agents running all the time. Now, one of the configuration options you can set, if you’re running builds back to back and don’t want to incur the cost of spinning up that pod every time, is to say, I don’t know, “Wait 15 minutes for a new build before tearing down this pod.” So you have some configuration options there, but I just wanted to show you how it spins up a brand new pod with each build.
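
For reference, a scripted pipeline that targets a UI-configured pod template might look roughly like this. The label, container names, and build commands are placeholders standing in for whatever you configured:

```groovy
// Ask for an agent pod by the label given to the pod template in the UI.
node('k8s-agent') {
    stage('Checkout') {
        checkout scm                 // regular plugin syntax works as usual
    }
    stage('Build') {
        container('golang') {        // run this step in the golang container
            sh 'go build ./...'
        }
    }
    stage('Docker image') {
        container('docker') {        // and this step in the Docker client container
            sh 'docker build -t my-image:latest .'
        }
    }
}
```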

“I don’t like configuring things in the UI because it’s not software defined and I have to remember to back up Jenkins, and a whole lot of other things. I wanted to show you next how you can do all of the UI work directly in your Jenkinsfile.”

In this example, at the top I’ve got the pod template information with the two containers, just like we did in the UI, and I am mounting a volume. You remember there were quite a lot of options in the UI; you can specify any of those options here, or override only the ones you care about, so it’s very simple. Then, just like before, you specify the agent by label and the container in which you want to run particular steps. One other disclaimer about the way I set this one up: I had to make this work before I was willing to talk about it, because that’s just how I’m wired.
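
A sketch of that Jenkinsfile-defined pod template follows. It isn’t the exact script from the talk; the label, image tags, and commands are placeholders, but the shape (two containers plus a mounted Docker socket) mirrors the setup described here:

```groovy
// Define the agent pod inline instead of in the UI.
podTemplate(
    label: 'jenkins-agent-pod',
    containers: [
        containerTemplate(name: 'golang', image: 'golang:1.12',
                          ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'docker', image: 'docker:18.09',
                          ttyEnabled: true, command: 'cat')
    ],
    volumes: [
        // Hand the pod the host's Docker socket so the Docker client
        // container can build and push images without Docker-in-Docker.
        hostPathVolume(hostPath: '/var/run/docker.sock',
                       mountPath: '/var/run/docker.sock')
    ]
) {
    node('jenkins-agent-pod') {
        checkout scm
        container('golang') {
            sh 'go test ./... && go build ./...'
        }
        container('docker') {
            sh 'docker build -t my-image:latest .'
        }
    }
}
```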

The first thing I wanted to do was build exactly the way I was accustomed to, which was using a Docker client and Docker engine to do my docker build and docker push. So I mounted that Docker socket and used a Docker container that just had the client, so that I could run all of my Docker commands wrapped in shell steps. But then I started reading and learned I can also just use the gcloud CLI; you can just specify a container that has it installed. Any of your cloud providers should have a CLI available. Whatever you’re using from your local machine for Azure or AWS, you should be able to find or build a Docker image that has those tools, so you can do it in your Jenkins pipeline exactly as you do it in your local environment.
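
As an alternative sketch (again, not the exact code from the talk), the same build can skip the Docker socket entirely and go through the cloud provider’s CLI in a container that ships it. The image, project ID, and tag here are placeholders:

```groovy
podTemplate(
    label: 'gcloud-agent',
    containers: [
        // A container that ships the gcloud CLI.
        containerTemplate(name: 'gcloud', image: 'google/cloud-sdk:slim',
                          ttyEnabled: true, command: 'cat')
    ]
) {
    node('gcloud-agent') {
        checkout scm
        container('gcloud') {
            // Cloud Build does the docker build and push on the provider side,
            // so no Docker daemon is needed inside the agent pod.
            sh 'gcloud builds submit --tag gcr.io/my-project/my-image:latest .'
        }
    }
}
```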

I want to show you one other way you can define these. If you’re already a Kubernetes aficionado and you really love YAML, you can define everything in YAML directly in your Jenkinsfile. I don’t know why you’d want to, but a lot of people are comfortable with that, and you can do it there too.

Also, when you’re first starting, even though ultimately it’s great to have everything software defined and in your source control management system, don’t be afraid to just configure it through the UI, get it working, and then understand it. It took me about an hour, maybe two, to get this up and running, and then I spent the better part of a weekend going back to understand what I’d just done. That solution-first, dig-in-second approach works really well for me. So don’t be afraid to take whatever shortcuts you need to get it running, because once you see it running, it’s way more motivating to go and understand what’s going on than when you don’t know whether it’s going to work or not.
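
To round out the YAML-in-the-Jenkinsfile option mentioned above, here is a minimal sketch using the Kubernetes plugin’s declarative syntax. The container name, image, and build command are placeholders:

```groovy
pipeline {
    agent {
        kubernetes {
            // The agent pod is described as raw Kubernetes YAML.
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: golang
      image: golang:1.12
      command: ['cat']
      tty: true
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                container('golang') {
                    sh 'go build ./...'
                }
            }
        }
    }
}
```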

“If you already are a Kubernetes aficionado and you really love YAML, you can define everything in YAML directly in your Jenkins file.”

So, final thoughts. As I said, Jenkins keeps growing to take advantage of new technology and to enable you to build applications that use new technology, and you really want to take advantage of all of Jenkins’ capabilities. If you’re still using the same Jenkins setup you started with five years ago, you’re missing out. But mainly, as I said, I don’t work for Jenkins and I don’t work for Kubernetes; I just like tech, and I wanted to share what I’ve learned in my time using Jenkins.

I just want you to walk out of here knowing that you too can totally spin this up and get an environment running. And if you’re not using this at your company, you can easily demo a proof of concept and say, “Hey, why are we not doing this? It was super easy.”

View the video from Mandy’s presentation.
