The Value of Automation Pipelines

This post is a transcript from John Chavez’s keynote presentation at
The Austin Homegrown API Meetup in October 2019.

Thank you for the introduction!

So, APIMatic actually was the one who invited us out here; that’s how we heard about this. And we’re going to go over Bandwidth’s automation pipeline, which begins with our OpenAPI specs, goes into our SDKs, and then into our developer docs with APIMatic.

So the story that I want to take you guys on is basically a developer who makes a change to the API. He might not know it affects the facade, but through our deployment pipeline we’re going to be releasing SDKs automatically and updating our developer docs, all, hopefully, without human intervention.

So just a little bit about me: I’m John Chavez, and I work on the developer experience team at Bandwidth. Our job is basically to make onboarding new clients as easy as possible and to make the internal experience better for our developers, too. We want clear documentation, we want to keep it accurate, and we want to keep it current. Because one of the things about an API is that if your documentation isn’t current, or your SDKs aren’t current, it doesn’t work.

So, what is Bandwidth? Bandwidth is a communications platform as a service. We provide voice, messaging, 911 access, and a numbers management API. Those are four distinct products, and a lot of our customers actually use at least two of our APIs together. So a lot of times you’ll see voice and messaging doing a tango together. Our customers, our clients: we’re just going to refer to them as our users from this point on, for the sake of simplicity.

Our users send about 190 million API requests to us a week. That works out to about 27 million a day, or roughly 300 a second. So those, again, are our four major products.

So you’re going to hear this story a lot, but I’m just trying to drive home what we want to accomplish. A developer makes a change to the API somewhere, wherever, and when that API release goes public to our users, the SDKs get a coinciding new release that exposes the added functionality, and our developer docs are updated.

Now, there’s a bit of a challenge for us, because we have not just one API, but four. Again: voice, messaging, numbers management, and so on. And they all operate on different cycles.

Some of them release weekly during big pushes, some release monthly. Others go quiet for long periods of time, but they’re all different. They’re separate teams, and they’re not even geographically co-located, so it’s very hard to get that information. In the past, a team would sometimes release and we wouldn’t even know about it for six months, until a client called up and went, “What is this? This isn’t working anymore.” And we’re like, “Fire, fire! Go fix this. Run off.”

So, that’s very important to us, because our product is our API. That is what we sell. We may call them voice and messaging services, but they’re our APIs. The other thing that adds some challenges is that we wanted to keep certain customizations in there, like vXML (VoiceXML), which we’ll go into in more detail later; it’s a subset of XML that works as our basic response language for callbacks.

So it’s basically a nice feature that helps the developer experience: they don’t have to learn vXML. I don’t even know all the nuances of vXML, and I use it at work. So… we wanted to add that.

The overview: these are the four questions that we posed to ourselves, and that we’ll have answered by the end of the presentation. I hate to read directly from slides, but honestly that’s the best way I can do this.

1. How do we keep our specifications in sync? How do we keep them accurate?

So this is the accuracy question.

2. How do we keep a single source of truth?

So this is going to be where we’re keeping current.

3. How do we keep up-to-date SDKs for a myriad of languages?

And we support a myriad of languages, about six of them now, I believe. So when you ask what I develop in: pick a day of the week, and whichever client is having a fire. And, then…

4. How do we keep our developer docs up to date?

Which, as pointed out, is the first thing your users see; it’s the face of your API.

First Question:

So the challenge here, again, was accuracy, and we wanted to keep the specs extremely accurate, because having an endpoint that’s been deleted still existing in your SDK just leads to client frustration. Previously, one of our APIs, specifically our numbers API, which has over 600 method-endpoint variations (which could be PUT, POST, whatever, but it’s still 600 plus), was being manually maintained. As you can imagine, with lots of developers working on it and making many changes, it just got out of hand. Whole endpoints went missing, and other times people made spelling mistakes. That just happens. Human error: I make them all the time. But as you can see, those spec files fell out of sync with the API that was being released, and things like that.

So we decided that what we needed to remove was the human error. We were going to generate our OpenAPI spec directly from our code.

And if we have errors that way, at least they’re going to be systemic errors: errors that we can reproduce every single time. If I get three developers in a room and they make mistakes, all three will be different. With code generation we at least face systemic errors, which are much easier to handle.

So I actually have a quick question for the audience. How many of you have worked on an API whose backbone is in Java?

All right, so Bandwidth is mostly Java. Our backbone is Java, and we work within an OpenShift environment for deployment. So hopefully we’re going to talk about things from the right perspective, which is Bandwidth’s environment.

So the API spec generation actually comes from a Maven plugin, which our projects are built with. They’re not all the same style of Java: we use Spring for some and JAX-RS for others. We found that there are actually tools for both of those, Spring and JAX-RS. And we’ve also seen tools out there for other languages; I believe Python and Ruby on Rails have them too. Don’t quote me on that, but basically these are all automation tools that generate specs at different times.

The Maven plugin, as you can see on the board, is pretty simple. We pull it in from the Maven Central repository, and these are what the Swagger annotations look like. The first thing it does is scan through your Spring annotations and your JAX-RS annotations, and from those annotations it can build out your OpenAPI spec or your Swagger file, depending on the configuration you want. Then, if you need additional information, you can add these Swagger annotations. This plugin is actually supported by Swagger; I believe that’s the company that produces and maintains it.
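For reference, pulling the generator into a build might look something like the fragment below. This is a hedged sketch, not Bandwidth’s actual configuration: the coordinates are for the swagger-core Maven plugin, and the package name is a placeholder.

```xml
<!-- Hypothetical pom.xml fragment: the swagger-core Maven plugin scans
     JAX-RS/Spring annotations at build time and writes out an OpenAPI file. -->
<plugin>
  <groupId>io.swagger.core.v3</groupId>
  <artifactId>swagger-maven-plugin</artifactId>
  <version>2.2.21</version>
  <executions>
    <execution>
      <phase>compile</phase>
      <goals>
        <goal>resolve</goal>
      </goals>
      <configuration>
        <outputFileName>openapi</outputFileName>
        <outputPath>${project.build.directory}</outputPath>
        <outputFormat>YAML</outputFormat>
        <resourcePackages>
          <package>com.example.api</package>
        </resourcePackages>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Bound to the compile phase like this, every build regenerates the spec from whatever annotations are actually in the code.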

The real power of it comes from the fact that it actually looks at the compiled objects. For things like fields with getters and setters in Java, it’ll pick up that an additional field has been added to a class, even if it’s in a subclass way down the line that the developer never knew would affect the facade of your API.
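To see why scanning compiled classes catches these changes, here’s a toy sketch, not the plugin’s actual code, that discovers properties from getters via reflection the same way an annotation scanner would pick up a newly added field:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;

// Toy model class: adding a field plus its getter is all a developer does.
class Account {
    private String id;
    private String telephoneNumber; // newly added field

    public String getId() { return id; }
    public String getTelephoneNumber() { return telephoneNumber; }
}

public class SchemaScan {
    // Derive a name -> type map from getters, as a spec generator would.
    static Map<String, String> properties(Class<?> c) {
        Map<String, String> props = new TreeMap<>();
        for (Method m : c.getMethods()) {
            String n = m.getName();
            if (n.startsWith("get") && !n.equals("getClass")
                    && m.getParameterCount() == 0) {
                String field = Character.toLowerCase(n.charAt(3)) + n.substring(4);
                props.put(field, m.getReturnType().getSimpleName());
            }
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(properties(Account.class));
        // {id=String, telephoneNumber=String}
    }
}
```

The moment a developer adds `telephoneNumber` and its getter, the derived schema grows a matching property with no extra work from them.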

So say a developer added an additional telephone number to a carrier grant or some sub-list somewhere. Through the course of this update, the plugin will catch it and put in the exact name. Even if you misspelled the name in Java, it’s going to match in the OpenAPI spec, and it comes through with little to no interaction from the dev.

Additionally, the Swagger annotations (there’s an example down there) really let you bump up the power of this, because they allow you to express things like oneOf and other type constructs that aren’t exactly explicit in the code. So you can look up and change names, add other stuff. We found that there hasn’t been a situation, across all four of our APIs, that we haven’t been able to address with some form of Swagger annotation or a slight tweak. It keeps us almost spot-on every single time.

All right, so now we have the accurate API spec, which is our base, our foundation. That’s what we needed from this step, and now we’re going to go into maintaining a single source of truth, which is the keeping-current question.

So, what is the single source of truth for us?

It’s a little vague; it’s more of an idea. But if we were to put it to a concrete thing, it’s a repository. We maintain our repository in GitHub, and if I say GitHub, I’m talking about the repository in general. Within this repository is a namespaced directory of basically all of our OpenAPI specs. They’re maintained there, and they’re supposed to be the current ones that are out in the wild with the releases of all our APIs. The other thing that we maintain there is customizations. I’ll go into more detail about which customizations we’re talking about, like the vXML; they’re also maintained in this source of truth. It’s not just a central API spec, it’s really that central source of truth.

Now all of our SDKs are built from this repository, and we actually kick that off by pushing tags to it. Again, we automate all of this. I’m going to automate myself out of a job. So the Jenkinsfile will exist here, run the pipeline, and all of that.

How do we get the OpenAPI specs that are generated at build time into our source of truth? This is where we integrate into our product APIs, specifically into their releases. We work within an OpenShift Jenkins environment, so the obvious choice was to go with a Jenkins library. Sometimes people call them shared libraries.

Anyone know what that is? Show of hands. All right, cool.

“So shared libraries are basically a repository that you can put out within your environment.”

Ours is, again, in OpenShift. Basically, you can reference it from another pipeline; you just need to pull in this library. It’s a Groovy library, which allows you, as the owner of the repository, to control its functionality. So it’s very easy for people to integrate: with our stuff, which we called devX, you just have to add a line of code and we’ll run our stuff within your pipeline. Pass your build, fail your build, whatever we need. So the Jenkins shared libraries are easy to import in our environment, very usable, and portable across teams.
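Integration on the product side really can be a line or two. A hypothetical Jenkinsfile sketch follows; the library name `devx-spec` and the step name `publishApiSpec` are invented for illustration, not Bandwidth’s real identifiers:

```groovy
// Hypothetical Jenkinsfile in a product API's repo: one annotation pulls in
// the shared library, one step hands control to the DevX spec stage.
@Library('devx-spec') _

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }
        }
        stage('Publish API spec') {
            // Shared-library step (made-up name): validates the generated
            // spec and pushes it to the source-of-truth repository.
            steps { publishApiSpec specPath: 'target/openapi.yaml' }
        }
    }
}
```

Because the step’s logic lives in the shared library, the DevX team can change validation or push behavior without touching any product team’s pipeline.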

So we don’t control the environment we’re running in when we’re within the product APIs’ releases and their build pipelines; we can only make requests of them. Now, our specific library for this task went in and validated your API spec. So this is the check again: back when specs were maintained by hand, we’re making sure you didn’t forget a bracket, quotation mark, or comma. If you didn’t have a valid spec, we failed you and your build stopped.
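The validation itself amounts to parsing the file before accepting it. Here’s a toy illustration of the kind of hand-editing mistake it catches; a real pipeline would run a full OpenAPI validator rather than this bracket check, and this sketch ignores escaped quotes:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SpecValidator {
    // Toy check: are brackets and braces balanced outside of strings?
    // A forgotten brace in a hand-edited JSON spec fails this immediately.
    static boolean balanced(String spec) {
        Deque<Character> stack = new ArrayDeque<>();
        boolean inString = false;
        for (char c : spec.toCharArray()) {
            if (c == '"') inString = !inString;
            if (inString) continue;
            if (c == '{' || c == '[') stack.push(c);
            else if (c == '}' && (stack.isEmpty() || stack.pop() != '{')) return false;
            else if (c == ']' && (stack.isEmpty() || stack.pop() != '[')) return false;
        }
        return stack.isEmpty() && !inString;
    }

    public static void main(String[] args) {
        System.out.println(balanced("{\"paths\": {}}")); // true
        System.out.println(balanced("{\"paths\": {}"));  // false: build stops
    }
}
```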

The other thing, once we started doing this auto-generation: we thought about the problem and realized that just because a product team does a release doesn’t mean our SDKs should be released, because there may be no change to the facade. They may have updated code that didn’t have any effect on the API at all. We actually just leveraged Git for this; it’s kind of an easy wrapper for it. So we pull the most current spec from our source of truth, compare it to what was produced during this build, and then finally use Git to push it.

And during that time we actually do some calculation for the versioning and tag it, at which point we are now starting our pipeline. So this answers the questions of how we stay accurate and how we stay current. Now we can finally begin building out our SDKs and our developer docs. Everything from here on refers to our pipeline going forward.
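A minimal sketch of that gate-and-version step. The equality check works because internal-only refactors regenerate a byte-identical spec; the semver rule here is a plausible assumption, not necessarily Bandwidth’s actual calculation:

```java
public class ReleaseGate {
    // Toy diff gate: if the freshly generated spec matches the one in the
    // source-of-truth repo, the facade is unchanged and no SDK ships.
    static boolean facadeChanged(String truthSpec, String buildSpec) {
        return !truthSpec.equals(buildSpec);
    }

    // Hypothetical versioning rule: breaking change bumps major, a new
    // endpoint bumps minor, anything else bumps patch.
    static String bump(String version, boolean breaking, boolean added) {
        String[] p = version.split("\\.");
        if (breaking) return (Integer.parseInt(p[0]) + 1) + ".0.0";
        if (added)    return p[0] + "." + (Integer.parseInt(p[1]) + 1) + ".0";
        return p[0] + "." + p[1] + "." + (Integer.parseInt(p[2]) + 1);
    }

    public static void main(String[] args) {
        if (facadeChanged("paths: {}", "paths: {/calls: {}}")) {
            System.out.println("tag " + bump("1.4.2", false, true)); // tag 1.5.0
        }
    }
}
```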

How do we keep up-to-date SDKs for this myriad of languages?

We have the base of OpenAPI specs. This is where APIMatic came in; our friends at APIMatic have helped us. But to begin, let’s talk about how we were maintaining our SDKs at first. For our numbers management alone, we had two separate SDKs: one covering some 200 endpoints and the other 150, with little crossover, and clients were out there using them all willy-nilly.

We’d make an update, yet the SDKs stayed the same. Next thing you know, it doesn’t work anymore. So there was poor maintenance all around; things were getting out of hand, and there was no dedicated team at the time.

It’s time consuming. Not only does it consume a developer’s time; an issue goes from a client to a client representative and then all the way through the pipeline to us. So it’s all just time consuming. And the other part is that a lot of our stuff lacked consistency.

So people would be using builders in some areas versus getters and setters in others. Overall, not the greatest environment, not the greatest developer experience.

So APIMatic can be used to generate SDKs. We have our OpenAPI specs, which are verbose and accurate, with many response errors defined now, so we know exactly what’s going wrong. They’re consistent: they all match now, because they’re generated by code, not people. So there’s none of that fluctuation, and there’s quick reaction.

I can produce an SDK in a second, where doing that by hand would take a developer an hour, even for something small. So now we have quick reaction times and we free up all our devs. We’re good to go!

We took it a step further and decided to pursue something we call the unified SDK strategy. We saw this in the Amazon Web Services SDKs, which we really loved. Instead of having an SDK for each API, which is kind of the norm, we approached the unified SDK by taking all four of our OpenAPI specs and combining them into one, so that we can just put the brand out there: we have a Bandwidth SDK. You don’t need anything else, just go get it. It will work with every product we’ve got.
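A toy version of that combining step (not APIMatic’s actual implementation): fold each product’s paths into one spec, using the product name as a namespace prefix so endpoints from different APIs can’t collide:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SpecMerger {
    // Toy unified-SDK merge: each product's paths land under its own
    // namespace, so /calls from voice can't clash with a numbers endpoint.
    static Map<String, String> merge(Map<String, Map<String, String>> products) {
        Map<String, String> unified = new LinkedHashMap<>();
        for (Map.Entry<String, Map<String, String>> product : products.entrySet()) {
            for (Map.Entry<String, String> path : product.getValue().entrySet()) {
                unified.put("/" + product.getKey() + path.getKey(), path.getValue());
            }
        }
        return unified;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> products = new LinkedHashMap<>();
        products.put("voice", Map.of("/calls", "POST"));
        products.put("messaging", Map.of("/messages", "POST"));
        System.out.println(merge(products));
        // {/voice/calls=POST, /messaging/messages=POST}
    }
}
```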

So, that was very important to us. Next, we wanted the SDK that had all of these endpoints to be able to run our customizations. The vXML was added under voice, so that you can only access it if you’re using the voice controller, because it’s a voice response language that’s used to control the flow of a telephone call. All of these were added to make our SDK experience, and the developer experience when using them, just better overall.

So this is an example of the SDK namespace, if you can see it there. It just combines multiple OpenAPI specs into one, so that you initiate a client, pull out basically the controller that you want, and run with it. It can make it appear and feel as if you’re using one API, which is really what we were after. Like we said, some clients use voice and messaging as a tango, so we wanted you to be able to do that seamlessly.

This is our custom code injection. That was the vXML, our callback response language. We really like maintaining this, and these libraries are something we still maintain as the developer experience team, because we don’t want our clients to have to go out and learn vXML if they don’t want to.

“SDKs allow our developers, our clients, our users to work in a language they’re most comfortable in.”

They’re allowed to write their own vXML if they want to, but if they want to work with the vXML object-oriented library in Java, they can feel free; if they’re in Python, they can go forward. Especially if you have an IDE like Java’s that lets you discover everything you can do with it, you never really need to know the exact syntax, because our library will build it, send it off, and do what you want.

This is my little example, which is kind of half cut off, but if you ever want to say “Hello World,” you just interact with it like that and then send it off, like a Rickroll. That was actually my first project using our API: a Rickroll over the phone. Having that is really great for the developer experience; you want to make sure they have access to it.
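In that spirit, here’s a toy builder sketch. The class and method names are invented for illustration, not the real SDK’s identifiers; the idea is just that the caller never writes vXML by hand:

```java
public class VxmlSketch {
    // Toy helper (not Bandwidth's actual SDK classes): composes the
    // callback response so the developer never touches raw vXML.
    static String speakSentence(String text) {
        return "<Response><SpeakSentence>" + text + "</SpeakSentence></Response>";
    }

    public static void main(String[] args) {
        // Return this from your callback handler to have the call say hello.
        System.out.println(speakSentence("Hello World"));
        // <Response><SpeakSentence>Hello World</SpeakSentence></Response>
    }
}
```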

It does no good to keep it just somewhere in Dropbox where it’d be hard to find. Clients aren’t going to be able to use it, so of course push it out to your package managers. This part’s kind of a catch-all, because there’s really no way to do it consistently across languages. Python has its own way, PHP has its own weird GitHub hook-in to deploy them, and Maven has Maven Central, where you have to actually click a button; I believe that’s the only one we couldn’t fully automate. But yeah, we want to get them out there, we want to keep pushing to these package managers. And again, we’re on OpenShift, which means we leverage Docker all over the place. We don’t ever do the installs, because we pull a Docker image, make a deployment, and move on.

Same with Node, PHP, Ruby, and whatnot. So again, OpenShift was really nice with its integration with Docker Hub in this space. We also push to a public GitHub, which we debated a little bit, just because our code is technically generated, and for that reason we make sure we block pull requests against it. But we basically put it out there because the README provides pretty good examples of the uses of our SDKs. And if you want, you can go ahead and fork it and do your own experiments and stuff like that. So we put it out there for the sake of having source code for things like Java, so it doesn’t have to be decompiled and stuff like that.

Also, we didn’t have a single customer that we gave our betas and alphas to that did not ask to see the GitHub. So: make your GitHubs public.

How do we keep our developer documents current?

So this is going to sound a lot like the last case, because we just auto-generate.

It’s important to have these documents because they’re the source of truth for developers; it’s how they build new stuff. So we generate the docs. Previously we were manually updating little things here and there, trying to keep them consistent, and big pushes would come through that just sucked up tons of time writing these docs. We even allowed the other teams to make PRs against our developer docs, just because we weren’t the experts in some of the new stuff coming out. So it was really nice to be able to provide that and put it out.

Now, these Swagger annotations I mentioned at the beginning have the ability to include examples and descriptions, and we’ve really enjoyed being able to add that as a requirement. For example, for that 600-plus-endpoint API, a lot of our developers don’t even know what the endpoint they’re fixing does in business terms. So being able to provide a description in code, and having to keep an example of it updated, helped all of our Java devs grow in their knowledge of the company and the business, of what you’re actually doing. It was a nice opportunity for our developers to learn, and it also made for richer documentation. It’s a good idea to have.

And just as the last comment here: we use Amazon Web Services to host this. We just push out to S3 buckets. The big driver is that we wanted to integrate it into our SDK pipeline, so that the release of developer docs would coincide with the SDKs, which would coincide with the release of a new API. Whether you change voice or you change messaging, we just wanted all of this to be current and real time.

So as a quick summary: changes to the API are automatically reflected all the way through to your SDKs, your public GitHub repositories, and your developer docs. And all of it happens almost simultaneously, within minutes. That covers the four questions that we asked, and those were the four questions we answered.

Thank you!

View the video from John’s presentation.
