Building an Azure-powered Concourse pipeline for Kubernetes – Part 1: Setup

Isn’t it frustrating to build great software and helplessly watch as it waits to get deployed? We don’t just want to build software in small batches, we want to ship it in small batches. This helps us learn faster, and gives our users a non-stop stream of new value.

I’m a big fan of Concourse. It’s a continuous integration platform that reflects modern cloud-native values: it’s open source, container-native, stateless, and developer-friendly. And all pipeline definitions are declarative (via YAML) and easily source controlled. I wanted to learn how to build a Concourse pipeline that unit tests an ASP.NET Core app, packages it up and stashes a tarball in Azure Storage, creates a Docker container and stores it in Azure Container Registry, and then deploys the app to Azure Kubernetes Service. In this three-part blog series, we’ll do just that! Here’s the final pipeline:

This first post looks at everything I did to set up the scenario.

My ASP.NET Core web app

I used Visual Studio for Mac to build a new ASP.NET Core Web API. I added NuGet package dependencies to xunit and xunit.runner.visualstudio. The API controller is super basic, with three operations.

[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
    [HttpGet]
    public ActionResult<IEnumerable<string>> Get()
    {
        return new string[] { "value1", "value2" };
    }

    [HttpGet("{id}")]
    public string Get(int id)
    {
        return "value1";
    }

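    // Fake status lookup: ids 1-20 are "shipped", everything else is "processing"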
    [HttpGet("{id}/status")]
    public string GetOrderStatus(int id)
    {
        if (id > 0 && id <= 20)
        {
            return "shipped";
        }
        else
        {
            return "processing";
        }
    }
}

I also added a class containing the unit tests.

public class TestClass
{
    private ValuesController _vc;

    public TestClass()
    {
        _vc = new ValuesController();
    }

    [Fact]
    public void Test1()
    {
        Assert.Equal("value1", _vc.Get(1));
    }

    [Theory]
    [InlineData(1)]
    [InlineData(3)]
    [InlineData(9)]
    public void Test2(int value)
    {
        Assert.Equal("shipped", _vc.GetOrderStatus(value));
    }
}
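Nothing fancy, but enough to give the pipeline something to verify. With the .NET Core SDK installed, the tests run locally (and later, inside the pipeline) with the standard command:

dotnet test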

Next, I right-clicked my project and added “Docker Support.”

This adds a Docker Compose project to the solution, and a Dockerfile to the project. Because of the relative paths involved, if you try to run docker build from within the project directory that contains the Dockerfile, Docker gets angry. The build is meant to be invoked from the parent directory, with a path to the project’s Dockerfile, like:

docker build -f seroter-api-k8s/Dockerfile .

I wasn’t sure if my pipeline could handle that nuance when containerizing my app, so I just went ahead and moved the generated Dockerfile to the parent directory, as in the screenshot below. From there, I could execute the docker build command directly.
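From the parent directory, a local build and smoke test looks something like this. The image tag is a placeholder of my own, and I’m assuming the container listens on port 80, the default for the ASP.NET Core base images:

# tag name is a placeholder; the app listens on port 80 inside the container
docker build -t seroter-api-k8s .
docker run -p 8080:80 seroter-api-k8s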

You can find the complete project up on my GitHub.

Instantiating an Azure Container Registry

Where should we store our pipeline-created container images? You’ve got lots of options: Docker Hub, self-managed OSS projects like VMware’s Harbor, or cloud-specific services like Azure Container Registry (ACR). Since I’m trying to use all-things Azure, I chose the latter.

It’s easy to set up an ACR instance. Once I provided a couple of parameters via the Azure Portal, I had a running, managed container registry.
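If you’d rather script the registry than click through the portal, the Azure CLI equivalent looks roughly like this. The resource group and registry names are placeholders of my own:

# resource group and registry names below are made-up placeholders
az group create --name concourse-demo --location eastus
az acr create --resource-group concourse-demo --name seroterregistry --sku Basic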

Provisioning an Azure Storage blob

Container images are great. We may also want the raw published .NET project package for archival purposes, or to deploy to non-container runtimes. I chose Azure Storage for this purpose.

I created a blob storage account named seroterbuilds, and then a single blob container named coreapp. This isn’t a Docker container, but just a logical construct to hold blobs.
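For reference, the same setup scripted with the Azure CLI looks roughly like this. The account and container names match the ones above, while the resource group is a placeholder:

# account and container names match the portal setup; resource group is a placeholder
az storage account create --name seroterbuilds --resource-group concourse-demo --kind StorageV2 --sku Standard_LRS
az storage container create --name coreapp --account-name seroterbuilds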

Creating an Azure Kubernetes Cluster

It’s not hard to find a way to run Kubernetes. I think my hair stylist sells a distribution. You can certainly spin up your own vanilla server environment from the OSS bits. Or run it on your desktop with minikube. Or run an enterprise-grade version anywhere with something like VMware PKS. Or run it via managed service with something like Azure Kubernetes Service (AKS).

AKS is easy to set up. I provided the Kubernetes version (1.13.9), node pool size, a service principal for authentication, and enabled HTTP application routing for hosted containers. My three-node cluster was up and running in a few minutes.
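Scripted with the Azure CLI, that provisioning step looks something like the sketch below. The resource group, cluster name, and service principal credentials are all placeholders:

# cluster name, resource group, and service principal values are placeholders
az aks create --resource-group concourse-demo --name seroter-aks --kubernetes-version 1.13.9 --node-count 3 --service-principal <appId> --client-secret <password> --enable-addons http_application_routing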

Starting up a Concourse environment

Finally, Concourse. The Concourse website links to a Docker Compose file that you can download and start with docker-compose up. This brings up the database, worker, and web node components needed to host pipelines.
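At the time of writing, that quick-start boils down to two commands:

curl -O https://concourse-ci.org/docker-compose.yml
docker-compose up -d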

Once Concourse is up and running, the web-based Dashboard is available on localhost:8080.

From there, you can find links (bottom left) to downloads for the command-line tool, called fly. This is the primary UX for deploying and troubleshooting pipelines.

With fly installed, we create a “target” that points to our environment, using the following command. Note that I’m using “rs” (my initials) as the alias, which gets passed to each subsequent fly command.

fly -t rs login -c http://localhost:8080

Once I request a Concourse login (the default username and password are both “test”), I’m routed to the web dashboard to retrieve a token, which gets loaded automatically into the CLI.
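A quick sanity check that the target and token work is asking for the (currently empty) list of pipelines:

fly -t rs pipelines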

At this point, we’ve got a functional ASP.NET Core app, a container registry, an object storage destination, a managed Kubernetes environment, and a running Concourse. In the next post, we’ll build the first part of our Azure-focused pipeline, which reads source code, runs tests, and packages the artifacts.

Author: Richard Seroter

Richard Seroter is the Chief Evangelist at Google Cloud, where he leads the Developer Relations program and the team of developer advocates, developer engineers, outbound product managers, and technical writers who ensure that people find, use, and enjoy Google Cloud. He’s also an instructor at Pluralsight, a frequent public speaker, the author of multiple books on software design and development, a former InfoQ.com editor, and a former 12-time Microsoft MVP for cloud. Richard maintains a regularly updated blog on topics of architecture and solution design and can be found on Twitter as @rseroter.