Google Cloud - Containers and Kubernetes

Ship Your Go Applications Faster To Cloud Run With Ko

As developers work more and more with containers, it is becoming increasingly important to reduce the time it takes to move from source code to a deployed application. To make building container images faster and easier, we have built technologies like Cloud Build, ko, Jib, and Nixery, and added support for Cloud Native Buildpacks. Some of these tools focus specifically on building container images directly from source code, without a Docker engine or a Dockerfile.

The Go programming language in particular makes building container images from source code much easier. This article focuses on how a tool we developed named “ko” can help you deploy services written in Go to Cloud Run faster than “docker build” and “docker push”, and how it compares to alternatives such as Buildpacks.

How does ko work?

ko is an open-source tool developed at Google that helps you build container images from Go programs and push them to container registries (including Container Registry and Artifact Registry). ko does its job without requiring you to write a Dockerfile or even install Docker itself on your machine.

ko is spun off from the go-containerregistry library, which helps you interact with container registries and images, and for good reason: the majority of ko’s functionality is implemented with this Go module. To build and push an image, ko does the following:

  • Downloads a base image from a container registry
  • Statically compiles your Go binary
  • Creates a new container image layer containing the Go binary
  • Appends that layer to the base image to create a new image
  • Pushes the new image to the remote container registry
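The layer in the middle of that list is, at bottom, just a tar archive containing the compiled binary. The following is a toy, stdlib-only sketch of that one step (ko’s actual implementation uses go-containerregistry, and the path and contents here are invented):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// buildLayer packages a single static binary into a tar archive, which is
// the on-disk format of an (uncompressed) container image layer.
func buildLayer(pathInImage string, binary []byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	if err := tw.WriteHeader(&tar.Header{
		Name: pathInImage,
		Mode: 0555, // read and execute
		Size: int64(len(binary)),
	}); err != nil {
		return nil, err
	}
	if _, err := tw.Write(binary); err != nil {
		return nil, err
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	layer, err := buildLayer("/ko-app/my-app", []byte("fake-binary-bytes"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("layer blob: %d bytes\n", len(layer))
}
```

Appending such a blob to a base image, plus an updated manifest, is all that is needed to produce a new image.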

Building and pushing a container image from a Go program is quite simple with ko:

export KO_DOCKER_REPO=gcr.io/YOUR_PROJECT/my-app
ko publish .

In the commands above, we specify the registry to which the resulting image will be published, and then give a Go import path (the same as what we would use in a “go build” command, i.e. the current directory in this case) to refer to the application we want to build.

By default, the ko command uses a secure and lean base image from the Distroless collection of images (the gcr.io/distroless/static:nonroot image), which doesn’t contain a shell or other executables in order to reduce the attack surface of the container. With this base image, the resulting container will have CA certificates, timezone data, and your statically-compiled Go application binary.
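For reference, here is a minimal hypothetical Go service of the kind ko builds for this static base image: no shell, no external dependencies, just a standard-library HTTP listener (the greeting text is invented for illustration):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// hello responds to every request; Cloud Run routes all traffic here.
func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello from a ko-built image!")
}

// listenPort returns the port Cloud Run asks the container to listen on
// (the $PORT environment variable), falling back to 8080 for local runs.
func listenPort() string {
	if p := os.Getenv("PORT"); p != "" {
		return p
	}
	return "8080"
}

func main() {
	http.HandleFunc("/", hello)
	log.Fatal(http.ListenAndServe(":"+listenPort(), nil))
}
```

Because the program compiles to a single static binary, it runs unchanged on the shell-less distroless base.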

ko also works well with Kubernetes. With the “ko resolve” and “ko apply” commands, you can hydrate your YAML manifests: ko automatically replaces the “image:” references in the YAML with the images it builds, so you can deploy the resulting YAML to a Kubernetes cluster with kubectl:

ko resolve -f deployment.yml | kubectl apply -f-

Using ko with Cloud Run

Because of ko’s composable nature, you can use ko with gcloud command-line tools to build and push images to Cloud Run with a single command:

gcloud run deploy SERVICE_NAME --image=$(ko publish IMPORTPATH) [...]

This works because ko outputs the full pushed image reference to the stdout stream, which gets captured by the shell and passed as an argument to gcloud via the --image flag.

Similar to Kubernetes, ko can hydrate your YAML manifests for Cloud Run if you are deploying your services declaratively using YAML:

ko resolve -f service.yml | gcloud beta run services replace - [...]

In the command above, “ko resolve” replaces the Go import paths in the “image: …” values of your YAML file, and sends the output to stdout, which is passed to gcloud over a pipe. gcloud reads the hydrated YAML from stdin (due to the “-” argument) and deploys the service to Cloud Run.

For this to work, the “image:” field in the YAML file needs to list the import path of your Go program using the following syntax:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containerConcurrency: 0
      containers:
      - image: ko://example.com/app/backend # a Go import path
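Conceptually, the resolve step is a substitution over the manifest: each “ko://” import path is replaced by the reference of the image ko has just built and pushed. A toy sketch of that substitution (not ko’s actual resolver, which parses the YAML and triggers the builds itself; the map of built images here is invented):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveImages is a toy stand-in for what "ko resolve" does to a manifest:
// each "ko://IMPORTPATH" reference is replaced with the reference of the
// image built for that import path.
func resolveImages(manifest string, built map[string]string) string {
	for importPath, ref := range built {
		manifest = strings.ReplaceAll(manifest, "ko://"+importPath, ref)
	}
	return manifest
}

func main() {
	manifest := "      - image: ko://example.com/app/backend\n"
	built := map[string]string{
		"example.com/app/backend": "gcr.io/my-project/backend@sha256:d34db33f",
	}
	fmt.Print(resolveImages(manifest, built))
}
```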

Ko, compared to its alternatives

As we mentioned earlier, accelerating the refactor-build-deploy-test loop is crucial for developers iterating on their applications. To illustrate the speed gains made possible by using ko (in addition to the time and system resources you’ll save by not having to write a Dockerfile or run Docker), we compared it to two common alternatives:

  1. Local docker build and docker push commands (with a Dockerfile)
  2. Buildpacks (no Dockerfile, but runs on Docker)

Below is the performance comparison for building a sample Go application into a container image and pushing this image to Artifact Registry.

Note: In this chart, “cold” builds cache no layers, either on the build machine or in the container registry. In contrast, “warm” builds cache layers (where the tool supports caching) and skip pushing layer blobs that already exist in the registry.
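The blob-skipping in “warm” builds works because registries address layer blobs by content digest: before uploading, a client can ask the registry whether a blob with that digest already exists. A minimal sketch of computing such a digest (the sha256 form used by OCI registries; the blob contents here are a stand-in):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// layerDigest returns the OCI-style content digest for a layer blob.
// A registry that already holds a blob with this digest can skip the upload.
func layerDigest(blob []byte) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
}

func main() {
	blob := []byte("pretend this is a gzipped layer tarball")
	fmt.Println(layerDigest(blob))
}
```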

ko vs local Docker Engine: ko wins here by a small margin. This is because the “docker build” command packages your source code into a tarball and sends it to the Docker engine, which runs either natively on Linux or inside a VM on macOS and Windows. Then, Docker builds the image by spinning up a new container for each Dockerfile instruction and snapshotting the filesystem of the resulting container into an image layer. These steps can take a while.

ko does not have these shortcomings; it directly creates the image layers without spinning up any containers and pushes the resulting layer tarballs and image manifest to the registry.

In this approach we built and pushed the Go application using the following command:

docker build -t IMAGE_URL . && docker push IMAGE_URL

ko vs Buildpacks (on local Docker): Buildpacks help you build images for many languages without having to write a Dockerfile. It’s worth noting that Buildpacks still require Docker to work. Buildpacks work by detecting your language and using a “builder image” that has all the build tools installed, before finally copying the resulting artifacts into a smaller image.

In this case, the builder image (gcr.io/buildpacks/builder:v1) is around 500 MB, so downloading it shows up in the “cold” builds. Even for “warm” builds, however, Buildpacks use a local Docker engine, which is already slower than ko; and because Buildpacks run custom logic during the build phase, they are slower than a plain Docker build as well.

In this approach we built and pushed the Go application using the following command:

pack build IMAGE_URL --publish

Conclusion

ko is part of a larger effort to make developers’ lives easier by simplifying how container images are built. With Buildpacks support, you can build container images from many programming languages without writing Dockerfiles at all, and then deploy those images to Cloud Run with a single command.

ko helps you build your Go applications into container images and makes it easy to deploy them to Kubernetes or Cloud Run. ko is not limited to the Google Cloud ecosystem: It can authenticate to any container registry and works with any Kubernetes cluster.

To learn more, make sure to check out ko documentation at the GitHub repository and try deploying some of your own Go services to Cloud Run.

By Jon Johnson(Software Engineer) and Ahmet Alp Balkan(Senior Developer Advocate)
Source: Google Cloud Blog
