GCP Goodies Part 2: Google Deployment Manager with Kubernetes Type Provider

Krzysztof Grajek
SoftwareMill Tech Blog
4 min read · Sep 18, 2019


astrid westvang @ flickr CC 2.0

In the first part of the GCP Goodies series (link here), I wrote about the basics of deployment using simple YAML files as well as Python/Jinja templates, which you can use to write your Infrastructure as Code in a more flexible way. But Deployment Manager doesn’t stop there: it has a very powerful mechanism for managing your infrastructure directly from the same templates, using so-called Type Providers.

Deployment Manager offers the ability to register a third-party API with the Deployment Manager service. After registering an API as a type provider with Deployment Manager, you can use Deployment Manager to deploy resources from the API as types in your configuration.

What this means is that, besides creating your GKE cluster, for example, you can register the Kubernetes API as a type provider and manage your Kubernetes cluster directly from Deployment Manager, without the need to install other tools like the famous (and great, btw) Helm.

Currently supported GCP Type Providers are listed here: https://cloud.google.com/deployment-manager/docs/configuration/supported-gcp-types but you can of course create your own, as long as it adheres to the OpenAPI specification or the Google Discovery document format (I will come to that in the next part of this series of blog posts).

Preparation

Similarly to the steps in Part 1 (link here), activate Cloud Shell and clone the repository (this time we won’t be using the samples provided by Google directly, as we want to extend their examples slightly).

In the Cloud Shell Console type:

git clone git@github.com:softwaremill/gcp-goodies.git

and navigate to part-2 examples with:

cd gcp-goodies/part-2/python

GKE Cluster Deployment

We are going to deploy a Google Kubernetes Engine (GKE) cluster in the same way as we did in Part 1 (link here), but this time we are going to focus on type providers deployed by Deployment Manager. Deploy the cluster the usual way, setting the zone and cluster name first:

gcloud deployment-manager deployments create ${NAME} \
--template cluster.py \
--properties zone:${ZONE}

Now, navigate to Deployment Manager -> Deployments and click the details for the deployment you’ve just created:

As you can see, apart from the cluster.py deployment there are some type providers listed, which were deployed together with our cluster. The list of type providers is declared at the beginning of the cluster.py file:
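A minimal sketch of that declaration, assuming the shape used in Google's public GKE type-provider sample (the repo's file may list more or different API groups):

```python
# Sketch of the endpoint map declared at the top of cluster.py.
# Keys and paths mirror Google's GKE type-provider sample; the
# repo's file may differ.
k8s_endpoints = {
    '': 'api/v1',             # core objects: Services, Pods, ConfigMaps, ...
    '-apps': 'apis/apps/v1',  # apps group: Deployments, ReplicaSets, ...
}
```

Each key becomes a suffix on the type provider's name, and each value identifies the Kubernetes API group the provider exposes.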

There are more available for Kubernetes, and which ones you choose depends on what you want to do with the cluster from your Deployment Manager scripts. Refer to the Kubernetes documentation to find out the available API groups and what you can do in each of them: https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups

The endpoints listed in the k8s_endpoints section are deployed with the script using a for-loop construct:

Notice the descriptorUrl property, where you need to specify the API description in OpenAPI or Google Discovery document format.
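The loop can be sketched roughly like this; the field names follow Google's public GKE type-provider sample, and the exact swagger path and naming in the repo's cluster.py may differ:

```python
# A sketch of the type-provider loop: one Deployment Manager
# typeProvider resource per Kubernetes API group, with descriptorUrl
# pointing at the API description served by the cluster itself.
def generate_type_providers(cluster_name):
    # One endpoint per API group we want to drive from Deployment Manager:
    k8s_endpoints = {'': 'api/v1', '-apps': 'apis/apps/v1'}
    resources = []
    for suffix, api in k8s_endpoints.items():
        resources.append({
            'name': '%s-type%s' % (cluster_name, suffix),
            'type': 'deploymentmanager.v2beta.typeProvider',
            'properties': {
                # Tolerate schema quirks in the generated descriptor:
                'options': {
                    'validationOptions': {
                        'schemaValidation': 'IGNORE_WITH_WARNINGS'
                    }
                },
                # The API description comes from the cluster's endpoint:
                'descriptorUrl': 'https://$(ref.%s.endpoint)/swaggerapi/%s'
                                 % (cluster_name, api),
            },
        })
    return resources
```

The `$(ref.….endpoint)` reference resolves to the cluster's master endpoint once the cluster resource in the same deployment has been created.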

Managing Kubernetes

At this point we have a deployment and our cluster ready and running on the GCP together with the ability to manage the GKE itself. Now it’s time to do some magic on the GKE with Deployment Manager.

Let’s deploy something to our cluster, say nginx. To do that, execute the following commands:
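By analogy with the deployment2.py invocation later in the post, the create command presumably looks like this (template name and property names assumed from that later command):

```
gcloud deployment-manager deployments create deployment \
--template deployment.py \
--properties clusterType:${NAME}-cluster-py-type,image:${IMAGE},port:${PORT}
```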

After nginx gets deployed, we can use kubectl directly and check if everything is fine:

gcloud container clusters get-credentials ${NAME}-cluster-py --zone ${ZONE}
kubectl get deployments
kubectl get services
kubectl get po

You can run kubectl describe on the deployed pod to see its details:

You can, of course, forward the port to access nginx directly:

kubectl port-forward $(\
kubectl get pods --output=jsonpath="{.items[0].metadata.name}") \
9999:${PORT}

and in a second Cloud Shell console window access it with curl:

curl localhost:9999

Adding Kubernetes ConfigMap

We are using two type providers in our deployment.py file:
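A sketch of what deployment.py plausibly contains, reconstructed from the --properties flags used in the post (clusterType, image, port); the repo's actual file and its type-provider names may differ:

```python
# Sketch of deployment.py: an nginx Deployment typed against the
# apps/v1 type provider, and a Service typed against the core api/v1
# provider. Provider suffixes and collection paths are assumptions.
def GenerateConfig(context):
    cluster_type = context.properties['clusterType']
    name = context.env['deployment']
    port = context.properties['port']
    return {'resources': [
        {
            'name': name,
            # Registered apps/v1 endpoint (provider suffix assumed):
            'type': '%s-apps:/apis/apps/v1/namespaces/{namespace}/deployments'
                    % cluster_type,
            'properties': {
                'apiVersion': 'apps/v1',
                'kind': 'Deployment',
                'namespace': 'default',
                'metadata': {'name': name},
                'spec': {
                    'replicas': 1,
                    'selector': {'matchLabels': {'app': name}},
                    'template': {
                        'metadata': {'labels': {'app': name}},
                        'spec': {'containers': [{
                            'name': name,
                            'image': context.properties['image'],
                            'ports': [{'containerPort': port}],
                        }]},
                    },
                },
            },
        },
        {
            'name': '%s-service' % name,
            # Registered core api/v1 endpoint:
            'type': '%s:/api/v1/namespaces/{namespace}/services' % cluster_type,
            'properties': {
                'apiVersion': 'v1',
                'kind': 'Service',
                'namespace': 'default',
                'metadata': {'name': '%s-service' % name},
                'spec': {
                    'selector': {'app': name},
                    'ports': [{'port': port, 'targetPort': port}],
                },
            },
        },
    ]}
```

Note that the resource bodies are ordinary Kubernetes manifests expressed as Python dicts; only the `type` field ties them to the registered type providers.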

As you can see, they use different Kubernetes API endpoints, which had to be defined when registering the type providers with the cluster itself.

We are going to add one more, this time for storing Kubernetes ConfigMaps. To do that, modify the cluster_types section like this:
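A sketch of the extended map, assuming the standard Kubernetes REST collection layout (the exact key names in the repo are assumptions):

```python
# Sketch: the cluster_types section extended with a ConfigMap
# collection. ConfigMaps live in the core api/v1 group, same as
# Services.
cluster_types = {
    'deployment': '/apis/apps/v1/namespaces/{namespace}/deployments',
    'service': '/api/v1/namespaces/{namespace}/services',
    'configmap': '/api/v1/namespaces/{namespace}/configmaps',  # new entry
}
```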

The same approach applies to Secrets, etc.

In the provider template definition, add the ConfigMap variables:
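A hypothetical helper sketching the ConfigMap resource added to deployment2.py; the helper name and data payload are illustrative, not taken from the repo:

```python
# Sketch of the ConfigMap resource in deployment2.py, addressed
# through the core-API (api/v1) endpoint of the registered type
# provider.
def configmap_resource(cluster_type, name):
    return {
        'name': '%s-config' % name,
        'type': '%s:/api/v1/namespaces/{namespace}/configmaps' % cluster_type,
        'properties': {
            'apiVersion': 'v1',
            'kind': 'ConfigMap',
            'namespace': 'default',
            'metadata': {'name': '%s-config' % name},
            # Illustrative payload only:
            'data': {'welcome.txt': 'Hello from Deployment Manager'},
        },
    }
```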

Delete the old deployment from GCP:

gcloud deployment-manager deployments delete deployment

and make sure there are no deployments on the K8S cluster itself, with:

kubectl get deployments

Now you are ready to deploy a new version of our deployment, with ConfigMaps specified:

gcloud deployment-manager deployments create deployment \
--template deployment2.py \
--properties clusterType:${NAME}-cluster-py-type,image:${IMAGE},port:${PORT}

I have created a separate file for you in the GitHub repo, called deployment2.py.

Now, when connected to the cluster, you can check whether the ConfigMap was deployed properly (e.g. with kubectl get configmaps):

In the next part of this series, I’m going to show you how to use type providers with your own custom API, which opens up a ton of new possibilities in the way you can write your deployment code.
