Introduction
Scope
This document explains how to:
- Deploy two Play applications with Docker (an API and a client that uses the API)
- Set up GKE (network & security)
- Run the applications on Google Kubernetes Engine
- Make the client accessible through HTTPS
Given that you have:
- A domain called mydomain.com
- An SSL certificate for that domain
- A multi-project Play application (api & client)
- A GCP account and a project called mydomain-dev
- Docker for Mac installed
GCP Services used
- GKE: Google Kubernetes Engine
- Cloud Armor: security policies that filter network traffic (similar to security groups on AWS)
- Cloud SQL: managed MySQL on GCP
Play concepts used
- sbt-native-packager / DockerPlugin to build Docker images
- Overriding application.conf values with environment variables
- Routes and controllers (for the health-check endpoint)
Set up Play for Docker
Configure the native packager
Add sbt-native-packager to the plugins.sbt file, which enables generating Docker images with sbt:
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.7")
The api and the client run on different ports locally (api: 9100, client: 9200), but on the same port inside their own Docker containers (i.e. one container is created per application, listening on port 9000). We set this configuration in build.sbt.
// Set the application version to be different at every generation of the Docker image => versioning
val appVersion = Option(System.getProperty("version")).getOrElse("%tY%<tm%<td%<tH%<tM".format(new java.util.Date))

// Settings for the API
lazy val api = Project(
  "api",
  file("api")
).enablePlugins(
  PlayScala, DockerPlugin
).settings(
  version := appVersion, // use the generated version as the Docker image tag
  playDefaultPort := 9100,
  dockerExposedPorts := Seq(9000)
)

// Settings for the client
lazy val client = Project(
  "client",
  file("client")
).enablePlugins(
  PlayScala, DockerPlugin
).settings(
  version := appVersion, // use the generated version as the Docker image tag
  playDefaultPort := 9200,
  dockerExposedPorts := Seq(9000)
)
We need to configure the application.conf of each project. Only the api accesses the database, so the DB settings are not set for the client.
## Set a secret key, otherwise the application will not start in production mode (i.e. inside the Docker container)
play.http.secret.key = "XXX"

## Allow requests from any host ("." matches everything), so that pods inside the Kubernetes cluster can reach each other
play.filters {
  hosts {
    allowed = ["."]
  }
}
# Configure ScalikeJDBC so that local defaults are used, overridden by environment variables when deployed (API only)
play.modules.enabled += "scalikejdbc.PlayDBApiAdapterModule"
db.default.driver = "com.mysql.jdbc.Driver"
db.default.url = "jdbc:mysql://localhost:3306/db?useEncoding=true&characterEncoding=utf8&useSSL=false"
db.default.url = ${?DB_URL}
db.default.username = "root"
db.default.username = ${?DB_USER}
db.default.password = ""
db.default.password = ${?DB_PASSWORD}
db.default.hikaricp.maximumPoolSize=10
db.default.hikaricp.readOnly = true
db.default.logSql=true
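The ${?VAR} entries mean that the local defaults are used as-is and are overridden whenever the corresponding environment variables are set (as they will be in the Kubernetes deployment below). As a rough sketch with hypothetical local values, the api can be pointed at another database like this:
# Hypothetical example: override the DB settings through environment variables
DB_URL="jdbc:mysql://127.0.0.1:3306/otherdb?useSSL=false" DB_USER="play" DB_PASSWORD="secret" sbt ";project api; run"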
By default, the GKE load balancer does a health check on GET / and expects to receive a 200. If there is no such endpoint (e.g. in the api), let's add one for GKE.
GET / api.controllers.IndexController.index
package api.controllers

import javax.inject.{Inject, Singleton}
import play.api.mvc.{AbstractController, ControllerComponents}

@Singleton
class IndexController @Inject() (cc: ControllerComponents) extends AbstractController(cc) {
  def index = Action {
    Ok("OK")
  }
}
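Before pushing anything to GCP, it can be worth checking that the generated image actually starts locally. A quick, optional sketch (replace the version tag with the one printed by docker:publishLocal):
# Build the api image locally and run it, mapping container port 9000 to local port 9100
sbt ";project api; docker:publishLocal"
docker run --rm -p 9100:9000 api:201812081000
# The health-check endpoint added above should answer with OK
curl http://localhost:9100/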
Deploy the Docker images to GCP
Once Play has been set up for Docker, we can build the Docker images and push them to the GCP container registry.
GCP_PROJECT=$(gcloud config get-value project)
# Generate the api docker
sbt ";project api; docker:publishLocal"
# The version needs to be adapted to the one generated by the command above
VERSION=201812081000
# Rename the tag
docker tag api:$VERSION asia.gcr.io/$GCP_PROJECT/api:$VERSION
# Push to the GCP container repository
gcloud docker -- push asia.gcr.io/$GCP_PROJECT/api:$VERSION
# Generate the client docker
sbt ";project client; docker:publishLocal"
# The version needs to be adapted to the one generated by the command above
VERSION=201812081000
# Rename the tag
docker tag client:$VERSION asia.gcr.io/$GCP_PROJECT/client:$VERSION
# Push to the GCP container repository
gcloud docker -- push asia.gcr.io/$GCP_PROJECT/client:$VERSION
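To double-check that both images landed in the registry, the pushed images and tags can be listed (optional):
# List the images and tags pushed to the project's container registry
gcloud container images list --repository=asia.gcr.io/$GCP_PROJECT
gcloud container images list-tags asia.gcr.io/$GCP_PROJECT/api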
Set up Kubernetes
A BackendConfig specifies additional configuration for the load balancer backends created for a service. In our case, we use it to attach the Cloud Armor security policy defined later in "Set up the network policy".
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  namespace: mydomain
  name: mydomain-cluster-backend-config
spec:
  securityPolicy:
    name: "mydomain-cluster-security"
The following deployment file specifies the configuration of the api application. Since we use Cloud SQL, we configure the Cloud SQL proxy sidecar as described in the Google documentation.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  namespace: mydomain
  labels:
    app: api
    role: application
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
    spec:
      containers:
        - name: api
          image: asia.gcr.io/mydomain-dev/api:201812081000
          ports:
            - containerPort: 9000
          env:
            - name: DB_URL
              value: "jdbc:mysql://localhost:3306/db?useEncoding=true&characterEncoding=utf8&useSSL=false"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=mydomain-dev:asia-northeast1:mydomain=tcp:3306",
                    "-credential_file=/secrets/cloudsql/sql-proxy.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
The client does not access the database, so no database configuration is needed in its deployment.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client
  namespace: mydomain
  labels:
    app: client
    role: application
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: client
        tier: backend
    spec:
      containers:
        - name: admin
          image: asia.gcr.io/mydomain-dev/client:201812081000
          ports:
            - containerPort: 9000
          env:
            - name: CLIENT_BASE_URL
              value: "http://client:9200"
            - name: API_BASE_URL
              value: "http://api:9100/admin"
The services.yaml file exposes the applications above as services inside the cluster and defines the Ingress that terminates HTTPS.
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: mydomain
  labels:
    app: api
    role: application
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"9100":"mydomain-cluster-backend-config"}}'
spec:
  type: NodePort
  ports:
    - port: 9100
      targetPort: 9000
  selector:
    app: api
    tier: backend
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: client
  namespace: mydomain
  labels:
    app: client
    role: application
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"9200":"mydomain-cluster-backend-config"}}'
spec:
  type: NodePort
  ports:
    - port: 9200
      targetPort: 9000
  selector:
    app: client
    tier: backend
status:
  loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mydomain-loadbalancer
  namespace: mydomain
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: mydomain-cluster-ip
spec:
  tls:
    - secretName: mydomain-ssl
  rules:
    - host: client.mydomain.com
      http:
        paths:
          - backend:
              serviceName: client
              servicePort: 9200
Set up GKE
Create a cluster
## Create cluster
gcloud container --project $GCP_PROJECT \
clusters create "mydomain-cluster" \
--zone "asia-northeast1-a" \
--username "admin" \
--cluster-version "1.10.6-gke.2" \
--machine-type "g1-small" \
--image-type "COS" \
--disk-size "30" \
--scopes "https://www.googleapis.com/auth/cloud-platform" \
--num-nodes "1" \
--network "default" \
--enable-cloud-logging \
--enable-cloud-monitoring \
--subnetwork "default" \
--addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard \
--enable-autorepair
## Define a namespace
kubectl create namespace mydomain
## Create new public IP that will be used by the load balancer
## Only global static IP works with HTTPS
gcloud compute addresses create mydomain-cluster-ip --global
## Check the created public IP
gcloud compute addresses list
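This address is what the Ingress will be reachable on, so it is also what the DNS A record for client.mydomain.com should point to. A small optional sketch to print just the address:
## Print only the reserved address (e.g. to create the DNS A record for client.mydomain.com)
gcloud compute addresses describe mydomain-cluster-ip --global --format="value(address)"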
Set up credential files for Cloud SQL
## See more: https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
# Enable API https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview?project=XXX
# Create service account
# See available roles: https://cloud.google.com/iam/docs/understanding-roles#cloud_sql_roles
gcloud iam service-accounts create sql-proxy --display-name "SQL Proxy"
gcloud iam service-accounts keys create ../credentials/sql-proxy.json --iam-account sql-proxy@$GCP_PROJECT.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding $GCP_PROJECT --member serviceAccount:sql-proxy@$GCP_PROJECT.iam.gserviceaccount.com --role roles/cloudsql.editor
gcloud iam service-accounts get-iam-policy sql-proxy@$GCP_PROJECT.iam.gserviceaccount.com
## Add the service account credentials to the cluster
kubectl create secret generic cloudsql-instance-credentials --from-file=sql-proxy.json --namespace=mydomain
## Set the DB credentials for the cluster
kubectl create secret generic cloudsql-db-credentials --from-literal=username=proxyuser --from-literal=password=XXX --namespace=mydomain
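As an optional sanity check, both secrets should now show up in the namespace:
## List the secrets in the mydomain namespace
kubectl get secrets --namespace=mydomain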
Set up SSL
## Add the SSL certificate to the cluster
kubectl create secret tls mydomain-ssl --key mydomain_unencrypted.key --cert mydomain.crt --namespace=mydomain
Set up the network policy
We restrict access from outside, except from certain trusted IPs.
gcloud beta compute security-policies create mydomain-cluster-security \
--description "Deny public traffic for mydomain-cluster"
gcloud beta compute security-policies rules create 1000 \
--security-policy mydomain-cluster-security \
--description "Deny traffic from outside" \
--src-ip-ranges "*" \
--action "deny-404"
gcloud beta compute security-policies rules create 500 \
--security-policy mydomain-cluster-security \
--description "All traffic from trusted sources" \
--src-ip-ranges "XXX.XXX.XXX.XXX/32" \
--action "allow"
Deploy
## Connect to the cluster
gcloud container clusters get-credentials mydomain-cluster --zone asia-northeast1-a --project $GCP_PROJECT
## Deploy nodes and service
kubectl apply -f deployment-api.yaml
kubectl apply -f deployment-client.yaml
kubectl apply -f backend.yaml
kubectl apply -f services.yaml
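Once everything is applied, a few optional checks confirm that the pods, services and ingress come up (the HTTPS load balancer behind the ingress can take several minutes to provision):
## Check that the api and client pods are running
kubectl get pods --namespace=mydomain
## Check the services and the ingress (the ingress shows the public IP once ready)
kubectl get services,ingress --namespace=mydomain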
Access the application
The client is now accessible at https://client.mydomain.com.
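A quick way to verify this from a whitelisted IP (remember that Cloud Armor answers everything else with a 404):
## Should return a 200 from the client application
curl -I https://client.mydomain.com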
Conclusion
We have seen how to set up and run Play on a GKE cluster. Below are some useful GKE commands for further reference.
Useful GKE commands
## Delete existing cluster
gcloud container clusters delete mydomain-cluster --zone "asia-northeast1-a"
## Get running pods for all namespaces
kubectl get pods --all-namespaces
## Fetch the logs of a container in a specific pod (-p shows the previous instance)
kubectl logs -p POD --container=mycontainer --namespace=mynamespace
## Convert a certificate from RapidSSL to be used with Kubernetes
openssl pkcs7 -print_certs -in old.p7b -out new.crt