
The Basics of Kubernetes

Posted at 2022-02-27

What is Kubernetes?

→ Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications.

How does Kubernetes work?

Architecture

(To be updated...)

Control plane

kube-apiserver

→ This component provides the Kubernetes API server, through which we interact with all Kubernetes resources. The API server is designed to scale horizontally, which means we can run multiple instances and distribute requests among them.

etcd

→ A key-value store (KVS) that stores all information about Kubernetes resources

kube-scheduler 

→ This component is responsible for selecting a node for a newly created pod based on

  • individual and collective resource requirements
  • hardware/software/policy constraints
  • etc…
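
For example, the individual resource requirements come from each container's resources section. A minimal sketch (the pod name and values here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:                # the scheduler only picks nodes that can
        cpu: 250m              # still accommodate these requests
        memory: 64Mi
      limits:                  # hard caps enforced at runtime
        cpu: 500m
        memory: 128Mi
```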

kube-controller-manager

→ This component runs multiple controllers that are bundled into a single process.

  • Node controller: Responsible for noticing and responding when nodes go down
  • Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
  • Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods)
  • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces

cloud-controller-manager (optional)

(To be updated...)

Worker nodes

kubelet

(To be updated...)

kube-proxy

(To be updated...)

Container runtime

→ The container runtime is the software that is responsible for running containers. Kubernetes supports the following runtimes.

  • Docker
  • containerd
  • CRI-O

These container runtimes satisfy the Container Runtime Interface (CRI).

Core concepts

In this section, I will try to explain why Kubernetes has each of these concepts.

If you want to try the following demos by yourself, you can install minikube. My configuration is

> kubectl version -o yaml
clientVersion:
  buildDate: "2021-03-15T09:58:13Z"
  compiler: gc
  gitCommit: e87da0bd6e03ec3fea7933c4b5263d151aafd07c
  gitTreeState: dirty
  gitVersion: v1.20.4-dirty
  goVersion: go1.16.2
  major: "1"
  minor: 20+
  platform: darwin/amd64
serverVersion:
  buildDate: "2021-01-13T13:20:00Z"
  compiler: gc
  gitCommit: faecb196815e248d3ecfb03c680a4507229c2a56
  gitTreeState: clean
  gitVersion: v1.20.2
  goVersion: go1.15.5
  major: "1"
  minor: "20"
  platform: linux/amd64

> minikube version
minikube version: v1.18.1
commit: 09ee84d530de4a92f00f1c5dbc34cead092b95bc

> kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
minikube   Ready    control-plane,master   32d   v1.20.2   192.168.49.2   <none>        Ubuntu 20.04.1 LTS   4.19.121-linuxkit   containerd://1.4.3

Pod

What is a pod?

A “pod” is the minimum deployment unit in Kubernetes. It can contain one or more containers (processes). For example, a pod "app" might contain two containers: the following figure shows an application run by Python while Filebeat collects its logs.

image.png

Let’s study how to create a pod with a simple example.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx

Basically, we need to specify the following four sections in YAML files.

  • apiVersion: the version of the Kubernetes API we are using to create the object
  • kind: the type of object we are trying to create
    • Pod, ReplicaSet, Deployment, Service, etc…
  • metadata: data about the object, such as its name and labels
    • In the labels section, we can use any dictionary in the key: value format (we can set arbitrary labels!)
  • spec: a dictionary with a property called containers; containers is actually a list/array, and each element of the list is defined with -

We can create a pod with the kubectl apply -f <filename>.yaml command.

> kubectl apply -f nginx-pod.yaml
pod/myapp-pod created

> kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          69s

Replicaset

What is a replicaset?

It is for high availability, load balancing, and scaling.

  • What do we need to do if we want to increase the number of application pods to deal with high traffic?
  • Do we really want to execute the kubectl create -f … command many times?

Usually not. The replicaset manages the number of pods for us, so we can easily control how many pods are running.

image.png

There is a similar resource named ReplicationController, but it is recommended to use ReplicaSet instead. The YAML file looks like

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end

  • In the template section, we configure what kind of pod the replicaset should manage.
  • The selector section helps the replicaset identify which pods fall under it.
    • With the information in this section, the replicaset recognizes which pods it should control.

Let’s deploy and play with it!

> kubectl apply -f nginx-replicaset.yaml
replicaset.apps/myapp-replicaset created

> kubectl get pod,rs
NAME                         READY   STATUS    RESTARTS   AGE
pod/myapp-pod                1/1     Running   0          9m28s
pod/myapp-replicaset-8k2zw   1/1     Running   0          27s
pod/myapp-replicaset-ccmnv   1/1     Running   0          27s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/myapp-replicaset   3         3         3       27s

> vi nginx-pod.yaml

> bat nginx-pod.yaml
───────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       │ File: nginx-pod.yaml
───────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1   │ apiVersion: v1
   2   │ kind: Pod
   3   │ metadata:
   4   │   name: myapp-pod
   5   │   labels:
   6   │     app: myapp
   7   │     type: back-end <- ** I changed this **
   8   │ spec:
   9   │   containers:
  10   │   - name: nginx-container
  11   │     image: nginx
───────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

> kubectl apply -f nginx-pod.yaml
pod/myapp-pod configured

> kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-pod                1/1     Running   0          10m
myapp-replicaset-8k2zw   1/1     Running   0          96s
myapp-replicaset-ccmnv   1/1     Running   0          96s
myapp-replicaset-zdjxq   1/1     Running   0          10s

Can we remove the template section? Is it enough for the replicaset to know only the selector and matchLabels sections?

→ No, because the replicaset needs to know what kind of pod to create when the actual number of pods differs from the number in the YAML file.

Deployment

What is a deployment?

It is for

  • rolling out pods
  • storing the history of replicasets
  • rolling back if we find problems after updating the image
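
The rollout behavior itself can be tuned through the strategy field. A hypothetical excerpt (not used in the demo below):

```yaml
# Excerpt of a Deployment spec: rolling-update tuning (values are examples).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most 1 extra pod during a rollout
      maxUnavailable: 0     # never drop below the desired replica count
```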

image.png

The YAML file looks like

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end

Let’s deploy and play with it.

> kubectl apply -f nginx-deployment.yaml
deployment.apps/myapp-deployment created

> kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deployment   3/3     3            3           26s

> kubectl rollout history deploy myapp-deployment
deployment.apps/myapp-deployment
REVISION  CHANGE-CAUSE
1         <none>

> vi nginx-deployment.yaml
(changed image from nginx to nginx:alpine)

> kubectl apply -f nginx-deployment.yaml
deployment.apps/myapp-deployment configured

> kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-5f7f854c46-4g6tv   1/1     Running   0          68s
myapp-deployment-5f7f854c46-fdlj8   1/1     Running   0          55s
myapp-deployment-5f7f854c46-gnrb7   1/1     Running   0          58s
myapp-pod                           1/1     Running   0          39m

> kubectl rollout history deploy myapp-deployment
deployment.apps/myapp-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

> kubectl describe pod myapp-deployment-5f7f854c46-4g6tv | grep image
  Normal  Pulling    110s  kubelet            Pulling image "nginx:alpine"
  Normal  Pulled     103s  kubelet            Successfully pulled image "nginx:alpine" in 7.367998s

> kubectl rollout undo deploy myapp-deployment --to-revision 1
deployment.apps/myapp-deployment rolled back

> kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-pod                1/1     Running   0          40m
myapp-replicaset-4nr8m   1/1     Running   0          14s
myapp-replicaset-5bnsh   1/1     Running   0          11s
myapp-replicaset-gt5b2   1/1     Running   0          17s

> kubectl describe pod myapp-replicaset-4nr8m | grep image
  Normal  Pulling    38s   kubelet            Pulling image "nginx"
  Normal  Pulled     37s   kubelet            Successfully pulled image "nginx" in 1.6459043s

  • We successfully updated the image from nginx to nginx:alpine
    • A new revision was created
  • We successfully rolled back to the previous revision (image)

Basically, we create a deployment to manage applications (rather than managing pods and replicasets directly).

> kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-pod                1/1     Running   1          23h
myapp-replicaset-4nr8m   1/1     Running   1          22h
myapp-replicaset-5bnsh   1/1     Running   1          22h
myapp-replicaset-gt5b2   1/1     Running   1          22h

> vi nginx-deployment.yaml

> cat nginx-deployment.yaml | grep replicas
  replicas: 1

> kubectl apply -f nginx-deployment.yaml
deployment.apps/myapp-deployment configured

> kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-5f7f854c46-hpc8k   1/1     Running   0          15s
myapp-pod                           1/1     Running   1          23h

  • If we have many pods, how can other applications communicate with them?
    • What is the mechanism for selecting one pod for the applications?
  • If we change the number of replicas, some of the existing pods are deleted and new pods are created.
    • This means the IP addresses of the previous pods change.
    • So how can other applications keep communicating with the pods?

Namespace

What is a namespace?

It is a virtual partition that separates resources so that they do not interfere with each other.

The YAML file looks like

apiVersion: v1
kind: Namespace
metadata:
  name: dev

If we want to specify a namespace for a resource, we can include it in the metadata section. For example, to create an nginx pod in the dev namespace, the YAML file looks like

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: dev
spec:
  containers:
  - name: nginx-container
    image: nginx

We can create a deployment for each namespace so that we can control Kubernetes resources in each environment.

image.png

Let’s create namespaces!

> kubectl get ns
NAME                STATUS   AGE
argocd              Active   10h
crossplane-system   Active   2d4h
default             Active   34d
kube-node-lease     Active   34d
kube-public         Active   34d
kube-system         Active   34d
lens-metrics        Active   19d

> kubectl create ns dev
namespace/dev created

> kubectl create ns prod
namespace/prod created

> kubectl apply -f nginx-deployment.yaml -n dev
deployment.apps/myapp-deployment created

> kubectl apply -f nginx-deployment.yaml -n prod
deployment.apps/myapp-deployment created

> kubectl get deploy -A
NAMESPACE           NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
argocd              argocd-dex-server                      1/1     1            1           10h
argocd              argocd-redis                           1/1     1            1           10h
argocd              argocd-repo-server                     1/1     1            1           10h
argocd              argocd-server                          1/1     1            1           10h
crossplane-system   crossplane                             1/1     1            1           2d4h
crossplane-system   crossplane-provider-aws-4ebd5d8c7cb7   1/1     1            1           2d4h
crossplane-system   crossplane-rbac-manager                1/1     1            1           2d4h
dev                 myapp-deployment                       1/1     1            1           22s
kube-system         coredns                                1/1     1            1           34d
lens-metrics        kube-state-metrics                     1/1     1            1           19d
prod                myapp-deployment                       1/1     1            1           15s

> kubectl get pod -n dev
NAME                               READY   STATUS    RESTARTS   AGE
myapp-deployment-dc88b646d-sdb49   1/1     Running   0          45s

> kubectl get pod -n prod
NAME                               READY   STATUS    RESTARTS   AGE
myapp-deployment-dc88b646d-ngxjq   1/1     Running   0          39s

> vi nginx-deployment.yaml

> kubectl apply -f nginx-deployment.yaml -n prod
deployment.apps/myapp-deployment configured

> kubectl get pod -n prod
NAME                               READY   STATUS    RESTARTS   AGE
myapp-deployment-dc88b646d-6f2j9   1/1     Running   0          9s
myapp-deployment-dc88b646d-9vfl5   1/1     Running   0          9s
myapp-deployment-dc88b646d-mfskf   1/1     Running   0          9s
myapp-deployment-dc88b646d-ngxjq   1/1     Running   0          68s
myapp-deployment-dc88b646d-www27   1/1     Running   0          9s

> kubectl get pod -n dev
NAME                               READY   STATUS    RESTARTS   AGE
myapp-deployment-dc88b646d-sdb49   1/1     Running   0          6m32s

We can control resources (e.g. the number of applications) per namespace, that is, per environment.
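
One way to enforce such per-environment limits declaratively is a ResourceQuota (a sketch; the name and numbers are hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota       # hypothetical name
  namespace: dev
spec:
  hard:
    pods: "10"          # at most 10 pods may exist in the dev namespace
```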

Service

What is a service?

It is for communication with pods. When we manage applications (pods) with a deployment, a service enables us to communicate with the pods by name resolution. It exposes pods (applications) inside or outside of the Kubernetes cluster. There are four types (the fourth, ExternalName, is not covered here).

  • ClusterIP → This is for exposing only inside the cluster

image.png

  • NodePort → This is for exposing pods both inside and outside the cluster. To access from outside, we need the node's IP address and the port the service opens on each node.

image.png

  • LoadBalancer → This is for exposing inside and outside the cluster with a load balancer provided by cloud provider (e.g. ELB for AWS).

image.png

The YAML file looks like

apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
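
For comparison, a LoadBalancer service differs mainly in the type field; the cloud provider provisions the external load balancer. A sketch (the service name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer-service   # hypothetical name
spec:
  type: LoadBalancer                 # cloud provider creates e.g. an ELB
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```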

Let’s deploy

ClusterIP

With this type of service,

  • We can communicate with pods from inside of the cluster
  • We can’t communicate with pods from outside of the cluster

(After cleaning up all resources...)

> kubectl apply -f nginx-pod.yaml
pod/myapp-pod created

> kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          14s

> kubectl apply -f nginx-clusterip-service.yaml
service/myapp-clusterip-service created

> kubectl get service
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes                ClusterIP   10.96.0.1       <none>        443/TCP   33d
myapp-clusterip-service   ClusterIP   10.104.184.49   <none>        80/TCP    27s

> kubectl run --restart Never --image curlimages/curl:7.68.0 -it --rm curl sh
If you don't see a command prompt, try pressing enter.
/ $ curl myapp-clusterip-service:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ $ exit
pod "curl" deleted

> curl myapp-clusterip-service:80
curl: (6) Could not resolve host: myapp-clusterip-service

NodePort

With this type of service, we can reach the pods from inside the cluster and also from outside via a node port.

> cat nginx-nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport-service
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

> kubectl apply -f nginx-nodeport-service.yaml
service/myapp-nodeport-service created

> kubectl get service
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                ClusterIP   10.96.0.1        <none>        443/TCP        33d
myapp-clusterip-service   ClusterIP   10.104.184.49    <none>        80/TCP         7m2s
myapp-nodeport-service    NodePort    10.108.229.252   <none>        80:31459/TCP   3m7s

> kubectl run --restart Never --image curlimages/curl:7.68.0 -it --rm curl sh
If you don't see a command prompt, try pressing enter.
/ $ curl myapp-nodeport-service:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ $ exit
pod "curl" deleted

> curl myapp-nodeport-service:80
curl: (6) Could not resolve host: myapp-nodeport-service

> minikube service myapp-nodeport-service --url
🏃  Starting tunnel for service myapp-nodeport-service.
|-----------|------------------------|-------------|------------------------|
| NAMESPACE |          NAME          | TARGET PORT |          URL           |
|-----------|------------------------|-------------|------------------------|
| default   | myapp-nodeport-service |             | http://127.0.0.1:55776 |
|-----------|------------------------|-------------|------------------------|
http://127.0.0.1:55776
❗  Because the Docker driver is running on darwin, a terminal needs to stay open to run it.

You can visit http://127.0.0.1:55776

image.png

ConfigMap

What is a configmap?

It is a resource for defining configuration values (e.g. environment variables) as key-value pairs. Usually, we want different settings for dev and prod.

image.png

The YAML file looks like

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  POD_ENV: dev

When you use a configmap in a pod, you can write the YAML file like

> cat nginx-pod-with-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    env:
      - name: ENV
        valueFrom:
          configMapKeyRef:
            name: my-config
            key: POD_ENV

An environment variable ENV is defined in the nginx pod, and its value comes from the POD_ENV key in the my-config configmap.
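
If you want to import every key of the configmap at once instead of listing keys one by one, envFrom can be used. A hypothetical excerpt of the container definition:

```yaml
# Excerpt: import all keys of my-config as environment variables.
spec:
  containers:
  - name: nginx-container
    image: nginx
    envFrom:
    - configMapRef:
        name: my-config
```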

Let’s play with it!

  • Create a configmap named my-config
  • Deploy a pod named nginx with my-config configmap
  • Check the following
    • The defined values in my-config appear as environment variables

> kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      34d

> kubectl apply -f my-config-configmap.yaml
configmap/my-config created

> kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      34d
my-config          1      8s

> kubectl describe cm my-config
Name:         my-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
POD_ENV:
----
dev
Events:  <none>

> kubectl apply -f nginx-pod-with-configmap.yaml
pod/nginx created

> kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   2          23h
nginx       1/1     Running   0          10s

> kubectl exec -it nginx -- sh
# env | grep ENV
ENV=dev
# exit

Secret

What is a secret?

It is similar to a configmap. As the name suggests, it is used to define credentials as key-value pairs. To define secrets, we need to encode the values with base64. This means that

  • everyone who has permission to get secrets can see the actual values!!!

  • it’s hard to manage credentials with git (GitHub)
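
Note that base64 is a reversible encoding, not encryption, so anyone can undo it. A quick check in the shell (GNU coreutils decodes with -d; macOS uses -D):

```shell
# Encoding and decoding are symmetric; no key is involved.
printf 'admin' | base64        # -> YWRtaW4=
printf 'YWRtaW4=' | base64 -d  # -> admin
```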

image.png

There is a way to solve this problem. We can use SealedSecret.

The YAML file looks like

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm

When you use a secret in a pod, you can write the YAML file like

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx-with-secret
spec:
  containers:
  - image: nginx
    name: nginx-with-secret
    volumeMounts:
    - name: my-secret-volume
      mountPath: "/my-secret"
      readOnly: true
  volumes:
  - name: my-secret-volume
    secret:
      secretName: my-secret

Here I mount the secret as a volume, but of course you can also consume it with secretKeyRef, etc…
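
For reference, consuming the same secret as an environment variable with secretKeyRef would look roughly like this (a hypothetical excerpt; the variable name USERNAME is made up):

```yaml
# Excerpt: expose my-secret's username key as an environment variable.
spec:
  containers:
  - name: nginx-with-secret
    image: nginx
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: username
```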

Let’s play with it!

  • Create a secret named my-secret
  • Use my-secret as a file in nginx-with-secret pod
  • Check the following
    • the defined values in the secret don’t appear as environment variables
    • the defined values appear as files under the /my-secret directory
    • we can’t write data to those files
    • the defined values can be decoded with the base64 command

> kubectl get secret,pod
NAME                              TYPE                                  DATA   AGE
secret/dashboard-sa-token-tn9gt   kubernetes.io/service-account-token   3      13d
secret/default-token-zdfwg        kubernetes.io/service-account-token   3      34d

NAME            READY   STATUS    RESTARTS   AGE
pod/myapp-pod   1/1     Running   2          24h
pod/nginx       1/1     Running   0          33m

> kubectl apply -f my-secret-secret.yaml
secret/my-secret created

> kubectl apply -f nginx-pod-with-secret.yaml
pod/nginx-with-secret created

> kubectl get secret,pod
NAME                              TYPE                                  DATA   AGE
secret/dashboard-sa-token-tn9gt   kubernetes.io/service-account-token   3      13d
secret/default-token-zdfwg        kubernetes.io/service-account-token   3      34d
secret/my-secret                  Opaque                                2      27s

NAME                    READY   STATUS    RESTARTS   AGE
pod/myapp-pod           1/1     Running   2          24h
pod/nginx               1/1     Running   0          34m
pod/nginx-with-secret   1/1     Running   0          6s

> kubectl describe secret my-secret
Name:         my-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  12 bytes
username:  5 bytes

> kubectl exec -it nginx-with-secret -- sh
# env | grep username
# env | grep password
# ls
bin   dev                  docker-entrypoint.sh  home  lib64  mnt        opt   root  sbin  sys  usr
boot  docker-entrypoint.d  etc                   lib   media  my-secret  proc  run   srv   tmp  var
# cd my-secret
# ls
password  username
# cat username
admin# cat password
1f2d1e2e67df# cat 'aaa' > username
sh: 3: cannot create username: Read-only file system
# exit

> kubectl get secret my-secret -o yaml | grep username
  username: YWRtaW4=
      {"apiVersion":"v1","data":{"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="},"kind":"Secret","metadata":{"annotations":{},"name":"my-secret","namespace":"default"},"type":"Opaque"}
        f:username: {}
        
> echo 'YWRtaW4=' | base64 -D
admin%

SealedSecret

How does SealedSecret manage credentials?

  • Create a sealed secret with the kubeseal command
  • The SealedSecretController creates the actual secret from the sealed secret
  • The created sealed secret cannot be decrypted by users (only the controller can)
    • so it’s OK to manage it with git (GitHub)

image.png

Let’s play with it

Preparation

> brew install kubeseal
==> Downloading https://homebrew.bintray.com/bottles/kubeseal-0.15.0.big_sur.bottle.tar.gz
Already downloaded: /Users/kanata-miyahana/Library/Caches/Homebrew/downloads/cdf181d52146ab05e538797d34af723dfb6cb2f937096a984cb276d29a3e349c--kubeseal-0.15.0.big_sur.bottle.tar.gz
==> Pouring kubeseal-0.15.0.big_sur.bottle.tar.gz
🍺  /usr/local/Cellar/kubeseal/0.15.0: 5 files, 31.9MB

> kubeseal --version
kubeseal version: v0.15.0

> kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.15.0/controller.yaml
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/sealed-secrets-service-proxier created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/sealed-secrets-key-admin created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/secrets-unsealer created
deployment.apps/sealed-secrets-controller created
customresourcedefinition.apiextensions.k8s.io/sealedsecrets.bitnami.com created
service/sealed-secrets-controller created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/sealed-secrets-controller created
serviceaccount/sealed-secrets-controller created
role.rbac.authorization.k8s.io/sealed-secrets-service-proxier created
rolebinding.rbac.authorization.k8s.io/sealed-secrets-controller created

> kubectl get deploy -n kube-system sealed-secrets-controller
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
sealed-secrets-controller   1/1     1            1           2m14s

The YAML file of the secret

> cat my-sealed-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-sealed-secret
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW5fcGFzcw==

Check the following

  • there is no secret or sealed secret related to my-sealed-secret
  • we can’t get the actual values from the sealed secret
  • the sealed secret generates an actual secret

> kubectl get sealedsecrets.bitnami.com,secrets
NAME                              TYPE                                  DATA   AGE
secret/dashboard-sa-token-tn9gt   kubernetes.io/service-account-token   3      13d
secret/default-token-zdfwg        kubernetes.io/service-account-token   3      34d

> kubeseal -o yaml < my-secret-from-sealed-secret.yaml > my-sealed-secret.yaml

> cat my-sealed-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: my-secret-from-sealed-secret
  namespace: default
spec:
  encryptedData:
    password: AgBaJ6zrfZcS4NtbHFvqNJPtqouqlOTHvZSbNBq/rhobdWQ5wt72X19uUTyKpreJmKm2TrYIVzeHmAtjCFzU3GWhrVxCbeU6SJtlpIUbPpn8K30oz9s+JdzXv+aK8HVriZPTrVh2lDnZatxQ2qrkI0yX+dDejxffPxzVtP70GITOeHAfrIGHcDJmtBckeys1UJJ1APFJeBawHd4ZLcPtyuc259Whh/fVGGZkvSjg2FX2i5ZTWgaYxK9+hJDxNrL119VX1NjqDXj+4GY7uZvTY6kX5bldUmrgRHNnn2TLaYC/Y5SHWxA1jB0c7llWC0O3cyJsGUggUWNYcJwHFR/3jXmn1ZMJIw3vWXoCxR+t240DRDhCBT6Xa3Spm7EeHWTPnQbtccHHJV+otcSpeSqbRHeGTUxtko/ZKdxNZFR8uWnYpdZWvk+V8gj+GAD8myj8CLM7g/PTABuqQSI4UkaFawdgWWtC+OOqyO3StFv7IHn4EhndNtMIxqzYD6tCamEXpbuLBEpRREZYVeFJgBXMl0cbv9TAm/XOZdkBUYqByJvUZlvlH0rdsskEFSosbQmcpamyKqirXwUQWWWP7ajj8px8ns2yFaf0cTUFkyVEMDB+oHSx2julKObiP4RHeVsLS0OcgyfplgvY+6DKrEQXpOYPAtEZ5dxS2NcpMf240CyXWo7YDMerV8xnDrvDGUuj/olMLEBoCCRPFDef
    username: AgCLffsJj45/J/KMjzv8hBHl5jDM443Z5lFV5NqghQw/BkRoodTgkcjxV15xN9ytNBNmVNVif0VB6z4X6avNAU9XxpubZMnW00ceSC+l/JyreAZXOmZgx7PcB9FnsDwnJz8X/9ntdaIlkS46FM0U/mXK74cJ8K9WaZDRm041gA5ugT7bve0ZSkuVQtfsyR8hV7ZNr2rP5tmWCYKXQXL3Hw0FzJOYv5M+s9ox7OPqMkJjChl8kTvGKlcsqTZdujZaZv7ZJkntNSRjFIKazcxJXfDnqu00SFvqCI5VpXhflOu+v0Yoe6TOdVrZeRvI4bsUoW5iA9JQ/tL6R9e8Gl92SghhISCQWNjwWt0a53o9vYXsvtszICzrU5c6XjYXkIuk7dWbZ/uB4RJdovOqKj/UKMLFCRbqmECtadjiVZSJChJ5HW/UoEnnkVZpS/hF5gu0oiFjIoflhXir3xrQ8gTAkXyma0zQQB/kEByzxwUCGemxlkZ6WZkYzPBPajB7Eij+8d1e9yYQyQpSKnRICXD/0dGKz1oVyzhoJc31tEJjxDg/pPLsGWU/1WCtr+GTWhB5KaM5+MkZ/qF2VaPg71L+EX8wUfJCmreCX8jlGe+1DE4S1hmHMmJ9PjD8neAtdn+UjzCtOigDP2CE7UOssjz/3EwOpQYQ855U8xpoSC8j8QocTQDG9oj4MhezYnhVXaJnoMhVHQUaBg==
  template:
    metadata:
      creationTimestamp: null
      name: my-secret-from-sealed-secret
      namespace: default
    type: Opaque
    
> echo 'AgCLffsJj45/J/KMjzv8hBHl5jDM443Z5lFV5NqghQw/BkRoodTgkcjxV15xN9ytNBNmVNVif0VB6z4X6avNAU9XxpubZMnW00ceSC+l/JyreAZXOmZgx7PcB9FnsDwnJz8X/9ntdaIlkS46FM0U/mXK74cJ8K9WaZDRm041gA5ugT7bve0ZSkuVQtfsyR8hV7ZNr2rP5tmWCYKXQXL3Hw0FzJOYv5M+s9ox7OPqMkJjChl8kTvGKlcsqTZdujZaZv7ZJkntNSRjFIKazcxJXfDnqu00SFvqCI5VpXhflOu+v0Yoe6TOdVrZeRvI4bsUoW5iA9JQ/tL6R9e8Gl92SghhISCQWNjwWt0a53o9vYXsvtszICzrU5c6XjYXkIuk7dWbZ/uB4RJdovOqKj/UKMLFCRbqmECtadjiVZSJChJ5HW/UoEnnkVZpS/hF5gu0oiFjIoflhXir3xrQ8gTAkXyma0zQQB/kEByzxwUCGemxlkZ6WZkYzPBPajB7Eij+8d1e9yYQyQpSKnRICXD/0dGKz1oVyzhoJc31tEJjxDg/pPLsGWU/1WCtr+GTWhB5KaM5+MkZ/qF2VaPg71L+EX8wUfJCmreCX8jlGe+1DE4S1hmHMmJ9PjD8neAtdn+UjzCtOigDP2CE7UOssjz/3EwOpQYQ855U8xpoSC8j8QocTQDG9oj4MhezYnhVXaJnoMhVHQUaBg==' | base64 -D
}       ';0QU
>12Bc        ?DhW^q7ܭ4fTbEA>OWƛdGH/xW:f`dzg<''?u%.:e     ViћN5n>۽JKB!WMjٖ Ar
|;*W,6]6Zf&I5$cI]4HUx_뾿F({uZybPG׼_va! XZz=3 ,S:^6g]*?( @iU
yoԠIViKE
        !c"x|kL@鱖FzYOj0{(^&
R*tH    pZ8h%Bc8?e?`Zy)9vUR0QB_
                               N2b}>0-v0:(?`C<LThH/#
M2bxU]gU% 

> kubectl apply -f my-sealed-secret.yaml
sealedsecret.bitnami.com/my-secret-from-sealed-secret created

> kubectl get sealedsecrets.bitnami.com,secrets
NAME                                                    AGE
sealedsecret.bitnami.com/my-secret-from-sealed-secret   17s

NAME                                  TYPE                                  DATA   AGE
secret/dashboard-sa-token-tn9gt       kubernetes.io/service-account-token   3      13d
secret/default-token-zdfwg            kubernetes.io/service-account-token   3      34d
secret/my-secret-from-sealed-secret   Opaque                                2      17s

> kubectl get secret my-secret-from-sealed-secret -o json | jq -r '.data.username' | base64 -D
admin%

> kubectl get secret my-secret-from-sealed-secret -o json | jq -r '.data.password' | base64 -D
admin_pass%

Persistent Volume (PV) and Persistent Volume Claim (PVC)

What are PV and PVC?

They are for storing data at the cluster level, independent of any pod or other resource.

Basically, data in a pod does not survive after the pod is deleted. Let’s see it!

> kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   3          33h

> kubectl exec -it myapp-pod -- sh
# ls
bin   dev                  docker-entrypoint.sh  home  lib64  mnt  proc  run   srv  tmp  var
boot  docker-entrypoint.d  etc                   lib   media  opt  root  sbin  sys  usr
# echo "Hello world in myapp-pod" > text.txt
# ls
bin   dev                  docker-entrypoint.sh  home  lib64  mnt  proc  run   srv  text.txt  usr
boot  docker-entrypoint.d  etc                   lib   media  opt  root  sbin  sys  tmp       var
# cat text.txt
Hello world in myapp-pod
# exit

> kubectl delete pod myapp-pod
pod "myapp-pod" deleted

> kubectl apply -f nginx-pod.yaml
pod/myapp-pod created

> kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          11s

> kubectl exec -it myapp-pod -- sh
# ls
bin   dev                  docker-entrypoint.sh  home  lib64  mnt  proc  run   srv  tmp  var
boot  docker-entrypoint.d  etc                   lib   media  opt  root  sbin  sys  usr
# ls | grep text.txt
# exit
command terminated with exit code 1

What is the difference between PV and PVC?

  • PV → volume on the cluster
  • PVC → a request by a Kubernetes resource (e.g. a pod) to use (part of) a PV

(figure: a pod consumes storage on a PV through a PVC)

The YAML files look like this:

PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual
  capacity:
    storage: 100M
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pvc"

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10M
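This claim binds to my-pv because the storageClassName matches (manual), the requested access mode (ReadWriteOnce) is offered, and the PV's 100M capacity covers the 10M request. If you ever need to pin a claim to one specific PV, the PVC spec also accepts a volumeName field — a sketch reusing the names above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: manual
  volumeName: my-pv      # bind this claim to the PV named my-pv explicitly
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10M
```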

To use this PVC from a pod, the YAML for the pod looks like this:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx-with-pvc
  name: nginx-with-pvc
spec:
  containers:
  - image: nginx
    name: nginx-with-pvc
    volumeMounts:
    - name: my-pv
      mountPath: /mnt/pvc
  volumes:
    - name: my-pv
      persistentVolumeClaim:
        claimName: my-pvc

Let’s play with them!

Preparation

> minikube ssh
Last login: Thu Mar 25 00:12:54 2021 from 192.168.49.1
docker@minikube:~$ ls
docker@minikube:~$ whomai
-bash: whomai: command not found
docker@minikube:~$ whoami
docker
docker@minikube:~$ cd ../../
docker@minikube:/$ ls
Release.key  boot  dev         etc   kic.txt  lib    lib64   media  opt   root  sbin  sys  usr
bin          data  docker.key  home  kind     lib32  libx32  mnt    proc  run   srv   tmp  var
docker@minikube:/$ cd mnt/
docker@minikube:/mnt$ ls
docker@minikube:/mnt$ sudo mkdir pvc
docker@minikube:/mnt$ cd pvc/
docker@minikube:/mnt/pvc$ sudo touch text.txt
docker@minikube:/mnt/pvc$ ls -la
total 8
drwxr-xr-x 2 root root 4096 Mar 26 00:03 .
drwxr-xr-x 1 root root 4096 Mar 26 00:03 ..
-rw-r--r-- 1 root root    0 Mar 26 00:03 text.txt
docker@minikube:/mnt/pvc$ sudo chown -R docker:docker .
docker@minikube:/mnt/pvc$ ls -la
total 8
drwxr-xr-x 2 docker docker 4096 Mar 26 00:03 .
drwxr-xr-x 1 root   root   4096 Mar 26 00:03 ..
-rw-r--r-- 1 docker docker    0 Mar 26 00:03 text.txt
docker@minikube:/mnt/pvc$ echo "Hello World" > text.txt
docker@minikube:/mnt/pvc$ cat text.txt
Hello World
docker@minikube:/mnt/pvc$ exit
logout

Create the PV, the PVC, and a pod that uses the PVC. Note that /mnt/pvc was created inside the minikube node, not on your host machine.

Check the following:

  • PVC my-pvc is bound to PV my-pv
> kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
persistentvolume/pvc-00566281-3afb-4e39-931f-79e43940691b   20G        RWO            Delete           Bound    lens-metrics/data-prometheus-0   standard                20d

> kubectl apply -f my-pv-persistent-volume.yaml
persistentvolume/my-pv created

> kubectl apply -f my-pvc-persistent-volume-claim.yaml
persistentvolumeclaim/my-pvc created

> kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                            STORAGECLASS   REASON   AGE
persistentvolume/my-pv                                      100M       RWO            Retain           Bound         default/my-pvc                   manual                  84s
persistentvolume/pvc-00566281-3afb-4e39-931f-79e43940691b   20G        RWO            Delete           Terminating   lens-metrics/data-prometheus-0   standard                20d

NAME                           STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/my-pvc   Bound    my-pv    100M       RWO            manual         8s

> kubectl apply -f nginx-pod-with-pvc.yaml
pod/nginx-with-pvc created

> kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
myapp-pod        1/1     Running   0          49m
nginx-with-pvc   1/1     Running   0          34s

> kubectl exec -it nginx-with-pvc -- sh
# ls
bin   dev                  docker-entrypoint.sh  home  lib64  mnt  proc  run   srv  tmp  var
boot  docker-entrypoint.d  etc                   lib   media  opt  root  sbin  sys  usr
# ls /mnt
pvc
# ls /mnt/pvc
text.txt
# cat /mnt/pvc/text.txt
Hello World
# echo "Hello World from nginx-with-pvc pod" >> text_in_root_dir.txt
# cat text_in_root_dir.txt
Hello World from nginx-with-pvc pod
# ls
bin   dev                  docker-entrypoint.sh  home  lib64  mnt  proc  run   srv  text_in_root_dir.txt  usr
boot  docker-entrypoint.d  etc                   lib   media  opt  root  sbin  sys  tmp                   var
# echo "Hello World from nginx-with-pvc pod" >> /mnt/pvc/text.txt
# cat /mnt/pvc/text.txt
Hello World
Hello World from nginx-with-pvc pod
# echo "Hello World from nginx-with-pvc pod" > /mnt/pvc/text_in_mount_dir.txt
# exit

> kubectl delete -f nginx-pod-with-pvc.yaml
pod "nginx-with-pvc" deleted

> kubectl apply -f nginx-pod-with-pvc.yaml
pod/nginx-with-pvc created

> kubectl exec -it nginx-with-pvc -- sh
# ls
bin   dev                  docker-entrypoint.sh  home  lib64  mnt  proc  run   srv  tmp  var
boot  docker-entrypoint.d  etc                   lib   media  opt  root  sbin  sys  usr
# ls | grep *.txt
# ls /mnt/pvc/
text.txt  text_in_mount_dir.txt
# cat /mnt/pvc/text.txt
Hello World
Hello World from nginx-with-pvc pod
# cat /mnt/pvc/text_in_mount_dir.txt
Hello World from nginx-with-pvc pod
# exit
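The behavior we just saw can be mimicked with plain directories, no cluster required: one temporary directory plays the pod's ephemeral filesystem and another plays the PV. This is only a toy analogy to the lifecycle, not how Kubernetes actually implements volumes:

```shell
#!/bin/sh
pv=$(mktemp -d)    # plays the PersistentVolume: outlives the "pod"
pod=$(mktemp -d)   # plays the pod's ephemeral root filesystem

echo "Hello World" > "$pv/text.txt"          # data written to the volume
echo "scratch" > "$pod/text_in_root_dir.txt" # data written to the pod itself

rm -rf "$pod"      # "kubectl delete pod": ephemeral data disappears
pod=$(mktemp -d)   # "kubectl apply": a fresh, empty filesystem

ls "$pod"          # (empty)
cat "$pv/text.txt" # Hello World
```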