kubernetes personal memo

Posted at 2019-08-14

Just a running list of commands and tips I have looked up myself.

Not really meant to be shown to anyone.

Commands

| What you want to do | Command | Notes |
| --- | --- | --- |
| Switch namespace | `kubectl config set-context $(kubectl config current-context) --namespace=mynamespace` | |
| k8s completion | `source <(kubectl completion bash)` | |
| View info in wide format | `kubectl get pods -o wide` | |
| Dry-run a pod or svc to generate a manifest | `kubectl create deploy xxxx --image xxx/xxx -o yaml --dry-run > xxx.yaml` | |
| Output YAML from an existing pod | `kubectl get pods -o yaml [pod name]` | |
| Get tokens | `kubeadm token list` | |
| Check pod logs | `kubectl logs [podname]` | |
| Check kubelet logs | `journalctl -xeu kubelet` | `journalctl -xeu kubelet -o verbose` for more detail |
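
For example, the dry-run trick scaffolds a manifest that can be edited and then applied (a sketch; `myapp` and `nginx:1.17` are placeholder names, not from the original notes):

    # generate a Deployment manifest without creating anything on the cluster
    kubectl create deploy myapp --image nginx:1.17 -o yaml --dry-run > myapp.yaml
    # tweak myapp.yaml (replicas, labels, ...) and apply it for real
    kubectl apply -f myapp.yaml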

How to rejoin a node to the cluster

  • Reissue a token and generate the join command
kubeadm token create --print-join-command

Incidentally, when rejoining a node: with kubelet stopped, delete everything under /etc/kubernetes, then run the command printed above on each node (see the sketch below).
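
Putting the whole flow together (a sketch; the token command runs on the master, the rest on the node being rejoined, and the join-line placeholders stand for whatever --print-join-command printed):

    # on the master: reissue a token and print the full join command
    kubeadm token create --print-join-command

    # on the node being rejoined
    systemctl stop kubelet
    rm -rf /etc/kubernetes/*
    # paste the printed join command; kubeadm restarts kubelet itself
    kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>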

Files

| File | Contents |
| --- | --- |
| /etc/kubernetes/manifests/etcd.yaml | etcd configuration |
| /etc/kubernetes/manifests/kube-apiserver.yaml | apiserver configuration |
| /etc/kubernetes/manifests/kube-controller-manager.yaml | kube-controller-manager configuration |
| /etc/kubernetes/manifests/kube-scheduler.yaml | kube-scheduler configuration |
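
These are static pod manifests: in a kubeadm setup, kubelet watches /etc/kubernetes/manifests and recreates the corresponding control-plane pod whenever a file changes, which is why editing kube-apiserver.yaml (as in the troubleshooting section below) takes effect without any kubectl command. A minimal sketch:

    # edit a static pod manifest; kubelet notices and restarts that component
    vi /etc/kubernetes/manifests/kube-apiserver.yaml
    # watch the apiserver pod get recreated
    kubectl get pods -n kube-system -w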

Hands-on

Installing kubernetes on AWS


  • Environment

    • AWS EC2
    • OS: RedHat 8
    • Layout: master x1, node x2
    • container: docker-ce (RHEL8 ships something called podman-docker, but that is not used here)
    • cni: calico
  • Steps

    • Install kubeadm/kubelet/kubectl
    • Install docker-ce
    • Create the kubernetes cluster on the master
    • Join the nodes to the cluster
    • Install calico

    sudo -s   # become root

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    systemctl enable --now kubelet
    cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

    sysctl --system
    yum -y install git
    systemctl daemon-reload
    systemctl restart kubelet
    yum -y update
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum -y install docker-ce --nobest
    systemctl enable docker
    systemctl start docker
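
Before moving on, a quick check that the runtime is actually up (a sketch; these checks are not in the original history):

    docker version              # client and daemon should both respond
    systemctl is-active docker  # should print "active"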
  • master

    kubeadm init --pod-network-cidr=192.168.0.0/16
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> .bash_profile

    curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O
    # the sed is only needed if your pod CIDR differs from calico's default 192.168.0.0/16
    POD_CIDR="<your-pod-cidr>" sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
    cat calico.yaml
    kubectl apply -f calico.yaml

[root@ip-10-0-0-187 calico]# kubectl get node
NAME                                            STATUS   ROLES    AGE   VERSION
ip-10-0-0-152.ap-northeast-1.compute.internal   Ready    <none>   13m   v1.15.2
ip-10-0-0-187.ap-northeast-1.compute.internal   Ready    master   17m   v1.15.2
ip-10-0-0-237.ap-northeast-1.compute.internal   Ready    <none>   13m   v1.15.2
[root@ip-10-0-0-187 calico]# 

  • node
   kubeadm join 10.0.0.187:6443 --token xxxx     --discovery-token-ca-cert-hash sha256:xxxx

-> If you messed up and want to redo kubeadm init, see the kubeadm reset steps in the troubleshooting section below.

Installing the Istio sample app

I tried the Istio / Bookinfo Application example.

After starting it up, I ran kubectl get pods -o yaml reviews-v1-5787f7b87-f6ffw to look at what Istio had changed (focusing on the Deployment).
Note: with Istio installed, Envoy is supposed to come in automatically as a sidecar.
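
For reference, automatic injection is switched on per namespace with a label (a sketch; the test1 namespace name is taken from the diff below):

    kubectl label namespace test1 istio-injection=enabled
    kubectl get namespace -L istio-injection   # see which namespaces have injection enabled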

→ Something isn't working right (I got Error: missing configuration map key "values" in "istio-sidecar-injector"), so at some point I'll compare the output of istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml against the plain bookinfo.yaml. (Indeed, in the diff below against the live Deployment, no istio-proxy sidecar appears; the additions are just fields defaulted by the API server.)
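
One way to do that comparison later (a sketch; bookinfo-injected.yaml is just a scratch file name):

    # render the manifest with the sidecar injected, offline
    istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml > bookinfo-injected.yaml
    # compare against the plain manifest
    diff samples/bookinfo/platform/kube/bookinfo.yaml bookinfo-injected.yaml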

-apiVersion: apps/v1
+apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
-  name: reviews-v1
+  annotations:
+    deployment.kubernetes.io/revision: "1"
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"reviews","version":"v1"},"name":"reviews-v1","namespace":"test1"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"reviews","version":"v1"}},"template":{"metadata":{"labels":{"app":"reviews","version":"v1"}},"spec":{"containers":[{"image":"docker.io/istio/examples-bookinfo-reviews-v1:1.15.0","imagePullPolicy":"IfNotPresent","name":"reviews","ports":[{"containerPort":9080}]}],"serviceAccountName":"bookinfo-reviews"}}}}
+  creationTimestamp: 2019-08-14T12:33:28Z
+  generation: 1
   labels:
     app: reviews
     version: v1
+  name: reviews-v1
+  namespace: test1
+  resourceVersion: "13250"
+  selfLink: /apis/extensions/v1beta1/namespaces/test1/deployments/reviews-v1
+  uid: b7bff820-be8f-11e9-ba5f-42010a92009f
 spec:
+  progressDeadlineSeconds: 600
   replicas: 1
+  revisionHistoryLimit: 10
   selector:
     matchLabels:
       app: reviews
       version: v1
+  strategy:
+    rollingUpdate:
+      maxSurge: 25%
+      maxUnavailable: 25%
+    type: RollingUpdate
   template:
     metadata:
+      creationTimestamp: null
       labels:
         app: reviews
         version: v1
     spec:
-      serviceAccountName: bookinfo-reviews
       containers:
-      - name: reviews
-        image: docker.io/istio/examples-bookinfo-reviews-v1:1.15.0
+      - image: docker.io/istio/examples-bookinfo-reviews-v1:1.15.0
         imagePullPolicy: IfNotPresent
+        name: reviews
         ports:
         - containerPort: 9080
+          protocol: TCP
+        resources: {}
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
+      dnsPolicy: ClusterFirst
+      restartPolicy: Always
+      schedulerName: default-scheduler
+      securityContext: {}
+      serviceAccount: bookinfo-reviews
+      serviceAccountName: bookinfo-reviews
+      terminationGracePeriodSeconds: 30
+status:
+  availableReplicas: 1
+  conditions:
+  - lastTransitionTime: 2019-08-14T12:34:03Z
+    lastUpdateTime: 2019-08-14T12:34:03Z
+    message: Deployment has minimum availability.
+    reason: MinimumReplicasAvailable
+    status: "True"
+    type: Available
+  - lastTransitionTime: 2019-08-14T12:33:28Z
+    lastUpdateTime: 2019-08-14T12:34:03Z
+    message: ReplicaSet "reviews-v1-5787f7b87" has successfully progressed.
+    reason: NewReplicaSetAvailable
+    status: "True"
+    type: Progressing
+  observedGeneration: 1
+  readyReplicas: 1
+  replicas: 1
+  updatedReplicas: 1

Troubleshooting

calico won't start (CrashLoopBackOff)

[root@ip-10-0-0-187 calico]# kubectl get pods -A
NAMESPACE     NAME                                                                    READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-65b8787765-xldtm                                0/1     ContainerCreating   0          10m
kube-system   calico-node-jmqn2                                                       0/1     CrashLoopBackOff    6          11m
kube-system   calico-node-xn8wg                                                       0/1     CrashLoopBackOff    6          11m
kube-system   calico-node-xnwq2                                                       0/1     CrashLoopBackOff    7          11m
kube-system   coredns-5c98db65d4-crjs2                                                0/1     ContainerCreating   0          22h
kube-system   coredns-5c98db65d4-hlxmg                                                0/1     ContainerCreating   0          22h
kube-system   etcd-ip-10-0-0-187.ap-northeast-1.compute.internal                      1/1     Running             0          22h

[root@ip-10-0-0-187 calico]# kubectl logs calico-node-xn8wg -n kube-system
2019-08-16 09:43:07.390 [INFO][8] startup.go 256: Early log level set to info
2019-08-16 09:43:07.390 [INFO][8] startup.go 272: Using NODENAME environment for node name
2019-08-16 09:43:07.390 [INFO][8] startup.go 284: Determined node name: ip-10-0-0-152.ap-northeast-1.compute.internal
2019-08-16 09:43:07.391 [INFO][8] k8s.go 228: Using Calico IPAM
2019-08-16 09:43:07.392 [INFO][8] startup.go 316: Checking datastore connection
2019-08-16 09:43:37.393 [INFO][8] startup.go 331: Hit error connecting to datastore - retry error=Get https://10.96.0.1:443/api/v1/nodes/foo: dial tcp 10.96.0.1:443: i/o timeout
2019-08-16 09:44:08.436 [INFO][8] startup.go 331: Hit error connecting to datastore - retry error=Get https://10.96.0.1:443/api/v1/nodes/foo: dial tcp 10.96.0.1:443: i/o timeout

Based on the errors above, I changed the segment that the API server uses:

/etc/kubernetes/manifests
[root@ip-10-0-0-187 manifests]# grep 20 kube-apiserver.yaml 
    - --service-cluster-ip-range=20.96.0.0/12

It was originally 10.96.0.0/12, but communication wasn't getting through. Since the AWS VPC is in the 10.0.0.0/8 range, I suspect the segments were overlapping.
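
A quick way to see the overlap on the master (a sketch; eth0 is an assumption, the interface name may differ):

    # the node's VPC address vs. the service CIDR the apiserver runs with
    ip -4 addr show eth0 | grep inet    # e.g. inet 10.0.0.187/..., inside 10.0.0.0/8
    grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml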

[root@ip-10-0-0-187 manifests]# kubectl get pods -A
NAMESPACE     NAME                                                                    READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-65b8787765-xldtm                                0/1     CrashLoopBackOff   5          74m
kube-system   calico-node-jmqn2                                                       1/1     Running            22         74m
kube-system   calico-node-xn8wg                                                       1/1     Running            23         74m
kube-system   calico-node-xnwq2                                                       1/1     Running            24         74m
kube-system   coredns-5c98db65d4-crjs2                                                0/1     Running            5          23h
kube-system   coredns-5c98db65d4-hlxmg                                                0/1     Error              5          23h
kube-system   etcd-ip-10-0-0-187.ap-northeast-1.compute.internal                      1/1     Running            1          23h
kube-system   kube-apiserver-ip-10-0-0-187.ap-northeast-1.compute.internal            1/1     Running            1          11m
kube-system   kube-controller-manager-ip-10-0-0-187.ap-northeast-1.compute.internal   1/1     Running            2          23h
kube-system   kube-proxy-mmk8k                                                        1/1     Running            1          23h
kube-system   kube-proxy-swwz8                                                        1/1     Running            1          23h
kube-system   kube-proxy-vx8lq                                                        1/1     Running            1          23h
kube-system   kube-scheduler-ip-10-0-0-187.ap-northeast-1.compute.internal            1/1     Running            2          23h
kube-system   kubernetes-dashboard-7d75c474bb-4rzhj                                   0/1     CrashLoopBackOff   5          7h5m

-> A different problem is still happening, but calico-node has at least become Ready.

[root@ip-10-0-0-187 manifests]# kubectl logs calico-kube-controllers-65b8787765-xldtm -n kube-system
2019-08-16 10:47:11.589 [INFO][1] main.go 92: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
2019-08-16 10:47:11.592 [INFO][1] k8s.go 228: Using Calico IPAM
W0816 10:47:11.592129       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2019-08-16 10:47:11.593 [INFO][1] main.go 113: Ensuring Calico datastore is initialized
2019-08-16 10:47:21.593 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2019-08-16 10:47:21.593 [FATAL][1] main.go 118: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
/etc/cni/net.d/calico-kubeconfig
# Kubeconfig file for Calico CNI plugin.
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://[20.96.0.1]:443 <- changed 10.96 to 20.96

-> All sorts of problems kept coming up, so I redid init.

master

kubeadm reset
kubeadm init --service-cidr=20.96.0.0/12 --pod-network-cidr=192.168.0.0/16
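
kubeadm init regenerates /etc/kubernetes/admin.conf, so the kubeconfig has to be copied again just like in the install steps (a sketch):

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config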
node

kubeadm reset 
kubeadm join 10.0.0.187:6443 --token xxxx     --discovery-token-ca-cert-hash sha256:xxx

-> It worked!!!
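
A final sanity check (a sketch; since kubeadm reset wipes the cluster state, the calico.yaml from the install section has to be applied again first):

    kubectl apply -f calico.yaml   # re-apply the CNI after the re-init
    kubectl get nodes -o wide      # all nodes should be Ready
    kubectl get pods -A            # calico-node and coredns should be Running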
