
k8s Command Notes

Posted at 2020-09-29

Still very much studying; I'll update this as I go.

Aliases that proved handy

alias kgp="kubectl get pod"
alias kc="kubectl"
alias kgs="kubectl get service"
alias kgd="kubectl get deploy"
alias kcd="kubectl delete"
alias kdd="kubectl delete deployment"
alias kds="kubectl delete service"
alias kdp="kubectl delete pod"

Frequently used commands

create pod

kc run --image=<myimage> --restart=Never my-pod

create deployment

kc run --image=<myimage> --restart=Always --replicas=2 my-deployment
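Note: kubectl v1.18+ removed Deployment creation from kubectl run (--replicas and --restart=Always are gone); a sketch of the newer equivalent:

kc create deployment my-deployment --image=<myimage>
kc scale deployment my-deployment --replicas=2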

create service

kc expose deployment my-deployment --type=NodePort --port=3306

Operate minikube's Docker daemon from the local machine

https://dzone.com/articles/running-local-docker-images-in-kubernetes-1

minikube ssh
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)

To get out, just close the current shell. See:

https://stackoverflow.com/questions/53010848/how-to-connect-to-local-docker-daemon

Run a Docker registry locally and push to it

Reference:

https://qiita.com/progrhyme/items/116948c9fef37f3e995b

docker run -d -p 5000:5000 \
  -v ~/.dockerregistry:/var/lib/registry \
  --restart always \
  --name registry \
  registry:2
docker tag progrhyme/compose2minikube:v1 localhost:5000/progrhyme/compose2minikube:v1
docker push localhost:5000/progrhyme/compose2minikube:v1
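To run the pushed image in the cluster, a minimal sketch, assuming the node can pull from localhost:5000 (e.g., the registry runs inside minikube's Docker daemon):

kc run compose2minikube --image=localhost:5000/progrhyme/compose2minikube:v1 --restart=Never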

Check Node/Pod resource usage

https://ameblo.jp/bakery-diary/entry-12615738013.html

Check usage per node: kubectl top node [node name]

Check usage per pod: kubectl top pod [pod name]

These show the actual CPU and memory usage and utilization per k8s worker node or pod.

The kubectl describe command also shows CPU and memory resource status, but describe outputs the values reserved by k8s (including headroom), while top outputs the actual usage.

Options:

To check per-container resource usage within a pod, add the --containers option.

kubectl top pod --containers

Keep a pod running for the time being

kc run --image=<my-image> --restart=Never input-client --command -- tail -f /dev/null

dry-run=client

kubectl run bee --image=nginx --restart=Never --dry-run=client -o yaml > bee.yaml

Imperative vs Declarative

Important concepts: imperative commands tell the cluster what to do step by step, while the declarative approach describes the desired state in manifests and lets k8s converge to it.
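For example, the same nginx pod either way (nginx-pod.yaml is a hypothetical manifest file):

# imperative: tell the cluster what to do, step by step
kubectl run nginx --image=nginx --restart=Never
# declarative: describe the desired state in a manifest and let k8s converge to it
kubectl apply -f nginx-pod.yaml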

get labels

kubectl get nodes node01 --show-labels

get tolerations

kubectl explain pod --recursive | grep -A5 tolerations

get taint

kubectl describe nodes master | grep -i taint
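For reference, a minimal toleration sketch in a pod spec, matching a hypothetical taint set with kubectl taint nodes node01 size=Large:NoSchedule:

spec:
  tolerations:
  - key: "size"
    operator: "Equal"
    value: "Large"
    effect: "NoSchedule"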

label nodes

kubectl label nodes node01 size=Large
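The label can then pin pods to that node via nodeSelector; a minimal pod-spec sketch:

spec:
  nodeSelector:
    size: Large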

scale deployments

kubectl scale deployment nginx --replicas=5

create deployment and scale

kubectl create deployment blue --image=nginx
kubectl scale deployment blue --replicas=6

get pods -o wide

kubectl get pods -o wide

get all daemon sets in all name spaces

kubectl get ds --all-namespaces

Check on how many nodes the kube-proxy DaemonSet has scheduled pods

kubectl -n kube-system get pods -o wide | grep proxy

describe daemonset info about image in kube-system

kubectl describe ds weave-net -n kube-system | grep -i image

deploy daemonset

first

kubectl create deployment <name> --image=<myimage> --dry-run=client -o yaml > elastic.yaml

then edit elastic.yaml (a sketch of the result follows this list):

  • change kind: Deployment to kind: DaemonSet
  • remove the replicas field
  • remove the strategy field
  • remove the status field
  • set the appropriate namespace
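A minimal sketch of what elastic.yaml might look like after those edits (name and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: <name>
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: <name>
  template:
    metadata:
      labels:
        app: <name>
    spec:
      containers:
      - name: <name>
        image: <myimage>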

static pod

A static pod is managed directly by the kubelet on its node, without the API server managing it through a Deployment or ReplicaSet.
The kubelet simply reads manifest files from a configured path.

get static pods

kubectl get pods --all-namespaces

and check the pod name suffix: static pods are suffixed with the name of the node they run on.

get the path of static pod definition file

ps -ef | grep kubelet

and look for --config

then look for staticPodPath: in that config file:

grep -i static <path of --config> 

create a static pod named static-busybox that uses the busybox image and the command sleep 1000

kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml

delete static pods created in some other node

first get ip for that node by

kubectl get nodes -o wide 

second, ssh to the target node by

ssh <target-internal-ip>

then find the kubelet config path:

ps -ef | grep kubelet | grep "\--config"

look up staticPodPath in that config, go to that directory, and delete the static pod's YAML file.

deploy a new scheduler to the cluster

cd /etc/kubernetes/manifests

cp kube-scheduler.yaml /root/my-scheduler.yaml

create a new pod with the custom scheduler

under spec, add

schedulerName: my-scheduler
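A minimal sketch of such a pod (nginx as a stand-in image):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - name: nginx
    image: nginx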

metrics server: getting started

minikube addons enable metrics-server

once it is configured,

kubectl top node

kubectl top pod

will show the memory consumption etc.

to keep watching,

watch "kubectl top node"

is handy.

see logs stream

kubectl logs -f <pod>

see logs with specific container

kubectl logs <pod> -c <container>

ENTRYPOINT in Dockerfile == command in k8s yaml

CMD in Dockerfile == args in k8s yaml
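For example, a pod-spec sketch overriding both (sleep/1000 are placeholder values):

spec:
  containers:
  - name: sleeper
    image: <myimage>
    command: ["sleep"]  # overrides ENTRYPOINT
    args: ["1000"]      # overrides CMD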

create yaml file based on existing pod

kubectl get pod <pod name> -o yaml > test.yaml

create configmap the imperative way

kubectl create configmap <configmap name> --from-literal=key=value

useful for lookup

kubectl explain pods --recursive | grep envFrom -A3
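A minimal sketch of consuming the ConfigMap from a container spec via envFrom (assuming the name created above):

spec:
  containers:
  - name: app
    image: <myimage>
    envFrom:
    - configMapRef:
        name: <configmap name>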

create secret the imperative way

kubectl create secret generic <secret name> --from-literal=key=value

get pod and services

note: no space after the comma in pods,svc

kubectl -n elastic-stack get pods,svc

take a node out for maintenance

kubectl drain node01 --ignore-daemonsets

drain cordons the node (marks it unschedulable) and evicts the pods running on it.

after the maintenance, bring the node back

kubectl uncordon node01

After uncordon, existing pods do not come back automatically; the node only receives pods as new ones are scheduled.

kubectl version

kubectl version --short

upgrade kubeadm for the master

apt install kubeadm=1.18.0-00

kubeadm upgrade apply v1.18.0 

apt install kubelet=1.18.0-00

"" upgrade kubeadm for the worker

apt install kubeadm=1.18.0-00

kubeadm upgrade node

apt install kubelet=1.18.0-00
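A hedged sketch of the full worker flow, assuming the node is named node01 (drain/uncordon run from the master, the rest on the worker):

kubectl drain node01 --ignore-daemonsets   # master: evict workloads first
apt install kubeadm=1.18.0-00              # node01
kubeadm upgrade node                       # node01
apt install kubelet=1.18.0-00              # node01
systemctl restart kubelet                  # node01
kubectl uncordon node01                    # master: make it schedulable again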

Backup option 1: dump all resources via the API server (tools like Velero automate this; note that kubectl get all does not cover every resource type, e.g. ConfigMaps and Secrets)

kubectl get all --all-namespaces -o yaml > backup.yaml

Backup option 2: use etcd client etcdctl

default port for etcd is 2379

to see the help,

ETCDCTL_API=3 etcdctl snapshot save -h

then save a snapshot:

ETCDCTL_API=3 etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" --endpoints=127.0.0.1:2379 snapshot save /tmp/snapshot-pre-boot.db

or first try

ETCDCTL_API=3 etcdctl member list --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" --endpoints=127.0.0.1:2379 

to restore from an etcd snapshot

ETCDCTL_API=3 etcdctl snapshot restore -h
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --name=master \
     --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
     --data-dir /var/lib/etcd-from-backup \
     --initial-cluster=master=https://127.0.0.1:2380 \
     --initial-cluster-token=etcd-cluster-1 \
     --initial-advertise-peer-urls=https://127.0.0.1:2380 \
     snapshot restore /tmp/snapshot-pre-boot.db

url: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/practice-questions-answers/cluster-maintenance/backup-etcd/etcd-backup-and-restore.md

then

cd /etc/kubernetes/manifests/

vim etcd.yaml

- --data-dir=/var/lib/etcd-from-backup
- --initial-cluster-token=etcd-cluster-1

also change the volume hostPath and the container's mountPath to the new data directory (a sketch below):
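A sketch of those volume changes, assuming the standard kubeadm etcd.yaml hostPath volume named etcd-data:

  volumeMounts:
  - mountPath: /var/lib/etcd-from-backup
    name: etcd-data
  volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup
      type: DirectoryOrCreate
    name: etcd-data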

node-to-node networking

see the IP address assigned to the master node

ifconfig -a 
cat /etc/network/interfaces

For the MAC address, use the ip link command to see the assigned hardware address.

default route

ip route

to see the default gateway

port the scheduler is listening on

netstat -natulp  | grep kube-scheduler

show how many IPs can be assigned to the node

ip addr show weave

when bash is not available for exec, use sh

kubectl exec -it <pod> -- sh

see default gateway of the pod

kubectl exec -it <pod> -- sh

then 

ip r

IP address CIDR range for nodes

ip a

and check the CIDR

IP address CIDR range for pods

kubectl -n kube-system logs weave-net-<> -c weave

IP address range for service

cat /etc/kubernetes/manifests/kube-apiserver.yaml

and look for the --service-cluster-ip-range flag.

check roles and role bindings for service account

kubectl -n ingress-space get roles.rbac.authorization.k8s.io
kubectl -n ingress-space get rolebindings.rbac.authorization.k8s.io
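For reference, a minimal Role/RoleBinding sketch granting a service account read access to pods (all names hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: ingress-space
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: ingress-space
subjects:
- kind: ServiceAccount
  name: <service account>
  namespace: ingress-space
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io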

see kubeadm token

kubeadm token create --help

kubeadm token create --print-join-command

troubleshooting around the scheduler

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

see
KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml

or check where the static pod YAML files are located:

vim /etc/kubernetes/manifests/kube-scheduler.yaml

scale deployment

kubectl scale deployment app --replicas=2

Scaling of deployments is handled by

kube-controller-manager-master

minikube dashboard

minikube addons enable dashboard
minikube dashboard --url

allow traffic to the proxy from all addresses

kubectl proxy --address='0.0.0.0' --disable-filter=true

untaint master node

kubectl taint nodes myhost node-role.kubernetes.io/master:NoSchedule-

(the trailing - removes the taint)