This is a memo from when I deliberately skipped Azure Kubernetes Service and installed Kubernetes myself on VMs in Azure. It is not a proper walkthrough as it stands, but someday I would like to clean it up into a manual...

Note: this article was submitted to the Qiita Engineer Festa 2024 theme "この記事誰得? 私しか得しないニッチな技術で記事投稿!".

Initial Kubernetes setup

Create the hosts file

sudo bash
root@k8s-master:~# vim /etc/hosts
127.0.0.1 localhost
127.0.1.1 k8s-master
192.168.2.30 k8s-master   ★
192.168.2.31 k8s-worker01  ★
192.168.2.32 k8s-worker02  ★
192.168.2.33 k8s-worker03  ★

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Disable swap

root@k8s-master:~# swapoff -a
root@k8s-master:~# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
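To make sure swap is really off, a quick check can be run on each node (exact output will vary by VM):

# Swap total should show 0 after swapoff
free -h
# The swap entry in /etc/fstab should now be commented out
grep swap /etc/fstab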

Load the kernel modules

root@k8s-master:~# tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
root@k8s-master:~# modprobe overlay
root@k8s-master:~# modprobe br_netfilter
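To confirm both modules are loaded (and will be loaded again on boot via the containerd.conf file above), something like this can be used:

# Both overlay and br_netfilter should appear
lsmod | grep -E 'overlay|br_netfilter'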

Set kernel parameters

root@k8s-master:~# tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the settings

root@k8s-master:~# sysctl --system
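The three parameters can be spot-checked after reloading:

# Each of these should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward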

Add the Kubernetes repository

root@k8s-master:~# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
root@k8s-master:~# mkdir /etc/apt/keyrings
root@k8s-master:~# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
root@k8s-master:~# sudo apt-get update

Install and start containerd

root@k8s-master:~# apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
root@k8s-master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg
root@k8s-master:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
root@k8s-master:~# apt update
root@k8s-master:~# apt install containerd.io
root@k8s-master:~# containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
root@k8s-master:~# sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
root@k8s-master:~# systemctl restart containerd
root@k8s-master:~# systemctl enable containerd
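containerd should now be enabled and running, which can be confirmed with systemctl. As an optional extra step (not part of the run above), the sandbox image in config.toml can also be bumped to pause:3.9 here, which avoids the warning that kubeadm init prints later:

# Confirm containerd is active
systemctl status containerd --no-pager
# Optional: align the CRI sandbox image with what kubeadm 1.28 expects
sed -i 's|sandbox_image = "registry.k8s.io/pause:3.6"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
systemctl restart containerd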

Install the Kubernetes commands (kubelet, kubeadm, kubectl)

root@k8s-master:~# apt update
root@k8s-master:~# apt install -y kubelet kubeadm kubectl
root@k8s-master:~# apt-mark hold kubelet kubeadm kubectl
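A quick version check makes sure the 1.28 packages from the new repository were picked up:

# All of these should report v1.28.x
kubeadm version -o short
kubectl version --client
kubelet --version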

Note: be aware that running apt install -y containerd.io causes an error with the -y option.

Note: do not run the following, because it overwrites the repository configuration.

# root@k8s-master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/kubernetes-xenial.gpg
# root@k8s-master:~# apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Note: if you ran it anyway

root@k8s-master-nqdior:/etc/apt/sources.list.d# grep -r "kubernetes-xenial" /etc/apt/sources.list /etc/apt/sources.list.d/
/etc/apt/sources.list:deb http://apt.kubernetes.io/ kubernetes-xenial main
/etc/apt/sources.list:# deb-src http://apt.kubernetes.io/ kubernetes-xenial main
root@k8s-master-nqdior:/etc/apt/sources.list.d# ^C
root@k8s-master-nqdior:/etc/apt/sources.list.d# vim /etc/apt/sources.list
"deb http://apt.kubernetes.io/ kubernetes-xenial main"をコメントアウト
root@k8s-master-nqdior:/etc/apt/sources.list.d# apt update
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 https://download.docker.com/linux/ubuntu bionic InRelease
Hit:4 http://azure.archive.ubuntu.com/ubuntu focal InRelease
Hit:5 http://azure.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:6 http://azure.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:3 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb  InRelease
Hit:7 http://security.ubuntu.com/ubuntu focal-security InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
287 packages can be upgraded. Run 'apt list --upgradable' to see them.

Building the Kubernetes cluster

init

root@Linkarnation-master:/home/azureuser# kubeadm init --pod-network-cidr=10.1.0.0/16
I0525 03:39:22.425593   12930 version.go:256] remote version is much newer: v1.30.1; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0525 03:39:22.887956   12930 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local linkarnation-master] and IPs [10.96.0.1 10.1.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [linkarnation-master localhost] and IPs [10.1.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [linkarnation-master localhost] and IPs [10.1.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.501743 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node linkarnation-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node linkarnation-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: wdb7gk.87j4io0bekqybori
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.0.4:6443 --token wdb7gk.87j4io0bekqybori \
        --discovery-token-ca-cert-hash sha256:e9ca6a7ff5d3c5f882083e9705af391e8c03f6ebae7de6db0ef2494f7e3e7a35
root@Linkarnation-master:/home/azureuser# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  232k  100  232k    0     0   833k      0 --:--:-- --:--:-- --:--:--  833k

Edit calico.yaml

root@Linkarnation-master:/home/azureuser# vim calico.yaml
4571             # Enable IPIP
4572             - name: CALICO_IPV4POOL_IPIP
4573               value: "Never"
4574             # Enable or Disable VXLAN on the default IP pool.
4575             - name: CALICO_IPV4POOL_VXLAN
4576               value: "Always"
~
4598             # The default IPv4 pool to create on startup if none exists. Pod IPs will be
4599             # chosen from this range. Changing this value after installation will have
4600             # no effect. This should fall within `--cluster-cidr`.
4601             - name: CALICO_IPV4POOL_CIDR
4602               value: "10.0.1.0/16"
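Before applying, the edited values can be double-checked without reopening the editor:

# Should show IPIP=Never, VXLAN=Always and the pod CIDR set above
grep -n -A1 -e CALICO_IPV4POOL_IPIP -e CALICO_IPV4POOL_VXLAN -e CALICO_IPV4POOL_CIDR calico.yaml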

Apply

root@Linkarnation-master:/home/azureuser# kubectl apply -f calico.yaml
Warning: resource poddisruptionbudgets/calico-kube-controllers is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
poddisruptionbudget.policy/calico-kube-controllers configured
Warning: resource serviceaccounts/calico-kube-controllers is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/calico-kube-controllers configured
Warning: resource serviceaccounts/calico-node is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/calico-node configured
Warning: resource configmaps/calico-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/calico-config configured
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
Warning: resource daemonsets/calico-node is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/calico-node configured
Warning: resource deployments/calico-kube-controllers is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/calico-kube-controllers configured

Join on the worker nodes

root@Linkarnation-worker1:/home/azureuser# kubeadm join 10.1.0.4:6443 --token wdb7gk.87j4io0bekqybori --discovery-token-ca-cert-hash sha256:e9ca6a7ff5d3c5f882083e9705af391e8c03f6ebae7de6db0ef2494f7e3e7a35
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@Linkarnation-worker1:/home/azureuser#

Create the Ingress controller on the master

root@Linkarnation-master:/home/azureuser# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
linkarnation-master    Ready    control-plane   27m   v1.28.10
linkarnation-worker1   Ready    <none>          7s    v1.28.10
root@Linkarnation-master:/home/azureuser# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

Expose the Ingress controller via hostPort.

root@Linkarnation-master:/home/azureuser#  kubectl edit deploy -n ingress-nginx ingress-nginx-controller
deployment.apps/ingress-nginx-controller edited
# kubectl edit deploy -n ingress-nginx ingress-nginx-controller
(around line 26)
   replicas: 2    # → change from 1 to 2
(around line 92)
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
          hostPort: 80    # → add this line
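If you would rather avoid the interactive editor, roughly the same change can be made non-interactively. This is a sketch that assumes the stock v1.2.0 manifest, where the controller container is index 0 and its first port entry is the http port:

kubectl -n ingress-nginx scale deploy ingress-nginx-controller --replicas=2
# Add hostPort: 80 to the http container port
kubectl -n ingress-nginx patch deploy ingress-nginx-controller --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/ports/0/hostPort","value":80}]'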

Make ingress-nginx the cluster's default Ingress controller

root@Linkarnation-master:/home/azureuser# kubectl edit IngressClass nginx
ingressclass.networking.k8s.io/nginx edited
$ kubectl edit IngressClass nginx
(around line 8)
   annotations:
     ingressclass.kubernetes.io/is-default-class: "true"    # → add this line
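The same annotation can also be added without the editor:

kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true" --overwrite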

Confirm that the Ingress controller is running on the workers (wk1, wk2)

root@Linkarnation-master:/home/azureuser# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS      RESTARTS   AGE   IP           NODE                   NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-qdlbq        0/1     Completed   0          92s   10.0.83.65   linkarnation-worker1   <none>           <none>
ingress-nginx-admission-patch-gqpzd         0/1     Completed   1          92s   10.0.83.66   linkarnation-worker1   <none>           <none>
ingress-nginx-controller-6dd5fbb965-d5k2s   1/1     Running     0          42s   10.0.83.68   linkarnation-worker1   <none>           <none>

Verify that Ingress works

root@Linkarnation-master:/home/azureuser# kubectl run httpd --image=httpd
pod/httpd created
root@Linkarnation-master:/home/azureuser# kubectl expose pod/httpd --port=80
service/httpd exposed
root@Linkarnation-master:/home/azureuser# cat > httpd-ingress.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpd
spec:
  rules:
    - http:
        paths:
        - pathType: Prefix
          path: /
          backend:
            service:
              name: httpd
              port:
                number: 80
EOF
root@Linkarnation-master:/home/azureuser# kubectl create -f httpd-ingress.yaml
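Because the controller is exposed via hostPort 80, the httpd test page should be reachable directly on the address of the worker node that runs the controller pod (replace the placeholder with your node's address):

# Expected response: <html><body><h1>It works!</h1></body></html>
curl http://<worker-node-ip>/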

After the build is complete

No External IP

root@Linkarnation-master:/home/azureuser# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
linkarnation-master    Ready    control-plane   55m   v1.28.10
linkarnation-worker1   Ready    <none>          28m   v1.28.10
root@Linkarnation-master:/home/azureuser# kubectl get pods --all-namespaces
NAMESPACE       NAME                                          READY   STATUS      RESTARTS      AGE
default         httpd                                         1/1     Running     0             25m
ingress-nginx   ingress-nginx-admission-create-qdlbq          0/1     Completed   0             27m
ingress-nginx   ingress-nginx-admission-patch-gqpzd           0/1     Completed   1             27m
ingress-nginx   ingress-nginx-controller-6dd5fbb965-d5k2s     1/1     Running     0             26m
kube-system     calico-kube-controllers-658d97c59c-xcwq9      1/1     Running     2 (34m ago)   40m
kube-system     calico-node-fnhff                             1/1     Running     0             28m
kube-system     calico-node-hvc9k                             1/1     Running     0             40m
kube-system     coredns-5dd5756b68-pbvgw                      1/1     Running     0             55m
kube-system     coredns-5dd5756b68-r2gmw                      1/1     Running     0             55m
kube-system     etcd-linkarnation-master                      1/1     Running     6             55m
kube-system     kube-apiserver-linkarnation-master            1/1     Running     7             55m
kube-system     kube-controller-manager-linkarnation-master   1/1     Running     3             55m
kube-system     kube-proxy-jlt7b                              1/1     Running     0             55m
kube-system     kube-proxy-k9v6m                              1/1     Running     0             28m
kube-system     kube-scheduler-linkarnation-master            1/1     Running     6             55m
root@Linkarnation-master:/home/azureuser# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-qdlbq        0/1     Completed   0          27m
ingress-nginx-admission-patch-gqpzd         0/1     Completed   1          27m
ingress-nginx-controller-6dd5fbb965-d5k2s   1/1     Running     0          26m
root@Linkarnation-master:/home/azureuser# kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.99.64.29    <pending>     80:32023/TCP,443:32518/TCP   29m
ingress-nginx-controller-admission   ClusterIP      10.97.154.11   <none>        443/TCP                      29m

Assign an External IP with MetalLB

root@Linkarnation-master:/home/azureuser# kubectl create namespace metallb-system
namespace/metallb-system created
root@Linkarnation-master:/home/azureuser# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
role.rbac.authorization.k8s.io/controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
rolebinding.rbac.authorization.k8s.io/controller created
daemonset.apps/speaker created
deployment.apps/controller created
resource mapping not found for name: "controller" namespace: "metallb-system" from "https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "speaker" namespace: "metallb-system" from "https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
root@Linkarnation-master:/home/azureuser# vim metallb-config.yaml
root@Linkarnation-master:/home/azureuser# kubectl apply -f metallb-config.yaml
configmap/config created
root@Linkarnation-master:/home/azureuser# kubectl delete svc ingress-nginx-controller -n ingress-nginx
service "ingress-nginx-controller" deleted
root@Linkarnation-master:/home/azureuser# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
serviceaccount/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
configmap/ingress-nginx-controller unchanged
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission unchanged
deployment.apps/ingress-nginx-controller configured
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged
ingressclass.networking.k8s.io/nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
root@Linkarnation-master:/home/azureuser# kubectl edit deploy -n ingress-nginx ingress-nginx-controller
Edit cancelled, no changes made.
root@Linkarnation-master:/home/azureuser# kubectl edit deploy -n ingress-nginx ingress-nginx-controller
Edit cancelled, no changes made.
root@Linkarnation-master:/home/azureuser# kubectl edit deploy -n ingress-nginx ingress-nginx-controller
Edit cancelled, no changes made.
root@Linkarnation-master:/home/azureuser# kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.105.135.132   10.0.1.240    80:31280/TCP,443:30298/TCP   105s
ingress-nginx-controller-admission   ClusterIP      10.97.154.11     <none>        443/TCP                      41m
root@Linkarnation-master:/home/azureuser# curl 10.0.1.240
<html><body><h1>It works!</h1></body></html>
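For reference, the contents of the metallb-config.yaml edited with vim above are not shown. With MetalLB v0.10.2 the configuration is a ConfigMap named config in metallb-system; a minimal Layer 2 sketch that would hand out addresses like the 10.0.1.240 seen above might look as follows (the exact address range is an assumption for this environment):

cat > metallb-config.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.1.240-10.0.1.250
EOF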

Deleting the controller

kubectl delete svc ingress-nginx-controller -n ingress-nginx

Delete the entire namespaces

kubectl delete namespace ingress-nginx
kubectl delete namespace metallb-system

When something goes wrong: reset

kubectl delete --all pods --all-namespaces
kubectl delete --all services --all-namespaces
kubectl delete --all deployments --all-namespaces
kubectl delete --all daemonsets --all-namespaces
kubectl delete --all statefulsets --all-namespaces
kubectl delete --all replicasets --all-namespaces
kubectl delete --all jobs --all-namespaces
kubectl delete --all cronjobs --all-namespaces
sudo kubeadm reset
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/etcd/
sudo rm -rf /var/lib/kubelet/
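kubeadm reset does not clean up everything by itself; leftover CNI configuration and iptables rules usually have to be removed manually. An optional follow-up (not part of the original procedure):

sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X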

When something goes wrong: delete all the resources

kubectl delete poddisruptionbudget calico-kube-controllers
kubectl delete serviceaccount calico-kube-controllers
kubectl delete serviceaccount calico-node
kubectl delete configmap calico-config
kubectl delete customresourcedefinition bgpconfigurations.crd.projectcalico.org
kubectl delete customresourcedefinition bgppeers.crd.projectcalico.org
kubectl delete customresourcedefinition blockaffinities.crd.projectcalico.org
kubectl delete customresourcedefinition caliconodestatuses.crd.projectcalico.org
kubectl delete customresourcedefinition clusterinformations.crd.projectcalico.org
kubectl delete customresourcedefinition felixconfigurations.crd.projectcalico.org
kubectl delete customresourcedefinition globalnetworkpolicies.crd.projectcalico.org
kubectl delete customresourcedefinition globalnetworksets.crd.projectcalico.org
kubectl delete customresourcedefinition hostendpoints.crd.projectcalico.org
kubectl delete customresourcedefinition ipamblocks.crd.projectcalico.org
kubectl delete customresourcedefinition ipamconfigs.crd.projectcalico.org
kubectl delete customresourcedefinition ipamhandles.crd.projectcalico.org
kubectl delete customresourcedefinition ippools.crd.projectcalico.org
kubectl delete customresourcedefinition ipreservations.crd.projectcalico.org
kubectl delete customresourcedefinition kubecontrollersconfigurations.crd.projectcalico.org
kubectl delete customresourcedefinition networkpolicies.crd.projectcalico.org
kubectl delete customresourcedefinition networksets.crd.projectcalico.org
kubectl delete clusterrole calico-kube-controllers
kubectl delete clusterrole calico-node
kubectl delete clusterrolebinding calico-kube-controllers
kubectl delete clusterrolebinding calico-node
kubectl delete daemonset calico-node
kubectl delete deployment calico-kube-controllers
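If the calico.yaml used earlier is still on hand, the same cleanup can be done in one command instead of deleting each resource individually:

kubectl delete -f calico.yaml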

References

I referred to the following articles. Thank you very much.
