Building a k8s cluster with kubeadm

Posted at 2019-08-20

We have finally made it this far. Here we will set up a roll-your-own k8s environment on a server using kubeadm, without relying on minikube or similar tools.
Follow along with the official guide:
https://kubernetes.io/ja/docs/setup/independent/install-kubeadm/

This article has been rewritten for Kubernetes v1.19.1.

For installing v1.25 with CRI-O, see: https://qiita.com/murata-tomohide/items/cd408dbed0211fedf5dc

murata:/etc/sysctl.d $ uname -r
5.8.12-200.fc32.x86_64
murata:~ $ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T11:26:42Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T11:18:22Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

TL;DR

Make `kubectl apply` from a local PC work against a roll-your-own k8s server.
Both k8s and Docker end up controllable from the local PC.
The single node acts as both master and worker.

Install Docker

Docker, which k8s relies on here, must be installed in every scenario.
Install it by following the official documentation. This article uses Docker CE (CE means Community Edition).
https://docs.docker.com/
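For concreteness, the Docker CE install on Fedora per the official docs is roughly the sketch below. The commands are echoed rather than executed so the sketch is safe to run anywhere; drop the `run` wrapper on a real host.

```shell
# Echo the install commands instead of running them (this sketch
# needs neither root nor network access; remove "run" for real use).
run() { echo "+ $*"; }
run sudo dnf config-manager --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo
run sudo dnf install -y docker-ce docker-ce-cli containerd.io
run sudo systemctl enable --now docker
```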

Update 2020-12-03:
Starting with Kubernetes 1.20, Docker is deprecated as a container runtime. You will need to use CRI-O or containerd instead; see the official docs for how.
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#container-runtimes

The stated reasons for the deprecation are that Docker does not support CRI, the runtime API specification defined by the Kubernetes project, and that Docker runs a management daemon that becomes an extra point of failure.

Preparing the OS

Follow along with the official guide:
https://kubernetes.io/ja/docs/setup/independent/install-kubeadm/

Turn off swap

The "Before you begin" section tells you to turn swap off, so we do:
"Swap disabled. You MUST disable swap in order for the kubelet to work properly."

murata:~ $ sudo swapoff -a
murata:~ $ free -h
              total        used        free      shared  buff/cache   available
Mem:           31Gi       269Mi        30Gi       0.0Ki       540Mi        30Gi
Swap:            0B          0B          0B

Run `sudo swapoff -a` and confirm that no swap space remains.
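Note that `swapoff -a` only lasts until the next reboot; to keep swap off permanently, comment out the swap entry in /etc/fstab. A sketch against a stub file so it is safe to run (point the `sed` at /etc/fstab on the real host):

```shell
# Stub fstab with one swap entry (contents made up for the demo).
cat > fstab.demo <<'EOF'
UUID=1111-2222 /    ext4 defaults 0 1
UUID=3333-4444 none swap sw       0 0
EOF
# Comment out any line that mentions a swap filesystem.
sed -i '/\sswap\s/ s/^/#/' fstab.demo
cat fstab.demo
```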

Adjust sysctl

Adjust the iptables-related settings via sysctl. Copy and paste straight from the manual (root required):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

You are fine if the output contains the following:

~~snip;
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
~~snip;
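One thing the snippet above quietly assumes: the net.bridge.* keys only exist while the br_netfilter kernel module is loaded (the official guide loads it explicitly with modprobe). A hedged check that degrades gracefully on machines without it:

```shell
# Report the current bridge-nf-call-iptables value, or tell the
# reader to load br_netfilter if the key is missing.
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
  bridge_check=$(cat /proc/sys/net/bridge/bridge-nf-call-iptables)
else
  bridge_check="br_netfilter not loaded; run: modprobe br_netfilter"
fi
echo "$bridge_check"
```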

Other checks

The guide also tells you to confirm that the MAC address and hostname are unique, and that product_uuid does not collide with any other server.
It also says to allow the required ports through the firewall. Since this is just a trial run, I simply ran systemctl stop firewalld. (Never do this on an internet-facing server.)

If your distro does not use iptables (Fedora and friends default to nftables), configure it to do so.
Apparently iptables is used for routing, so do not turn it off.

update-alternatives --set iptables /usr/sbin/iptables-legacy

Verify:

[root@k8s-tmp ~]# alternatives --display iptables
iptables - status is auto.
 link currently points to /usr/sbin/iptables-legacy
/usr/sbin/iptables-legacy - priority 10
 slave ip6tables: /usr/sbin/ip6tables-legacy
 slave ip6tables-restore: /usr/sbin/ip6tables-legacy-restore
 slave ip6tables-save: /usr/sbin/ip6tables-legacy-save
 slave iptables-restore: /usr/sbin/iptables-legacy-restore
 slave iptables-save: /usr/sbin/iptables-legacy-save
Current `best' version is /usr/sbin/iptables-legacy.
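A quicker way to confirm the switch took effect: the iptables binary reports its backend in its version string ("legacy" vs "nf_tables"). A sketch that falls back gracefully when iptables is absent:

```shell
# Print the iptables version string; after update-alternatives it
# should contain "legacy" rather than "nf_tables".
backend=$(iptables -V 2>/dev/null || echo "iptables not installed")
echo "$backend"
```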

Installation

This article uses Fedora 32.

Add the yum repository as described in "Installing kubeadm, kubelet and kubectl".

It is straight from the official docs:

[root@k8s-tmp ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> exclude=kube*
> EOF
[root@k8s-tmp ~]# ll /etc/yum.repos.d/kubernetes.repo
-rw-r--r--. 1 root root 277 Aug  7  2019 /etc/yum.repos.d/kubernetes.repo

Adjust SELinux, then install. The official docs use yum, but in proper Fedora fashion we install with dnf:

[root@murata murata]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[root@k8s-tmp ~]# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
google-chrome                                                                                                                                                                                          5.6 kB/s | 1.3 kB     00:00
Dependencies resolved.
=======================================================================================================================================================================================================================================
 Package                                                             Architecture                                   Version                                                   Repository                                          Size
=======================================================================================================================================================================================================================================
Installing:
 kubeadm                                                             x86_64                                         1.19.2-0                                                  kubernetes                                         8.3 M
 kubectl                                                             x86_64                                         1.19.2-0                                                  kubernetes                                         9.0 M
 kubelet                                                             x86_64                                         1.19.2-0                                                  kubernetes                                          19 M
Installing dependencies:
 conntrack-tools                                                     x86_64                                         1.4.5-5.fc32                                              fedora                                             205 k
 containernetworking-plugins                                         x86_64                                         0.8.7-1.fc32                                              updates                                             11 M
 cri-tools                                                           x86_64                                         1.13.0-0                                                  kubernetes                                         5.1 M
 ethtool                                                             x86_64                                         2:5.8-1.fc32                                              updates                                            208 k
 libnetfilter_cthelper                                               x86_64                                         1.0.0-17.fc32                                             fedora                                              23 k
 libnetfilter_cttimeout                                              x86_64                                         1.0.0-15.fc32                                             fedora                                              23 k
 libnetfilter_queue                                                  x86_64                                         1.0.2-15.fc32                                             fedora                                              27 k
 socat                                                               x86_64                                         1.7.3.4-2.fc32                                            fedora                                             304 k

Transaction Summary
=======================================================================================================================================================================================================================================
Install  11 Packages

Total download size: 54 M
Installed size: 275 M
Downloading Packages:
(1/11): ethtool-5.8-1.fc32.x86_64.rpm                                                                                                                                                                  2.8 MB/s | 208 kB     00:00
(2/11): conntrack-tools-1.4.5-5.fc32.x86_64.rpm                                                                                                                                                        2.5 MB/s | 205 kB     00:00
(3/11): libnetfilter_cttimeout-1.0.0-15.fc32.x86_64.rpm                                                                                                                                                1.1 MB/s |  23 kB     00:00
(4/11): libnetfilter_cthelper-1.0.0-17.fc32.x86_64.rpm                                                                                                                                                 672 kB/s |  23 kB     00:00
(5/11): libnetfilter_queue-1.0.2-15.fc32.x86_64.rpm                                                                                                                                                    1.8 MB/s |  27 kB     00:00
(6/11): socat-1.7.3.4-2.fc32.x86_64.rpm                                                                                                                                                                6.9 MB/s | 304 kB     00:00
(7/11): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm                                                                                                  13 MB/s | 5.1 MB     00:00
(8/11): containernetworking-plugins-0.8.7-1.fc32.x86_64.rpm                                                                                                                                             11 MB/s |  11 MB     00:00
(9/11): d0ba40edfc0fdf3aeec3dd8e56c01ff0d3a511cc0012aabce55d9a83d9bf2b69-kubeadm-1.19.2-0.x86_64.rpm                                                                                                   9.4 MB/s | 8.3 MB     00:00
(10/11): d9d997cdbfd6562824eb7786abbc7f4c6a6825662d0f451793aa5ab8c4a85c96-kubelet-1.19.2-0.x86_64.rpm                                                                                                   12 MB/s |  19 MB     00:01
(11/11): b1b077555664655ba01b2c68d13239eaf9db1025287d0d9ccaeb4a8850c7a9b7-kubectl-1.19.2-0.x86_64.rpm                                                                                                  4.1 MB/s | 9.0 MB     00:02
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                   15 MB/s |  54 MB     00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                               1/1
  Installing       : containernetworking-plugins-0.8.7-1.fc32.x86_64                                                                                                                                                              1/11
  Installing       : kubectl-1.19.2-0.x86_64                                                                                                                                                                                      2/11
  Installing       : cri-tools-1.13.0-0.x86_64                                                                                                                                                                                    3/11
  Installing       : socat-1.7.3.4-2.fc32.x86_64                                                                                                                                                                                  4/11
  Installing       : libnetfilter_queue-1.0.2-15.fc32.x86_64                                                                                                                                                                      5/11
  Installing       : libnetfilter_cttimeout-1.0.0-15.fc32.x86_64                                                                                                                                                                  6/11
  Installing       : libnetfilter_cthelper-1.0.0-17.fc32.x86_64                                                                                                                                                                   7/11
  Installing       : conntrack-tools-1.4.5-5.fc32.x86_64                                                                                                                                                                          8/11
  Running scriptlet: conntrack-tools-1.4.5-5.fc32.x86_64                                                                                                                                                                          8/11
  Installing       : ethtool-2:5.8-1.fc32.x86_64                                                                                                                                                                                  9/11
  Installing       : kubelet-1.19.2-0.x86_64                                                                                                                                                                                     10/11
  Installing       : kubeadm-1.19.2-0.x86_64                                                                                                                                                                                     11/11

~~snip;

After the packages are installed, reboot once so that the SELinux change (permissive) takes effect.

Register the kubelet with systemctl and start it:

[root@k8s-tmp ~]# systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

cgroup configuration

Set Docker's cgroupDriver to systemd:

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker

The kubelet has to use the same cgroup driver as Docker, hence this configuration. A cgroup is a kernel feature that groups processes together for resource control.
If you want to use a different cgroup driver or pin it explicitly, write KUBELET_EXTRA_ARGS=--cgroup-driver=<value> to /etc/default/kubelet.
Normally the kubelet detects and configures the driver automatically.
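For reference, pinning the kubelet's cgroup driver via /etc/default/kubelet would look like the sketch below, written to a stub file so it is safe to run; on the real host use the actual path (as root) and restart the kubelet afterwards.

```shell
# Stub for /etc/default/kubelet forcing the systemd cgroup driver.
cat > kubelet.default.demo <<'EOF'
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF
cat kubelet.default.demo
```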

[murata@murata ~]$  docker info | grep -i cgroup
 Cgroup Driver: cgroupfs

If you are using CentOS 8, take extra care here.

Initial setup with kubeadm

Creating a single control-plane cluster with kubeadm

dnf update and the like are already done, so we skip that.

Run kubeadm init <args>, where <args> takes options. The next step uses the "Calico for policy and flannel (aka Canal)" network plugin, so add --pod-network-cidr=10.244.0.0/16 here. (Alternatively, you can edit the Canal manifest later instead.)

murata:~ $ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
W1002 15:23:13.043966    5853 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING Hostname]: hostname "k8s-tmp.dev.deroris.local" could not be reached
        [WARNING Hostname]: hostname "k8s-tmp.dev.deroris.local": lookup k8s-tmp.dev.deroris.local on 172.16.200.31:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-tmp.dev.deroris.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.203.203]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-tmp.dev.deroris.local localhost] and IPs [172.16.203.203 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-tmp.dev.deroris.local localhost] and IPs [172.16.203.203 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.502912 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-tmp.dev.deroris.local as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-tmp.dev.deroris.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0qoe0u.s8exrc3ahdixzkzu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.203.203:6443 --token 0qoe0u.s8exrc3ahdixzkzu \
    --discovery-token-ca-cert-hash sha256:7edc25f8a485b832c5b57808df6cc00a9cd8b8955f9349eab86762c6ed413118

As the output says, copy the config file into your home directory:

murata:~ $ mkdir -p $HOME/.kube
murata:~ $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
murata:~ $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
murata:~ $ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: (long, omitted)
    server: https://172.16.203.203:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: (long, omitted)
    client-key-data: (long, omitted)

kubectl should now be able to see the cluster:

murata:~ $ kubectl get all -A
NAMESPACE     NAME                                                    READY   STATUS              RESTARTS   AGE
kube-system   pod/coredns-f9fd979d6-9bcmk                             0/1     ContainerCreating   0          2m57s
kube-system   pod/coredns-f9fd979d6-l4whm                             0/1     ContainerCreating   0          2m57s
kube-system   pod/etcd-k8s-tmp.dev.deroris.local                      1/1     Running             0          3m7s
kube-system   pod/kube-apiserver-k8s-tmp.dev.deroris.local            1/1     Running             0          3m7s
kube-system   pod/kube-controller-manager-k8s-tmp.dev.deroris.local   1/1     Running             0          3m7s
kube-system   pod/kube-proxy-nxcct                                    1/1     Running             0          2m57s
kube-system   pod/kube-scheduler-k8s-tmp.dev.deroris.local            1/1     Running             0          3m7s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  3m16s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3m14s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   3m14s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           3m14s

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-f9fd979d6   2         2         0       2m57s

coredns cannot run yet; install a Pod network add-on.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

This time we go with Canal (no particular reason). If you passed a different pod-network-cidr to kubeadm init earlier, download the yml file and rewrite the matching value; there should be a spot that reads 10.244.0.0/16.
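That rewrite can be scripted. A sketch against a stub of the relevant manifest snippet (the snippet shape and the replacement CIDR 10.100.0.0/16 are assumptions for illustration; run the sed against the downloaded canal.yaml for real):

```shell
# Stub of the flannel net-conf block canal.yaml is expected to contain.
cat > canal-net-conf.demo <<'EOF'
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
EOF
# Swap in the CIDR you actually passed to kubeadm init.
sed -i 's|10\.244\.0\.0/16|10.100.0.0/16|' canal-net-conf.demo
grep '"Network"' canal-net-conf.demo
```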

murata:~ $ curl https://docs.projectcalico.org/manifests/canal.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  181k  100  181k    0     0   133k      0  0:00:01  0:00:01 --:--:--  133k
murata:~ $ kubectl apply -f ./canal.yaml
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Canal is coming to life!


murata:~ $ kubectl get all -A
NAMESPACE     NAME                                                    READY   STATUS              RESTARTS   AGE
kube-system   pod/calico-kube-controllers-c9784d67d-zj78k             0/1     Pending             0          16s
kube-system   pod/canal-xllfv                                         2/2     Running             0          16s
kube-system   pod/coredns-f9fd979d6-56l7w                             0/1     ContainerCreating   0          3m7s
kube-system   pod/coredns-f9fd979d6-zvwrj                             0/1     ContainerCreating   0          3m7s
kube-system   pod/etcd-k8s-tmp.dev.deroris.local                      1/1     Running             0          3m16s
kube-system   pod/kube-apiserver-k8s-tmp.dev.deroris.local            1/1     Running             0          3m16s
kube-system   pod/kube-controller-manager-k8s-tmp.dev.deroris.local   1/1     Running             0          3m16s
kube-system   pod/kube-proxy-8npdc                                    1/1     Running             0          3m7s
kube-system   pod/kube-scheduler-k8s-tmp.dev.deroris.local            1/1     Running             0          3m16s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  3m24s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3m23s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/canal        1         1         1       1            1           kubernetes.io/os=linux   17s
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   3m23s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   0/1     1            0           17s
kube-system   deployment.apps/coredns                   0/2     2            0           3m23s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-c9784d67d   1         1         0       17s
kube-system   replicaset.apps/coredns-f9fd979d6                   2         2         0       3m7s

murata:~ $ kubectl get all -A
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-c9784d67d-zj78k             1/1     Running   0          57s
kube-system   pod/canal-xllfv                                         2/2     Running   0          57s
kube-system   pod/coredns-f9fd979d6-56l7w                             1/1     Running   0          3m48s
kube-system   pod/coredns-f9fd979d6-zvwrj                             1/1     Running   0          3m48s
kube-system   pod/etcd-k8s-tmp.dev.deroris.local                      1/1     Running   0          3m57s
kube-system   pod/kube-apiserver-k8s-tmp.dev.deroris.local            1/1     Running   0          3m57s
kube-system   pod/kube-controller-manager-k8s-tmp.dev.deroris.local   1/1     Running   0          3m57s
kube-system   pod/kube-proxy-8npdc                                    1/1     Running   0          3m48s
kube-system   pod/kube-scheduler-k8s-tmp.dev.deroris.local            1/1     Running   0          3m57s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  4m5s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4m4s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/canal        1         1         1       1            1           kubernetes.io/os=linux   58s
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   4m4s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           58s
kube-system   deployment.apps/coredns                   2/2     2            2           4m4s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-c9784d67d   1         1         1       58s
kube-system   replicaset.apps/coredns-f9fd979d6                   2         2         2       3m48s

Done.

By default the master node will not act as a worker, so allow it to:

murata:~ $ kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-tmp.dev.deroris.local untainted

Checking that it works

Let's try deploying nginx:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - image: nginx:latest
          imagePullPolicy: IfNotPresent
          name: nginx-test
          ports:
            - containerPort: 9100
              protocol: TCP

murata:~ $ cat <<EOF | kubectl apply -f -
> kind: Deployment
> apiVersion: apps/v1
> metadata:
>   name: nginx-test
> spec:
>   replicas: 1
>   selector:
>     matchLabels:
>       app: nginx-test
>   template:
>     metadata:
>       labels:
>         app: nginx-test
>     spec:
>       containers:
>         - image: nginx:latest
>           imagePullPolicy: IfNotPresent
>           name: nginx-test
>           ports:
>             - containerPort: 9100
>               protocol: TCP
> EOF
deployment.apps/nginx-test created

It worked.

murata:~ $ kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/nginx-test-79b49dfdf4-b8lpl   1/1     Running   0          66s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   18m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-test   1/1     1            1           66s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-test-79b49dfdf4   1         1         1       66s

Check (the pod IP 10.244.0.6 used below can be found with kubectl get pod -o wide):

murata:~ $ curl 10.244.0.6 -I
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Fri, 02 Oct 2020 06:42:48 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Aug 2020 14:50:35 GMT
Connection: keep-alive
ETag: "5f32b03b-264"
Accept-Ranges: bytes

The response comes back fine.
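
curl against the pod IP only works from the node itself (or from inside the pod network). To reach nginx-test from outside, a NodePort Service can be layered on top. A sketch (the Service name is my own choice, and targetPort assumes nginx's default port 80):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-test
spec:
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: nginx-test
```

After applying it, nginx should answer at http://<node IP>:<assigned node port>/.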

Controlling the cluster from the local PC

Since this is a personal dev cluster, I'm simply copying the admin credentials; in production, do this properly (per-user credentials and RBAC).

Copy /etc/kubernetes/admin.conf to the local machine and point the KUBECONFIG environment variable at it.

murata:~/.kube $ scp root@xxxx.local:/etc/kubernetes/admin.conf ~/.kube/oreore.local.config

export KUBECONFIG=~/.kube/oreore.local.config

murata:~ $ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-5ff9d6cc77-4zc9t   1/1     Running   0          3h5m
pod/nginx-5ff9d6cc77-fnsk5   1/1     Running   0          3h5m


NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        3h17m
service/nginx-nodeport   NodePort    10.108.248.35   <none>        80:30222/TCP   3h5m


NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   2/2     2            2           3h5m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-5ff9d6cc77   2         2         2       3h5m

If needed, add the export to ~/.bash_profile or similar.
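
A sketch of those ~/.bash_profile lines. KUBECONFIG accepts a colon-separated list of files, so the oreore config can sit alongside the default one and you can switch between them with kubectl config use-context (the paths are the ones used above):

```shell
# ~/.bash_profile (or ~/.bashrc):
# make both the default kubeconfig and the oreore cluster visible to kubectl.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/oreore.local.config"
```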

Deploying from the local PC

Note: everything from here on is independent of the Kubernetes version, so I have not re-run it on 1.19.

Again, I'm only doing this because it's a personal dev cluster; do it properly in production.

Since the Docker image store lives on the server itself, images somehow have to end up on that server.
SSHing into the server built above every time is tedious, but I don't want to stand up a Docker registry either.

First, prepare keys so Docker can be accessed from another host.
Reference: Docker Engineを立ててクライアントからいじる
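
The referenced article covers the details; as a rough sketch, the CA, server, and client certificates can be generated with openssl like this (the CN values and 365-day validity are my own choices, and a real deployment should also add subjectAltName entries matching the address clients connect to):

```shell
set -eu
cd "$(mktemp -d)"

# 1. CA key and self-signed CA certificate
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 \
  -subj "/CN=oreore-ca" -out ca.pem

# 2. Server key and certificate signed by the CA
#    (add subjectAltName via -extfile for real use)
openssl genrsa -out server-key.pem 4096
openssl req -new -key server-key.pem \
  -subj "/CN=derori.cloud.oreore.local" -out server.csr
openssl x509 -req -days 365 -sha256 -in server.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem

# 3. Client key and certificate, marked for client authentication
openssl genrsa -out key.pem 4096
openssl req -new -key key.pem -subj "/CN=client" -out client.csr
echo "extendedKeyUsage = clientAuth" > client-ext.cnf
openssl x509 -req -days 365 -sha256 -in client.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -extfile client-ext.cnf -out cert.pem

# Both certificates should verify against the CA
openssl verify -CAfile ca.pem server-cert.pem cert.pem
```

ca.pem, server-cert.pem, and server-key.pem go to /etc/docker/ on the server; ca.pem, cert.pem, and key.pem stay with the client.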

Add the certificate options and -H=tcp://0.0.0.0:2376 to ExecStart in /etc/systemd/system/multi-user.target.wants/docker.service.

ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=tcp://0.0.0.0:2376 -H=fd:// --containerd=/run/containerd/containerd.sock

Run sudo systemctl daemon-reload and then sudo systemctl restart docker.service.
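
Note that files under /etc/systemd/system/multi-user.target.wants/ are symlinks into the packaged unit, so direct edits can be lost on a package upgrade; a drop-in override survives. A sketch with the same flags (create it with sudo systemctl edit docker.service, or by hand):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
# The empty ExecStart= clears the packaged command line before replacing it.
ExecStart=
ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=tcp://0.0.0.0:2376 -H=fd:// --containerd=/run/containerd/containerd.sock
```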

On the local PC, run export DOCKER_TLS_VERIFY=1 DOCKER_HOST=tcp://[server IP from above]:2376.
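
The exports can be sketched like this (the IP is a placeholder; DOCKER_CERT_PATH must point at a directory holding the client-side ca.pem, cert.pem, and key.pem generated earlier):

```shell
# Point the local docker CLI at the remote, TLS-protected daemon.
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.0.2.10:2376        # placeholder: your server's IP
export DOCKER_CERT_PATH="$HOME/.docker/oreore"  # ca.pem, cert.pem, key.pem live here
```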

Check that you are connected with docker container ls or docker info.

murata:~ $ docker container ls
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
0d08ddfa494d        4fcdc789ef1a           "start_runit"            10 seconds ago      Up 9 seconds                            k8s_calico-node_calico-node-x7nrf_kube-system_fbf69ba9-e31e-4258-83c3-1918aca0229c_60
d2e5b79aa6c5        eb516548c180           "/coredns -conf /etc…"   5 minutes ago       Up 5 minutes                            k8s_coredns_coredns-5c98db65d4-w2lsq_kube-system_f93be9ea-4151-4a21-aa6e-bc876a1ad4b0_2
086373c963d4        k8s.gcr.io/pause:3.1   "/pause"                 5 minutes ago       Up 5 minutes                            k8s_POD_coredns-5c98db65d4-w2lsq_kube-system_f93be9ea-4151-4a21-aa6e-bc876a1ad4b0_8
6c8ad0c23551        nginx                  "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes                            k8s_nginx_nginx-5ff9d6cc77-4zc9t_default_20a48970-c693-40f3-a11c-9771f6cb3383_2
41a8610f7462        nginx                  "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes                            k8s_nginx_nginx-5ff9d6cc77-fnsk5_default_5499cc62-bc8b-41b4-a862-47d9c777eb1c_2
59ad16a25a50        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_nginx-5ff9d6cc77-4zc9t_default_20a48970-c693-40f3-a11c-9771f6cb3383_2
e7024d8c28cf        214ddcd2a33e           "/usr/bin/kube-contr…"   6 minutes ago       Up 6 minutes                            k8s_calico-kube-controllers_calico-kube-controllers-7bd78b474d-7zqt7_kube-system_5be65856-7bee-493d-9e5b-7232f1b402dc_2
e6a0e2a9d930        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_calico-kube-controllers-7bd78b474d-7zqt7_kube-system_5be65856-7bee-493d-9e5b-7232f1b402dc_10
33c307d9b32b        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_nginx-5ff9d6cc77-fnsk5_default_5499cc62-bc8b-41b4-a862-47d9c777eb1c_2
0b4f92e99fd6        eb516548c180           "/coredns -conf /etc…"   6 minutes ago       Up 6 minutes                            k8s_coredns_coredns-5c98db65d4-clcsv_kube-system_7a548701-e6f2-4dc3-8f62-e6c1d776cedb_2
6fe27b5b0c2a        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_coredns-5c98db65d4-clcsv_kube-system_7a548701-e6f2-4dc3-8f62-e6c1d776cedb_8
2062006035f3        34a53be6c9a7           "kube-apiserver --ad…"   6 minutes ago       Up 6 minutes                            k8s_kube-apiserver_kube-apiserver-derori.cloud.oreore.local_kube-system_c05b675d6985ed143dd49b1a689ad67c_3
e96b0325f13f        2c4adeb21b4f           "etcd --advertise-cl…"   6 minutes ago       Up 6 minutes                            k8s_etcd_etcd-derori.cloud.oreore.local_kube-system_0487325602f4b35b7a2cbd7fe4b865e7_2
c22ea2474db9        167bbf6c9338           "/usr/local/bin/kube…"   6 minutes ago       Up 6 minutes                            k8s_kube-proxy_kube-proxy-kw622_kube-system_e5ccb43d-d774-401e-87d9-15849e2db860_3
c770b18181bd        88fa9cb27bd2           "kube-scheduler --bi…"   6 minutes ago       Up 6 minutes                            k8s_kube-scheduler_kube-scheduler-derori.cloud.oreore.local_kube-system_abfcb4f52e957b11256c1f6841d49700_2
955de466e5d2        9f5df470155d           "kube-controller-man…"   6 minutes ago       Up 6 minutes                            k8s_kube-controller-manager_kube-controller-manager-derori.cloud.oreore.local_kube-system_6b98d73b8fe8352c3d11c50e86a2556e_3
5ad55d36f759        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_calico-node-x7nrf_kube-system_fbf69ba9-e31e-4258-83c3-1918aca0229c_2
a9da827e59ab        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_etcd-derori.cloud.oreore.local_kube-system_0487325602f4b35b7a2cbd7fe4b865e7_2
f85d3da0d571        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_kube-controller-manager-derori.cloud.oreore.local_kube-system_6b98d73b8fe8352c3d11c50e86a2556e_3
b8e74e4b19b7        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_kube-scheduler-derori.cloud.oreore.local_kube-system_abfcb4f52e957b11256c1f6841d49700_2
588f58978a3e        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_kube-proxy-kw622_kube-system_e5ccb43d-d774-401e-87d9-15849e2db860_3
2bfc7879f0dd        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_kube-apiserver-derori.cloud.oreore.local_kube-system_c05b675d6985ed143dd49b1a689ad67c_3
murata:~ $ docker info
Containers: 46
 Running: 23
 Paused: 0
 Stopped: 23
Images: 13
Server Version: 19.03.1
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 5.2.7-200.fc30.x86_64
Operating System: Fedora 30 (Thirty)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.66GiB
Name: derori.cloud.oreore.local
ID: 74UT:MSBM:XOMG:MJJM:PXY2:O4MM:GLAD:6HV4:C3NM:OLWA:ULVM:NIAI
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Now that Docker on the server is reachable directly from my local machine, kubectl apply from local should work too!

Throw together a quick Dockerfile, build it, and apply.

murata:~/tmp/ppp $ cat Dockerfile.dev
FROM nginx:1.15
ADD index.html /usr/share/nginx/html/

murata:~/tmp/ppp $ cat index.html
pipirupi-!!!
murata:~/tmp/ppp $ cat k8s.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: pipirupi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pipirupi
  template:
    metadata:
      labels:
        app: pipirupi
    spec:
      containers:
      - args:
        image: pipirupi:v
        imagePullPolicy: IfNotPresent
        name: pipirupi
        ports:
        - containerPort: 80
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: pipirupi
spec:
  # externalTrafficPolicy: Local
  type: NodePort
  ports:
  - name: "http-port"
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: pipirupi

murata:~/tmp/ppp $ docker build -t pipirupi:v . -f Dockerfile.dev
Sending build context to Docker daemon  9.728kB
Step 1/2 : FROM nginx:1.15
 ---> 53f3fd8007f7
Step 2/2 : ADD index.html /usr/share/nginx/html/
 ---> 189cc5fa6d2f
Successfully built 189cc5fa6d2f
Successfully tagged pipirupi:v

murata:~/tmp/ppp $ kubectl apply -f k8s.yaml
deployment.extensions/pipirupi configured
service/pipirupi configured

murata:~/workspace/deroris.github/infrastructure/config/aws (update-countrade-deployer-lambda-policy=) $ kubectl get all  -o wide
NAME                            READY   STATUS      RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
pod/hello-world                 0/1     Completed   0          56m     192.168.219.80   derori.cloud.oreore.local   <none>           <none>
pod/nginx-5ff9d6cc77-4zc9t      1/1     Running     2          4h23m   192.168.219.78   derori.cloud.oreore.local   <none>           <none>
pod/nginx-5ff9d6cc77-fnsk5      1/1     Running     2          4h23m   192.168.219.76   derori.cloud.oreore.local   <none>           <none>
pod/pipirupi-7f987ff544-q7kx7   1/1     Running     0          12s     192.168.219.83   derori.cloud.oreore.local   <none>           <none>


NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        4h35m   <none>
service/nginx-nodeport   NodePort    10.108.248.35    <none>        80:30222/TCP   4h23m   run=nginx
service/pipirupi         NodePort    10.105.153.120   <none>        80:32671/TCP   14m     app=pipirupi


NAME                       READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
deployment.apps/nginx      2/2     2            2           4h23m   nginx        nginx:latest   run=nginx
deployment.apps/pipirupi   1/1     1            1           14m     pipirupi     pipirupi:v     app=pipirupi

NAME                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES            SELECTOR
replicaset.apps/nginx-5ff9d6cc77      2         2         2       4h23m   nginx        nginx:latest      pod-template-hash=5ff9d6cc77,run=nginx
replicaset.apps/pipirupi-564bff47cb   0         0         0       12m     pipirupi     pipirupi:vvv      app=pipirupi,pod-template-hash=564bff47cb
replicaset.apps/pipirupi-6bd5b68657   0         0         0       14m     pipirupi     pipirupi:latest   app=pipirupi,pod-template-hash=6bd5b68657
replicaset.apps/pipirupi-7f987ff544   1         1         1       12s     pipirupi     pipirupi:v        app=pipirupi,pod-template-hash=7f987ff544


murata:~/workspace/deroris.github/infrastructure/config/aws (update-countrade-deployer-lambda-policy=) $ curl http://oreore.k8s.local:32671
pipirupi-!!!

It took a fair bit of trial and error on the Dockerfile, but that's the gist of it.

Summary

Ingress (load balancing) isn't configured yet, so traffic from outside still can't reach the cluster, but this is roughly what building with kubeadm looks like. When adding a server to the cluster, just run the kubeadm join command on it and it connects automatically.
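
If the token printed at kubeadm init time has expired (bootstrap tokens last 24 hours by default), a fresh join command can be generated on the master. A sketch that falls back to a note when kubeadm is not installed:

```shell
# Print a ready-to-paste "kubeadm join ..." line for new worker nodes.
if command -v kubeadm >/dev/null 2>&1; then
  join_cmd=$(kubeadm token create --print-join-command)
else
  join_cmd="kubeadm not available on this machine"
fi
printf '%s\n' "$join_cmd"
```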
