Building a Kubernetes cluster on your local machine with kube-ansible

Posted at 2019-04-17

Introduction

This article is for you if:

  • You want a practice Kubernetes cluster on your local machine (macOS)
  • minikube and the k8s playground exist, but you want multiple nodes because you want to learn how Kubernetes is operated
  • You don't want to write installation steps or an install-automation program from scratch yourself

Let's try kairen/kube-ansible, which is featured in awesome-kubernetes.

Prerequisites

  • Install sshpass with brew, as the README instructs
$ brew install http://git.io/sshpass.rb
  • Install Ansible
$ brew install ansible
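  • Clone the repository; the remaining steps are run from its root
$ git clone https://github.com/kairen/kube-ansible.git
$ cd kube-ansible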

  • Remove the duplicated host_key_checking setting from ansible.cfg (it appears on both of the lines linked below; see Error 1 in the reference section)
https://github.com/kairen/kube-ansible/blob/master/ansible.cfg#L12
https://github.com/kairen/kube-ansible/blob/master/ansible.cfg#L17
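One way to do this (a sketch: it assumes the duplicate is still on line 12, which may drift as the repo changes):

$ sed -i.bak '12d' ansible.cfg   # delete line 12, keeping a .bak backup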

Building the cluster

Running the shell script as the README describes, you are prompted with y/n-style questions.
I answered as follows:

  1. OS: centos7
  2. iface: enp0s8
  3. CNI: flannel (the default is calico)

To change item 3, hack/setup-vms needs a small patch; the added line below propagates ${CNI_PLUGIN} into the generated group_vars:


:

role_config "vip_address:.*" "vip_address: ${SUBNET}.9" ${GROUP_VARS_PATH}
+ role_config "container_network:.*" "container_network: ${CNI_PLUGIN}" ${GROUP_VARS_PATH}

# Create inventory and hosts
set_inventory

Then export the choices and run the script:
$ export OS_IMAGE="centos7"
$ export ETH="enp0s8"
$ export CNI_PLUGIN="flannel"

$ ./hack/setup-vms
Cluster Size: 1 master, 2 worker.
  VM Size: 1 vCPU, 2048 MB
  VM Info: centos7, virtualbox
  CNI binding iface: enp0s8
Start to deploy?(y): y

: (Vagrant brings up the VMs and Ansible starts installing Kubernetes)

TASK [k8s-cni : Apply Kubernetes CNI] ******************************************
Wednesday 17 April 2019  20:38:42 +0900 (0:00:00.916)       0:06:57.559 *******
changed: [172.16.35.10 -> 172.16.35.10]


PLAY RECAP *********************************************************************
172.16.35.10               : ok=158  changed=91   unreachable=0    failed=0
172.16.35.11               : ok=61   changed=36   unreachable=0    failed=0
172.16.35.12               : ok=59   changed=36   unreachable=0    failed=0

The cluster is up.

Trying out Kubernetes

Let's get onto the master node with vagrant ssh and try the kubectl command.
As the listing below shows, the root user's home directory contains a .kube directory, so kubectl should work once we become root.

$ vagrant ssh k8s-m1
[vagrant@k8s-m1 ~]$ sudo su -
[root@k8s-m1 ~]# ls -la
total 36
dr-xr-x---.  4 root root  199 Apr 17 11:44 .
dr-xr-xr-x. 18 root root  239 Apr 17 11:29 ..
-rw-------.  1 root root 2352 Aug 23  2017 anaconda-ks.cfg
drwx------.  3 root root   17 Apr 17 11:31 .ansible
-rw-------.  1 root root    5 Apr 17 11:44 .bash_history
-rw-r--r--.  1 root root   18 Dec 29  2013 .bash_logout
-rw-r--r--.  1 root root  176 Dec 29  2013 .bash_profile
-rw-r--r--.  1 root root  176 Dec 29  2013 .bashrc
-rw-r--r--.  1 root root  100 Dec 29  2013 .cshrc
drwxr-xr-x.  4 root root   51 Apr 17 11:36 .kube
-rw-------.  1 root root 1677 Aug 23  2017 original-ks.cfg
-rw-------.  1 root root 1024 Apr 17 11:32 .rnd
-rw-r--r--.  1 root root  129 Dec 29  2013 .tcshrc

[root@k8s-m1 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9m10s
[root@k8s-m1 ~]#
[root@k8s-m1 ~]# kubectl get all --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-896d9f87d-5g5ls               1/1     Running   0          32m
kube-system   pod/coredns-896d9f87d-bz9zr               1/1     Running   0          35m
kube-system   pod/coredns-autoscaler-58784cd54d-x5dks   1/1     Running   0          35m
kube-system   pod/kube-apiserver-k8s-m1                 1/1     Running   0          34m
kube-system   pod/kube-controller-manager-k8s-m1        1/1     Running   0          35m
kube-system   pod/kube-flannel-ds-9sn9r                 1/1     Running   1          33m
kube-system   pod/kube-flannel-ds-kvxkl                 1/1     Running   1          33m
kube-system   pod/kube-flannel-ds-wrqlv                 1/1     Running   1          33m
kube-system   pod/kube-haproxy-k8s-m1                   1/1     Running   0          34m
kube-system   pod/kube-keepalived-k8s-m1                1/1     Running   0          35m
kube-system   pod/kube-proxy-k9wgd                      1/1     Running   0          33m
kube-system   pod/kube-proxy-qhwrw                      1/1     Running   0          33m
kube-system   pod/kube-proxy-wqtxm                      1/1     Running   0          33m
kube-system   pod/kube-scheduler-k8s-m1                 1/1     Running   0          34m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  36m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   35m

NAMESPACE     NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/kube-flannel-ds   3         3         3       3            3           <none>          33m
kube-system   daemonset.apps/kube-proxy        3         3         3       3            3           <none>          35m

NAMESPACE     NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns              2/2     2            2           35m
kube-system   deployment.apps/coredns-autoscaler   1/1     1            1           35m

NAMESPACE     NAME                                            DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-896d9f87d               2         2         2       35m
kube-system   replicaset.apps/coredns-autoscaler-58784cd54d   1         1         1       35m
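
To confirm that the master and both workers registered and are Ready, you can also run (output omitted):

[root@k8s-m1 ~]# kubectl get nodes -o wide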

Running nginx
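Create a Deployment with three nginx replicas, then a Service whose selector matches the pod label app: nginx: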

# cat <<EOF >deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
# kubectl apply -f deploy.yaml
deployment.apps/nginx created
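
To wait until all three replicas are up, you can use:

# kubectl rollout status deployment/nginx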

# cat <<EOF >svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
EOF
# kubectl apply -f svc.yaml
service/nginx created

# kubectl get svc,po
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   75m
service/nginx        ClusterIP   10.100.167.11   <none>        80/TCP    27m

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7db75b8b78-c8572   1/1     Running   0          27m
pod/nginx-7db75b8b78-pc6l8   1/1     Running   0          27m
pod/nginx-7db75b8b78-xqfc7   1/1     Running   0          27m
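
Because svc.yaml specifies no type, the nginx Service defaults to ClusterIP and is reachable only from inside the cluster, so we test it from a throwaway alpine pod: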


# kubectl run alpine -it --rm --image alpine -- ash

/ # wget 10.100.167.11 && cat index.html
Connecting to 10.100.167.11 (10.100.167.11:80)
index.html           100% |************************************************************************************************|   612  0:00:00 ETA
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

It works.
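
Since CoreDNS is running, the Service can also be reached by name from inside the cluster. A quick extra check from the same alpine shell (BusyBox wget):

/ # wget -qO- http://nginx | head -n 4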

Reference: issues I hit before the build succeeded

・Error 1: Ansible fails because host_key_checking = False is duplicated

Delete the following line and re-run:
https://github.com/kairen/kube-ansible/blob/master/ansible.cfg#L12

Error reading config file (kube-ansible/ansible.cfg): While reading from '<string>' [line 17]: option 'host_key_checking' in section 'defaults' already exists
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
$ ./hack/clear-vms
$ ./hack/setup-vms

・Error 2: eth1 does not exist

Specify the correct interface name before running setup-vms.
Log in with vagrant ssh and check the interface name with a command such as ip a.

TASK [k8s-setup : Copy Keepalived manifest and config files into cluster] ******
Wednesday 17 April 2019  20:16:05 +0900 (0:00:00.725)       0:01:58.900 *******
fatal: [172.16.35.10]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_eth1'"}
	to retry, use: --limit @/kube-ansible/cluster.retry
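
A minimal way to find the right name (a sketch; enp0s8 is what the VirtualBox host-only interface happened to be called in my environment):

$ vagrant ssh k8s-m1
[vagrant@k8s-m1 ~]$ ip a | grep -E '^[0-9]+:'   # list interface names only
[vagrant@k8s-m1 ~]$ exit
$ export ETH="enp0s8"
$ ./hack/clear-vms
$ ./hack/setup-vms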

That's all.
