
Trying out K3s on an Orange Pi Zero and One

Posted at 2019-02-26

Introduction

On my way home I opened my phone and came across this:

https://k3s.io/
(screenshot of the k3s.io landing page)

Lightweight Kubernetes??
So 8 - 5 = 3??
Where did the 5 go??

The questions keep coming, but from the look of it, it's a 40 MB binary that runs even with 512 MB of memory.
The minimum system requirements are listed as follows:

Linux 3.10+
512 MB of ram per server
75 MB of ram per node
200 MB of disk space
x86_64, ARMv7, ARM64

Would it even run on a Raspberry Pi Zero?
I don't have a Raspberry Pi Zero, so I'll try it on the Orange Pi Zero and One that were lying around at home.

Orange Pi Zero and One

The Orange Pi Zero and One are Chinese-made Raspberry Pi clones.
The Zero is ¥1,182 each, ¥1,729 with shipping.

(screenshot of the Orange Pi Zero listing)

The One is ¥1,125 each, ¥1,660 with shipping.
(screenshot of the Orange Pi One listing)

Was shipping always this expensive??

Despite the name, the Zero came out after the One.
The One uses an AllWinner H3 and the Zero an H2, both armhf CPUs.

Abbreviated, they're "OPi", which read aloud in Japanese sounds like "oppai".

Setting up the first board

Don't count on the OS images the manufacturer puts out; use Armbian instead.
https://www.armbian.com/orange-pi-zero/

Download Armbian Bionic, extract it, write it to a microSD card with dd, and boot the board.
Once it's up, find the IP address handed out by DHCP and connect over SSH.
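Concretely, the write goes something like this. (The archive name is my guess based on the Armbian 5.75 / 4.19.20 version shown in the MOTD below, and /dev/sdX is a placeholder; check the real device with lsblk before running dd.)

$ 7z x Armbian_5.75_Orangepizero_Ubuntu_bionic_next_4.19.20.7z
$ sudo dd if=Armbian_5.75_Orangepizero_Ubuntu_bionic_next_4.19.20.img of=/dev/sdX bs=4M status=progress conv=fsync
$ ssh root@192.168.0.6    # the address DHCP handed out this time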

Armbian's default credentials are root/1234. On first login you're required to change the password and create a user, so set whatever you like.

root@192.168.0.6's password: 
You are required to change your password immediately (root enforced)
  ___                               ____  _   _____              
 / _ \ _ __ __ _ _ __   __ _  ___  |  _ \(_) |__  /___ _ __ ___  
| | | | '__/ _` | '_ \ / _` |/ _ \ | |_) | |   / // _ \ '__/ _ \ 
| |_| | | | (_| | | | | (_| |  __/ |  __/| |  / /|  __/ | | (_) |
 \___/|_|  \__,_|_| |_|\__, |\___| |_|   |_| /____\___|_|  \___/ 
                       |___/                                     

Welcome to ARMBIAN 5.75 stable Ubuntu 18.04.1 LTS 4.19.20-sunxi   
System load:   0.16 0.38 0.19  	Up time:       4 min		
Memory usage:  14 % of 493MB  	IP:            192.168.0.6
CPU temp:      24°C           	
Usage of /:    6% of 15G    	

[ General system configuration (beta): armbian-config ]

Last login: Tue Feb 26 16:01:11 2019 from 192.168.0.10
Changing password for root.
(current) UNIX password: 
Enter new UNIX password: 
Retype new UNIX password: 


Thank you for choosing Armbian! Support: www.armbian.com

Creating a new user account. Press <Ctrl-C> to abort

Please provide a username (eg. your forename): orange
Trying to add user orange
Adding user `orange' ...
Adding new group `orange' (1000) ...
Adding new user `orange' (1000) with group `orange' ...
Creating home directory `/home/orange' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for orange
Enter the new value, or press ENTER for the default
	Full Name []: 
	Room Number []: 
	Work Phone []: 
	Home Phone []: 
	Other []: 
Is the information correct? [Y/n] y

Dear orange, your account orange has been created and is sudo enabled.
Please use this account for your daily work from now on.

root@orangepizero:~# 
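
Just to see how the Zero measures up against the minimum requirements quoted earlier, a couple of quick checks (my own addition; not captured in the original session):

root@orangepizero:~# uname -m    # armv7l, so the ARMv7 build of k3s should work
armv7l
root@orangepizero:~# free -m     # the MOTD already showed 493MB, just under the 512MB "per server" figure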

I wrote the following into /etc/network/interfaces and rebooted to give it a fixed IP address.

auto eth0
iface eth0 inet static
address 192.168.0.200
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.1 8.8.8.8 8.8.4.4

Run the following and reboot to disable wifi (as a precaution around Japan's giteki radio certification).

root@orangepizero:~# echo "blacklist xradio_wlan" > /etc/modprobe.d/disable_xradio_wlan.conf

After the reboot, wlan0 was gone and wifi was disabled.

root@orangepizero:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2a:1a:ad:41:40:e1 brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 02:42:41:da:5c:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.200/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 240f:7b:96b1:1:ae10:7c3e:e75e:ca22/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 3069525572sec preferred_lft 3069525572sec
    inet6 240f:7b:96b1:1:42:41ff:feda:5cab/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 3069525572sec preferred_lft 3069525572sec
    inet6 fe80::42:41ff:feda:5cab/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
root@orangepizero:~# 
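
(For good measure, the driver module should no longer be loaded either. An assumed check, not captured in the original run:)

root@orangepizero:~# lsmod | grep xradio
root@orangepizero:~# 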

After logging in, I grabbed the armhf binary from the release page and placed it somewhere on the PATH.
https://github.com/rancher/k3s/releases/tag/v0.1.0

root@orangepizero:~# wget https://github.com/rancher/k3s/releases/download/v0.1.0/k3s-armhf
root@orangepizero:~# mv k3s-armhf k3s
root@orangepizero:~# chmod +x k3s
root@orangepizero:~# mv k3s /usr/bin/
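
Before starting anything, a quick check that the armhf binary actually runs on this board (assumed; not captured in the original session):

root@orangepizero:~# k3s --version    # should report v0.1.0 (91251aa) if the binary runs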

Let's start it up.

root@orangepizero:~# k3s server &
[1] 1425
root@orangepizero:~# INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/3c5abfba6bd2f6546427aabec2178ac160193eb9855631070e569313459a78de 
INFO[2019-02-26T16:37:20.812903004Z] Starting k3s v0.1.0 (91251aa)                
INFO[2019-02-26T16:38:29.381409224Z] Running kube-apiserver --watch-cache=false --cert-dir /var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key 
INFO[2019-02-26T16:38:57.161671965Z] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false 
INFO[2019-02-26T16:38:57.166018649Z] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false 
INFO[2019-02-26T16:38:58.189814776Z] Creating CRD listenerconfigs.k3s.cattle.io   
INFO[2019-02-26T16:38:58.670528076Z] Creating CRD addons.k3s.cattle.io            
INFO[2019-02-26T16:38:58.690461132Z] Creating CRD helmcharts.k3s.cattle.io        
INFO[2019-02-26T16:38:58.767613713Z] Waiting for CRD listenerconfigs.k3s.cattle.io to become available 
INFO[2019-02-26T16:38:59.277871760Z] Done waiting for CRD listenerconfigs.k3s.cattle.io to become available 
INFO[2019-02-26T16:38:59.278122837Z] Waiting for CRD addons.k3s.cattle.io to become available 
INFO[2019-02-26T16:38:59.788171137Z] Done waiting for CRD addons.k3s.cattle.io to become available 
INFO[2019-02-26T16:38:59.788403715Z] Waiting for CRD helmcharts.k3s.cattle.io to become available 
INFO[2019-02-26T16:39:00.320439465Z] Done waiting for CRD helmcharts.k3s.cattle.io to become available 
INFO[2019-02-26T16:39:00.352344210Z] Listening on :6443                           
INFO[2019-02-26T16:39:21.286355342Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml 
INFO[2019-02-26T16:39:21.287514071Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml 
INFO[2019-02-26T16:39:22.481354791Z] Node token is available at /var/lib/rancher/k3s/server/node-token 
INFO[2019-02-26T16:39:22.481567996Z] To join node to cluster: k3s agent -s https://192.168.0.200:6443 -t ${NODE_TOKEN} 
INFO[2019-02-26T16:39:23.762142996Z] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml   
INFO[2019-02-26T16:39:23.762382367Z] Run: k3s kubectl                             
INFO[2019-02-26T16:39:23.762466490Z] k3s is up and running                        
INFO[2019-02-26T16:39:24.892657667Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-02-26T16:39:24.893201241Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-02-26T16:39:24.905723819Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
INFO[2019-02-26T16:39:25.908300809Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
INFO[2019-02-26T16:39:26.910658941Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
INFO[2019-02-26T16:39:27.964915355Z] Connecting to wss://localhost:6443/v1-k3s/connect 
INFO[2019-02-26T16:39:27.965260558Z] Connecting to proxy                           url="wss://localhost:6443/v1-k3s/connect"
INFO[2019-02-26T16:39:28.090165483Z] Handling backend connection request [orangepizero] 
INFO[2019-02-26T16:39:28.098986923Z] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/3c5abfba6bd2f6546427aabec2178ac160193eb9855631070e569313459a78de/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override orangepizero 
Flag --allow-privileged has been deprecated, will be removed in a future version
INFO[2019-02-26T16:39:28.429055198Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:30.599009814Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:32.608091116Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:34.616953307Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:36.626157269Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:38.636058016Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:40.644678220Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:42.653383295Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:44.661858892Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:46.670226027Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:48.678891260Z] waiting for node orangepizero: nodes "orangepizero" not found 
INFO[2019-02-26T16:39:51.050342254Z] waiting for node orangepizero CIDR not assigned yet 

It prints a whole lot of "not found" at the end; I wonder if this is what the docs mean here :thinking:

By default the server will register itself as a node (run the agent).

I don't really get it, but the node shows up in get node, so let's move on to building the second board.

root@orangepizero:~# k3s kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
orangepizero   Ready    <none>   5m19s   v1.13.3-k3s.6
root@orangepizero:~# 
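
By the way, I just launched k3s server in the background with &, so it won't come back after a reboot. If you want it to persist, a minimal systemd unit along these lines should do the job (my own sketch, assuming the binary stays at /usr/bin/k3s; I haven't checked what the official install script sets up):

# /etc/systemd/system/k3s.service
[Unit]
Description=Lightweight Kubernetes (k3s server)
After=network-online.target

[Service]
ExecStart=/usr/bin/k3s server
Restart=always

[Install]
WantedBy=multi-user.target

root@orangepizero:~# systemctl daemon-reload
root@orangepizero:~# systemctl enable --now k3s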

Setting up the second board

I booted the second board, an Orange Pi One, gave it a fixed IP address, and rebooted.

auto eth0
iface eth0 inet static
address 192.168.0.201
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.1 8.8.8.8 8.8.4.4

After the reboot, I set up the k3s binary with the same steps.
With that in place, let's join it to the cluster as a node.
First, check the token on the Orange Pi Zero that was built earlier.

INFO[2019-02-26T16:39:22.481354791Z] Node token is available at /var/lib/rancher/k3s/server/node-token 
INFO[2019-02-26T16:39:22.481567996Z] To join node to cluster: k3s agent -s https://192.168.0.200:6443 -t ${NODE_TOKEN} 

cat the file mentioned in the startup log above.

root@orangepizero:~# cat /var/lib/rancher/k3s/server/node-token
K101a28c1caa8e4abf5bbb337ff44a32611184246ecdff9b99fd6319b4cd94c3a60::node:d89fe3da46a0fb65637ad70ddf494485
root@orangepizero:~# 
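
(Instead of copy-pasting that long string, you could also pull it straight into the NODE_TOKEN variable from the startup log message. My own shortcut, not part of the original run:)

root@orangepione:~# NODE_TOKEN=$(ssh root@192.168.0.200 cat /var/lib/rancher/k3s/server/node-token)
root@orangepione:~# k3s agent -s https://192.168.0.200:6443 -t ${NODE_TOKEN}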

Set the token you just confirmed and run the command.

root@orangepione:~# k3s agent -s https://192.168.0.200:6443 --token K101a28c1caa8e4abf5bbb337ff44a32611184246ecdff9b99fd6319b4cd94c3a60::node:d89fe3da46a0fb65637ad70ddf494485
INFO[2019-02-26T17:53:39.647693237Z] Starting k3s agent v0.1.0 (91251aa)          
INFO[2019-02-26T17:53:49.870332651Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-02-26T17:53:49.871192382Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-02-26T17:53:49.878874299Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
INFO[2019-02-26T17:53:50.921723551Z] Connecting to wss://192.168.0.200:6443/v1-k3s/connect 
INFO[2019-02-26T17:53:50.921997504Z] Connecting to proxy                           url="wss://192.168.0.200:6443/v1-k3s/connect"
INFO[2019-02-26T17:53:51.079073898Z] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/3c5abfba6bd2f6546427aabec2178ac160193eb9855631070e569313459a78de/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override orangepione 
Flag --allow-privileged has been deprecated, will be removed in a future version
W0226 17:53:51.080634    1491 server.go:194] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
INFO[2019-02-26T17:53:51.403349479Z] waiting for node orangepione: nodes "orangepione" not found 
W0226 17:53:51.417252    1491 node.go:103] Failed to retrieve node info: nodes "orangepione" not found
I0226 17:53:51.417397    1491 server_others.go:148] Using iptables Proxier.
W0226 17:53:51.418042    1491 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I0226 17:53:51.418446    1491 server_others.go:178] Tearing down inactive rules.
E0226 17:53:51.440506    1491 proxier.go:232] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.447323    1491 proxier.go:238] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.461188    1491 proxier.go:246] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.467885    1491 proxier.go:252] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.481586    1491 proxier.go:259] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-NON-LOCAL'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.516147    1491 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-SERVICES'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.522061    1491 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-SERVICES'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.528286    1491 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-POSTROUTING'

Try `iptables -h' or 'iptables --help' for more information.
E0226 17:53:51.534123    1491 proxier.go:563] Error removing iptables rules in ipvs proxier: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-FORWARD'

Try `iptables -h' or 'iptables --help' for more information.
I0226 17:53:51.994728    1491 server.go:464] Version: v1.13.3-k3s.6
I0226 17:53:52.029222    1491 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0226 17:53:52.029515    1491 conntrack.go:52] Setting nf_conntrack_max to 131072
I0226 17:53:52.074096    1491 conntrack.go:83] Setting conntrack hashsize to 32768
I0226 17:53:52.092194    1491 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0226 17:53:52.092632    1491 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0226 17:53:52.093229    1491 config.go:202] Starting service config controller
I0226 17:53:52.093330    1491 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0226 17:53:52.093299    1491 config.go:102] Starting endpoints config controller
I0226 17:53:52.093424    1491 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0226 17:53:52.193707    1491 controller_utils.go:1034] Caches are synced for endpoints config controller
I0226 17:53:52.193707    1491 controller_utils.go:1034] Caches are synced for service config controller
E0226 17:53:52.378078    1491 proxier.go:1335] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.2: Couldn't find target `KUBE-MARK-DROP'

Error occurred at line: 35
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
E0226 17:53:52.472098    1491 proxier.go:1335] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.2: Couldn't find target `KUBE-MARK-DROP'

Error occurred at line: 40
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
INFO[2019-02-26T17:53:53.424454053Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:53:55.454505853Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:53:57.465664429Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:53:59.481996304Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:01.496805825Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:03.513422045Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:05.541009686Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:07.563074216Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:09.574039204Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:11.584293097Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:13.602085223Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:15.622828508Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:17.640816898Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:19.655340240Z] waiting for node orangepione: nodes "orangepione" not found 
INFO[2019-02-26T17:54:21.679815924Z] waiting for node orangepione: nodes "orangepione" not found 
E0226 17:54:22.194500    1491 proxier.go:1335] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.2: Couldn't find target `KUBE-MARK-DROP'

Error occurred at line: 45
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
I0226 17:54:22.472033    1491 server.go:393] Version: v1.13.3-k3s.6
E0226 17:54:22.502851    1491 machine.go:194] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
I0226 17:54:22.506422    1491 server.go:630] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0226 17:54:22.508430    1491 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: []
I0226 17:54:22.508621    1491 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/rancher/k3s/agent/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0226 17:54:22.509511    1491 container_manager_linux.go:271] Creating device plugin manager: true
I0226 17:54:22.510203    1491 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0226 17:54:22.529465    1491 kubelet.go:297] Watching apiserver
I0226 17:54:22.566114    1491 kuberuntime_manager.go:192] Container runtime containerd initialized, version: 1.2.3+unknown, apiVersion: v1alpha2
W0226 17:54:22.568087    1491 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0226 17:54:22.570866    1491 server.go:946] Started kubelet
I0226 17:54:22.592214    1491 server.go:133] Starting to listen on 127.0.0.1:10250
I0226 17:54:22.599556    1491 server.go:318] Adding debug handlers to kubelet server.
I0226 17:54:22.610638    1491 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0226 17:54:22.610934    1491 status_manager.go:152] Starting to sync pod status with apiserver
I0226 17:54:22.611086    1491 kubelet.go:1735] Starting kubelet main sync loop.
I0226 17:54:22.623721    1491 kubelet.go:1752] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
E0226 17:54:22.625086    1491 cri_stats_provider.go:320] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0226 17:54:22.625553    1491 kubelet.go:1229] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0226 17:54:22.640027    1491 volume_manager.go:248] Starting Kubelet Volume Manager
I0226 17:54:22.656972    1491 desired_state_of_world_populator.go:130] Desired state populator starts to run
W0226 17:54:22.681288    1491 nvidia.go:66] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
W0226 17:54:22.715039    1491 container.go:409] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
W0226 17:54:22.718003    1491 container.go:409] Failed to create summary reader for "/system.slice/systemd-timesyncd.service": none of the resources are being tracked.
W0226 17:54:22.725987    1491 container.go:409] Failed to create summary reader for "/system.slice/system-serial\\x2dgetty.slice": none of the resources are being tracked.
W0226 17:54:22.727418    1491 container.go:409] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
I0226 17:54:22.731261    1491 kubelet.go:1752] skipping pod synchronization - [container runtime status check may not have completed yet]
W0226 17:54:22.737083    1491 container.go:409] Failed to create summary reader for "/system.slice/armbian-hardware-optimize.service": none of the resources are being tracked.
W0226 17:54:22.741911    1491 container.go:409] Failed to create summary reader for "/system.slice/armbian-hardware-monitor.service": none of the resources are being tracked.
W0226 17:54:22.743108    1491 container.go:409] Failed to create summary reader for "/system.slice/cron.service": none of the resources are being tracked.
W0226 17:54:22.744529    1491 container.go:409] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
W0226 17:54:22.762272    1491 container.go:409] Failed to create summary reader for "/system.slice/sys-kernel-debug.mount": none of the resources are being tracked.
W0226 17:54:22.763647    1491 container.go:409] Failed to create summary reader for "/system.slice/tmp.mount": none of the resources are being tracked.
W0226 17:54:22.764600    1491 container.go:409] Failed to create summary reader for "/system.slice/ssh.service": none of the resources are being tracked.
W0226 17:54:22.780358    1491 container.go:409] Failed to create summary reader for "/system.slice/sys-kernel-config.mount": none of the resources are being tracked.
W0226 17:54:22.781604    1491 container.go:409] Failed to create summary reader for "/system.slice/dev-mqueue.mount": none of the resources are being tracked.
W0226 17:54:22.782784    1491 container.go:409] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
I0226 17:54:22.782978    1491 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
W0226 17:54:22.784018    1491 container.go:409] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
W0226 17:54:22.785377    1491 container.go:409] Failed to create summary reader for "/system.slice/networkd-dispatcher.service": none of the resources are being tracked.
W0226 17:54:22.788155    1491 container.go:409] Failed to create summary reader for "/system.slice/haveged.service": none of the resources are being tracked.
W0226 17:54:22.789877    1491 container.go:409] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
W0226 17:54:22.791250    1491 container.go:409] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
W0226 17:54:22.792657    1491 container.go:409] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
E0226 17:54:22.793580    1491 kubelet.go:2167] node "orangepione" not found
W0226 17:54:22.801210    1491 container.go:409] Failed to create summary reader for "/system.slice/unattended-upgrades.service": none of the resources are being tracked.
W0226 17:54:22.805246    1491 container.go:409] Failed to create summary reader for "/system.slice/wpa_supplicant.service": none of the resources are being tracked.
W0226 17:54:22.807018    1491 container.go:409] Failed to create summary reader for "/system.slice/systemd-resolved.service": none of the resources are being tracked.
I0226 17:54:22.820823    1491 kubelet_node_status.go:70] Attempting to register node orangepione
I0226 17:54:22.896790    1491 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
I0226 17:54:22.916934    1491 cpu_manager.go:155] [cpumanager] starting with none policy
I0226 17:54:22.917433    1491 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0226 17:54:22.917684    1491 policy_none.go:42] [cpumanager] none policy: Start
E0226 17:54:22.936523    1491 kubelet.go:2167] node "orangepione" not found
I0226 17:54:22.957617    1491 kubelet_node_status.go:73] Successfully registered node orangepione
I0226 17:54:22.976874    1491 kubelet.go:1752] skipping pod synchronization - [container runtime status check may not have completed yet]
W0226 17:54:23.017426    1491 manager.go:527] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
E0226 17:54:23.043673    1491 kubelet.go:2167] node "orangepione" not found
E0226 17:54:23.062356    1491 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "orangepione" not found
I0226 17:54:23.142667    1491 kuberuntime_manager.go:930] updating runtime config through cri with podcidr 10.42.1.0/24
I0226 17:54:23.145726    1491 kubelet_network.go:69] Setting Pod CIDR:  -> 10.42.1.0/24
E0226 17:54:23.156849    1491 kubelet.go:2167] node "orangepione" not found
I0226 17:54:23.545900    1491 reconciler.go:154] Reconciler: start to sync state
I0226 17:54:23.713101    1491 flannel.go:89] Determining IP address of default interface
I0226 17:54:23.714615    1491 flannel.go:99] Using interface with name eth0 and address 192.168.0.201
I0226 17:54:23.723652    1491 kube.go:127] Waiting 10m0s for node controller to sync
I0226 17:54:23.723724    1491 kube.go:306] Starting kube subnet manager
I0226 17:54:24.724107    1491 kube.go:134] Node controller sync successful
I0226 17:54:24.724441    1491 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I0226 17:54:26.199809    1491 flannel.go:75] Wrote subnet file to /run/flannel/subnet.env
I0226 17:54:26.199971    1491 flannel.go:79] Running backend.
I0226 17:54:26.202613    1491 vxlan_network.go:60] watching for new subnet leases
I0226 17:54:26.226186    1491 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0226 17:54:26.226391    1491 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0226 17:54:26.229717    1491 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0226 17:54:26.230053    1491 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0226 17:54:26.237389    1491 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0226 17:54:26.240834    1491 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0226 17:54:26.248292    1491 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0226 17:54:26.251771    1491 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
I0226 17:54:26.262834    1491 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0226 17:54:26.270881    1491 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0226 17:54:26.274327    1491 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0226 17:54:26.293609    1491 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0226 17:54:26.309028    1491 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
I0226 17:54:26.324484    1491 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully

Errors show up along the way, but the node was added.

root@orangepizero:~# k3s kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
orangepione    Ready    <none>   3m14s   v1.13.3-k3s.6
orangepizero   Ready    <none>   77m     v1.13.3-k3s.6
root@orangepizero:~# k3s kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS      RESTARTS   AGE
kube-system   coredns-7748f7f6df-lw7q4         1/1     Running     0          108m
kube-system   helm-install-traefik-48d22       0/1     Completed   1          108m
kube-system   svclb-traefik-657f4dffcc-69mz8   2/2     Running     0          104m
kube-system   traefik-cd5db8d98-47s4b          1/1     Running     0          104m
root@orangepizero:~# 
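
As a quick smoke test that the two-node cluster can actually schedule something, a throwaway deployment like this should work (my own addition, not part of the original run; the official nginx image ships an arm variant, so it should pull on armhf):

root@orangepizero:~# k3s kubectl create deployment nginx --image=nginx
root@orangepizero:~# k3s kubectl get pods -o wide    # shows which of the two boards the pod landed on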

It ran just by dropping the binary in place, but this was my first time touching K8s, so I don't really know what I'm doing and the logs don't mean much to me yet... :sob:
I'd like to keep learning while running things on this setup.

Sorry for the rough write-up. :bow_tone1: :bow_tone2:
