
Building free5gc on a Kubernetes cluster (Part 1): Building the Kubernetes Cluster

Posted at 2023-01-14


In the previous post ↓ we built the regular(?) Free5GC, but there is also a trendy containerized version, so for study purposes let's build that on a Windows PC. I'll explain it in two parts, Part 1 and Part 2.

For the container version we will use the following one, published by the French telecom operator Orange.

The rough flow for the container version is:

(Part 1)
1. Build Linux VMs with VirtualBox
2. Install Docker, the container runtime, on the Linux VMs
3. Install Kubernetes, the container orchestrator
4. Build the Kubernetes cluster

(Part 2)
5. Install the containerized free5gc using its Helm chart
6. Run one test call using UERANSIM

That's the overall plan. Since we're calling it a Kubernetes cluster, it will be built across multiple servers (well, only two, but still...). The configuration I'll build this time is shown below.

(Diagram: target configuration of the two-VM Kubernetes cluster)

Now let's get started.

Versions used

Oracle VirtualBox 6.1.36
Ubuntu Server 20.04.5 LTS
Kubernetes 1.26

Ubuntu has the well-known gtp5g issue, so we use the older 20.04 (for unrelated reasons the Master Node alone runs 22.04 this time, but of course it also works with 20.04 on both nodes).
The 20.04 ISO image link is here (ubuntu-20.04.5-live-server-amd64.iso).
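As a quick sanity check for the gtp5g issue (my addition, not part of the original steps): the free5gc gtp5g kernel module targets the 5.4 kernel series that ships with 20.04, so it's worth confirming the kernel on the node that will run the UPF. The version below matches the worker shown later in `kubectl get node -o wide`.

q14537@kubeworker1:~$ uname -r
5.4.0-136-generic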

1-1. First, the VirtualBox settings

We will install Ubuntu on VirtualBox, but if you don't give the VM at least 2 CPUs, Kubernetes startup complains with the following error:

[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Set the processor count to 4 on the VirtualBox side.
(Screenshot: VirtualBox processor count setting)
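If you prefer the command line, the same thing can be done with VBoxManage while the VM is powered off (a sketch; the VM name "kubemaster" is an assumption based on the hostnames used in this article):

VBoxManage modifyvm "kubemaster" --cpus 4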

Next, the network settings between VMs. From the VirtualBox menu, select File -> Preferences -> Network and create a NatNetwork. This allows the VMs to talk to each other (via the enp0s3 interface). (If the Kubernetes cluster traffic were routed over the enp0s8 interface instead, this setting might be unnecessary, but I'm not sure, so I set it up anyway...)

(Screenshot: creating the NatNetwork)

For the first network adapter, choose NAT Network and select the NatNetwork created above.

(Screenshot: adapter 1 attached to the NatNetwork)

Enable the second network adapter.
(Screenshot: adapter 2 enabled)
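For reference, the equivalent CLI setup might look like this (a sketch; the 10.0.2.0/24 range and the host-only type for adapter 2 are assumptions inferred from the 10.0.2.x and 192.168.56.x addresses that appear later):

VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable --dhcp on
VBoxManage modifyvm "kubemaster" --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm "kubemaster" --nic2 hostonly --hostonlyadapter2 vboxnet0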

1-2. Installing Ubuntu

Ubuntu just installs normally, nothing special.
After installation, add the configuration for the second network interface (enp0s8).

q14537@kubemaster:~$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp0s3:
      dhcp4: true
    enp0s8:
      addresses:
      - 192.168.56.103/24
      nameservers:
        addresses:
        - 8.8.8.8
        search: []
  version: 2

q14537@kubemaster:~$ sudo netplan apply
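You can confirm the static address took effect with:

q14537@kubemaster:~$ ip -4 addr show enp0s8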

The timezone is UTC out of the box, so change it to JST.

q14537@kubemaster:~$ date
Mon Jan  9 06:08:37 AM UTC 2023
q14537@kubemaster:~$ sudo timedatectl set-timezone Asia/Tokyo
q14537@kubemaster:~$ date
Mon Jan  9 03:08:38 PM JST 2023

2. Next, Docker

root@kubemaster:~# apt -y install docker.io apt-transport-https

root@kubemaster:~# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF


root@kubemaster:~# systemctl restart docker
root@kubemaster:~# systemctl enable docker
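Since kubelet expects the systemd cgroup driver configured above, it's worth confirming that Docker picked it up:

root@kubemaster:~# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd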

3. And now, installing Kubernetes

3-1. Master Node configuration

Kernel settings for Kubernetes:

root@kubemaster:~# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward               = 1
EOF
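One caveat (my addition; this is the standard kubeadm prerequisite, not a step in the original text): the bridge-nf sysctls above only exist while the br_netfilter kernel module is loaded, so load it (along with overlay) persistently before applying them:

root@kubemaster:~# cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
root@kubemaster:~# modprobe overlay
root@kubemaster:~# modprobe br_netfilter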

Confirm that the settings were applied:

root@kubemaster:~# sysctl --system
~~
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
~~

Check iptables (it should be set to iptables-legacy):

root@kubeworker2:~# update-alternatives --config iptables
There are 2 choices for the alternative iptables (providing /usr/sbin/iptables).

  Selection    Path                       Priority   Status
------------------------------------------------------------
* 0            /usr/sbin/iptables-legacy   20        auto mode
  1            /usr/sbin/iptables-legacy   20        manual mode
  2            /usr/sbin/iptables-nft      10        manual mode

Turn off swap:

root@kubeworker2:~# swapoff -a

To make this permanent, comment out the swap line:

root@kubeworker2:~# vi /etc/fstab
root@kubemaster:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-ERepcD1PSIbdMvFKD127W0DfqshuLPd1ei1gJuR5gXOasPNcMI5wIFp6A8ZNz8b6 / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/57647820-8433-44d6-b1bc-a0083a0409f5 /boot ext4 defaults 0 1
#/swap.img      none    swap    sw      0       0
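Instead of editing by hand, a one-liner like this (an alternative, not from the original steps) comments out any active swap entry:

root@kubeworker2:~# sed -i '/\sswap\s/ s/^/#/' /etc/fstab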

3-2. Kubernetes Install

root@kubemaster:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@kubemaster:~#  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
root@kubemaster:~# apt update

root@kubemaster:~# apt -y install kubeadm kubelet kubectl
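Optionally (my addition, not in the original steps), pin the three packages so a routine apt upgrade doesn't bump the cluster version underneath you:

root@kubemaster:~# apt-mark hold kubeadm kubelet kubectl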

4. Starting Kubernetes

4-1. Starting the Master Node

Start it by specifying the Master node's own IP address.
10.244.0.0/16 is given as the Pod network CIDR for inter-node communication (the default range expected by flannel, which we apply below).

root@kubemaster:~# kubeadm init --apiserver-advertise-address=192.168.56.103 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
---
略
---
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.103:6443 --token 47iwoy.1c7hznernno0h1i7 \
        --discovery-token-ca-cert-hash sha256:ef50f533b57c7d3fc203e05ccabe29ffea4eed0f6c55f2aa99398d46d6984533


root@kubemaster:~# mkdir -p $HOME/.kube
root@kubemaster:~#  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kubemaster:~#  chown $(id -u):$(id -g) $HOME/.kube/config

Apply the flannel CNI plugin for inter-node Pod communication:

root@kubemaster:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

For now, only the Master Node is up. If the Status is Ready, we're OK.

root@kubemaster:~# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
kubemaster   Ready    control-plane   3m53s   v1.26.0

Let's also check on the Pods. If everything is Running, we're OK.

root@kubemaster:~# kubectl get pods -A
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-xcqlh                1/1     Running   0          32s
kube-system    coredns-787d4945fb-dwr27             1/1     Running   0          3m31s
kube-system    coredns-787d4945fb-fwmb2             1/1     Running   0          3m31s
kube-system    etcd-kubemaster                      1/1     Running   1          3m43s
kube-system    kube-apiserver-kubemaster            1/1     Running   1          3m42s
kube-system    kube-controller-manager-kubemaster   1/1     Running   1          3m43s
kube-system    kube-proxy-68lpc                     1/1     Running   0          3m32s
kube-system    kube-scheduler-kubemaster            1/1     Running   1          3m47s

4-2. Creating the Worker Node

Stop the Master Node once with kubeadm reset and, for now, just clone the Master Node VM.

(Screenshot: cloning the VM in VirtualBox)

After cloning, start the Worker Node and change its hostname (the prompt still shows kubemaster because the clone inherits the old hostname):

root@kubemaster:~# hostnamectl set-hostname kubeworker1

4-3. Starting the Worker Node

There is nothing special to do on the Worker Node itself: run kubeadm init again on the Master Node, then copy the cluster-join command printed at the end and paste it on the Worker side to join the cluster.

root@kubeworker1:~$ kubeadm join 192.168.56.103:6443 --token kx1btl.hr9rk7hhc3cmv4gx         --discovery-token-ca-cert-hash sha256:4c5656879127a2e41dfa4dc6598edd0ecbe453e55bcb920b37a9005caf53e6ef
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
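If you ever lose the join command, or the token has expired (by default tokens are valid for 24 hours), you can print a fresh one on the Master:

root@kubemaster:~# kubeadm token create --print-join-command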

4-4. Checking the cluster state

Confirm on the Master Node that the Worker has been added: kubectl get node now shows kubeworker1 in the list (any additional worker you run the kubeadm join command on would appear here too).
kubectl get pods -A -o wide also shows which Node each Pod is running on.

root@kubemaster:~# kubectl get node -o wide
NAME          STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kubemaster    Ready    control-plane   3h13m   v1.26.0   10.0.2.4      <none>        Ubuntu 22.04.1 LTS   5.15.0-57-generic   containerd://1.6.14
kubeworker1   Ready    <none>          3h8m    v1.26.0   10.0.2.16     <none>        Ubuntu 20.04.5 LTS   5.4.0-136-generic   containerd://1.6.14

root@kubemaster:~# kubectl get pods -A -o wide
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-dqwpc                1/1     Running   0          13m     10.0.2.15    kubemaster    <none>           <none>
kube-flannel   kube-flannel-ds-skf7p                1/1     Running   0          2m57s   10.0.2.15    kubeworker1   <none>           <none>
kube-system    coredns-787d4945fb-55vwb             1/1     Running   0          15m     10.244.0.7   kubemaster    <none>           <none>
kube-system    coredns-787d4945fb-f8tls             1/1     Running   0          15m     10.244.0.6   kubemaster    <none>           <none>
kube-system    etcd-kubemaster                      1/1     Running   3          15m     10.0.2.15    kubemaster    <none>           <none>
kube-system    kube-apiserver-kubemaster            1/1     Running   3          15m     10.0.2.15    kubemaster    <none>           <none>
kube-system    kube-controller-manager-kubemaster   1/1     Running   3          15m     10.0.2.15    kubemaster    <none>           <none>
kube-system    kube-proxy-7vnnb                     1/1     Running   0          15m     10.0.2.15    kubemaster    <none>           <none>
kube-system    kube-proxy-lcjd9                     1/1     Running   0          2m57s   10.0.2.15    kubeworker1   <none>           <none>
kube-system    kube-scheduler-kubemaster            1/1     Running   3          15m     10.0.2.15    kubemaster    <none>           <none>

4-5. Installing Helm

Install Helm, the tool we will use later to quickly deploy the Free5GC container suite.

It installs in no time.

root@kubemaster:~# curl -O https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11345  100 11345    0     0  76550      0 --:--:-- --:--:-- --:--:-- 77176
root@kubemaster:~#  bash ./get-helm-3
Downloading https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
root@kubemaster:~# helm version
version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.18.9"}
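Looking ahead to Part 2 (my addition; the repository URL is taken from the towards5gs-helm project README, so verify it there before use), registering Orange's chart repository would look something like:

root@kubemaster:~# helm repo add towards5gs 'https://raw.githubusercontent.com/Orange-OpenSource/towards5gs-helm/main/repo/'
root@kubemaster:~# helm repo update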

Continued in Part 2 ↓.
