
Setting up GitLab (Gitaly Cluster) on on-premises Kubernetes

Posted at 2022-06-03

Overview

This is my first post on Qiita. I've been playing around with Kubernetes, so I'm leaving this writeup as a memo.

Configuring containerd

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
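Depending on the kubelet cgroup driver, one more edit is commonly needed here: with the systemd driver (the kubeadm default on recent releases), set SystemdCgroup = true in the generated file. A minimal sketch of the relevant section:

sudo vi /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true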

Restarting containerd

sudo systemctl restart containerd

Installing kubeadm

Preparation

sudo swapoff -a
sudo vi /etc/fstab
(comment out the swap entry)

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
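These bridge sysctls only exist while the br_netfilter kernel module is loaded, so if sysctl --system reports missing keys, load and persist the module first:

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf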

sudo apt-get install -y iptables arptables ebtables

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy

Installing kubeadm

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
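To confirm what got installed and held:

kubeadm version -o short
kubectl version --client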

Initial Kubernetes setup

Initializing the master node

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
(snip)
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.50:6443 --token 6tpj5a.wacqwzxlbql7ez0u \
        --discovery-token-ca-cert-hash sha256:7fa66b661d99192ea7a8f83f3cdd51132fb854813e7649021f7bc732260c4f59
* It's handy to save this join command somewhere.
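If the token expires (the default TTL is 24 hours) or the output is lost, a fresh join command can be generated on the master node:

sudo kubeadm token create --print-join-command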

Initializing the worker nodes

kubeadm join 192.168.100.50:6443 --token 6tpj5a.wacqwzxlbql7ez0u \
        --discovery-token-ca-cert-hash sha256:7fa66b661d99192ea7a8f83f3cdd51132fb854813e7649021f7bc732260c4f59
* The worker nodes now show up on the master node, but their STATUS will not become Ready until a CNI plugin is installed.

Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(snip)
kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
k8s-m1   Ready    control-plane   20h   v1.24.1
k8s-w1   Ready    <none>          20h   v1.24.1
k8s-w2   Ready    <none>          20h   v1.24.1
k8s-w3   Ready    <none>          20h   v1.24.1

Installing MetalLB for load balancing

kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: false -> true
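The same strictARP change can also be applied non-interactively; the MetalLB docs show a sed-based variant like this:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system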

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

vi metallb.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.100.51/32
      - 192.168.100.52/32
      - 192.168.100.53/32

kubectl create -f metallb.yaml
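As a smoke test that MetalLB actually assigns one of the pooled addresses, a throwaway LoadBalancer Service is enough (the deployment name and image here are arbitrary):

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get svc lb-test
(EXTERNAL-IP should be one of 192.168.100.51-53; delete the test resources afterwards)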

Preparing distributed storage with Rook/Ceph

vi disk.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-w1-sdb
  labels:
    osd: "true"
spec:
  volumeMode: Block
  capacity:
    storage: 10Gi
  local:
    path: /dev/sdb
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-w1
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-w2-sdb
  labels:
    osd: "true"
spec:
  volumeMode: Block
  capacity:
    storage: 10Gi
  local:
    path: /dev/sdb
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-w2
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-w3-sdb
  labels:
    osd: "true"
spec:
  volumeMode: Block
  capacity:
    storage: 10Gi
  local:
    path: /dev/sdb
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-w3

kubectl create -f disk.yaml
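The three PVs should now be registered and Available:

kubectl get pv -l osd=true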

git clone --single-branch --branch v1.9.4 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
kubectl create -f csi/cephfs/storageclass.yaml
kubectl create -f filesystem.yaml
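The operator needs a few minutes to bring up the mons, mgr, OSDs, and MDS; watch the namespace until everything is Running:

kubectl get pods -n rook-ceph -w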

kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it rook-ceph-tools-6b8668bcc9-mzbb7 -- ceph status
  cluster:
    id:     5dfe1104-8da5-4c76-aa7d-1d0cef3cc167
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2h)
    mgr: a(active, since 20h), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 20h), 3 in (since 20h)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 65 pgs
    objects: 7.34k objects, 493 MiB
    usage:   4.3 GiB used, 116 GiB / 120 GiB avail
    pgs:     65 active+clean

  io:
    client:   7.8 KiB/s rd, 36 KiB/s wr, 3 op/s rd, 5 op/s wr
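The tools Pod name is generated, so rather than copying it each time, kubectl can also exec through the Deployment:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status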

Set the rook-cephfs StorageClass as the default.

kubectl patch storageclass rook-cephfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get sc
NAME                    PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-cephfs (default)   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   20h
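A quick way to confirm that dynamic provisioning now works without specifying a StorageClass is a throwaway PVC (the claim name is arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc cephfs-test
(STATUS should become Bound; kubectl delete pvc cephfs-test when done)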

Installing Helm

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
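Check the installed client:

helm version --short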

GitLab Cloud Native Helm Chart

helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm pull gitlab/gitlab
tar -zxvf gitlab-6.0.1.tgz
vi gitlab/values.yaml

edition: ee -> ce
externalIP: 192.168.100.51 (the first IP in metallb.yaml)
configureCertmanager: true -> false

Also change install: to false for both certmanager and gitlab-runner.

helm package gitlab
kubectl create namespace gitlab
helm install gitlab gitlab-6.0.1.tgz -n gitlab
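The release takes several minutes to converge. The webservice is reached through the bundled NGINX ingress controller Service, which should pick up the externalIP set above (the Service name below assumes chart 6.x defaults with release name gitlab):

kubectl get pods -n gitlab
kubectl get svc -n gitlab gitlab-nginx-ingress-controller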

Gitaly Cluster

vi gitlab/values.yaml
  praefect:
    enabled: true
helm package gitlab
helm upgrade gitlab gitlab-6.0.1.tgz -n gitlab

kubectl get secret gitlab-praefect-dbsecret -n gitlab -o jsonpath="{.data.secret}" | base64 --decode
(DB Pass)
kubectl exec -it $(kubectl get pods -n gitlab -l app=postgresql -o custom-columns=NAME:.metadata.name --no-headers) -n gitlab -- bash
I have no name!@gitlab-postgresql-0:/$ PGPASSWORD=$(cat $POSTGRES_POSTGRES_PASSWORD_FILE) psql -U postgres -d template1
psql (11.9)
Type "help" for help.

template1=# CREATE ROLE praefect WITH LOGIN;
CREATE ROLE
template1=# \password praefect
Enter new password:(DB Pass)
Enter it again:(DB Pass)
template1=# CREATE DATABASE praefect WITH OWNER praefect;
CREATE DATABASE
template1=# \q
I have no name!@gitlab-postgresql-0:/$ exit
exit
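After the upgrade, check that the Praefect and Gitaly Pods came up before pushing any repositories:

kubectl get pods -n gitlab | grep -e praefect -e gitaly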

To actually access it, you need to make gitlab.example.com resolve to 192.168.100.51, for example by editing /etc/hosts. The initial root password can be retrieved as follows:

kubectl get secret gitlab-gitlab-initial-root-password \
  --namespace gitlab \
  -ojsonpath='{.data.password}' | base64 --decode ; echo
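For a lab setup, one line in the client machine's /etc/hosts is enough (gitlab.example.com comes from the chart's default global.hosts.domain):

192.168.100.51 gitlab.example.com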