
Creating a Redis cluster

Posted at 2025-11-08

Environment

  • Rocky Linux release 9.6 (Blue Onyx)
  • VirtualBox 7.0
  • Network: 192.168.11.0/24
  • Bridged networking

Characteristics

  • A cluster needs at least three masters
  • Each master can have multiple replicas (slaves) attached to it
  • When a master goes down, one of its replicas is promoted to master
  • Each master manages its own share of the keyspace (hash slots)
  • If a master and all of its replicas go down, the keys in that master's slot range become unavailable
  • Master/replica pairings are decided automatically when the cluster is created
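Each key is assigned to one of 16384 hash slots by taking CRC16 of the key modulo 16384, and each master owns a contiguous range of those slots. A minimal sketch of the mapping (the function names here are mine, not Redis APIs):

```python
# Sketch of how Redis Cluster maps a key to one of the 16384 hash slots:
# CRC16 (XMODEM variant: poly 0x1021, init 0) of the key, modulo 16384.

def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash tags: if the key contains a non-empty {...} section, only that
    # part is hashed, so related keys can be forced into the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("foo"))                                 # same value CLUSTER KEYSLOT foo reports
print(key_slot("{user1}.a") == key_slot("{user1}.b"))  # → True
```

This is also why multi-key operations in a cluster only work when all keys land in the same slot; hash tags like {user1} are the usual way to arrange that.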

Topology

Node  Server     FQDN                    IP address     Client port  Cluster bus port
1     chihuahua  chihuahua.example.home  192.168.11.10  7000         17000
2     chihuahua  chihuahua.example.home  192.168.11.10  7001         17001
3     poodle     poodle.example.home     192.168.11.11  7000         17000
4     poodle     poodle.example.home     192.168.11.11  7001         17001
5     pug        pug.example.home       192.168.11.12  7000         17000
6     pug        pug.example.home       192.168.11.12  7001         17001

Setup

Installing Redis

// Check the available package
# dnf list redis
	Last metadata expiration check: ...
	Available Packages
	redis.x86_64             6.2.20-1.el9_6               appstream

// Install
# dnf install redis

Creating the directories

# cd
# mkdir redis-cluster
# cd redis-cluster
# mkdir 700{0,1}

Creating the config files

// Create redis.conf
# vi 7000/redis.conf
bind 192.168.11.10 127.0.0.1   ← replace with each server's own IP address
port 7000
cluster-enabled yes
cluster-config-file /root/redis-cluster/7000/nodes.conf
cluster-node-timeout 5000
appendonly yes
protected-mode no
(ZZ)

// Copy
# cp -p ./7000/redis.conf ./7001/

// Replace the port number
# sed -i s/7000/7001/ 7001/redis.conf
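As an alternative to cp + sed, both config files can be generated from a single template; a minimal sketch, assuming it is run from /root/redis-cluster (BIND_IP must be adjusted on each server):

```python
# Generate redis.conf for each node port from a single template.
# Assumption: run from /root/redis-cluster; BIND_IP is this server's address.
from pathlib import Path

BIND_IP = "192.168.11.10"

TEMPLATE = """\
bind {ip} 127.0.0.1
port {port}
cluster-enabled yes
cluster-config-file /root/redis-cluster/{port}/nodes.conf
cluster-node-timeout 5000
appendonly yes
protected-mode no
"""

for port in (7000, 7001):
    node_dir = Path(str(port))
    node_dir.mkdir(exist_ok=True)
    (node_dir / "redis.conf").write_text(TEMPLATE.format(ip=BIND_IP, port=port))
```

This scales more safely than sed if more node ports are added per server later.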

Creating the systemd unit files

// Move to the systemd directory
# cd /etc/systemd/system/

// Create the unit file
# vi redis-7000.service
[Unit]
Description = redis-cluster:7000 service control script.
After = network.target

[Service]
ExecStart = /usr/bin/redis-server /root/redis-cluster/7000/redis.conf
ExecStop = /bin/kill -INT ${MAINPID}
Type = simple

[Install]
WantedBy = multi-user.target
(ZZ)

// Copy
# cp -p redis-7000.service redis-7001.service

// Replace the port number
# sed -i s/7000/7001/ redis-7001.service

// Confirm the services are registered
# systemctl list-unit-files --type=service | grep redis
		redis-7000.service                         disabled        disabled
		redis-7001.service                         disabled        disabled
		redis-sentinel.service                     disabled        disabled
		redis.service                              disabled        disabled
	
// Reload systemd
# systemctl daemon-reload

Creating batch start/stop scripts

/root/redis-cluster/start.sh
#!/usr/bin/bash
systemctl start redis-7000.service
systemctl start redis-7001.service
sleep 5
ps aux | grep "redis" | grep -v "grep"

/root/redis-cluster/stop.sh
#!/usr/bin/bash
systemctl stop redis-7000.service
systemctl stop redis-7001.service
sleep 5
find /root/redis-cluster/ -name "nodes.conf" -print | xargs -n 1 rm
ps aux | grep 'redis' | grep -v 'grep'

Opening the firewall ports

# firewall-cmd --add-port=7000/tcp --zone=public --permanent
# firewall-cmd --add-port=17000/tcp --zone=public --permanent
# firewall-cmd --add-port=7001/tcp --zone=public --permanent
# firewall-cmd --add-port=17001/tcp --zone=public --permanent
# firewall-cmd --reload
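The 17000/17001 entries are the cluster bus ports: Redis Cluster derives the bus port by adding a fixed offset of 10000 to the client port, and both ports must be reachable between all nodes. A small sketch of the per-server port list (firewall_ports is a hypothetical helper, not a Redis or firewalld API):

```python
# Each node needs its client port open plus the cluster bus port,
# which Redis Cluster places at client port + 10000 by default.
def firewall_ports(client_ports):
    return sorted(p + off for p in client_ports for off in (0, 10000))

print(firewall_ports([7000, 7001]))  # → [7000, 7001, 17000, 17001]
```

If the bus ports are left closed, the nodes can serve clients but never join each other, and cluster create hangs at "Waiting for the cluster to join".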

Creating the cluster creation script

/root/redis-cluster/reset_cluster.sh
#!/usr/bin/bash

systemctl start redis-7000.service
systemctl start redis-7001.service

/usr/bin/redis-cli --cluster create \
192.168.11.10:7000 \
192.168.11.10:7001 \
192.168.11.11:7000 \
192.168.11.11:7001 \
192.168.11.12:7000 \
192.168.11.12:7001 \
--cluster-replicas 1

sleep 5

ps aux | grep "redis" | grep -v "grep"

Starting the cluster

  1. Boot the three servers
  2. Start redis on each server (start.sh)
  3. Create the cluster (reset_cluster.sh)
# ./reset_cluster.sh
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.11.10:7001 to 192.168.11.10:7000
Adding replica 192.168.11.11:7001 to 192.168.11.11:7000
Adding replica 192.168.11.12:7001 to 192.168.11.12:7000
M: d756103ee666754a9ec1ccf69f4711b78f4c59e3 192.168.11.10:7000
   slots:[0-5460] (5461 slots) master
S: 005845a33a85e0d35c0a5a752e9709cf6f1c65c1 192.168.11.10:7001
   replicates 9cf05fb9fe14e370e17682ea3dae7372df8bb1a6
M: a83167247854e31eac419c6e7fca5ecb3120ae06 192.168.11.11:7000
   slots:[5461-10922] (5462 slots) master
S: a28ae9f61570ad8f79de60358db6d0f2303a5847 192.168.11.11:7001
   replicates d756103ee666754a9ec1ccf69f4711b78f4c59e3
M: 9cf05fb9fe14e370e17682ea3dae7372df8bb1a6 192.168.11.12:7000
   slots:[10923-16383] (5461 slots) master
S: f53dd314a72a8ab4281cfc13606c5e85cd7fc77e 192.168.11.12:7001
   replicates a83167247854e31eac419c6e7fca5ecb3120ae06
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join

>>> Performing Cluster Check (using node 192.168.11.10:7000)
M: d756103ee666754a9ec1ccf69f4711b78f4c59e3 192.168.11.10:7000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: a28ae9f61570ad8f79de60358db6d0f2303a5847 192.168.11.11:7001
   slots: (0 slots) slave
   replicates d756103ee666754a9ec1ccf69f4711b78f4c59e3
M: 9cf05fb9fe14e370e17682ea3dae7372df8bb1a6 192.168.11.12:7000
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: f53dd314a72a8ab4281cfc13606c5e85cd7fc77e 192.168.11.12:7001
   slots: (0 slots) slave
   replicates a83167247854e31eac419c6e7fca5ecb3120ae06
S: 005845a33a85e0d35c0a5a752e9709cf6f1c65c1 192.168.11.10:7001
   slots: (0 slots) slave
   replicates 9cf05fb9fe14e370e17682ea3dae7372df8bb1a6
M: a83167247854e31eac419c6e7fca5ecb3120ae06 192.168.11.11:7000
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
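redis-cli splits the 16384 slots as evenly as possible across the masters, which is where the 5461/5462/5461 allocation above comes from. A sketch of the arithmetic (not redis-cli's exact code, but it reproduces the same ranges):

```python
# Even split of the 16384 hash slots across N masters.
def slot_ranges(n_masters: int):
    per = 16384 / n_masters
    return [(round(i * per), round((i + 1) * per) - 1) for i in range(n_masters)]

print(slot_ranges(3))  # → [(0, 5460), (5461, 10922), (10923, 16383)]
```

Since 16384 is not divisible by 3, one master (here the middle one) ends up with one extra slot.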

If "[ERR] Node 192.168.11.11:7000 is not empty." appears

Delete the nodes.conf in the affected node's directory and restart that node
(stop.sh → start.sh)
