Building a MySQL 8.0.11 InnoDB Cluster in a Vagrant environment

tl;dr

The MySQL developers' blog covered the release, so I built a MySQL 8.0.11 InnoDB Cluster on Vagrant VMs and checked how it behaves.
The tedious 5.7-era step of logging in to each slave individually to edit its my.cnf is no longer needed, which makes the build much smoother.

Since this article focuses on the work after MySQL is installed and initialized,
the Vagrant environment used for the test is listed at the end of the page.

Environment

Hardware: Intel NUC BOXNUC6i5SYH (RAM 32GB, Samsung 950 PRO)
OS: Windows 10 Pro
Vagrant version: 2.0.4
VirtualBox version: 5.2.10 r122406 (Qt5.6.2)

Guest environment

OS: CentOS 7.4(x86_64)
MySQL version: 8.0.11

Host     IP
master   192.168.33.10
slave1   192.168.33.11
slave2   192.168.33.12
router   192.168.33.13
client   192.168.33.14

Configuration

Console commands shown with a prompt such as "MySQL JS >" or "MySQL 192.168.33.10:33060+ ssl JS >" should be run inside mysqlsh after launching it.
Because the output is included along with each command, this may look like a lot of work, but the number of distinct commands is actually small.

First, run the configuration check command on every node in the InnoDB Cluster group to verify that the current my.cnf settings are suitable.
An environment-specific error appeared right away.
It turned out to be a name-resolution issue: the first line of /etc/hosts mapped the server's own name to 127.0.0.1.

Revisiting the vagrant-hostmanager settings would likely fix this properly,
but for this test I left vagrant-hostmanager alone and deleted the offending line by hand as a stopgap.
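That manual workaround can be sketched as a small helper (assuming the offending entry pairs 127.0.0.1 with the machine's own hostname and that the name contains no regex metacharacters):

```shell
# strip_loopback_hostname FILE NAME
# Delete any line in an /etc/hosts-style FILE that maps NAME to 127.0.0.1.
# A FILE.bak backup is kept. On each cluster member, run:
#   strip_loopback_hostname /etc/hosts "$(hostname)"
strip_loopback_hostname() {
  sed -i.bak "/^127\.0\.0\.1[[:space:]].*$2/d" "$1"
}
```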

MySQL  JS > dba.checkInstanceConfiguration("root@localhost:3306")

ERROR: slave2 resolves to a loopback address.
Because that address will be used by other cluster members to connect to it, it must resolve to an externally reachable address.
This address was determined through the report_host or hostname MySQL system variable.


Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

{
    "status": "ok"
}

After the /etc/hosts fix, the ERROR disappeared from the command output.

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'localhost:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

After confirming that the check passed on all nodes in the group, I began building the InnoDB Cluster itself.
The choices differ from my 5.7 test:
a prompt now asks whether to allow the root user from '%' or to create a separate account.

 MySQL  JS > dba.configureInstance('root@localhost:3306')


1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

This time I chose option 2 and created an srepl account.

I checked the created user's privileges just to be sure: the account can modify the privilege database, is allowed from any IP, and was set up without a password.
For production use, consider creating a password-protected user by hand instead.

Privilege check
GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT, INSERT, UPDATE, DELETE ON `mysql`.* TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `sys`.* TO `srepl`@`%` WITH GRANT OPTION
GRANT ALL PRIVILEGES ON `mysql_innodb_cluster_metadata`.* TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_applier_configuration` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_applier_status_by_coordinator` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_applier_status_by_worker` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_applier_status` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_connection_configuration` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_connection_status` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_group_member_stats` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`replication_group_members` TO `srepl`@`%` WITH GRANT OPTION
GRANT SELECT ON `performance_schema`.`threads` TO `srepl`@`%` WITH GRANT OPTION
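That manual alternative could look like the sketch below. The account name, subnet, and password are placeholders I invented; the grants mirror the minimal list above, with the per-table performance_schema grants collapsed into one schema-wide SELECT for brevity:

```sql
CREATE USER 'cladmin'@'192.168.33.%' IDENTIFIED BY 'choose-a-strong-password';
GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE,
      REPLICATION CLIENT, CREATE USER ON *.* TO 'cladmin'@'192.168.33.%' WITH GRANT OPTION;
GRANT SELECT, INSERT, UPDATE, DELETE ON mysql.* TO 'cladmin'@'192.168.33.%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON mysql_innodb_cluster_metadata.* TO 'cladmin'@'192.168.33.%' WITH GRANT OPTION;
GRANT SELECT ON performance_schema.* TO 'cladmin'@'192.168.33.%' WITH GRANT OPTION;
GRANT SELECT ON sys.* TO 'cladmin'@'192.168.33.%' WITH GRANT OPTION;
```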

The InnoDB Cluster admin user has been created.
Next, we define the cluster.
After launching mysqlsh, log in to the target machine with shell.connect and configure from there.
Since it can be hard to tell which machine each command targets,
the target host is noted above each command block; unless noted otherwise, the work is done on master.

master
MySQL  JS > shell.connect('srepl@master:3306')

Please provide the password for 'srepl@master:3306':
Creating a session to 'srepl@master:3306'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 14
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
<ClassicSession:srepl@master:3306>


 MySQL  master:3306 ssl  JS > cluster = dba.createCluster("myCluster");
A new InnoDB cluster will be created on instance 'srepl@master:3306'.

Validating instance at master:3306...

This instance reports its own address as master

Instance configuration is suitable.
Creating InnoDB cluster 'myCluster' on 'srepl@master:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

<Cluster:myCluster>
master
 MySQL  192.168.33.10:33060+ ssl  JS > var cluster = dba.getCluster();
 MySQL  192.168.33.10:33060+ ssl  JS > cluster.status();
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "master:3306": {
                "address": "master:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://srepl@192.168.33.10:3306"
}

The cluster has been created.
Next, add the slave members.
If you have logged out of mysqlsh in the meantime, the following commands are also needed before running cluster.addInstance:

master
shell.connect('srepl@master:3306')
var cluster = dba.getCluster();
master
MySQL  master:3306 ssl  JS > cluster.addInstance('srepl@slave1:3306')
MySQL  master:3306 ssl  JS > cluster.addInstance('srepl@slave2:3306')

Check the cluster state.
Confirm that every node's "status" under topology is "ONLINE".
If a node reports any other status, check the MySQL logs and resolve the errors.
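When a node is not ONLINE, a small helper like this sketch can pull the relevant lines out of the error log (the log path comes from the my.cnf at the end of this article; adjust it to your environment):

```shell
# gr_errors FILE: show the most recent Group Replication-related lines
# from a MySQL error log.
gr_errors() {
  grep -iE 'group_replication|group replication' "$1" | tail -n 20
}
# Example: gr_errors /data/mysql/mysqld.log
```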

master
 MySQL  192.168.33.10:33060+ ssl  JS > cluster.status()
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "master:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "192.168.33.11:3306": {
                "address": "192.168.33.11:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "192.168.33.12:3306": {
                "address": "192.168.33.12:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "master:3306": {
                "address": "master:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://srepl@192.168.33.10:3306"
}

As one failure scenario, I ran vagrant halt to shut down all guest VMs; after the OSes came back up, the cluster was not recovered automatically
(unsurprising here, since the my.cnf used in this article sets loose-group_replication_start_on_boot=off).
The metadata was still present, but Group Replication was not running.
All nodes were OFFLINE, so I restored the cluster with dba.rebootClusterFromCompleteOutage().

master
MySQL  master:3306 ssl  JS > var cluster = dba.getCluster();
Dba.getCluster: This function is not available through a session to a standalone instance (metadata exists, but GR is not active) (RuntimeError)


MySQL  JS > shell.connect('srepl@master:3306')
Please provide the password for 'srepl@master:3306':
Creating a session to 'srepl@master:3306'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 28
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
<ClassicSession:srepl@master:3306>

 MySQL  master:3306 ssl  JS > dba.rebootClusterFromCompleteOutage('myCluster')
Reconfiguring the cluster 'myCluster' from complete outage...

The instance 'slave1:3306' was part of the cluster configuration.
Would you like to rejoin it to the cluster? [y/N]: y

The instance 'slave2:3306' was part of the cluster configuration.
Would you like to rejoin it to the cluster? [y/N]: y

The cluster was successfully rebooted.

<Cluster:myCluster>
master
 MySQL  master:3306 ssl  JS > shell.connect('srepl@master:3306')
Please provide the password for 'srepl@master:3306':
Creating a session to 'srepl@master:3306'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 59
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
<ClassicSession:srepl@master:3306>

 MySQL  master:3306 ssl  JS > dba.getCluster()
<Cluster:myCluster>

 MySQL  master:3306 ssl  JS > var cluster = dba.getCluster();
 MySQL  master:3306 ssl  JS > cluster.status()
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "master:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "master:3306": {
                "address": "master:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "slave1:3306": {
                "address": "slave1:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "slave2:3306": {
                "address": "slave2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://srepl@master:3306"
}

Configuring mysqlrouter

With replication configured, the next step is mysqlrouter.
Feeding the cluster metadata to the bootstrap command completes the initial setup,
after which we edit the generated configuration file.
As noted on the blog, a single router currently handles at most 5,000 connections,
so scale the number of routers to the size of your service.
This test builds only one router,
but with multiple routers the same steps must be repeated on each router host.

mysql-router
yum install mysql-router
service mysqld stop
Bootstrap mysqlrouter
[root@router vagrant]# mysqlrouter --bootstrap srepl@master:3306 --user=mysqlrouter
Please enter MySQL password for srepl:

Bootstrapping system MySQL Router instance...
MySQL Router  has now been configured for the InnoDB cluster 'myCluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'myCluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
X protocol connections to cluster 'myCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

Existing configurations backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak'
/etc/mysqlrouter/mysqlrouter.conf
# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysqlrouter
keyring_path=/var/lib/mysqlrouter/keyring
master_key_path=/etc/mysqlrouter/mysqlrouter.key
connect_timeout=30
read_timeout=30
logging_folder = /var/log/mysqlrouter

[logger]
level = INFO

[metadata_cache:myCluster]
router_id=2
bootstrap_server_addresses=mysql://master:3306,mysql://slave1:3306,mysql://slave2:3306
user=mysql_router2_ue57ggm3re4n
metadata_cluster=myCluster
ttl=5

[routing:myCluster_default_rw]
bind_address=0.0.0.0
bind_port=3306
destinations=metadata-cache://myCluster/default?role=PRIMARY
routing_strategy=round-robin
protocol=classic
max_connections=2048
max_connect_errors=2048

[routing:myCluster_default_ro]
bind_address=0.0.0.0
bind_port=3307
destinations=metadata-cache://myCluster/default?role=SECONDARY
routing_strategy=round-robin
protocol=classic
max_connections=2048
max_connect_errors=2048
Start mysqlrouter
systemctl start mysqlrouter.service
systemctl enable mysqlrouter.service
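Before pointing clients at the router, it can be worth probing that the configured ports actually accept connections. A bash-only sketch using /dev/tcp (host name and ports follow this article's setup):

```shell
# check_ports HOST PORT...: report whether each TCP port on HOST accepts
# a connection. Uses bash's /dev/tcp, so no extra tools are needed.
check_ports() {
  local host=$1 p
  shift
  for p in "$@"; do
    if (exec 3<>"/dev/tcp/${host}/${p}") 2>/dev/null; then
      echo "${p} open"
    else
      echo "${p} closed"
    fi
  done
}
# Example: check_ports router 3306 3307
```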

From the client VM, test whether queries are routed through mysqlrouter.
I confirmed that connections go only to the master.

client
port 3306: r/w

[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
master
[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
master
[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
master
[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
master

The slave port is balanced round-robin.

client
port 3307: r

[root@client vagrant]# mysql -usrepl -p -P 3307 -h router -sNe "select @@hostname;"
slave2
[root@client vagrant]# mysql -usrepl -p -P 3307 -h router -sNe "select @@hostname;"
slave1
[root@client vagrant]# mysql -usrepl -p -P 3307 -h router -sNe "select @@hostname;"
slave2
[root@client vagrant]# mysql -usrepl -p -P 3307 -h router -sNe "select @@hostname;"
slave1

Then I stopped mysqld on the master
and confirmed that the destination for the master port changed from master to slave2.

client
port 3306: r/w
[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
slave2
[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
slave2
[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
slave2
[root@client vagrant]# mysql -usrepl -p -P 3306 -h router -sNe "select @@hostname;"
slave2

Summary

Compared with 5.7, my impression is that the whole setup has become considerably easier.

https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-working-with-cluster.html
https://mysqlserverteam.com/mysql-innodb-cluster-whats-new-in-the-8-0-ga-release/
https://mysqlserverteam.com/mysql-innodb-cluster-8-0-a-hands-on-tutorial/
https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/group___admin_a_p_i.html

The my.cnf used

  • Just enough to satisfy InnoDB Cluster requirements; no performance testing is planned
  • server-id and loose-group_replication_local_address must be changed per host
  • For the port in loose-group_replication_group_seeds I initially used 6606, as in 5.7, but GR would not run, so I changed it to 33061 based on the error log
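For example, following the addressing used in this article, the per-host lines on slave1 would become the fragment below (the server-id value is my choice; any value unique within the group works):

```ini
server-id=2
loose-group_replication_local_address="192.168.33.11:33061"
```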
/etc/my.cnf
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/data/mysql
default_authentication_plugin=mysql_native_password
#socket=/var/lib/mysql/mysql.sock
socket=/tmp/mysql.sock

log-error=/data/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

user=mysql
server-id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_slave_updates=ON

# Master DB
log-bin
binlog_format=ROW
binlog_expire_logs_seconds=2592000 # 30 days
binlog_error_action=IGNORE_ERROR
binlog_rows_query_log_events

character-set-server=utf8mb4

master_info_repository=TABLE
relay_log_info_repository=TABLE
relay-log=relay

secure_file_priv=/data/mysql-files

#
# Innodb General Setting
#
innodb_data_home_dir=/data/innodb/
innodb_data_file_path=data1:2000M;data2:10M:autoextend
innodb_log_group_home_dir=/data/innodb-log
innodb_buffer_pool_dump_at_shutdown=ON
innodb_buffer_pool_load_at_startup=OFF
innodb_strict_mode=0
innodb_checksum_algorithm=innodb
innodb_flush_method=O_DIRECT
innodb_temp_data_file_path=ibtmp1:256M # tmp table is innodb, size 256MB
innodb_buffer_pool_size=1G
innodb_log_file_size=512M

# Log
#
slow-query-log
slow_query_log_file=/data/mysql/slow.log
log-queries-not-using-indexes
log_timestamps=SYSTEM

loose-group_replication_group_name=820725cf-f703-4d44-a3fe-56584af69f6f
loose-group_replication_start_on_boot=off
loose-group_replication_auto_increment_increment=1
loose-group_replication_local_address="192.168.33.10:33061"
loose-group_replication_group_seeds= "192.168.33.10:33061,192.168.33.11:33061,192.168.33.12:33061"
loose-group_replication_ip_whitelist="192.168.33.10,192.168.33.11,192.168.33.12"
loose-group_replication_bootstrap_group=off

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
socket = /tmp/mysql.sock

[myisamchk]
key_buffer       = 128M
sort_buffer_size = 128M
read_buffer      = 2M
write_buffer     = 2M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]

Vagrant environment

  • How the guest environments used for this test were built
  • This was my first time using Vagrant, so preparing the environment took longer than the InnoDB Cluster testing itself
  • Installation of Vagrant and VirtualBox is omitted
  • The official documentation was a good reference for writing the Vagrantfile

Work at the command prompt

Create a Vagrant directory
mkdir Vagrant
cd Vagrant
mkdir centos74
cd centos74
Download the base image
vagrant init bento/centos-7.4
Install plugins
vagrant plugin install vagrant-vbguest
vagrant plugin install vagrant-hostmanager
Check vbguest
E:\Vagrant\centos74>vagrant vbguest --status
[default] GuestAdditions 5.2.10 running --- OK.
Apply the settings
vagrant reload
Check status
vagrant status
Current machine states:

default                   poweroff (virtualbox)
chrony.conf
cat <<'__EOF__' > /etc/chrony.conf
server ntp.nict.jp iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
port 0
__EOF__
Apply
systemctl restart chronyd.service
timezone
timedatectl set-timezone Asia/Tokyo
firewalld
systemctl start firewalld
firewall-cmd --add-port=3306/tcp --permanent
#for mysql router split r/w
firewall-cmd --add-port=3307/tcp --permanent
firewall-cmd --add-port=33060/tcp --permanent
firewall-cmd --add-port=13306/tcp --permanent
# for group replication member's communication
firewall-cmd --add-port=33061/tcp --permanent
firewall-cmd --add-port=22/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-all-zones
systemctl enable firewalld
selinux
setenforce 0
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config
mysql8.0
wget https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm
rpm -ivh mysql80-community-release-el7-1.noarch.rpm
yum install mysql-community-server mysql-shell

mkdir -p /etc/systemd/system/mysqld.service.d/

cat <<'__EOF__' > /etc/systemd/system/mysqld.service.d/override.conf
[Service]
LimitNOFILE=65535
__EOF__

systemctl enable mysqld.service
Place my.cnf

mysqld --initialize-insecure

mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY '<random password: 8+ letters, digits, and symbols>';
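One hedged way to produce such a random password is openssl, which is assumed to be present on the base box (the length and base64 character set are my choice):

```shell
# 12 random bytes, base64-encoded: a 16-character password made of
# letters, digits, '+' and '/'.
openssl rand -base64 12
```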

service mysqld stop
Zero-fill free space to shrink the box
yum clean all
dd if=/dev/zero of=/0 bs=4k
rm -rf /0
shutdown -h now
Create the box
vagrant package default --output mysql.box
Register the created box with vagrant
vagrant box add mysql mysql.box

Edit the Vagrantfile.
The result of my trial and error is shown below.

Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|

  config.vm.box = "mysql"
  config.hostmanager.enabled = false
  config.vm.provision :hostmanager

  # config.vm.synced_folder "../data", "/vagrant_data"

  config.vm.define :master do |master|
    master.vm.hostname = "master"
    master.vm.network "private_network", ip: "192.168.33.10"
    master.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
  end

  config.vm.define :slave1 do |slave1|
    slave1.vm.hostname = "slave1"
    slave1.vm.network "private_network", ip: "192.168.33.11"
    slave1.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
  end

  config.vm.define :slave2 do |slave2|
    slave2.vm.hostname = "slave2"
    slave2.vm.network "private_network", ip: "192.168.33.12"
    slave2.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
  end

  config.vm.define :router do |router|
    router.vm.hostname = "router"
    router.vm.network "private_network", ip: "192.168.33.13"
    router.vm.network "forwarded_port", guest: 3306, host: 3306, protocol: "tcp"
    router.vm.network "forwarded_port", guest: 3307, host: 3307, protocol: "tcp"
    router.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
    end
  end

  config.vm.define :client do |client|
    client.vm.hostname = "client"
    client.vm.network "private_network", ip: "192.168.33.14"
  end

end