
Building TiDB (Single-Node Configuration)

Posted at 2023-05-12

Introduction

I built TiDB by following the section below of the Quick Start Guide for the TiDB Database Platform.

  • Simulate production deployment on a single machine

Preparation

The environment in this article runs on an Oracle VM VirtualBox virtual machine.

Guest OS     CentOS Linux release 7.9.2009 (Core)
CPU cores    4
Memory       64 GB

Disable the firewall. (Alternatively, open the ports TiDB uses; see the sketch after the console output below.)

[root@tisim ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since 日 2023-05-07 07:49:11 JST; 1h 2min ago
     Docs: man:firewalld(1)
 Main PID: 717 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─717 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

 5月 07 07:49:10 tisim.besite systemd[1]: Starting firewalld - dynamic firewall daemon...
 5月 07 07:49:11 tisim.besite systemd[1]: Started firewalld - dynamic firewall daemon.
 5月 07 07:49:11 tisim.besite firewalld[717]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.
[root@tisim ~]# systemctl stop firewalld
[root@tisim ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
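
If you would rather keep firewalld running, a minimal sketch of opening the ports instead (assuming the default ports, which also appear in the deploy output later in this article):

# Open the default ports used by this topology (adjust if you changed them).
firewall-cmd --permanent --add-port=4000/tcp --add-port=10080/tcp                 # tidb
firewall-cmd --permanent --add-port=2379-2380/tcp                                 # pd
firewall-cmd --permanent --add-port=20160-20162/tcp --add-port=20180-20182/tcp    # tikv
firewall-cmd --permanent --add-port=9000/tcp --add-port=8123/tcp --add-port=3930/tcp
firewall-cmd --permanent --add-port=20170/tcp --add-port=20292/tcp --add-port=8234/tcp  # tiflash
firewall-cmd --permanent --add-port=9090/tcp --add-port=12020/tcp                 # prometheus
firewall-cmd --permanent --add-port=3000/tcp                                      # grafana
firewall-cmd --permanent --add-port=9100/tcp --add-port=9115/tcp                  # node/blackbox exporter
firewall-cmd --reload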

Setup

Download TiUP.

[root@tisim ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7321k  100 7321k    0     0  2476k      0  0:00:02  0:00:02 --:--:-- 2475k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

Confirm that /root/.tiup/bin, the Installed path shown above, has been added to PATH.

[root@tisim ~]# cat /root/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export PATH=/root/.tiup/bin:$PATH

Reload .bash_profile.

[root@tisim ~]# . ~/.bash_profile
[root@tisim ~]# which tiup
/root/.tiup/bin/tiup

Install the TiUP cluster component.

[root@tisim ~]# tiup cluster
tiup is checking updates for component cluster ...timeout(2s)!
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.12.1-linux-amd64.tar.gz 8.68 MiB / 8.68 MiB 100.00% 14.41 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  rotatessh   rotate ssh keys on all nodes
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.

If it is already installed, you can update it with the following command.

[root@tisim ~]# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.12.1-linux-amd64.tar.gz 7.15 MiB / 7.15 MiB 100.00% 15.46 MiB/s
Updated successfully!
component cluster version v1.12.1 is already installed
Updated successfully!

When everything runs on a single node, as in this article, increase MaxSessions in the sshd configuration. As the quick start guide recommends, I changed it to 20.

[root@tisim ~]# vi /etc/ssh/sshd_config
[root@tisim ~]# grep MaxSessions /etc/ssh/sshd_config
#MaxSessions 10
MaxSessions 20
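
The edit can also be scripted; a small sketch, assuming the stock file only contains the commented '#MaxSessions 10' line as above:

# Append an explicit setting (sshd uses the first uncommented value per keyword).
echo 'MaxSessions 20' >> /etc/ssh/sshd_config
# Confirm the value sshd will actually apply.
sshd -T | grep -i maxsessions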

Restart sshd.

[root@tisim ~]# systemctl restart sshd
[root@tisim ~]# systemctl status sshd
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: active (running) since 日 2023-05-07 10:09:03 JST; 13s ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 1942 (sshd)
    Tasks: 1
   CGroup: /system.slice/sshd.service
           └─1942 /usr/sbin/sshd -D

 5月 07 10:09:03 tisim.besite systemd[1]: Starting OpenSSH server daemon...
 5月 07 10:09:03 tisim.besite sshd[1942]: Server listening on 0.0.0.0 port 22.
 5月 07 10:09:03 tisim.besite sshd[1942]: Server listening on :: port 22.
 5月 07 10:09:03 tisim.besite systemd[1]: Started OpenSSH server daemon.

Create the topology file.

Because three TiKV instances run on the same node, each one must be assigned its own ports.

I reused the sample topo.yaml from the documentation and changed only the IP addresses.
Alternatively, it seems you can print a sample topology by running tiup cluster template (a sketch follows).
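
A minimal sketch of the template route (the file still needs the host, port, and label edits shown below):

# Print a sample topology and use it as a starting point.
tiup cluster template > topo.yaml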

[root@tisim ~]# vi topo.yaml
[root@tisim ~]# cat topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.3.171

tidb_servers:
 - host: 192.168.3.171

tikv_servers:
 - host: 192.168.3.171
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }

 - host: 192.168.3.171
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }

 - host: 192.168.3.171
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 192.168.3.171

monitoring_servers:
 - host: 192.168.3.171

grafana_servers:
 - host: 192.168.3.171

Check which TiDB versions are available for installation. This time I will install v7.0.0.

[root@tisim ~]# tiup list tidb|grep 2023
nightly -> v7.2.0-alpha-nightly-20230507             2023-05-07T00:25:26+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v6.1.4                                               2023-02-08T11:32:28+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v6.1.5                                               2023-02-28T11:21:37+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v6.1.6                                               2023-04-12T11:03:44+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v6.5.1                                               2023-03-10T13:34:51+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v6.5.2                                               2023-04-21T10:50:24+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v6.6.0                                               2023-02-20T16:40:24+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v7.0.0                                               2023-03-30T10:29:21+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64
v7.2.0-alpha-nightly-20230507                        2023-05-07T00:25:26+08:00            darwin/arm64,linux/amd64,linux/arm64,darwin/amd64

Deploy the cluster.
tiup cluster deploy <cluster-name> <version> ./topo.yaml --user root -p

  • cluster-name: an arbitrary name for the cluster.
  • version: the TiDB version to install.

[root@tisim ~]# tiup cluster deploy demo-cluster v7.0.0 ./topo.yaml --user root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster deploy demo-cluster v7.0.0 ./topo.yaml --user root -p
Input SSH password:

+ Detect CPU Arch Name
  - Detecting node 192.168.3.171 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.3.171 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    demo-cluster
Cluster version: v7.0.0
Role        Host           Ports                            OS/Arch       Directories
----        ----           -----                            -------       -----------
pd          192.168.3.171  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.3.171  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.3.171  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.3.171  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.3.171  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.3.171  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.3.171  9090/12020                       linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.3.171  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) 

If the topology is as expected, enter y to continue.

Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v7.0.0 (linux/amd64) ... Done
  - Download tikv:v7.0.0 (linux/amd64) ... Done
  - Download tidb:v7.0.0 (linux/amd64) ... Done
  - Download tiflash:v7.0.0 (linux/amd64) ... Done
  - Download prometheus:v7.0.0 (linux/amd64) ... Done
  - Download grafana:v7.0.0 (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.3.171:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.3.171 ... Done
  - Copy tikv -> 192.168.3.171 ... Done
  - Copy tikv -> 192.168.3.171 ... Done
  - Copy tikv -> 192.168.3.171 ... Done
  - Copy tidb -> 192.168.3.171 ... Done
  - Copy tiflash -> 192.168.3.171 ... Done
  - Copy prometheus -> 192.168.3.171 ... Done
  - Copy grafana -> 192.168.3.171 ... Done
  - Deploy node_exporter -> 192.168.3.171 ... Done
  - Deploy blackbox_exporter -> 192.168.3.171 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.3.171:2379 ... Done
  - Generate config tikv -> 192.168.3.171:20160 ... Done
  - Generate config tikv -> 192.168.3.171:20161 ... Done
  - Generate config tikv -> 192.168.3.171:20162 ... Done
  - Generate config tidb -> 192.168.3.171:4000 ... Done
  - Generate config tiflash -> 192.168.3.171:9000 ... Done
  - Generate config prometheus -> 192.168.3.171:9090 ... Done
  - Generate config grafana -> 192.168.3.171:3000 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.3.171 ... Done
  - Generate config blackbox_exporter -> 192.168.3.171 ... Done
Enabling component pd
        Enabling instance 192.168.3.171:2379
        Enable instance 192.168.3.171:2379 success
Enabling component tikv
        Enabling instance 192.168.3.171:20162
        Enabling instance 192.168.3.171:20161
        Enabling instance 192.168.3.171:20160
        Enable instance 192.168.3.171:20161 success
        Enable instance 192.168.3.171:20160 success
        Enable instance 192.168.3.171:20162 success
Enabling component tidb
        Enabling instance 192.168.3.171:4000
        Enable instance 192.168.3.171:4000 success
Enabling component tiflash
        Enabling instance 192.168.3.171:9000
        Enable instance 192.168.3.171:9000 success
Enabling component prometheus
        Enabling instance 192.168.3.171:9090
        Enable instance 192.168.3.171:9090 success
Enabling component grafana
        Enabling instance 192.168.3.171:3000
        Enable instance 192.168.3.171:3000 success
Enabling component node_exporter
        Enabling instance 192.168.3.171
        Enable 192.168.3.171 success
Enabling component blackbox_exporter
        Enabling instance 192.168.3.171
        Enable 192.168.3.171 success
Cluster `demo-cluster` deployed successfully, you can start it with command: `tiup cluster start demo-cluster --init`

Start the cluster. (The deploy message above suggests starting with the --init option, which would also generate a random root password; starting without --init leaves root with an empty password, which is why the mysql login later in this article succeeds without a password.)

[root@tisim ~]# tiup cluster start demo-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster start demo-cluster
Starting cluster demo-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/demo-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/demo-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.3.171:2379
        Start instance 192.168.3.171:2379 success
Starting component tikv
        Starting instance 192.168.3.171:20162
        Starting instance 192.168.3.171:20160
        Starting instance 192.168.3.171:20161
        Start instance 192.168.3.171:20161 success
        Start instance 192.168.3.171:20160 success
        Start instance 192.168.3.171:20162 success
Starting component tidb
        Starting instance 192.168.3.171:4000
        Start instance 192.168.3.171:4000 success
Starting component tiflash
        Starting instance 192.168.3.171:9000
        Start instance 192.168.3.171:9000 success
Starting component prometheus
        Starting instance 192.168.3.171:9090
        Start instance 192.168.3.171:9090 success
Starting component grafana
        Starting instance 192.168.3.171:3000
        Start instance 192.168.3.171:3000 success
Starting component node_exporter
        Starting instance 192.168.3.171
        Start 192.168.3.171 success
Starting component blackbox_exporter
        Starting instance 192.168.3.171
        Start 192.168.3.171 success
+ [ Serial ] - UpdateTopology: cluster=demo-cluster
Started cluster `demo-cluster` successfully

List the clusters.

[root@tisim ~]# tiup cluster list
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster list
Name          User  Version  Path                                               PrivateKey
----          ----  -------  ----                                               ----------
demo-cluster  tidb  v7.0.0   /root/.tiup/storage/cluster/clusters/demo-cluster  /root/.tiup/storage/cluster/clusters/demo-cluster/ssh/id_rsa

Display the cluster topology.

[root@tisim ~]# tiup cluster display demo-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster display demo-cluster
Cluster type:       tidb
Cluster name:       demo-cluster
Cluster version:    v7.0.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.3.171:2379/dashboard
Grafana URL:        http://192.168.3.171:3000
ID                   Role        Host           Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------   --------                    ----------
192.168.3.171:3000   grafana     192.168.3.171  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.3.171:2379   pd          192.168.3.171  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.3.171:9090   prometheus  192.168.3.171  9090/12020                       linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.3.171:4000   tidb        192.168.3.171  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.3.171:9000   tiflash     192.168.3.171  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.3.171:20160  tikv        192.168.3.171  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.3.171:20161  tikv        192.168.3.171  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.3.171:20162  tikv        192.168.3.171  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
Total nodes: 8
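
Before connecting with a client, the TiDB status API can serve as a quick health probe; a sketch assuming the default status port 10080:

# Returns JSON with the server's connection count and version if TiDB is up.
curl http://192.168.3.171:10080/status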

Check the connection from a mysql client (here, from a separate host, mysql01).

[root@mysql01 ~]# mysql -h 192.168.3.171 -P 4000 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 437
Server version: 5.7.25-TiDB-v7.0.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)
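
As a final smoke test, a sketch that writes to and reads from the preinstalled test database (the table name t is just an example):

# Round-trip a row through the cluster, then clean up.
mysql -h 192.168.3.171 -P 4000 -uroot -e "
  CREATE TABLE test.t (id INT PRIMARY KEY, note VARCHAR(20));
  INSERT INTO test.t VALUES (1, 'hello tidb');
  SELECT * FROM test.t;
  DROP TABLE test.t;"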

Summary

The official guide is of high quality, so I was able to build TiDB without getting lost. Next, I plan to work through the tutorials to deepen my understanding.
