Motivation
At work I recently had the chance to evaluate a multi-cluster Kubernetes setup, and even at the design stage there are times when you want to get hands-on with a real environment.
Being able to spin one up quickly makes that kind of verification go much more smoothly, so I automated it with Terraform.
What We'll Do
- Build a multi-cluster setup with Terraform
- One EKS cluster and one GKE cluster
- Connect the clusters over VPN
- Inter-cluster communication via Internal Load Balancers
Roughly, the goal looks like the diagram below.
The VPN portion is heavily simplified, so bear with me.
Directory Structure
I originally intended to split every cloud resource into its own module, but ran out of steam partway through.
.
|-- Dockerfile
|-- docker-compose.yaml
|-- .env
|-- credentials
|   `-- t-matsuno-xxxxxx.json
|-- backend
|   `-- main.tf
|-- environments
|   `-- test
|       |-- backend.tf
|       |-- main.tf
|       |-- outputs.tf
|       |-- provider.tf
|       |-- terraform.tfvars
|       |-- variables.tf
|       `-- versions.tf
`-- modules
    |-- aws
    |   |-- eks.tf
    |   |-- loadbalancer.tf
    |   |-- main.tf
    |   |-- multicloud_vpn.tf
    |   |-- outputs.tf
    |   |-- variables.tf
    |   `-- vpc.tf
    `-- gcp
        |-- bastion.tf
        |-- gke.tf
        |-- nat.tf
        |-- outputs.tf
        |-- variables.tf
        `-- vpc.tf
Execution Environment
The tooling runs as a container started with docker-compose.
Keeping a purpose-built container image means your local machine stays clean.
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:alpine
ENV TF_CLI_ARGS_plan "--parallelism=30"
ENV TF_CLI_ARGS_apply "--parallelism=100"
# Update and install packages
RUN apk update && \
apk add --no-cache \
bash \
unzip \
curl \
git \
vim \
jq \
aws-cli
# Install kubectl
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" && \
chmod +x kubectl && \
mv kubectl /usr/local/bin/
# Install Helm
ARG HELM_VERSION=3.13.2
RUN curl https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz -o helm.tar.gz && \
tar -zxvf helm.tar.gz && \
mv linux-amd64/helm /usr/local/bin/helm && \
rm -rf linux-amd64 && \
rm helm.tar.gz
# Install Terraform
ARG TERRAFORM_VERSION=1.6.3
RUN curl -LO https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
mv terraform /usr/bin/ && \
rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip
WORKDIR /work
ENTRYPOINT ["bash"]
CMD ["-c", "sleep infinity"]
Environment variables for each cloud go in .env.
I'm putting the credentials in here as well.
AWS_ACCESS_KEY_ID=xxxxxxxxxx
AWS_SECRET_ACCESS_KEY=yyyyyyyyyy
AWS_DEFAULT_REGION=ap-northeast-1
AWS_DEFAULT_OUTPUT=json
CLOUDSDK_CORE_PROJECT=<GCP project name>
CLOUDSDK_CORE_ACCOUNT=<GCP account email address>
CLOUDSDK_COMPUTE_REGION=asia-northeast2
CLOUDSDK_COMPUTE_ZONE=asia-northeast2-c
# Path to the GCP service account key
GOOGLE_CREDENTIALS=/work/credentials/t-matsuno-xxxxxx.json
Next, create the docker-compose.yaml that starts the image above.
The contents of .env become environment variables inside the container.
version: '3'
services:
  terraform:
    build:
      context: .
      dockerfile: ./Dockerfile
    env_file:
      - .env
    volumes:
      - ./credentials:/work/credentials
      - ./backend:/work/backend
      - ./modules:/work/modules
      - ./environments:/work/environments
      - tfdata:/tfdata
    tty: true
    environment:
      - TF_DATA_DIR=/tfdata
volumes:
  tfdata:
Start it up and log in to the container.
$ podman compose up -d
...
[+] Running 1/1
✔ Container terraform-multicloud-terraform-1 Started
$ podman exec -it terraform-multicloud-terraform-1 sh
/work # terraform --version
Terraform v1.6.3
on linux_amd64
Your version of Terraform is out of date! The latest version
is 1.6.5. You can update by downloading from https://www.terraform.io/downloads.html
Creating the Backend
To manage Terraform's tfstate file in GCS, we first create a Cloud Storage bucket.
Since this is only for verification, the bucket is created under conditions that qualify for the free tier.
backend/main.tf
provider "google" {
project = <GCP プロジェクト名>
}
resource "google_storage_bucket" "terraform-state-store" {
name = "tmatsuno-gke-test-tfstate"
location = "us-west1"
storage_class = "REGIONAL"
versioning {
enabled = true
}
lifecycle_rule {
action {
type = "Delete"
}
condition {
num_newer_versions = 5
}
}
}
Run Terraform with the configuration above.
/work # cd backend
# Initialize the working directory
/work/backend # terraform init
# Review the planned changes
/work/backend # terraform plan
# Apply the changes reviewed above
/work/backend # terraform apply -auto-approve
The same Terraform workflow applies throughout, so I'll omit it from here on.
Defining GCP Resources
The GCP resource definitions go under modules/gcp.
modules/gcp/vpc.tf
This defines the VPC and subnet.
We also create the firewall rules that allow traffic over the VPN connection while we're at it.
# VPC
resource "google_compute_network" "vpc_network" {
project = var.gcp_common.project
name = "${var.common.prefix}-${var.common.env}-gke-vpc"
auto_create_subnetworks = false
routing_mode = "GLOBAL"
}
# Subnet
resource "google_compute_subnetwork" "subnet_gke" {
project = var.gcp_common.project
name = "${var.common.prefix}-${var.common.env}-subnet"
ip_cidr_range = var.gcp_vpc.subnet_cidr
region = var.gcp_common.region
network = google_compute_network.vpc_network.id
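# Let private nodes reach Google APIs without external IP addresses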
private_ip_google_access = true
secondary_ip_range {
range_name = "service-ranges-${var.common.prefix}-${var.common.env}"
ip_cidr_range = var.gcp_vpc.secondary_ip_service_ranges
}
secondary_ip_range {
range_name = "pod-ranges-${var.common.prefix}-${var.common.env}"
ip_cidr_range = var.gcp_vpc.secondary_ip_pod_ranges
}
}
# Allow inbound traffic from AWS
resource "google_compute_firewall" "fr_to_aws" {
name = "${var.common.prefix}-${var.common.env}-firewall-up-to-aws"
network = google_compute_network.vpc_network.id
direction = "INGRESS"
priority = 900
source_ranges = [var.aws_vpc.vpc_cidr]
allow {
protocol = "udp"
ports = ["0-65535"]
}
allow {
protocol = "tcp"
ports = ["0-65535"]
}
allow {
protocol = "icmp"
}
}
# Allow outbound traffic to AWS
resource "google_compute_firewall" "fr_from_aws" {
name = "${var.common.prefix}-${var.common.env}-firewall-down-from-aws"
network = google_compute_network.vpc_network.id
direction = "EGRESS"
priority = 900
destination_ranges = [var.aws_vpc.vpc_cidr]
allow {
protocol = "udp"
ports = ["0-65535"]
}
allow {
protocol = "tcp"
ports = ["0-65535"]
}
allow {
protocol = "icmp"
}
}
modules/gcp/gke.tf
The ip_allocation_policy.{cluster,services}_secondary_range_name values must match the range_name of the corresponding secondary_ip_range on the VPC.
# GKE private cluster
resource "google_container_cluster" "primary" {
project = var.gcp_common.project
name = "${var.common.prefix}-${var.common.env}-cluster"
location = var.gcp_common.region
network = google_compute_network.vpc_network.id
subnetwork = google_compute_subnetwork.subnet_gke.id
deletion_protection = false
initial_node_count = 1
min_master_version = var.gcp_gke.cluster_version
networking_mode = "VPC_NATIVE"
remove_default_node_pool = true
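# Enable ILB subsetting, used by internal L4 load balancers like the one created later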
enable_l4_ilb_subsetting = true
ip_allocation_policy {
cluster_secondary_range_name = "pod-ranges-${var.common.prefix}-${var.common.env}"
services_secondary_range_name = "service-ranges-${var.common.prefix}-${var.common.env}"
}
release_channel {
channel = "STABLE"
}
# Private cluster settings
private_cluster_config {
enable_private_nodes = true
enable_private_endpoint = true
master_ipv4_cidr_block = var.gcp_gke.master_cidr
master_global_access_config {
enabled = false
}
}
master_authorized_networks_config {
# IP ranges allowed to reach the control plane
cidr_blocks {
# Allow access from the subnet that hosts the nodes and the bastion
cidr_block = var.gcp_vpc.subnet_cidr
}
}
maintenance_policy {
recurring_window {
start_time = "2023-11-01T00:00:00Z"
end_time = "2023-11-01T04:00:00Z"
recurrence = "FREQ=WEEKLY;BYDAY=FR,SA,SU"
}
}
}
# Node pool
resource "google_container_node_pool" "primary_nodes" {
name = "${var.common.prefix}-${var.common.env}-node-pool"
location = var.gcp_common.region
cluster = google_container_cluster.primary.name
node_count = 1
autoscaling {
min_node_count = 1
max_node_count = 3
}
upgrade_settings {
max_surge = 1
max_unavailable = 0
}
management {
auto_repair = true
auto_upgrade = true
}
node_config {
preemptible = var.gcp_gke.preemptible
machine_type = var.gcp_gke.machine_type
disk_size_gb = 20
service_account = var.gcp_common.email
tags = ["gke-node", "${var.gcp_common.project}-gke"]
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
labels = {
env = var.common.env
}
metadata = {
disable-legacy-endpoints = "true"
}
}
}
modules/gcp/nat.tf
We create a Cloud NAT so that GKE can reach the internet.
# Cloud Router
resource "google_compute_router" "nat_router" {
project = var.gcp_common.project
name = "${var.common.prefix}-${var.common.env}-nat-router"
region = var.gcp_common.region
network = google_compute_network.vpc_network.id
}
# Cloud NAT
resource "google_compute_router_nat" "nat" {
project = var.gcp_common.project
name = "${var.common.prefix}-${var.common.env}-nat"
router = google_compute_router.nat_router.name
region = google_compute_router.nat_router.region
nat_ip_allocate_option = "AUTO_ONLY"
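# NAT the primary and secondary ranges (including Pod ranges) of every subnet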
source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
log_config {
enable = true
filter = "ERRORS_ONLY"
}
}
modules/gcp/bastion.tf
We set up a bastion server for accessing the GKE cluster.
metadata_startup_script is handy for pre-installing the tools we need.
# Bastion VM
resource "google_compute_instance" "bastion" {
project = var.gcp_common.project
name = "${var.common.prefix}-${var.common.env}-bastion"
machine_type = var.gcp_bastion.machine_type
zone = var.gcp_common.zone
tags = ["ssh"]
boot_disk {
initialize_params {
image = "debian-cloud/debian-11"
}
}
network_interface {
subnetwork_project = var.gcp_common.project
network = google_compute_network.vpc_network.name
subnetwork = google_compute_subnetwork.subnet_gke.name
access_config {}
}
metadata = {
# Enable OS Login
enable-oslogin = "true"
}
# Install the required tools and the GKE auth plugin via the startup script
metadata_startup_script = <<EOF
#!/bin/bash
sudo apt update
sudo apt install -y kubectl unzip make
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
sudo apt install -y google-cloud-sdk-gke-gcloud-auth-plugin
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
rm awscliv2.zip
gcloud config set compute/region ${var.gcp_common.region}
gcloud config set compute/zone ${var.gcp_common.zone}
EOF
scheduling {
# Use a preemptible VM to keep costs down
preemptible = true
# This option is required when the VM is preemptible
automatic_restart = false
}
}
resource "google_compute_firewall" "ssh" {
project = var.gcp_common.project
name = "${var.common.prefix}-${var.common.env}-vpc-ssh-allow"
network = google_compute_network.vpc_network.name
target_tags = ["ssh"]
direction = "INGRESS"
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = [
var.gcp_bastion.ssh_sourcerange
]
}
modules/gcp/variables.tf
Variables are defined in variables.tf.
variable "common" {
type = object ({
prefix = string
env = string
})
description = "リソース共通の設定値"
}
variable "gcp_common" {
type = object ({
project = string
region = string
zone = string
email = string
})
description = "GCPリソース共通の設定値"
}
variable "gcp_vpc" {
type = object ({
subnet_cidr = string
secondary_ip_service_ranges = string
secondary_ip_pod_ranges = string
})
description = "GCP VPC の設定値"
}
variable "gcp_gke" {
type = object ({
master_cidr = string
preemptible = bool
machine_type = string
cluster_version = string
})
description = "GKE の設定値"
}
variable "gcp_bastion" {
type = object ({
machine_type = string
ssh_sourcerange = string
})
description = "GCP 踏み台 VM の設定値"
}
variable "aws_vpc" {
type = object ({
vpc_cidr = string
subnet_availability_zones = list(string)
public_subnet_cidr = list(string)
private_subnet_cidr = list(string)
})
description = "AWS VPC の設定値"
}
modules/gcp/outputs.tf
Parameters we want displayed via terraform output are defined in outputs.tf.
output "vpc_network_id" {
value = google_compute_network.vpc_network.id
}
output "cluster_name" {
value = google_container_cluster.primary.name
}
output "cluster_location" {
value = google_container_cluster.primary.location
}
output "cluster_project" {
value = google_container_cluster.primary.project
}
Defining AWS Resources
As with GCP, the AWS resource definitions go under modules/aws.
The VPN configuration follows the official documentation.
modules/aws/main.tf
aws_caller_identity is declared so we can look up our own account ID.
data "aws_caller_identity" "current" {}
# Most recent Amazon Linux 2 AMI
data "aws_ami" "amazon_linux_2" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm*"]
}
filter {
name = "architecture"
values = ["x86_64"]
}
}
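Note that modules/aws/eks.tf below references data.aws_iam_user.tmatsuno, which never appears in these listings. A minimal sketch of the missing data source, assuming the IAM user is named "tmatsuno" (hypothetical):
# Hypothetical: the IAM user granted cluster access via the aws-auth configmap.
# The actual user name is not shown in this article.
data "aws_iam_user" "tmatsuno" {
  user_name = "tmatsuno"
}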
modules/aws/vpc.tf
The various gateways are created automatically by setting their flags to true, though they could also be defined as separate resources (see the sketch after this file).
amazon_side_asn is the ASN assigned to the virtual private gateway.
# VPC for the EKS cluster
module "eks_vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "${var.common.prefix}-${var.common.env}-eks-vpc"
cidr = var.aws_vpc.vpc_cidr
azs = var.aws_vpc.subnet_availability_zones
public_subnets = var.aws_vpc.public_subnet_cidr
private_subnets = var.aws_vpc.private_subnet_cidr
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
enable_dns_support = true
enable_vpn_gateway = true
amazon_side_asn = var.vpn_common.aws_bgp_asn
tags = {
"kubernetes.io/cluster/${var.common.prefix}-${var.common.env}-cluster" = "shared"
}
public_subnet_tags = {
"kubernetes.io/cluster/${var.common.prefix}-${var.common.env}-cluster" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.common.prefix}-${var.common.env}-cluster" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
resource "aws_security_group" "default_sg" {
vpc_id = module.eks_vpc.vpc_id
name = "${var.common.prefix}-${var.common.env}-allow-all-from-gcp"
ingress {
description = "All access from GCP"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [
var.gcp_vpc.subnet_cidr,
var.gcp_vpc.secondary_ip_service_ranges,
var.gcp_vpc.secondary_ip_pod_ranges
]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
}
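For reference, a minimal sketch of defining the virtual private gateway as its own resource instead of via the module flag (the resource name is illustrative; enable_vpn_gateway would then be set to false):
# Sketch: a standalone virtual private gateway
resource "aws_vpn_gateway" "eks_vgw" {
  vpc_id          = module.eks_vpc.vpc_id
  amazon_side_asn = var.vpn_common.aws_bgp_asn
}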
modules/aws/eks.tf
Here we create the private EKS cluster.
Since the only IAM user able to access the cluster at first is the one who ran Terraform, we also generate the aws-auth configmap so additional users can be added later.
# IAM role for the EKS cluster
resource "aws_iam_role" "cluster" {
name = "${var.common.prefix}-${var.common.env}-eks-cluster"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "cluster-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.cluster.name
}
resource "aws_iam_role_policy_attachment" "cluster-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.cluster.name
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.19.1"
cluster_version = var.aws_eks.cluster_version
cluster_name = "${var.common.prefix}-${var.common.env}-cluster"
vpc_id = module.eks_vpc.vpc_id
subnet_ids = module.eks_vpc.private_subnets
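# IRSA is needed for the AWS Load Balancer Controller installed later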
enable_irsa = true
iam_role_arn = aws_iam_role.cluster.arn
create_iam_role = false
aws_auth_accounts = [data.aws_caller_identity.current.account_id]
cluster_endpoint_private_access = true
cluster_endpoint_public_access = false
cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
eks_managed_node_groups = {
"${var.common.prefix}-${var.common.env}-ng1" = {
desired_size = 2
max_capacity = 3
min_capacity = 1
instance_types = [var.aws_eks.instance_types]
}
}
cluster_additional_security_group_ids = [
aws_security_group.eks_sg.id
]
node_security_group_additional_rules = {
admission_webhook = {
description = "Admission Webhook"
protocol = "tcp"
from_port = 0
to_port = 65535
type = "ingress"
source_cluster_security_group = true
}
ingress_node_communications = {
description = "Ingress Node to node"
protocol = "tcp"
from_port = 0
to_port = 65535
type = "ingress"
self = true
}
egress_node_communications = {
description = "Egress Node to node"
protocol = "tcp"
from_port = 0
to_port = 65535
type = "egress"
self = true
}
}
}
# Create a security group
resource "aws_security_group" "eks_sg" {
name = "${var.common.prefix}-${var.common.env}-eks-sg"
vpc_id = module.eks_vpc.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group_rule" "eks_sg_ingress_rule_https" {
security_group_id = aws_security_group.eks_sg.id
type = "ingress"
protocol = "tcp"
from_port = 443
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
resource "aws_security_group_rule" "eks_sg_ingress_rule_from_gcp" {
security_group_id = aws_security_group.eks_sg.id
type = "ingress"
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = [
var.gcp_vpc.subnet_cidr,
var.gcp_vpc.secondary_ip_service_ranges,
var.gcp_vpc.secondary_ip_pod_ranges
]
}
# Write out a local configmap file for adding IAM users later
resource "null_resource" "get_auth_configmap" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = <<-EOF
echo "${module.eks.aws_auth_configmap_yaml}
  mapUsers: |
    - userarn: ${data.aws_iam_user.tmatsuno.arn}
      username: ${data.aws_iam_user.tmatsuno.user_name}
      groups:
        - system:masters" > aws_auth_configmap.yaml
EOF
}
depends_on = [module.eks]
}
modules/aws/multicloud_vpn.tf
This creates the VPN resources for AWS and GCP in one place.
Some GCP resources are mixed in, but managing them separately would get confusing, so I folded them in here.
# GCP HA VPN Gateway
resource "google_compute_ha_vpn_gateway" "gcp_aws_gateway" {
name = "${var.common.prefix}-${var.common.env}-vpn-gateway"
network = var.gcp_vpc_nw_id
region = var.gcp_common.region
vpn_interfaces {
id = 0
}
vpn_interfaces {
id = 1
}
depends_on = [ var.gcp_vpc_nw_id ]
}
# Google Compute Router
resource "google_compute_router" "gcp_vpn_router" {
name = "${var.common.prefix}-${var.common.env}-gcp-vpn-router"
network = var.gcp_vpc_nw_id
region = var.gcp_common.region
bgp {
asn = var.vpn_common.gcp_bgp_asn
advertise_mode = "CUSTOM"
advertised_groups = [ "ALL_SUBNETS" ]
}
depends_on = [ var.gcp_vpc_nw_id ]
}
# Configuring route propagation for AWS Virtual Private Gateway
resource "aws_vpn_gateway_route_propagation" "aws_vgw_rp_public" {
vpn_gateway_id = module.eks_vpc.vgw_id
route_table_id = module.eks_vpc.public_route_table_ids[0]
}
resource "aws_vpn_gateway_route_propagation" "aws_vgw_rp_private" {
vpn_gateway_id = module.eks_vpc.vgw_id
route_table_id = module.eks_vpc.private_route_table_ids[0]
}
# AWS Customer Gateway with Public IP of GCP Cloud Gateway
resource "aws_customer_gateway" "google1" {
bgp_asn = var.vpn_common.gcp_bgp_asn
ip_address = google_compute_ha_vpn_gateway.gcp_aws_gateway.vpn_interfaces[0].ip_address
type = "ipsec.1"
}
resource "aws_customer_gateway" "google2" {
bgp_asn = var.vpn_common.gcp_bgp_asn
ip_address = google_compute_ha_vpn_gateway.gcp_aws_gateway.vpn_interfaces[1].ip_address
type = "ipsec.1"
}
# AWS VPN Tunnel
resource "aws_vpn_connection" "aws_tunnel1" {
vpn_gateway_id = module.eks_vpc.vgw_id
customer_gateway_id = aws_customer_gateway.google1.id
type = "ipsec.1"
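# Use dynamic (BGP) routing rather than static routes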
static_routes_only = false
depends_on = [ module.eks_vpc, aws_customer_gateway.google1 ]
}
resource "aws_vpn_connection" "aws_tunnel2" {
vpn_gateway_id = module.eks_vpc.vgw_id
customer_gateway_id = aws_customer_gateway.google2.id
type = "ipsec.1"
static_routes_only = false
depends_on = [ module.eks_vpc, aws_customer_gateway.google2 ]
}
# GCP Peer VPN Gateway
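# Each of the two AWS VPN connections above exposes two tunnel endpoints, yielding the four peer IPs below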
resource "google_compute_external_vpn_gateway" "peer_gateway" {
name = "${var.common.prefix}-${var.common.env}-peer-gateway"
redundancy_type = "FOUR_IPS_REDUNDANCY"
interface {
id = 0
ip_address = aws_vpn_connection.aws_tunnel1.tunnel1_address
}
interface {
id = 1
ip_address = aws_vpn_connection.aws_tunnel1.tunnel2_address
}
interface {
id = 2
ip_address = aws_vpn_connection.aws_tunnel2.tunnel1_address
}
interface {
id = 3
ip_address = aws_vpn_connection.aws_tunnel2.tunnel2_address
}
}
# GCP VPN tunnel
resource "google_compute_vpn_tunnel" "tunnel1" {
name = "${var.common.prefix}-${var.common.env}-gcp-aws-vpn-tunnel-1"
region = var.gcp_common.region
shared_secret = aws_vpn_connection.aws_tunnel1.tunnel1_preshared_key
ike_version = 1
router = google_compute_router.gcp_vpn_router.id
vpn_gateway = google_compute_ha_vpn_gateway.gcp_aws_gateway.id
vpn_gateway_interface = 0
peer_external_gateway = google_compute_external_vpn_gateway.peer_gateway.id
peer_external_gateway_interface = 0
depends_on = [
google_compute_ha_vpn_gateway.gcp_aws_gateway,
google_compute_external_vpn_gateway.peer_gateway,
aws_vpn_connection.aws_tunnel1
]
}
resource "google_compute_vpn_tunnel" "tunnel2" {
name = "${var.common.prefix}-${var.common.env}-gcp-aws-vpn-tunnel-2"
region = var.gcp_common.region
shared_secret = aws_vpn_connection.aws_tunnel1.tunnel2_preshared_key
ike_version = 1
router = google_compute_router.gcp_vpn_router.id
vpn_gateway = google_compute_ha_vpn_gateway.gcp_aws_gateway.id
vpn_gateway_interface = 0
peer_external_gateway = google_compute_external_vpn_gateway.peer_gateway.id
peer_external_gateway_interface = 1
depends_on = [
google_compute_ha_vpn_gateway.gcp_aws_gateway,
google_compute_external_vpn_gateway.peer_gateway,
aws_vpn_connection.aws_tunnel1
]
}
resource "google_compute_vpn_tunnel" "tunnel3" {
name = "${var.common.prefix}-${var.common.env}-gcp-aws-vpn-tunnel-3"
region = var.gcp_common.region
shared_secret = aws_vpn_connection.aws_tunnel2.tunnel1_preshared_key
ike_version = 1
router = google_compute_router.gcp_vpn_router.id
vpn_gateway = google_compute_ha_vpn_gateway.gcp_aws_gateway.id
vpn_gateway_interface = 1
peer_external_gateway = google_compute_external_vpn_gateway.peer_gateway.id
peer_external_gateway_interface = 2
depends_on = [
google_compute_ha_vpn_gateway.gcp_aws_gateway,
google_compute_external_vpn_gateway.peer_gateway,
aws_vpn_connection.aws_tunnel2
]
}
resource "google_compute_vpn_tunnel" "tunnel4" {
name = "${var.common.prefix}-${var.common.env}-gcp-aws-vpn-tunnel-4"
region = var.gcp_common.region
shared_secret = aws_vpn_connection.aws_tunnel2.tunnel2_preshared_key
ike_version = 1
router = google_compute_router.gcp_vpn_router.id
vpn_gateway = google_compute_ha_vpn_gateway.gcp_aws_gateway.id
vpn_gateway_interface = 1
peer_external_gateway = google_compute_external_vpn_gateway.peer_gateway.id
peer_external_gateway_interface = 3
depends_on = [
google_compute_ha_vpn_gateway.gcp_aws_gateway,
google_compute_external_vpn_gateway.peer_gateway,
aws_vpn_connection.aws_tunnel2
]
}
# GCP Router for tunnel 1
resource "google_compute_router_peer" "tunnel1_bgp1" {
name = "${var.common.prefix}-${var.common.env}-vpn-tunnel1-bgp1"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
peer_ip_address = aws_vpn_connection.aws_tunnel1.tunnel1_vgw_inside_address
ip_address = aws_vpn_connection.aws_tunnel1.tunnel1_cgw_inside_address
peer_asn = var.vpn_common.aws_bgp_asn
interface = google_compute_router_interface.router_interface1.name
}
resource "google_compute_router_peer" "tunnel1_bgp2" {
name = "${var.common.prefix}-${var.common.env}-vpn-tunnel1-bgp2"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
peer_ip_address = aws_vpn_connection.aws_tunnel1.tunnel2_vgw_inside_address
ip_address = aws_vpn_connection.aws_tunnel1.tunnel2_cgw_inside_address
peer_asn = var.vpn_common.aws_bgp_asn
interface = google_compute_router_interface.router_interface2.name
}
# GCP Router for tunnel 2
resource "google_compute_router_peer" "tunnel2_bgp1" {
name = "${var.common.prefix}-${var.common.env}-vpn-tunnel2-bgp1"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
peer_ip_address = aws_vpn_connection.aws_tunnel2.tunnel1_vgw_inside_address
ip_address = aws_vpn_connection.aws_tunnel2.tunnel1_cgw_inside_address
peer_asn = var.vpn_common.aws_bgp_asn
interface = google_compute_router_interface.router_interface3.name
}
resource "google_compute_router_peer" "tunnel2_bgp2" {
name = "${var.common.prefix}-${var.common.env}-vpn-tunnel2-bgp2"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
peer_ip_address = aws_vpn_connection.aws_tunnel2.tunnel2_vgw_inside_address
ip_address = aws_vpn_connection.aws_tunnel2.tunnel2_cgw_inside_address
peer_asn = var.vpn_common.aws_bgp_asn
interface = google_compute_router_interface.router_interface4.name
}
# Compute Router Interface for tunnel 1
resource "google_compute_router_interface" "router_interface1" {
name = "${var.common.prefix}-${var.common.env}-vpn-router-interface1"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
ip_range = "${aws_vpn_connection.aws_tunnel1.tunnel1_cgw_inside_address}/30"
vpn_tunnel = google_compute_vpn_tunnel.tunnel1.name
}
resource "google_compute_router_interface" "router_interface2" {
name = "${var.common.prefix}-${var.common.env}-vpn-router-interface2"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
ip_range = "${aws_vpn_connection.aws_tunnel1.tunnel2_cgw_inside_address}/30"
vpn_tunnel = google_compute_vpn_tunnel.tunnel1.name
}
# Compute Router Interface for tunnel 2
resource "google_compute_router_interface" "router_interface3" {
name = "${var.common.prefix}-${var.common.env}-vpn-router-interface3"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
ip_range = "${aws_vpn_connection.aws_tunnel2.tunnel1_cgw_inside_address}/30"
vpn_tunnel = google_compute_vpn_tunnel.tunnel2.name
}
resource "google_compute_router_interface" "router_interface4" {
name = "${var.common.prefix}-${var.common.env}-vpn-router-interface4"
region = var.gcp_common.region
router = google_compute_router.gcp_vpn_router.name
ip_range = "${aws_vpn_connection.aws_tunnel2.tunnel2_cgw_inside_address}/30"
vpn_tunnel = google_compute_vpn_tunnel.tunnel2.name
}
modules/aws/loadbalancer.tf
We create the required resources up front so that the AWS Load Balancer Controller can later be installed with Helm.
data "http" "albc_policy_json" {
url = "https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json"
}
resource "aws_iam_policy" "albc" {
name = "${var.common.prefix}-${var.common.env}-aws-loadbalancer-controller-iam-policy"
policy = data.http.albc_policy_json.response_body
}
module "albc_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "5.33.0"
create_role = true
role_name = "${var.common.prefix}-${var.common.env}-aws-load-balancer-controller-role"
role_policy_arns = [aws_iam_policy.albc.arn]
provider_url = module.eks.cluster_oidc_issuer_url
oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
}
# ServiceAccount for the AWS Load Balancer Controller
resource "null_resource" "get_alb_service_account" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = <<-EOF
echo "apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: ${module.albc_irsa.iam_role_arn}" > aws_alb_sa.yaml
EOF
}
depends_on = [module.eks, module.albc_irsa]
}
modules/aws/variables.tf
As on the GCP side, we create a variables.tf.
variable "common" {
type = object ({
prefix = string
env = string
})
description = "リソース共通の設定値"
}
variable "aws_common" {
type = object ({
region = string
})
description = "AWS リソース共通の設定値"
}
variable "aws_vpc" {
type = object ({
vpc_cidr = string
subnet_availability_zones = list(string)
public_subnet_cidr = list(string)
private_subnet_cidr = list(string)
})
description = "AWS EKS VPC の設定値"
}
variable "aws_eks" {
type = object ({
instance_types = string
cluster_version = string
})
description = "AWS EKS の設定値"
}
variable "gcp_common" {
type = object ({
project = string
region = string
zone = string
email = string # Email address of the GCP service account
})
description = "GCPリソース共通の設定値"
}
variable "gcp_vpc" {
type = object ({
subnet_cidr = string
secondary_ip_service_ranges = string
secondary_ip_pod_ranges = string
})
description = "GCP VPC の設定値"
}
variable "vpn_common" {
type = object ({
gcp_bgp_asn = number
aws_bgp_asn = number
})
description = "VPN の設定値"
}
variable "gcp_vpc_nw_id" {
type = string
description = "GCP VPC のID"
}
modules/aws/outputs.tf
These define the parameters needed later when fetching the EKS kubeconfig.
output "cluster_name" {
value = try(module.eks.cluster_name, "")
}
output "vpc_id" {
value = try(module.eks_vpc.vpc_id, "")
}
Per-Environment Module Definitions
Now we create the consuming side that wires up the cloud modules defined above.
By preparing a terraform.tfvars per environment, the same building blocks can be reused with different parameters, as sketched below.
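For instance, a hypothetical environments/prod directory could copy the same file set and swap in its own parameters (values here are purely illustrative):
# environments/prod/terraform.tfvars (hypothetical)
common = {
  prefix = "tmatsuno"
  env    = "prod"
}
# ...the remaining blocks (gcp_common, aws_vpc, etc.) mirror the test
# environment, with production-appropriate CIDRs and instance sizes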
environments/test/main.tf
This passes the variables defined in the terraform.tfvars described below into each module.
module "gcp_modules" {
source = "../../modules/gcp"
common = var.common
gcp_common = var.gcp_common
gcp_vpc = var.gcp_vpc
gcp_gke = var.gcp_gke
gcp_bastion = var.gcp_bastion
aws_vpc = var.aws_vpc
}
module "aws_modules" {
source = "../../modules/aws"
common = var.common
aws_common = var.aws_common
aws_vpc = var.aws_vpc
aws_eks = var.aws_eks
gcp_vpc_nw_id = module.gcp_modules.vpc_network_id
gcp_common = var.gcp_common
gcp_vpc = var.gcp_vpc
vpn_common = var.vpn_common
}
environments/test/terraform.tfvars
Environment-specific parameters are defined here.
common = {
prefix = "tmatsuno"
env = "test"
}
# ----- GCP settings -----
gcp_common = {
project = "<GCP Project Name>"
region = "asia-northeast1"
zone = "asia-northeast1-c"
email = "<GCP Account Name>@<GCP Project Name>.iam.gserviceaccount.com"
}
gcp_vpc = {
subnet_cidr = "172.16.0.0/15"
secondary_ip_service_ranges = "172.18.0.0/18"
secondary_ip_pod_ranges = "172.19.0.0/18"
}
gcp_gke = {
master_cidr = "192.168.16.0/28"
preemptible = true
machine_type = "e2-medium"
cluster_version = "1.27.4-gke.900"
}
gcp_bastion = {
machine_type = "e2-small"
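# 35.235.240.0/20 is the source range used by GCP Identity-Aware Proxy (the console SSH button)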
ssh_sourcerange = "35.235.240.0/20"
}
# ----- AWS settings -----
aws_common = {
region = "ap-northeast-1"
}
aws_vpc = {
vpc_cidr = "172.20.0.0/16"
subnet_availability_zones = [
"ap-northeast-1a",
"ap-northeast-1c"
]
public_subnet_cidr = [
"172.20.0.0/21",
"172.20.8.0/21"
]
private_subnet_cidr = [
"172.20.16.0/21",
"172.20.24.0/21"
]
}
aws_eks = {
instance_types = "t3.medium"
cluster_version = "1.28"
}
# ----- VPN settings -----
vpn_common = {
gcp_bgp_asn = 64513
aws_bgp_asn = 65001
}
environments/test/variables.tf
The variables and their types here must match what's in terraform.tfvars.
variable "common" {
type = object ({
prefix = string # unique prefix (any string)
env = string # environment name (dev, prod, etc.)
})
description = "リソース共通の設定値"
}
#----- GCP parameters -----
variable "gcp_common" {
type = object ({
project = string
region = string
zone = string
email = string # Email address of the GCP service account
})
description = "GCPリソース共通の設定値"
}
variable "gcp_vpc" {
type = object ({
subnet_cidr = string
secondary_ip_service_ranges = string
secondary_ip_pod_ranges = string
})
description = "GCP VPC の設定値"
}
variable "gcp_gke" {
type = object ({
master_cidr = string
preemptible = bool
machine_type = string
cluster_version = string
})
description = "GKE の設定値"
}
variable "gcp_bastion" {
type = object ({
machine_type = string
ssh_sourcerange = string
})
description = "GCP 踏み台 VM の設定値"
}
#----- AWS parameters -----
variable "aws_common" {
type = object ({
region = string
})
description = "AWS リソース共通の設定値"
}
variable "aws_vpc" {
type = object ({
vpc_cidr = string
subnet_availability_zones = list(string)
public_subnet_cidr = list(string)
private_subnet_cidr = list(string)
})
description = "AWS EKS用 VPC の設定値"
}
variable "aws_eks" {
type = object ({
instance_types = string
cluster_version = string
})
description = "AWS EKS の設定値"
}
variable "vpn_common" {
type = object ({
gcp_bgp_asn = number
aws_bgp_asn = number
})
description = "VPN の設定値"
}
environments/test/backend.tf
Pointing this at the GCS bucket created earlier stores the tfstate in object storage.
The gcs backend does take a state lock during operations, but if multiple users run Terraform you should still define an operational flow so updates don't collide.
terraform {
backend "gcs" {
bucket = "tmatsuno-gke-test-tfstate"
prefix = "tmatsuno-gke-test"
}
}
environments/test/provider.tf
This defines the base configuration for the providers we use.
provider "google" {
project = var.gcp_common.project
region = var.gcp_common.region
zone = var.gcp_common.zone
}
provider "google-beta" {
project = var.gcp_common.project
region = var.gcp_common.region
zone = var.gcp_common.zone
}
provider "aws" {
region = var.aws_common.region
default_tags {
tags = {
Environment = "${var.common.prefix}-${var.common.env}"
}
}
}
environments/test/versions.tf
Provider version constraints are collected in versions.tf.
terraform {
required_version = "~> 1.6.0"
required_providers {
google = {
source = "hashicorp/google"
version = "~> 5.4"
}
google-beta = {
source = "hashicorp/google-beta"
version = "~> 5.4"
}
aws = {
source = "hashicorp/aws"
version = "~> 5.24"
}
http = {
source = "hashicorp/http"
version = "~> 3.4"
}
}
}
environments/test/outputs.tf
Anything we want printed to stdout once the resources finish creating goes in outputs.tf.
# Command to fetch the GKE cluster kubeconfig
output "command_to_connect_cluster" {
value = "gcloud container clusters get-credentials ${module.gcp_modules.cluster_name} --region ${module.gcp_modules.cluster_location} --project ${module.gcp_modules.cluster_project}"
}
# Command to fetch the EKS cluster kubeconfig
output "eks_generate_config" {
value = "aws eks update-kubeconfig --region ${var.aws_common.region} --name ${module.aws_modules.cluster_name}"
}
# Commands to install the AWS Load Balancer Controller
output "eks_install_loadbalancer_cmd" {
value = <<-EOT
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=${module.aws_modules.cluster_name} \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
EOT
}
Running Terraform
From inside the container, run Terraform.
$ cd /work/environments/test
$ terraform init
$ terraform plan
$ terraform apply -auto-approve
On success, the contents defined in outputs.tf are printed.
...
Outputs:
command_to_connect_cluster = "gcloud container clusters get-credentials tmatsuno-test-cluster --region asia-northeast1 --project <GCP Project Name>"
eks_generate_config = "aws eks update-kubeconfig --region ap-northeast-1 --name tmatsuno-test-cluster"
eks_install_loadbalancer_cmd = <<EOT
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=tmatsuno-test-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
EOT
Accessing the Clusters
We access the k8s clusters from the GCP bastion server.
In the GCP console, click the "SSH" button in the VM instance list.
An SSH console pops up; run the commands there.
First, set up the GCP credentials.
$ gcloud auth login
You are running on a Google Compute Engine virtual machine.
It is recommended that you use service accounts for authentication.
You can run:
$ gcloud config set account `ACCOUNT`
to switch accounts if necessary.
Your credentials may be visible to others with access to this
virtual machine. Are you sure you want to authenticate with
your personal account?
Do you want to continue (Y/n)?
Press "Y" and a URL is displayed; copy it into a browser.
Once you have the token, return to the SSH console and paste it in.
You should see a message like the following:
You are now logged in as [takahiro.matsuno@systemi.co.jp].
Your current project is [xxxxxx]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
Next, configure the AWS credentials.
The AWS CLI was installed by the startup script, so just run aws configure.
$ aws configure
AWS Access Key ID [None]: xxxxx
AWS Secret Access Key [None]: xxxxx
Default region name [None]: ap-northeast-1
Default output format [None]: json
With credentials in place, fetch the kubeconfig for each cluster.
# GKE
$ gcloud container clusters get-credentials tmatsuno-test-cluster --region asia-northeast1 --project <GCP Project Name>
Fetching cluster endpoint and auth data.
kubeconfig entry generated for tmatsuno-test-cluster
# EKS
$ aws eks update-kubeconfig --region ap-northeast-1 --name tmatsuno-test-cluster
Updated context arn:aws:eks:ap-northeast-1:xxxxxx:cluster/tmatsuno-test-cluster in /home/takahiro_matsuno_systemi_co_jp/.kube/config
After confirming the current context points at the EKS cluster, install the AWS Load Balancer Controller.
# The row marked with an asterisk in the CURRENT column is the active context
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* arn:aws:eks:ap-northeast-1:xxxxxx:cluster/tmatsuno-test-cluster arn:aws:eks:ap-northeast-1:xxxxxx:cluster/tmatsuno-test-cluster arn:aws:eks:ap-northeast-1:xxxxxx:cluster/tmatsuno-test-cluster
gke_xxxxxxx_asia-northeast1_tmatsuno-test-cluster gke_xxxxxx_asia-northeast1_tmatsuno-test-cluster gke_xxxxxx_asia-northeast1_tmatsuno-test-cluster
# Install the Helm chart provided by AWS
$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update eks
$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=tmatsuno-test-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
The manifest files were written to the directory where Terraform ran, so deploy those as well.
# Both of these are deployed to the EKS cluster
$ kubectl apply -f aws_alb_sa.yaml
$ kubectl apply -f aws_auth_configmap.yaml
Verifying Inter-Cluster Connectivity
Start an nginx container on EKS and an httpd container on GKE.
$ export AWS_CTX="arn:aws:eks:ap-northeast-1:xxxxxx:cluster/tmatsuno-test-cluster"
$ export GCP_CTX="gke_xxxxxx_asia-northeast1_tmatsuno-test-cluster"
$ kubectl apply -f nginx.yaml --context=${AWS_CTX}
$ kubectl apply -f httpd.yaml --context=${GCP_CTX}
The manifests look like this:
nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: nginx
  name: nginx
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
httpd.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: httpd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  namespace: httpd
spec:
  selector:
    matchLabels:
      app: httpd
  replicas: 1
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
        - name: httpd
          image: httpd:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpd
  namespace: httpd
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: httpd
  ports:
    - name: http
      port: 80
      targetPort: 80
After a short wait, the L4 load balancers finish deploying.
The endpoints exposed to the internal network appear in the ADDRESS column (EKS Ingress) and the EXTERNAL-IP column (GKE Service), so note them down.
$ kubectl get ing -n nginx --context=${AWS_CTX}
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx alb * internal-k8s-nginx-nginx-029f60f89d-1145409129.ap-northeast-1.elb.amazonaws.com 80 53s
$ kubectl get svc -n httpd --context=${GCP_CTX}
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpd LoadBalancer 172.18.48.138 172.16.0.17 80:32435/TCP 4m39s
Launch a container running curl to check whether communication across the clusters works.
$ kubectl run -it --rm curl --image=curlimages/curl --context="${AWS_CTX}" -- /bin/sh
If you don't see a command prompt, try pressing enter.
~ $ curl -v http://172.16.0.17:80
* Trying 172.16.0.17:80...
* Connected to 172.16.0.17 (172.16.0.17) port 80
> GET / HTTP/1.1
> Host: 172.16.0.17
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 24 Dec 2023 13:56:37 GMT
< Server: Apache/2.4.58 (Unix)
< Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
< ETag: "2d-432a5e4a73a80"
< Accept-Ranges: bytes
< Content-Length: 45
< Content-Type: text/html
<
<html><body><h1>It works!</h1></body></html>
* Connection #0 to host 172.16.0.17 left intact
$ kubectl run -it --rm curl --image=curlimages/curl --context="${GCP_CTX}" -- /bin/sh
If you don't see a command prompt, try pressing enter.
~ $ curl -v http://internal-k8s-nginx-nginx-029f60f89d-1145409129.ap-northeast-1.elb.amazonaws.com:80
* Host internal-k8s-nginx-nginx-029f60f89d-1145409129.ap-northeast-1.elb.amazonaws.com:80 was resolved.
* IPv6: (none)
* IPv4: 172.20.23.80, 172.20.26.202
* Trying 172.20.23.80:80...
* Connected to internal-k8s-nginx-nginx-029f60f89d-1145409129.ap-northeast-1.elb.amazonaws.com (172.20.23.80) port 80
> GET / HTTP/1.1
> Host: internal-k8s-nginx-nginx-029f60f89d-1145409129.ap-northeast-1.elb.amazonaws.com
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 24 Dec 2023 13:57:17 GMT
< Content-Type: text/html
< Content-Length: 615
< Connection: keep-alive
< Server: nginx/1.25.3
< Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
< ETag: "6537cac7-267"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host internal-k8s-nginx-nginx-029f60f89d-1145409129.ap-northeast-1.elb.amazonaws.com left intact
A simple test, but it confirms that cross-cluster communication works.
Additional Notes
If you're building multi-cluster setups on GCP, GKE Enterprise and Anthos are also options.
- https://cloud.google.com/anthos/docs/concepts/gke-editions?hl=ja
- https://cloud.google.com/anthos/docs/concepts/overview?hl=ja
Summary
We connected cloud-hosted k8s clusters over VPN and verified cross-cluster communication through Internal Load Balancers.
I had really wanted to try multi-cluster communication with Istio, but couldn't find the time and gave up for now.
I'd like to take that up on another occasion.
Reference Links
- https://dev.classmethod.jp/articles/aws_gcp_vpn_terraform/
- https://www.qoosky.io/techs/c11188b146
- https://cloud.google.com/network-connectivity/docs/vpn/tutorials/create-ha-vpn-connections-google-cloud-aws?hl=ja
- https://medium.com/@niyi.alimi/seamless-vpn-connectivity-achieving-high-availability-connectivity-between-aws-and-gcp-using-1b51800d3b0f
- https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/aws-load-balancer-controller.html
- https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html