Trying out deploying ACK and ApsaraDB for MongoDB with Terraform

Posted at 2024-07-18

The previous article is here! Trying out ACK together with ApsaraDB for MongoDB

Deploying to ACK with Terraform

The official documentation describes a deployment method using Terraform, so let's try it.
https://www.alibabacloud.com/help/en/ack/ack-managed-and-ack-dedicated/developer-reference/terraform/?spm=a2c63.p38356.0.0.58704f25P2NFxS

It has samples covering the full set of resources, so I'll build on top of those.
The plan: import each existing resource, fill in the required parameters while comparing against the tfstate,
confirm with terraform plan that there is (mostly) no diff, and then deploy it as a new cluster.
If I can hit the new cluster's API endpoint and get the same response as the existing cluster, I'll call it done!

The Terraform version is 1.7.5.
The Alibaba Cloud provider documentation for Terraform is here.

First, get the Alibaba Cloud credentials.
From the menu at the top right of the console, select AccessKey Management, then issue an access key and save it.

https://storage.googleapis.com/zenn-user-upload/b1cedfca3c8c-20240320.png

The sample tf from the documentation is below.
I'll fill things in using this as a reference!

#provider, use alicloud
provider "alicloud" {
  region = "cn-shenzhen"
  # Make sure that the same region is specified in the main.tf and variable.tf files.
}
variable "k8s_name_prefix" {
  description = "The name prefix used to create managed kubernetes cluster."
  default     = "tf-ack-shenzhen"
}
resource "random_uuid" "this" {}
# The default resource names.
locals {
  k8s_name_terway         = substr(join("-", [var.k8s_name_prefix, "terway"]), 0, 63)
  k8s_name_flannel        = substr(join("-", [var.k8s_name_prefix, "flannel"]), 0, 63)
  k8s_name_ask            = substr(join("-", [var.k8s_name_prefix, "ask"]), 0, 63)
  new_vpc_name            = "tf-vpc-172-16"
  new_vsw_name_azD        = "tf-vswitch-azD-172-16-0"
  new_vsw_name_azE        = "tf-vswitch-azE-172-16-2"
  new_vsw_name_azF        = "tf-vswitch-azF-172-16-4"
  nodepool_name           = "default-nodepool"
  managed_nodepool_name   = "managed-node-pool"
  autoscale_nodepool_name = "autoscale-node-pool"
  log_project_name        = "log-for-${local.k8s_name_terway}"
}
# The ECS instance specifications of the worker nodes. Terraform searches for ECS instance types that fulfill the CPU and memory requests.
data "alicloud_instance_types" "default" {
  cpu_core_count       = 8
  memory_size          = 32
  availability_zone    = var.availability_zone[0]
  kubernetes_node_role = "Worker"
}
// The zone that has sufficient ECS instances of the required specifications.
data "alicloud_zones" "default" {
  available_instance_type = data.alicloud_instance_types.default.instance_types[0].id
}
# The VPC.
resource "alicloud_vpc" "default" {
  vpc_name   = local.new_vpc_name
  cidr_block = "172.16.0.0/12"
}
# The node vSwitches.
resource "alicloud_vswitch" "vswitches" {
  count      = length(var.node_vswitch_ids) > 0 ? 0 : length(var.node_vswitch_cidrs)
  vpc_id     = alicloud_vpc.default.id
  cidr_block = element(var.node_vswitch_cidrs, count.index)
  zone_id    = element(var.availability_zone, count.index)
}
# The pod vSwitches.
resource "alicloud_vswitch" "terway_vswitches" {
  count      = length(var.terway_vswitch_ids) > 0 ? 0 : length(var.terway_vswitch_cidrs)
  vpc_id     = alicloud_vpc.default.id
  cidr_block = element(var.terway_vswitch_cidrs, count.index)
  zone_id    = element(var.availability_zone, count.index)
}
# The managed Kubernetes cluster.
resource "alicloud_cs_managed_kubernetes" "default" {
  # The name of the cluster.
  name = local.k8s_name_terway
  # Create an ACK Pro cluster.
  cluster_spec = "ack.pro.small"
  version      = "1.28.3-aliyun.1"
  # The vSwitches of the node pool. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone.
  worker_vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id))

  # The pod vSwitches.
  pod_vswitch_ids = split(",", join(",", alicloud_vswitch.terway_vswitches.*.id))

  # Specify whether to create a NAT gateway when the system creates the Kubernetes cluster. Default value: true.
  new_nat_gateway = true
  # The pod CIDR block. If you set cluster_network_type to flannel, this parameter is required. The pod CIDR block cannot be the same as the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the pod CIDR block after the cluster is created. Maximum number of hosts in the cluster: 256.
  # pod_cidr                  = "10.10.0.0/16"
  # The Service CIDR block. The Service CIDR block cannot be the same as the VPC CIDR block or the CIDR blocks of other Kubernetes clusters in the VPC. You cannot change the Service CIDR block after the cluster is created.
  service_cidr = "10.11.0.0/16"
  # Specify whether to create an Internet-facing SLB instance for the API server of the cluster. Default value: false.
  slb_internet_enabled = true

  # Enable Ram Role for ServiceAccount
  enable_rrsa = true

  # The logs of the control planes.
  control_plane_log_components = ["apiserver", "kcm", "scheduler", "ccm"]

  # The components.
  dynamic "addons" {
    for_each = var.cluster_addons
    content {
      name   = lookup(addons.value, "name", var.cluster_addons)
      config = lookup(addons.value, "config", var.cluster_addons)
    }
  }
}

# The regular node pool.
resource "alicloud_cs_kubernetes_node_pool" "default" {
  # The name of the cluster.
  cluster_id = alicloud_cs_managed_kubernetes.default.id
  # The name of the node pool.
  name = local.nodepool_name
  # The vSwitches of the node pool. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone.
  vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id))

  # Worker ECS Type and ChargeType
  instance_types       = var.worker_instance_types
  instance_charge_type = "PostPaid"

  # customize worker instance name
  # node_name_mode      = "customized,ack-terway-shenzhen,ip,default"

  #Container Runtime
  runtime_name    = "containerd"
  runtime_version = "1.6.20"

  # The expected number of nodes in the node pool.
  desired_size = 2
  # The password that is used to log on to the cluster by using SSH.
  password = var.password

  # Specify whether to install the CloudMonitor agent on the nodes in the cluster.
  install_cloud_monitor = true

  # The type of system disk used by the nodes. Default value: cloud_efficiency.
  system_disk_category = "cloud_efficiency"
  system_disk_size     = 100

  # OS Type
  image_type = "AliyunLinux"

  # The configuration of the data disks of the nodes.
  data_disks {
    # The disk type.
    category = "cloud_essd"
    # The disk size.
    size = 120
  }
}

# The managed node pool.
resource "alicloud_cs_kubernetes_node_pool" "managed_node_pool" {
  # The name of the cluster.
  cluster_id = alicloud_cs_managed_kubernetes.default.id
  # The name of the node pool.
  name = local.managed_nodepool_name
  # The vSwitches of the node pool. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone.
  vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id))

  # The expected number of nodes in the node pool.
  desired_size = 0

  # Managed Node Pool
  management {
    auto_repair     = true
    auto_upgrade    = true
    surge           = 1
    max_unavailable = 1
  }

  # Worker ECS Type and ChargeType
  # instance_types      = [data.alicloud_instance_types.default.instance_types[0].id]
  instance_types       = var.worker_instance_types
  instance_charge_type = "PostPaid"

  # customize worker instance name
  # node_name_mode      = "customized,ack-terway-shenzhen,ip,default"

  #Container Runtime
  runtime_name    = "containerd"
  runtime_version = "1.6.20"

  # The password that is used to log on to the cluster by using SSH.
  password = var.password

  # Specify whether to install the CloudMonitor agent on the nodes in the cluster.
  install_cloud_monitor = true

  # The type of system disk used by the nodes. Default value: cloud_efficiency.
  system_disk_category = "cloud_efficiency"
  system_disk_size     = 100

  # OS Type
  image_type = "AliyunLinux"

  # The configuration of the data disks of the nodes.
  data_disks {
    # The disk type.
    category = "cloud_essd"
    # The disk size.
    size = 120
  }
}

# The node pool that has auto scaling enabled.
resource "alicloud_cs_kubernetes_node_pool" "autoscale_node_pool" {
  # The name of the cluster.
  cluster_id = alicloud_cs_managed_kubernetes.default.id
  # The name of the node pool.
  name = local.autoscale_nodepool_name
  # The vSwitches of the node pool. Specify one or more vSwitch IDs. The vSwitches must be in the zone specified by availability_zone.
  vswitch_ids = split(",", join(",", alicloud_vswitch.vswitches.*.id))

  # AutoScale Node Pool
  scaling_config {
    min_size = 1
    max_size = 10
  }

  # Worker ECS Type and ChargeType
  instance_types = var.worker_instance_types

  # customize worker instance name
  # node_name_mode      = "customized,ack-terway-shenzhen,ip,default"

  #Container Runtime
  runtime_name    = "containerd"
  runtime_version = "1.6.20"

  # The password that is used to log on to the cluster by using SSH.
  password = var.password

  # Specify whether to install the CloudMonitor agent on the nodes in the cluster.
  install_cloud_monitor = true

  # The type of system disk used by the nodes. Default value: cloud_efficiency.
  system_disk_category = "cloud_efficiency"
  system_disk_size     = 100

  # OS Type
  image_type = "AliyunLinux3"

  # The configuration of the data disks of the nodes.
  data_disks {
    # The disk type.
    category = "cloud_essd"
    # The disk size.
    size = 120
  }
}
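
The sample above references several variables (availability_zone, node_vswitch_ids, node_vswitch_cidrs, terway_vswitch_ids, terway_vswitch_cidrs, worker_instance_types, cluster_addons, password) whose declarations are not shown here. A minimal sketch of what they might look like, with example values of my own choosing rather than the documentation's:

variable "availability_zone" {
  description = "The zones used by the node and pod vSwitches."
  type        = list(string)
  default     = ["cn-shenzhen-d", "cn-shenzhen-e", "cn-shenzhen-f"]
}
variable "node_vswitch_ids" {
  description = "Existing vSwitch IDs for nodes. Leave empty to create new vSwitches."
  type        = list(string)
  default     = []
}
variable "node_vswitch_cidrs" {
  description = "CIDR blocks for the node vSwitches to create."
  type        = list(string)
  default     = ["172.16.0.0/23", "172.16.2.0/23", "172.16.4.0/23"]
}
variable "terway_vswitch_ids" {
  description = "Existing vSwitch IDs for Terway pods. Leave empty to create new vSwitches."
  type        = list(string)
  default     = []
}
variable "terway_vswitch_cidrs" {
  description = "CIDR blocks for the pod (Terway) vSwitches to create."
  type        = list(string)
  default     = ["172.16.208.0/20", "172.16.224.0/20", "172.16.240.0/20"]
}
variable "worker_instance_types" {
  description = "ECS instance types for worker nodes."
  type        = list(string)
  default     = ["ecs.g7.xlarge"]
}
variable "password" {
  description = "SSH logon password for worker nodes."
  type        = string
  sensitive   = true
}
variable "cluster_addons" {
  description = "Addons to install, e.g. Terway and the CSI plugin."
  type = list(object({
    name   = string
    config = string
  }))
  default = [
    { name = "terway-eniip", config = "" },
  ]
}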

Starting from the provider.

provider "alicloud" {
  access_key = "your-access-key"
  secret_key = "your-secret-key"
  region = "ap-northeast-1"
}

By the way, the alicloud provider version was v1.223.1.
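
Instead of hardcoding the keys in the .tf file, the alicloud provider can also pick credentials up from environment variables, so the block can be kept free of secrets; a hedged sketch:

# Set ALICLOUD_ACCESS_KEY and ALICLOUD_SECRET_KEY in the shell before running terraform;
# the provider reads them automatically, so only the region is declared here.
provider "alicloud" {
  region = "ap-northeast-1"
}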

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "alibaba_test2"
}

The kubernetes provider version was v2.30.0!
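
To keep those provider versions from drifting, here is a sketch of a required_providers block pinning them, plus declarations for the variables my config below keeps referencing (base_name, password, email, image_id); the descriptions are my own guesses at their roles:

terraform {
  required_version = ">= 1.7.5"
  required_providers {
    alicloud = {
      source  = "aliyun/alicloud"
      version = "~> 1.223"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.30"
    }
  }
}

# Variables used throughout the config below.
variable "base_name" {
  description = "Common name given to most resources in this article."
}
variable "password" {
  description = "Password reused for MongoDB and the registry secret in this experiment."
  sensitive   = true
}
variable "email" {
  description = "Account name used to log in to the container registry."
}
variable "image_id" {
  description = "Full image reference that the ACK Deployment pulls."
}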

vpc and vswitch

resource "alicloud_vpc" "vpc" {
  cidr_block = "192.168.0.0/16"
  vpc_name   = var.base_name
}
resource "alicloud_vswitch" "vswitches" {
  vpc_id       = alicloud_vpc.vpc.id
  cidr_block   = "192.168.0.0/24"
  zone_id      = "ap-northeast-1c"
  vswitch_name = var.base_name
}
resource "alicloud_vswitch" "terway_vswitches" {
  vpc_id       = alicloud_vpc.vpc.id
  cidr_block   = "192.168.16.0/24"
  zone_id      = "ap-northeast-1c"
  vswitch_name = var.base_name
}

The cluster will use Terway, and in that case you have to specify not only worker_vswitch_ids but also pod_vswitch_ids (I also mentioned this back in the network design section).

apsara mongo

resource "alicloud_mongodb_instance" "mongodb" {
  name                = var.base_name
  engine_version      = "7.0"
  db_instance_class   = "mdb.shard.2x.xlarge.d"
  db_instance_storage = 20
  vpc_id              = alicloud_vpc.vpc.id
  vswitch_id          = alicloud_vswitch.vswitches.id
  security_ip_list    = [alicloud_vpc.vpc.cidr_block]
  network_type        = "VPC"
  account_password    = var.password
}
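
The API will need the new Mongo connection info later, so it is handy to surface it from Terraform too. A hedged sketch, assuming this provider version exports a replica_sets attribute on alicloud_mongodb_instance (verify the attribute name against the provider docs):

# Connection endpoints of the new replica set, to point the API at afterwards.
output "mongodb_replica_sets" {
  value = alicloud_mongodb_instance.mongodb.replica_sets
}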

ACK cluster

resource "alicloud_cs_managed_kubernetes" "default" {
  worker_vswitch_ids   = [alicloud_vswitch.vswitches.id]
  pod_vswitch_ids      = [alicloud_vswitch.terway_vswitches.id]
  cluster_spec         = "ack.pro.small"
  cluster_domain       = "cluster.local"
  load_balancer_spec   = "slb.s1.small"
  name                 = var.base_name
  service_cidr         = "172.16.0.0/16"
  slb_internet_enabled = true
  enable_rrsa          = true
  addons {
    name   = "terway-eniip"
    config = ""
  }
}
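
To grab the new cluster's endpoints for the kubeconfig step later, a hedged sketch; I believe the cluster resource exports a connections attribute in this provider version, but verify against the alicloud provider docs:

# API server / service endpoints of the new ACK cluster.
output "ack_connections" {
  value = alicloud_cs_managed_kubernetes.default.connections
}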

Since we're using Terway, I put it in the addons!

nodepool

resource "alicloud_ecs_key_pair" "test" {
  key_pair_name = var.base_name
}

resource "alicloud_cs_kubernetes_node_pool" "default" {
  cluster_id                 = alicloud_cs_managed_kubernetes.default.id
  instance_types             = ["ecs.g7.xlarge"]
  vswitch_ids                = [alicloud_vswitch.vswitches.id]
  image_type                 = "AliyunLinux3"
  image_id                   = "aliyun_3_9_x64_20G_alibase_20231219.vhd"
  key_name                   = alicloud_ecs_key_pair.test.key_name
  system_disk_category       = "cloud_essd"
  system_disk_size           = 120
  node_pool_name             = "default-nodepool"
  cpu_policy                 = "none"
  multi_az_policy            = "BALANCE"
  count                      = 1
  auto_renew                 = false
  auto_renew_period          = 0
  compensate_with_on_demand  = false
  desired_size               = 1
  internet_max_bandwidth_out = 0
  login_as_non_root          = false
  node_name_mode             = "nodeip"
  management {
    auto_repair  = true
    auto_upgrade = true
    auto_vul_fix = true
    enable       = true
  }
}

A key pair was needed, so I created that too!
As for the node pool, I just specified whatever settings caught my eye, but there are probably defaults, so maybe it would work fine without specifying this much...?

At this point, run terraform apply once!
Once everything has been created, set up the kubeconfig and apply the Kubernetes Deployments, Services, and so on again.

The Kubernetes stuff

First, access the cluster and put its connection info into the kubeconfig.
Then switch the context over to the newly created test2 cluster with kubectx.
Also, as usual the Mongo connection info has changed, so update the target inside the API!
And now, apply the Deployment and everything under it.

resource "kubernetes_deployment_v1" "test" {
  metadata {
    name      = var.base_name
    namespace = "default"
    labels = {
      app = var.base_name
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = var.base_name
      }
    }
    template {
      metadata {
        labels = {
          app = var.base_name
        }
      }
      spec {
        container {
          image             = var.image_id
          name              = var.base_name
          image_pull_policy = "Always"
          port {
            container_port = 8000
            name           = var.base_name
            protocol       = "TCP"
          }
          resources {
            requests = {
              cpu    = "250m"
              memory = "512Mi"
            }
          }
          stdin                      = true
          termination_message_path   = "/dev/termination-log"
          termination_message_policy = "File"
          tty                        = true
        }
        dns_policy = "ClusterFirst"
        image_pull_secrets {
          name = kubernetes_secret_v1.test.metadata.0.name
        }
        restart_policy = "Always"
      }
    }
  }
}
resource "kubernetes_secret_v1" "test" {
  metadata {
    name = var.base_name
  }
  data = {
    ".dockerconfigjson" = templatefile(".dockerconfigjson", { email = var.email, password = var.password })
  }
  type = "kubernetes.io/dockerconfigjson"
}
resource "kubernetes_service" "test" {
  metadata {
    name = "test-svc2"
    annotations = {
      "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec" : "slb.s1.small"
    }
  }
  spec {
    allocate_load_balancer_node_ports = true
    external_traffic_policy           = "Local"
    internal_traffic_policy           = "Cluster"
    port {
      port        = 80
      target_port = 8000
      protocol    = "TCP"
      name        = var.base_name
    }
    selector = {
      app = var.base_name
    }
    type = "LoadBalancer"
  }
}
resource "kubernetes_ingress_v1" "test" {
  metadata {
    name      = "test-ingress2"
    namespace = "default"
    labels = {
      "ingress-controller" = "nginx"
    }
  }
  spec {
    ingress_class_name = "nginx"
    rule {
      http {
        path {
          backend {
            service {
              name = "test-svc2"
              port {
                number = 80
              }
            }
          }
          path      = "/"
          path_type = "ImplementationSpecific"
        }
      }
    }
  }
}
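
To find the address to hit without opening the console, a hedged sketch that reads the SLB address off the LoadBalancer Service's status; the attribute path is assumed from the kubernetes provider's exported status block:

# External address the cloud controller assigned to the LoadBalancer Service.
output "test_svc_lb_address" {
  value = try(
    kubernetes_service.test.status[0].load_balancer[0].ingress[0].ip,
    kubernetes_service.test.status[0].load_balancer[0].ingress[0].hostname,
    null
  )
}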

For the secret the Deployment uses to pull the Docker image, I looked at the one created in the existing cluster as a reference.
I prepared a .dockerconfigjson file and read it in on the tf side.

{
  "auths": {
    "registry-intl-vpc.ap-northeast-1.aliyuncs.com": {
      "username": "${email}",
      "password": "${password}",
      "auth": "アカウント名とアクセストークンをコロンで繋いだ文字列をBase64エンコードしたもの"
    }
  }
}
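
As an alternative sketch (not what I actually used), the same secret can be rendered inline with jsonencode/base64encode instead of a separate template file; the auth field really is just base64 of "username:password":

# Hypothetical inline variant of the pull secret above.
resource "kubernetes_secret_v1" "test_inline" {
  metadata {
    name = "${var.base_name}-inline" # hypothetical name so it does not clash with the real secret
  }
  type = "kubernetes.io/dockerconfigjson"
  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "registry-intl-vpc.ap-northeast-1.aliyuncs.com" = {
          username = var.email
          password = var.password
          auth     = base64encode("${var.email}:${var.password}")
        }
      }
    })
  }
}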

And with that, it's done!
I threw a request at the resulting Ingress endpoint and got a response through~~~~.

https://storage.googleapis.com/zenn-user-upload/d552b2b41283-20240517.png

container registry

Up to this point everything except the container registry had been moved to TF, so let's finish with just this last piece.

So far I had been building on the Personal Edition.

https://storage.googleapis.com/zenn-user-upload/641d374ba53f-20240517.png

So, just as I was about to put this under TF management...

https://storage.googleapis.com/zenn-user-upload/56b6f6b651f9-20240517.png

Wait, is there no resource for the Personal Edition???

There seems to be one for repos only, but it doesn't look like you can create an instance with it, so I guess I'll spin up an Enterprise Edition instance instead...

For now, create the instance from the console.

https://storage.googleapis.com/zenn-user-upload/320062cf88db-20240517.png

https://storage.googleapis.com/zenn-user-upload/b65e0e52db6c-20240517.png

So subscription is the only billing model, huh.

https://storage.googleapis.com/zenn-user-upload/c9a543683475-20240517.png

Done~~~.

I tried terraform import on it, and the config I ended up with is below.

resource "alicloud_cr_ee_instance" "default" {
  payment_type   = "Subscription"
  period         = 1
  renew_period   = 0
  renewal_status = "ManualRenewal"
  instance_type  = "Basic"
  instance_name  = var.base_name
}

By the way, it seems you can't delete the instance from the console(?), it didn't look like it was covered in the official docs either,
and terraform destroy didn't remove it, so for now I'll give up on creating it from Terraform.

I'll configure the rest in Terraform, following the guidance shown in the console.

https://storage.googleapis.com/zenn-user-upload/d4de92299102-20240517.png

Access control is set up as follows.

resource "alicloud_cr_vpc_endpoint_linked_vpc" "test" {
  instance_id                      = alicloud_cr_ee_instance.test.id
  vpc_id                           = alicloud_vpc.vpc.id
  vswitch_id                       = alicloud_vswitch.vswitches.id
  module_name                      = "Registry"
  enable_create_dns_record_in_pvzt = true
}

Access credential management didn't seem to have a TF resource, so I'm putting that off for later.

The namespace and repository are below!

resource "alicloud_cr_ee_namespace" "test" {
  instance_id        = alicloud_cr_ee_instance.test.id
  name               = "test2dayo"
  auto_create        = false
  default_visibility = "PUBLIC"
}

resource "alicloud_cr_ee_repo" "test" {
  instance_id = alicloud_cr_ee_instance.test.id
  namespace   = alicloud_cr_ee_namespace.test.name
  name        = "test2"
  summary     = "test2 repo"
  repo_type   = "PUBLIC"
  detail      = "this is a test2 repo"
}

Set up the access credentials from the console.

https://storage.googleapis.com/zenn-user-upload/52390da2ab4d-20240517.png

Reading the docs, from a security standpoint it seems better to connect over the internet via ECS or build from a code source, but for now I'll delete the whitelist, fully open internet access, and connect from my local machine.

https://storage.googleapis.com/zenn-user-upload/ad7b6180edec-20240517.png

https://storage.googleapis.com/zenn-user-upload/0647f973ccb4-20240519.png

Login succeeded, so I push the Docker image to the test2 repository.

https://storage.googleapis.com/zenn-user-upload/beff8789c9ed-20240519.png

Done~.
While I was at it, I also changed the ACK Deployment's image id to the test2 one, redeployed, hit the API, and that's a wrap!
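
For reference, a hedged sketch of composing the new image reference in Terraform instead of editing it by hand; the Enterprise Edition registry domain differs from the personal one, so I treat it as a variable copied from the console rather than guessing its format:

variable "cr_registry_domain" {
  description = "VPC endpoint of the Enterprise Edition registry, copied from the console."
}

locals {
  # e.g. <domain>/test2dayo/test2:latest once the image has been pushed to the new repo.
  test2_image = "${var.cr_registry_domain}/${alicloud_cr_ee_namespace.test.name}/${alicloud_cr_ee_repo.test.name}:latest"
}

The Deployment's image (var.image_id above) can then point at local.test2_image.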

Here is what ended up managed by Terraform this time!

https://storage.googleapis.com/zenn-user-upload/5725f18649df-20240519.png

Only alicloud_cr_ee_instance.test couldn't be controlled (or deleted?) from tf because I had already created it once, but everything else was comfortable to handle ✌️

Impressions

Quick bullet points:

  • This was my first time touching Alibaba Cloud, but compared to other clouds I personally found the docs easier to read (especially around networking, the visualizations and frequent comparison tables were nice)
  • ACK spins pods up and tears them down quickly, which is comfortable, and isn't Terway seriously good?
  • ApsaraDB for MongoDB is also easy to pick up and convenient
  • When I tried to use Terraform's import block it didn't seem to be supported, so there still seem to be plenty of things you can't do yet
  • Honestly I haven't properly compared pricing with other clouds... lol, I ran out of steam, but it doesn't feel particularly expensive
  • The docs aside, there are still quite a few spots in the console that aren't localized into Japanese, so it would be nice if that got a bit more attention~
  • It would have been nice if Alibaba could issue certificates the way GCP does with managed certificates~ You just want to slap a cert on an Ingress and publish it without thinking, right? GCP is strong there
  • DMS (Data Management) is also handy~~~~~ isn't it the best?

There are probably plenty of places where my understanding is off or it isn't best practice, but it came out pretty well, right!?
I'd like to keep playing with Alibaba Cloud going forward.
