Configuring SASL/PLAIN, SASL/SCRAM, and SSL for Kafka

I had a hard time configuring SASL authentication and SSL for Kafka, so this is a memo for future reference.

According to
https://docs.confluent.io/platform/current/kafka/authentication_sasl/index.html#authentication-with-sasl-using-jaas
the authentication mechanisms available with SASL are the following.

  • GSSAPI (Kerberos)
  • OAUTHBEARER
  • SCRAM
  • PLAIN
  • Delegation Tokens
  • LDAP

This article covers PLAIN and SCRAM, and shows how to enable SSL in each case.

Background

We had been managing Kafka user accounts in a JAAS file (SASL/PLAIN), but every time an account was added, the JAAS file had to be rewritten and the servers restarted.
With SASL/SCRAM, user information is stored in ZooKeeper and accounts can be added with a command, so we migrated to SASL/SCRAM.

Files

You will prepare files like the following.

.
|- docker-compose.yaml
|- zookeeper.env
|- kafka.env
|- secrets/
|  |- zookeeper_jaas.conf
|  |- kafka_jaas.conf
|  |- ssl/
|  |  |- kafka.keystore.jks
|  |  |- kafka.truststore.jks
|  |  |- kafka_keystore_creds    # file containing the password for kafka.keystore.jks
|  |  |- kafka_truststore_creds  # file containing the password for kafka.truststore.jks
|  |  |- kafka_sslkey_creds      # file containing the password for the private key stored in kafka.keystore.jks

Caveats

This article uses the Docker images provided by Confluent (cp-zookeeper, cp-kafka).
With the regular package distributions (including Apache Kafka), parameters are written in zookeeper.properties and server.properties (kafka.properties),
but with the Docker images, parameters are supplied as environment variables.
Parameter names and environment variable names correspond one-to-one, with some exceptions.

There is no documentation for this, so you end up reading the cp-zookeeper and cp-kafka Dockerfiles, the configure shell script that generates .properties files from environment variables, the jinja2 template files (.properties), and the examples.
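As a rough illustration of the naming rule for cp-kafka (my reading of the configure script, not official documentation): drop the KAFKA_ prefix, lowercase, and replace underscores with dots. The cp-zookeeper variables follow their own rules (ZooKeeper properties are camelCase), and exceptions like KAFKA_SSL_KEYSTORE_FILENAME exist, so treat this as an approximation:

```python
# Sketch of the cp-kafka naming convention: KAFKA_FOO_BAR -> foo.bar.
# This mirrors the observed rule; exceptions exist (e.g. the KAFKA_SSL_*
# variables below), so it is an approximation, not the actual implementation.
def env_to_property(env_name: str) -> str:
    assert env_name.startswith("KAFKA_")
    return env_name[len("KAFKA_"):].lower().replace("_", ".")

print(env_to_property("KAFKA_SECURITY_INTER_BROKER_PROTOCOL"))
# security.inter.broker.protocol
```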

PLAINTEXT

First, here is a configuration with neither user authentication nor SSL.

docker-compose.yaml
version: '2.1'
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:6.0.1
    env_file:
      - ./zookeeper.env
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 12181
      ZOOKEEPER_SERVERS: localhost:12888:13888;localhost:22888:23888;localhost:32888:33888
      # ZOOKEEPER_ADMIN_SERVER_PORT: 8888
    network_mode: host
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/zookeeper1_data:/var/lib/zookeeper/data
      - ./volumes/zookeeper1_log:/var/lib/zookeeper/log

  zookeeper2:
    image: confluentinc/cp-zookeeper:6.0.1
    env_file:
      - ./zookeeper.env
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_SERVERS: localhost:12888:13888;localhost:22888:23888;localhost:32888:33888
      # ZOOKEEPER_ADMIN_SERVER_PORT: 8888
    network_mode: host
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/zookeeper2_data:/var/lib/zookeeper/data
      - ./volumes/zookeeper2_log:/var/lib/zookeeper/log

  zookeeper3:
    image: confluentinc/cp-zookeeper:6.0.1
    env_file:
      - ./zookeeper.env
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_SERVERS: localhost:12888:13888;localhost:22888:23888;localhost:32888:33888
      # ZOOKEEPER_ADMIN_SERVER_PORT: 8888
    network_mode: host
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/zookeeper3_data:/var/lib/zookeeper/data
      - ./volumes/zookeeper3_log:/var/lib/zookeeper/log

  kafka1:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:12181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092
    network_mode: host
    depends_on:
      - zookeeper1
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka1:/var/lib/kafka

  kafka2:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092
    network_mode: host
    depends_on:
      - zookeeper2
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka2:/var/lib/kafka

  kafka3:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:32181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:39092
    network_mode: host
    depends_on:
      - zookeeper3
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka3:/var/lib/kafka

zookeeper.env
ZOOKEEPER_TICK_TIME=2000
ZOOKEEPER_ADMIN_ENABLE_SERVER=false  # do not start the admin web GUI server
kafka.env
KAFKA_SECURITY_INTER_BROKER_PROTOCOL=PLAINTEXT

SASL/PLAIN

First, to add user authentication, we configure SASL/PLAIN. When SASL authentication is used between (producer, consumer) and broker, SASL authentication between broker and ZooKeeper is also forcibly enabled, so that must be configured as well.

docker-compose.yaml
version: '2.1'
services:
  zookeeper1:
    # unchanged
  zookeeper2:
    # unchanged
  zookeeper3:
    # unchanged

  kafka1:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:12181
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:19092
    network_mode: host
    depends_on:
      - zookeeper1
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka1:/var/lib/kafka

  kafka2:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:29092
    network_mode: host
    depends_on:
      - zookeeper2
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka2:/var/lib/kafka

  kafka3:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:32181
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:39092
    network_mode: host
    depends_on:
      - zookeeper3
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka3:/var/lib/kafka

zookeeper.env
ZOOKEEPER_TICK_TIME=2000
ZOOKEEPER_ADMIN_ENABLE_SERVER=false
KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_jaas.conf
kafka.env
KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
KAFKA_SASL_ENABLED_MECHANISMS=PLAIN
KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/secrets/kafka_jaas.conf
secrets/zookeeper_jaas.conf
Server {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   user_zooadmin="zooadmin-pass"  // account accepted by the server side of kafka-zookeeper and zookeeper-zookeeper connections
   ;
};
Client {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="zooadmin"  // account that the connecting side presents in zookeeper-zookeeper connections
   password="zooadmin-pass"
   ;
};
secrets/kafka_jaas.conf
KafkaServer {  // user information used between client-broker and broker-broker
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafkaadmin"  // account that the connecting broker presents in broker-broker connections
    password="kafkaadmin-pass"
    user_kafkaadmin="kafkaadmin-pass"  // account accepted by the receiving broker in broker-broker connections
   ;
};
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafkaadmin"  // account that the tool scripts inside the kafka container (kafka-configs, kafka-acls, etc.) present to the broker
    password="kafkaadmin-pass"
    ;
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zooadmin"  // account that the broker presents to zookeeper
    password="zooadmin-pass"
    ;
};

SSL

Next, we build a configuration in which (producer, consumer)-broker communication is encrypted with SSL. In this state, broker-broker communication is also SSL-encrypted.

The SSL certificate used by the broker must be stored in a jks-format file.
How to create it is described in the appendix.

docker-compose.yaml
version: '2.1'
services:
  zookeeper1:
    # unchanged
  zookeeper2:
    # unchanged
  zookeeper3:
    # unchanged

  kafka1:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:12181
      KAFKA_ADVERTISED_LISTENERS: SSL://localhost:19092  # must use a *SSL:// scheme (cp-kafka behavior)
    network_mode: host
    depends_on:
      - zookeeper1
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka1:/var/lib/kafka

  kafka2:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181
      KAFKA_ADVERTISED_LISTENERS: SSL://localhost:29092
    network_mode: host
    depends_on:
      - zookeeper2
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka2:/var/lib/kafka

  kafka3:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:32181
      KAFKA_ADVERTISED_LISTENERS: SSL://localhost:39092
    network_mode: host
    depends_on:
      - zookeeper3
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka3:/var/lib/kafka

kafka.env
KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL
# The Kafka setting is ssl.keystore.location, but cp-kafka forcibly interprets this as
# ssl.keystore.location = /etc/kafka/secrets/${KAFKA_SSL_KEYSTORE_FILENAME}
KAFKA_SSL_KEYSTORE_FILENAME=ssl/kafka.keystore.jks
# Not a Kafka setting itself; cp-kafka interprets it as
# ssl.keystore.password = $(cat /etc/kafka/secrets/${KAFKA_SSL_KEYSTORE_CREDENTIALS})
KAFKA_SSL_KEYSTORE_CREDENTIALS=ssl/kafka_keystore_creds
KAFKA_SSL_KEY_CREDENTIALS=ssl/kafka_sslkey_creds
KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https  # see appendix
KAFKA_SSL_TRUSTSTORE_FILENAME=ssl/kafka.truststore.jks
KAFKA_SSL_TRUSTSTORE_CREDENTIALS=ssl/kafka_truststore_creds
KAFKA_SSL_CLIENT_AUTH=required
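On the client side, the matching settings use the plain Kafka property names. A sketch of the client config for this layout (the property names are standard Kafka client configs; the paths and placeholder passwords are examples for this article's file tree):

```python
# Client-side counterpart of the broker SSL settings above.
ssl_client_props = {
    "bootstrap.servers": "localhost:19092",
    "security.protocol": "SSL",
    "ssl.truststore.location": "secrets/ssl/kafka.truststore.jks",
    "ssl.truststore.password": "<truststore password>",
    # The three settings below are needed because KAFKA_SSL_CLIENT_AUTH=required
    # enables mutual TLS, so the client must present its own certificate.
    "ssl.keystore.location": "secrets/ssl/kafka.keystore.jks",
    "ssl.keystore.password": "<keystore password>",
    "ssl.key.password": "<ssl key password>",
}
```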

SASL_SSL (SASL/PLAIN + SSL)

Enable SASL and SSL at the same time.

docker-compose.yaml
version: '2.1'
services:
  zookeeper1:
    # unchanged
  zookeeper2:
    # unchanged
  zookeeper3:
    # unchanged

  kafka1:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:12181
      KAFKA_ADVERTISED_LISTENERS: SASL_SSL://localhost:19092  # must use a *SSL:// scheme (cp-kafka behavior)
    network_mode: host
    depends_on:
      - zookeeper1
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka1:/var/lib/kafka

  kafka2:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181
      KAFKA_ADVERTISED_LISTENERS: SASL_SSL://localhost:29092
    network_mode: host
    depends_on:
      - zookeeper2
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka2:/var/lib/kafka

  kafka3:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:32181
      KAFKA_ADVERTISED_LISTENERS: SASL_SSL://localhost:39092
    network_mode: host
    depends_on:
      - zookeeper3
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka3:/var/lib/kafka

kafka.env
KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_SSL
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
KAFKA_SASL_ENABLED_MECHANISMS=PLAIN
KAFKA_SSL_KEYSTORE_FILENAME=ssl/kafka.keystore.jks
KAFKA_SSL_KEYSTORE_CREDENTIALS=ssl/kafka_keystore_creds
KAFKA_SSL_KEY_CREDENTIALS=ssl/kafka_sslkey_creds
KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https  # see appendix
KAFKA_SSL_TRUSTSTORE_FILENAME=ssl/kafka.truststore.jks
KAFKA_SSL_TRUSTSTORE_CREDENTIALS=ssl/kafka_truststore_creds
KAFKA_SSL_CLIENT_AUTH=required
KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/secrets/kafka_jaas.conf

KAFKA_ADVERTISED_LISTENERS must use a *SSL:// scheme and must contain the string SASL.
More precisely, if the scheme is *SSL://, the KAFKA_SSL_* environment variables are written into kafka.properties, and if the listener string contains SASL, the configure script checks that KAFKA_OPTS contains the string "java.security.auth.login.config".
- https://github.com/confluentinc/kafka-images/blob/db792e2276f9b6d03ad5997d31e7994accefa01f/kafka/include/etc/confluent/docker/configure#L65
- https://github.com/confluentinc/kafka-images/blob/db792e2276f9b6d03ad5997d31e7994accefa01f/kafka/include/etc/confluent/docker/configure#L101
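The two checks above can be sketched as follows (my reading of the linked configure script, simplified into Python; not the actual shell code):

```python
# Simplified model of the two cp-kafka startup checks described above.
def uses_ssl_config(advertised_listeners: str) -> bool:
    # KAFKA_SSL_* variables are applied when a listener scheme ends in SSL
    # (covers both SSL:// and SASL_SSL://).
    return any(listener.split("://")[0].endswith("SSL")
               for listener in advertised_listeners.split(","))

def missing_jaas(advertised_listeners: str, kafka_opts: str) -> bool:
    # When a listener contains SASL, KAFKA_OPTS must point at a JAAS file.
    return ("SASL" in advertised_listeners
            and "java.security.auth.login.config" not in kafka_opts)

assert uses_ssl_config("SASL_SSL://localhost:19092")
assert not uses_ssl_config("SASL_PLAINTEXT://localhost:19092")
```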

SASL/SCRAM

With SASL/PLAIN, the user list must be written in the JAAS file. To add a user or change a password, you have to rewrite the JAAS file and restart Kafka, which is an operational problem.
With SASL/SCRAM, user information can be registered dynamically with the kafka-configs command.

docker-compose.yaml
version: '2.1'
services:
  zookeeper1:
    # unchanged
  zookeeper2:
    # unchanged
  zookeeper3:
    # unchanged

  kafka1:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:12181
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:19092  # must contain the string SASL
    network_mode: host
    depends_on:
      - zookeeper1
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka1:/var/lib/kafka

  kafka2:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:29092
    network_mode: host
    depends_on:
      - zookeeper2
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka2:/var/lib/kafka

  kafka3:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:32181
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:39092
    network_mode: host
    depends_on:
      - zookeeper3
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka3:/var/lib/kafka

zookeeper.env
ZOOKEEPER_TICK_TIME=2000
ZOOKEEPER_ADMIN_ENABLE_SERVER=false
ZOOKEEPER_AUTH_PROVIDER_SASL=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
kafka.env
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-256
KAFKA_SASL_ENABLED_MECHANISMS=SCRAM-SHA-256
KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT
KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/secrets/kafka_jaas.conf
kafka_jaas.conf
KafkaServer {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="kafkaadmin"
   password="kafkaadmin-pass"
   // user_kafkaadmin="kafkaadmin-pass"  // not automatically registered at server startup
   ;
};
KafkaClient {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="kafkaadmin"
   password="kafkaadmin-pass"
   ;
};

User information is stored in ZooKeeper. Start ZooKeeper first and, before starting Kafka, register the kafkaadmin user with the kafka-configs command.
Without this registration, Kafka cannot start, because the user used for broker-broker communication does not exist.
After registering the user, start Kafka.

$ docker-compose up -d zookeeper{1,2,3}
$ docker-compose exec zookeeper1 kafka-configs --zookeeper localhost:12181 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=8192,password=kafkaadmin-pass],SCRAM-SHA-512=[password=kafkaadmin-pass]' \
  --entity-type users --entity-name kafkaadmin
$ docker-compose up -d kafka{1,2,3}
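For reference, what gets stored in ZooKeeper is not the plaintext password but SCRAM credentials derived from it. The salting step (the Hi() function of RFC 5802, which is PBKDF2) can be sketched in Python; this illustrates the concept only and is not Kafka's actual code:

```python
import hashlib
import os

# Conceptual sketch of the SCRAM-SHA-256 salting step: the broker stores a
# salted PBKDF2 hash (plus keys derived from it), never the raw password.
password = b"kafkaadmin-pass"
salt = os.urandom(16)                 # random salt, stored with the credential
iterations = 8192                     # matches the kafka-configs command above
salted = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(len(salted))  # 32 (the SHA-256 digest size)
```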

SASL_SSL (SASL/SCRAM + SSL)

Enable SASL/SCRAM and SSL at the same time.

docker-compose.yaml
version: '2.1'
services:
  zookeeper1:
    # unchanged
  zookeeper2:
    # unchanged
  zookeeper3:
    # unchanged

  kafka1:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:12181
      KAFKA_ADVERTISED_LISTENERS: SASL_SSL://localhost:19092  # must use a *SSL:// scheme (cp-kafka behavior)
    network_mode: host
    depends_on:
      - zookeeper1
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka1:/var/lib/kafka

  kafka2:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181
      KAFKA_ADVERTISED_LISTENERS: SASL_SSL://localhost:29092
    network_mode: host
    depends_on:
      - zookeeper2
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka2:/var/lib/kafka

  kafka3:
    image: confluentinc/cp-kafka:6.0.1
    env_file: 
      - ./kafka.env
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:32181
      KAFKA_ADVERTISED_LISTENERS: SASL_SSL://localhost:39092
    network_mode: host
    depends_on:
      - zookeeper3
    volumes:
      - ./secrets:/etc/kafka/secrets
      - ./volumes/kafka3:/var/lib/kafka

zookeeper.env
ZOOKEEPER_TICK_TIME=2000
ZOOKEEPER_ADMIN_ENABLE_SERVER=false
ZOOKEEPER_AUTH_PROVIDER_SASL=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
kafka.env
KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_SSL
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-256
KAFKA_SASL_ENABLED_MECHANISMS=SCRAM-SHA-256
KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/secrets/kafka_jaas.conf
KAFKA_SSL_KEYSTORE_FILENAME=ssl/kafka.keystore.jks
KAFKA_SSL_KEYSTORE_CREDENTIALS=ssl/kafka_keystore_creds
KAFKA_SSL_KEY_CREDENTIALS=ssl/kafka_sslkey_creds
KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https  # see appendix
KAFKA_SSL_TRUSTSTORE_FILENAME=ssl/kafka.truststore.jks
KAFKA_SSL_TRUSTSTORE_CREDENTIALS=ssl/kafka_truststore_creds
KAFKA_SSL_CLIENT_AUTH=required
kafka_jaas.conf
KafkaServer {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="kafkaadmin"
   password="kafkaadmin-pass"
   // user_kafkaadmin="kafkaadmin-pass"  // not automatically registered at server startup
   ;
};
KafkaClient {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="kafkaadmin"
   password="kafkaadmin-pass"
   ;
};

Appendix

In the cp-zookeeper and cp-kafka Docker images, the directory for SSL-related files is fixed to /etc/kafka/secrets

As mentioned in the SSL section, the way the SSL keystore file path is specified is unusual.
The package version of Kafka specifies it as ssl.keystore.location=/path/to/file,
but cp-kafka specifies only the file name, as KAFKA_SSL_KEYSTORE_FILENAME=filename.

Inside the configure script, this is interpreted as ssl.keystore.location=/etc/kafka/secrets/${KAFKA_SSL_KEYSTORE_FILENAME}.

Therefore, the Docker volume mount path for SSL-related files must be /etc/kafka/secrets.

The relevant part of the configure script:
https://github.com/confluentinc/kafka-images/blob/db792e2276f9b6d03ad5997d31e7994accefa01f/kafka/include/etc/confluent/docker/configure#L70

About KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=https

This parameter specifies how to verify the hostname of the server you connect to when you act as a client over SSL.
"Client" here includes the connecting side in broker-broker communication.
The CN specified when generating the SSL certificate must match the hostname written in KAFKA_ADVERTISED_LISTENERS
(clients connect to a broker based on the KAFKA_ADVERTISED_LISTENERS value advertised by the broker they first reached via bootstrap-server).

If, as in this article, you use KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092, you must set CN=localhost when creating the SSL certificate.
If you use KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:19092, you must use the SSL subjectAltName extension so that the IP address can be used in place of the hostname.
When using Kafka for testing purposes, you can disable verification by setting KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM= to an empty string.
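The client-side analogue of this setting can be illustrated with Python's standard ssl module (a sketch of the same concept, not a Kafka API): "https" corresponds to hostname checking enabled, and the empty string corresponds to disabling it while still validating the certificate chain.

```python
import ssl

# Default client context: hostname verification on, like "https".
ctx = ssl.create_default_context()
print(ctx.check_hostname)  # True

# Equivalent of setting the endpoint identification algorithm to "":
# skip the hostname check but keep validating the certificate chain.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_REQUIRED
```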

How to generate the SSL-related files

Generating the jks files by hand involves many steps.
The following script makes it easier.

$ curl -Ls https://github.com/confluentinc/confluent-platform-security-tools/raw/master/kafka-generate-ssl.sh > kafka-generate-ssl.sh
$ bash kafka-generate-ssl.sh

Internally, the script uses the keytool command.
Install it locally, or use Docker.

$ curl -Ls https://github.com/confluentinc/confluent-platform-security-tools/raw/master/kafka-generate-ssl.sh > kafka-generate-ssl.sh

$ cat <<EOF > Dockerfile.keytool
FROM openjdk:alpine
RUN apk upgrade --update-cache --available && \
    apk add openssl bash && \
    rm -rf /var/cache/apk/*

WORKDIR /work
EOF

$ docker build -f Dockerfile.keytool -t keytool .
$ docker run -v $(pwd):/work -it --rm --user 1000 keytool bash kafka-generate-ssl.sh
$ echo "<keystore password you entered>" > kafka_keystore_creds
$ echo "<truststore password you entered>" > kafka_truststore_creds
$ echo "<ssl key password you entered>" > kafka_sslkey_creds

Execution log
Welcome to the Kafka SSL keystore and truststore generator script.

First, do you need to generate a trust store and associated private key,
or do you already have a trust store file and private key?

Do you need to generate a trust store and associated private key? [yn] y

OK, we'll generate a trust store and associated private key.

First, the private key.

You will be prompted for:
 - A password for the private key. Remember this.
 - Information about you and your company.
 - NOTE that the Common Name (CN) is currently not important.
Generating a RSA private key
........................................................................................+++++
.....................................................................................................................+++++
unable to write 'random state'
writing new private key to 'truststore/ca-key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:
State or Province Name (full name) []:
Locality Name (eg, city) []:
Organization Name (eg, company) []:
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:example
Email Address []:

Two files were created:
 - truststore/ca-key -- the private key used later to
   sign certificates
 - truststore/ca-cert -- the certificate that will be
   stored in the trust store in a moment and serve as the certificate
   authority (CA). Once this certificate has been stored in the trust
   store, it will be deleted. It can be retrieved from the trust store via:
   $ keytool -keystore <trust-store-file> -export -alias CARoot -rfc

Now the trust store will be generated from the certificate.

You will be prompted for:
 - the trust store's password (labeled 'keystore'). Remember this
 - a confirmation that you want to import the certificate
Enter keystore password:  
Re-enter new password: 
Owner: CN=example
Issuer: CN=example
Serial number: f5348fa271bbef8c
Valid from: Sat Dec 19 16:33:11 GMT 2020 until: Tue Dec 17 16:33:11 GMT 2030
Certificate fingerprints:
         MD5:  36:91:0F:65:62:A8:95:26:1B:F7:89:0C:A5:00:E7:4E
         SHA1: C2:4C:6B:F9:31:35:49:13:DC:D3:84:38:48:78:69:8E:D5:FB:DB:F2
         SHA256: D6:57:0C:AF:21:A0:07:B6:B5:AA:4C:1A:AD:38:F3:17:B8:64:13:A9:1E:F2:23:B7:34:96:17:BC:9D:B6:87:A3
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 1
Trust this certificate? [no]:  yes
Certificate was added to keystore

truststore/kafka.truststore.jks was created.

Continuing with:
 - trust store file:        truststore/kafka.truststore.jks
 - trust store private key: truststore/ca-key

Now, a keystore will be generated. Each broker and logical client needs its own
keystore. This script will create only one keystore. Run this script multiple
times for multiple keystores.

You will be prompted for the following:
 - A keystore password. Remember it.
 - Personal information, such as your name.
     NOTE: currently in Kafka, the Common Name (CN) does not need to be the FQDN of
           this host. However, at some point, this may change. As such, make the CN
           the FQDN. Some operating systems call the CN prompt 'first / last name'
 - A key password, for the key being generated within the keystore. Remember this.
Enter keystore password:  
Re-enter new password: 
What is your first and last name?
  [Unknown]:  
What is the name of your organizational unit?
  [Unknown]:  
What is the name of your organization?
  [Unknown]:  
What is the name of your City or Locality?
  [Unknown]:  
What is the name of your State or Province?
  [Unknown]:  
What is the two-letter country code for this unit?
  [Unknown]:  
Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
  [no]:  yes

Enter key password for <localhost>
        (RETURN if same as keystore password):  
Re-enter new password: 

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore keystore/kafka.keystore.jks -destkeystore keystore/kafka.keystore.jks -deststoretype pkcs12".

'keystore/kafka.keystore.jks' now contains a key pair and a
self-signed certificate. Again, this keystore can only be used for one broker or
one logical client. Other brokers or clients need to generate their own keystores.

Fetching the certificate from the trust store and storing in ca-cert.

You will be prompted for the trust store's password (labeled 'keystore')
Enter keystore password:  
Certificate stored in file <ca-cert>

Now a certificate signing request will be made to the keystore.

You will be prompted for the keystore's password.
Enter keystore password:  

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore keystore/kafka.keystore.jks -destkeystore keystore/kafka.keystore.jks -deststoretype pkcs12".

Now the trust store's private key (CA) will sign the keystore's certificate.

You will be prompted for the trust store's private key password.
Signature ok
subject=/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=Unknown
Getting CA Private Key
Enter pass phrase for truststore/ca-key:
unable to write 'random state'

Now the CA will be imported into the keystore.

You will be prompted for the keystore's password and a confirmation that you want to
import the certificate.
Enter keystore password:  
Owner: CN=example
Issuer: CN=example
Serial number: f5348fa271bbef8c
Valid from: Sat Dec 19 16:33:11 GMT 2020 until: Tue Dec 17 16:33:11 GMT 2030
Certificate fingerprints:
         MD5:  36:91:0F:65:62:A8:95:26:1B:F7:89:0C:A5:00:E7:4E
         SHA1: C2:4C:6B:F9:31:35:49:13:DC:D3:84:38:48:78:69:8E:D5:FB:DB:F2
         SHA256: D6:57:0C:AF:21:A0:07:B6:B5:AA:4C:1A:AD:38:F3:17:B8:64:13:A9:1E:F2:23:B7:34:96:17:BC:9D:B6:87:A3
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 1
Trust this certificate? [no]:  yes
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore keystore/kafka.keystore.jks -destkeystore keystore/kafka.keystore.jks -deststoretype pkcs12".

Now the keystore's signed certificate will be imported back into the keystore.

You will be prompted for the keystore's password.
Enter keystore password:  
Certificate reply was installed in keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore keystore/kafka.keystore.jks -destkeystore keystore/kafka.keystore.jks -deststoretype pkcs12".

All done!

Delete intermediate files? They are:
 - 'ca-cert.srl': CA serial number
 - 'cert-file': the keystore's certificate signing request
   (that was fulfilled)
 - 'cert-signed': the keystore's certificate, signed by the CA, and stored back
    into the keystore
Delete? [yn] y
