The Data Guy

Debezium With AWS MSK IAM Authentication


AWS MSK now supports Kafka ACLs via IAM permissions. You can assign these permissions to an IAM user (via the AWS credentials file) or an IAM role attached to the Kafka clients (both producers and consumers). The IAM permission names are similar to the Kafka ACL names. All activities, like creating, deleting, and describing topics, are logged in CloudTrail. Not every Kafka ACL is covered yet, but almost all the necessary topic-level permissions are already available, and we can expect more to be added soon. In this blog, we are going to see how to integrate Debezium with AWS MSK IAM authentication, along with some problems I faced while implementing this.

How does it work?

(Image: Debezium with AWS MSK IAM authentication architecture diagram)

Let’s configure

Download the AWS MSK IAM library into the Kafka Connect instance (always check their repo for the latest version of the library).

mkdir /opt/kafka-custom-lib/
cd /opt/kafka-custom-lib/
# Example download - check the aws-msk-iam-auth repo for the latest release
wget https://github.com/aws/aws-msk-iam-auth/releases/download/v1.1.0/aws-msk-iam-auth-1.1.0-all.jar

Edit the Kafka run class file (/usr/bin/kafka-run-class) and add the library location under the launch mode section, then restart the Kafka Connect service.

-- From this
# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then

-- To this
# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH:"/opt/kafka-custom-lib/*" $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &

Create an EC2 IAM role and attach it to the Kafka Connect instance (you can read the AWS doc to create the role).

Create and attach an inline policy like the one below to that EC2 IAM role. Make sure you replace the placeholders with your cluster details. Read more from this doc to know better about this syntax. The actions and ARNs below show the shape of the policy I used; note that DeleteTopic is deliberately not granted here.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:DescribeCluster",
                "kafka-cluster:CreateTopic",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:WriteData",
                "kafka-cluster:ReadData"
            ],
            "Resource": [
                "arn:aws:kafka:REGION:ACCOUNT_ID:cluster/CLUSTER_NAME/CLUSTER_UUID",
                "arn:aws:kafka:REGION:ACCOUNT_ID:topic/CLUSTER_NAME/CLUSTER_UUID/debezium-*"
            ]
        }
    ]
}
Copy the default Java cacerts file as a truststore file. This method requires a TLS connection from the client, so we can use the default Java certificates. I'm running my Kafka Connect with Amazon Corretto.

cp /usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre/lib/security/cacerts /opt/ssl/kafka.client.truststore.jks

(Optional) To make the file more secure, set read-only permissions on it. I'm running the Kafka Connect service under the cp-kafka user with confluent as the group.

chown cp-kafka:confluent /opt/ssl/kafka.client.truststore.jks
chmod 400 /opt/ssl/kafka.client.truststore.jks

Add the following properties to the worker properties file and restart the service. I'm using the distributed properties file.

vim /etc/kafka/

## SSL Properties
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
ssl.truststore.location=/opt/ssl/kafka.client.truststore.jks

## SSL Properties for Producer app
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=AWS_MSK_IAM
producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
producer.ssl.truststore.location=/opt/ssl/kafka.client.truststore.jks

## SSL Properties for Consumer app
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=AWS_MSK_IAM
consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
consumer.ssl.truststore.location=/opt/ssl/kafka.client.truststore.jks

It’s mandatory to add the producer and consumer properties to this file; otherwise it’ll throw an error.

(Optional, but applicable to the t3 MSK family) If you are running your MSK cluster on t3 instances, connecting with IAM will create more connection requests, your cluster may block those requests, and your client will return a Too many connects error. So add a larger reconnect backoff value to your worker properties file; use an even larger value if needed. Read the thread here.
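For illustration, the backoff settings look like this (the values here are example assumptions; tune them for your cluster):

reconnect.backoff.ms=1000
reconnect.backoff.max.ms=10000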

It's time for the Debezium connector configuration. Let's add these lines to your Debezium connector JSON config file.

"database.history.security.protocol": "SASL_SSL",
"database.history.sasl.mechanism": "AWS_MSK_IAM",
"database.history.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
"database.history.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",

"database.history.producer.security.protocol": "SASL_SSL",
"database.history.producer.sasl.mechanism": "AWS_MSK_IAM",
"database.history.producer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
"database.history.producer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",

"database.history.consumer.security.protocol": "SASL_SSL",
"database.history.consumer.sasl.mechanism": "AWS_MSK_IAM",
"database.history.consumer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
"database.history.consumer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",

Again, the producer and consumer sections are mandatory here as well; otherwise you'll get an error.
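For context, a minimal mysql_iam.json could look like the following sketch. The connection details, server names, and endpoint placeholders are assumptions you'd replace with your own values (9098 is MSK's IAM auth port):

{
  "name": "mysql-connector-02",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "MYSQL_HOST",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "DB_PASSWORD",
    "database.server.id": "184054",
    "database.server.name": "mysql01",
    "database.history.kafka.bootstrap.servers": "BROKER_ENDPOINT:9098",
    "database.history.kafka.topic": "debezium-history-mysql01",
    "database.history.security.protocol": "SASL_SSL",
    "database.history.sasl.mechanism": "AWS_MSK_IAM",
    "database.history.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "database.history.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "database.history.producer.security.protocol": "SASL_SSL",
    "database.history.producer.sasl.mechanism": "AWS_MSK_IAM",
    "database.history.producer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "database.history.producer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "database.history.consumer.security.protocol": "SASL_SSL",
    "database.history.consumer.sasl.mechanism": "AWS_MSK_IAM",
    "database.history.consumer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "database.history.consumer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
  }
}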

Deploy the connector.

curl -X POST -H "Accept: application/json" -H "Content-Type: application/json" http://localhost:8083/connectors -d @mysql_iam.json

Check the status.

curl -s localhost:8083/connectors/mysql-connector-02/status | jq
{
  "name": "mysql-connector-02",
  "connector": {
    "state": "RUNNING",
    "worker_id": ""
  },
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": ""
    }
  ],
  "type": "source"
}

Check the topics created by debezium.

kafka-topics --list --zookeeper ZOOKEEPER_ENDPOINT:2181


Yeah, it worked. Let's see the activities on the CloudTrail console.

(Image: MSK IAM activities on the CloudTrail console)

Bonus Tip: Access AWS MSK Zookeeper with TLS

TLS auth is implemented between brokers, client to brokers, and zookeeper to brokers. But client to zookeeper can be accessed without TLS. I'm sharing some steps to securely access zookeeper with TLS.

We already have the truststore file, and it is password protected. The default password is changeit. Debezium doesn't need this password, but to connect to zookeeper it's required. So let's reset this password to a complex one.

keytool -storepasswd -keystore /opt/ssl/kafka.client.truststore.jks  -storepass changeit -new STRONG_PASS

Then store this password in AWS Secrets Manager. I have stored it under the name dev/debezium/jkspassword, and the key-value pair is

{
  "jkspass": "STRONG_PASS"
}

I'm using Lenses's secret provider plugin to retrieve the secrets in Kafka Connect. Read the steps here to install and configure it.

This secret provider will map a file in your Kafka config directory. We need to create that file manually. The file path is the same as your AWS secret name, so create the matching folder structure.

cd /etc/kafka
mkdir -p dev/debezium
touch dev/debezium/jkspassword

Add the zookeeper SSL properties to the file (I'm using the distributed properties file), then restart the worker service.
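The zookeeper TLS client settings I'm referring to look roughly like this sketch. The property names follow Kafka's KIP-515 naming, and the ${aws:secret-name:key} placeholder syntax is my reading of the Lenses provider docs; verify both before relying on them:

zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.truststore.location=/opt/ssl/kafka.client.truststore.jks
zookeeper.ssl.truststore.password=${aws:dev/debezium/jkspassword:jkspass}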


Now let's check the access over the TLS port (2182).

kafka-topics --list --zookeeper ZOOKEEPER_ENDPOINT:2182


Let's talk about some issues I faced.


#1 Cluster type:

Initially, I was using t3.small, so I used a very large value for the reconnect backoff, but it didn't work for me. Then I scaled the cluster up to m5, and after that, it was fixed. Then I scaled back down to t3, and this time I didn't get the Too many connects error. I don't know why it didn't occur after scaling down.

#2 No errors on the connector even though it's not producing any data

In the worker properties file, I didn't add the producer.ssl* and consumer.ssl* properties. Without them it won't work, so it should have returned errors while deploying the Debezium or sink connectors. But the status showed RUNNING, and the Kafka Connect logs only returned warnings like disconnected.

Aug 26 14:54:16 ip-172-30-32-13 connect-distributed: [2021-08-26 14:54:16,708] WARN [debezium-s3-sink-db01|task-0] [Consumer clientId=connector-consumer-debezium-s3-sink-db01-0, groupId=connect-debezium-s3-sink-db01] Bootstrap broker (id: -2 rack: null) disconnected

Aug 26 14:59:15 ip-172-30-32-13 connect-distributed: [2021-08-26 14:59:15,548] WARN [mysql-connector-02|task-0] [Producer clientId=connector-producer-mysql-connector-02-0] Bootstrap broker (id: -2 rack: null) disconnected

#3 Add-on to issue #2

We added the producer.ssl* and consumer.ssl* to the worker properties file, but in the Debezium connector JSON file, I tried without adding the database.history.producer.* and database.history.consumer.* properties. After the deployment, the connector was running, but nothing got produced. And the log was showing,

Aug 26 14:59:15 ip-172-30-32-13 connect-distributed: [2021-08-26 14:59:15,548] WARN [mysql-connector-02|task-0] [Producer clientId=connector-producer-mysql-connector-02-0] Bootstrap broker (id: -2 rack: null) disconnected

#4 Sink connector group

I granted the IAM topic-level permissions to the debezium-* prefix. But when I deployed a sink connector with the name s3-sink-conn-01, it threw an error like Access Denied on the group connect-s3-sink-conn-01.

GroupAuthorizationException: Not authorized to access group: connect-s3-sink-conn-01

That's because every sink connector uses a dedicated consumer group named connect-SINK_CONNECTOR_NAME, but I had granted the IAM permission with the debezium prefix only. So I either had to grant a different prefix for the sink connector groups in the IAM permissions, or keep naming the sink connectors with the debezium-* prefix. I chose the latter, and granted the corresponding group permission in IAM.
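Roughly, the extra group statement looks like this sketch (the Sid and the ARN placeholders are assumptions; replace them with your cluster details):

{
    "Sid": "SinkConnectorGroups",
    "Effect": "Allow",
    "Action": [
        "kafka-cluster:AlterGroup",
        "kafka-cluster:DescribeGroup"
    ],
    "Resource": [
        "arn:aws:kafka:REGION:ACCOUNT_ID:group/CLUSTER_NAME/CLUSTER_UUID/connect-debezium-*"
    ]
}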


#5 zookeeper with ACL

I tried to delete a topic using two methods: the first with the --bootstrap-server flag and the other with --zookeeper.

kafka-topics \
  --bootstrap-server BROKER_ENDPOINT:9098 \
  --delete \
  --topic debezium-my-topic \
  --command-config /etc/kafka/

kafka-topics \
  --zookeeper ZOOKEEPER_ENDPOINT:2181 \
  --delete \
  --topic debezium-my-topic

According to my IAM policy (mentioned above), DeleteTopic is not granted, so the first command returned the IAM permission denied error. But when I called zookeeper to delete the topic, it was actually deleted, because IAM auth is not enforced on the zookeeper nodes; it can be bypassed. Anyhow, zookeeper will be removed in future Kafka releases.

#6 Zookeeper security

The only way to solve the above issue is to create a separate security group for zookeeper and allow only the Kafka broker nodes' IP addresses in it. If you want to access zookeeper yourself, you can add your IP to that security group on an on-demand basis. This way, the Kafka clients will not have access to zookeeper. These steps are documented here.

#7 Few more things that need attention:

  1. A blog from AWS - how to use IAM with kafka client
  2. Implementing Kafka IAM permissions - AWS docs
  3. IAM Permissions will not work with Zookeeper - my SO question
  4. Scalability of Kafka Messaging using Consumer Groups
· aws, kafka, debezium, iam, security

