Kafka hands-on | First experience

1. Install the Kafka binary :-

Download and extract the Kafka binary distribution (kafka_2.12-2.5.0 here), then note the path to its bin directory:

aditya-MAC:bin aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/bin

Modify the .bash_profile :-

APB-LTaditya-MAC:~ aditya$ pwd

/Users/aditya/

aditya-MAC:~ aditya$ vi .bash_profile

Add the following line to the .bash_profile file (adjust the path to match your install location):

export PATH="$PATH:/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/bin"
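Once the line is in place, reload the profile and confirm the directory really landed on PATH. A minimal, self-contained sketch of that check (the install path is an example; substitute your own):

```shell
# Append the Kafka bin directory to PATH (example path; adjust to your install)
KAFKA_BIN="$HOME/Documents/INSTALLS/kafka_2.12-2.5.0/bin"
export PATH="$PATH:$KAFKA_BIN"

# Confirm the directory is now part of PATH
case ":$PATH:" in
  *":$KAFKA_BIN:"*) echo "Kafka bin is on PATH" ;;
  *) echo "PATH update failed" ;;
esac
```

After running source ~/.bash_profile in a real shell, scripts such as zookeeper-server-start.sh become callable from any directory.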

2. Modify the ZooKeeper data directory in zookeeper.properties :-

i.) Create a directory named 'data', and inside it a directory named 'zookeeper'.

APB-LTaditya-MAC:kafka_2.12-2.5.0 aditya$ mkdir data

APB-LTaditya-MAC:kafka_2.12-2.5.0 aditya$ cd data/

APB-LTaditya-MAC:data aditya$ mkdir zookeeper

APB-LTaditya-MAC:data aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data

APB-LTaditya-MAC:data aditya$ cd zookeeper/

APB-LTaditya-MAC:zookeeper aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data/zookeeper

APB-LTaditya-MAC:zookeeper aditya$ vi ../../config/zookeeper.properties
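As an aside, the mkdir/cd sequence above can be collapsed into a single mkdir -p, which creates the whole nested path at once. A self-contained sketch using a throwaway base directory so it can be run anywhere:

```shell
# mkdir -p creates data/ and data/zookeeper in one step
BASE=$(mktemp -d)   # throwaway directory standing in for the Kafka install dir
mkdir -p "$BASE/data/zookeeper"

ls "$BASE/data"     # prints: zookeeper
```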

ii.) Update the dataDir property so it points to the directory you just created:

dataDir=/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data/zookeeper
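The same edit can also be scripted rather than done in vi. A hedged sketch of the sed approach, run here against a temporary stand-in file so it is safe to execute (the real target is config/zookeeper.properties):

```shell
# Temporary stand-in for config/zookeeper.properties
CONF=$(mktemp)
printf 'dataDir=/tmp/zookeeper\nclientPort=2181\n' > "$CONF"

NEW_DIR="$HOME/Documents/INSTALLS/kafka_2.12-2.5.0/data/zookeeper"

# Rewrite the dataDir line; writing to a temp file and renaming keeps this
# portable across BSD (macOS) and GNU sed, which disagree on the -i flag
sed "s|^dataDir=.*|dataDir=$NEW_DIR|" "$CONF" > "$CONF.tmp" && mv "$CONF.tmp" "$CONF"

grep '^dataDir=' "$CONF"
```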

3. Start ZooKeeper using zookeeper.properties :-

The following command starts ZooKeeper on its default port, 2181.
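For context, the port comes from the clientPort setting. After the edit in step 2, the relevant lines of config/zookeeper.properties look roughly like this (other values are the stock defaults):

```
dataDir=/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
```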

APB-LTaditya-MAC:bin aditya$ ./zookeeper-server-start.sh ../config/zookeeper.properties

[2020–08–16 11:30:32,469] INFO Reading configuration from: ../config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,472] WARN ../config/zookeeper.properties is relative. Prepend ./ to indicate that you’re sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,483] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,483] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,490] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)

[2020–08–16 11:30:32,491] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)

[2020–08–16 11:30:32,491] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)

[2020–08–16 11:30:32,491] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)

[2020–08–16 11:30:32,494] INFO Log4j found with jmx enabled. (org.apache.zookeeper.jmx.ManagedUtil)

[2020–08–16 11:30:32,518] INFO Reading configuration from: ../config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,518] WARN ../config/zookeeper.properties is relative. Prepend ./ to indicate that you’re sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,519] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,519] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

[2020–08–16 11:30:32,519] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)

[2020–08–16 11:30:32,526] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)

[2020–08–16 11:30:32,566] INFO Server environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,566] INFO Server environment:host.name=192.168.1.12 (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,566] INFO Server environment:java.version=1.8.0_181 (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,566] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,566] INFO Server environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_181.jdk/Contents/Home/jre (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,566] INFO Server environment:java.class.path=/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/activation-1.1.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/aopalliance-repackaged-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/argparse4j-0.7.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/commons-cli-1.4.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/commons-lang3–3.8.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-api-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-basic-auth-extension-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-file-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-json-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-mirror-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-mirror-client-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-runtime-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-transforms-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/hk2-api-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/hk2-locator-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/hk2-utils-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-annotations-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-core-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-databind-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-datatype-jdk8–2.
10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-module-paranamer-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-module-scala_2.12–2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.annotation-api-1.3.4.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.inject-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javassist-3.22.0-CR2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javassist-3.26.0-GA.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-client-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-common-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-container-servlet-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-container-servlet-core-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-hk2–2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-media-jaxb-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jers
ey-server-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-client-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-http-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-io-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-security-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-server-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-util-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-clients-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-log4j-appender-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-examples-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-scala_2.12–2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-test-utils-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-tools-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka_2.12–2.5.0-sources.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka_2.12–2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/log4j-1.2.17.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/lz4-java-1.7.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../l
ibs/maven-artifact-3.6.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/metrics-core-2.2.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-buffer-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-codec-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-common-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-handler-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-resolver-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-transport-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/osgi-resource-locator-1.0.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/paranamer-2.8.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/reflections-0.9.12.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/rocksdbjni-5.18.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-collection-compat_2.12–2.1.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-java8-compat_2.12–0.9.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-library-2.12.10.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-logging_2.12–3.9.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-reflect-2.12.10.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/slf4j-log4j12–1.7.30.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2
.5.0/bin/../libs/snappy-java-1.1.7.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/validation-api-2.0.1.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/zookeeper-3.5.7.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/zookeeper-jute-3.5.7.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/zstd-jni-1.4.4–7.jar (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,567] INFO Server environment:java.library.path=/Users/aditya/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,567] INFO Server environment:java.io.tmpdir=/var/folders/2q/szzs2lrj42vcdztb1nkwm4940000gr/T/ (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,567] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:os.name=Mac OS X (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:os.arch=x86_64 (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:os.version=10.13.6 (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:user.name=aditya (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:user.home=/Users/aditya (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:user.dir=/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:os.memory.free=497MB (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,568] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,571] INFO minSessionTimeout set to 6000 (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,571] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,573] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer)

[2020–08–16 11:30:32,601] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)

[2020–08–16 11:30:32,607] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)

[2020–08–16 11:30:32,635] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)

[2020–08–16 11:30:32,665] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)

[2020–08–16 11:30:32,669] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)

[2020–08–16 11:30:32,677] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)

[2020–08–16 11:30:32,701] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)

4. Verify that a ZooKeeper-specific directory was created inside the data directory :-

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data/zookeeper/version-2

APB-LTaditya-MAC:version-2 aditya$ cd ..

APB-LTaditya-MAC:zookeeper aditya$ ls version-2/

snapshot.0

APB-LTaditya-MAC:zookeeper aditya$

5. Modify the log directory path in the server.properties file :-

i.) Create a directory named 'kafka' inside the data directory.

APB-LTaditya-MAC:data aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data

APB-LTaditya-MAC:data aditya$ ls -lhrt

total 0

drwxr-xr-x 3 aditya staff 96B Aug 16 11:47 zookeeper

APB-LTaditya-MAC:data aditya$ mkdir kafka

APB-LTaditya-MAC:data aditya$ cd kafka/

APB-LTaditya-MAC:kafka aditya$ ls -lhrt

APB-LTaditya-MAC:kafka aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data/kafka

APB-LTaditya-MAC:kafka aditya$ vi ../../config/server.properties

ii.) Update the log.dirs property so it points to the new directory:

log.dirs=/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data/kafka
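Before starting the broker, it is worth sanity-checking that log.dirs points at a directory the broker can actually use. A self-contained sketch, with a temporary file and directory standing in for the real server.properties and data path:

```shell
# Stand-ins for server.properties and the Kafka data directory
CONF=$(mktemp)
DATA_DIR=$(mktemp -d)
printf 'log.dirs=%s\n' "$DATA_DIR" > "$CONF"

# Extract log.dirs from the properties file and verify the directory exists
DIR=$(sed -n 's/^log\.dirs=//p' "$CONF")
if [ -d "$DIR" ]; then
  echo "log.dirs OK: $DIR"
else
  echo "log.dirs missing: $DIR"
fi
```

The broker will try to create missing log directories itself, so this check mainly catches typos and permission problems early.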

6. Start the Kafka broker :-

APB-LTaditya-MAC:bin aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/bin

APB-LTaditya-MAC:bin aditya$ ./kafka-server-start.sh ../config/server.properties

[2020–08–16 11:56:16,199] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)

[2020–08–16 11:56:16,954] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)

[2020–08–16 11:56:17,028] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)

[2020–08–16 11:56:17,036] INFO starting (kafka.server.KafkaServer)

[2020–08–16 11:56:17,038] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)

[2020–08–16 11:56:17,072] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)

[2020–08–16 11:56:17,090] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,090] INFO Client environment:host.name=192.168.1.12 (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,090] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,090] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,091] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_181.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,091] INFO Client environment:java.class.path=/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/activation-1.1.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/aopalliance-repackaged-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/argparse4j-0.7.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/commons-cli-1.4.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/commons-lang3–3.8.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-api-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-basic-auth-extension-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-file-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-json-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-mirror-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-mirror-client-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-runtime-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/connect-transforms-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/hk2-api-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/hk2-locator-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/hk2-utils-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-annotations-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-core-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-databind-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-datatype-jdk8–2.
10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-module-paranamer-2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jackson-module-scala_2.12–2.10.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.annotation-api-1.3.4.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.inject-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javassist-3.22.0-CR2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javassist-3.26.0-GA.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-client-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-common-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-container-servlet-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-container-servlet-core-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-hk2–2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jersey-media-jaxb-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jers
ey-server-2.28.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-client-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-http-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-io-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-security-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-server-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jetty-util-9.4.24.v20191120.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-clients-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-log4j-appender-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-examples-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-scala_2.12–2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-streams-test-utils-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka-tools-2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka_2.12–2.5.0-sources.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/kafka_2.12–2.5.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/log4j-1.2.17.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/lz4-java-1.7.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../l
ibs/maven-artifact-3.6.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/metrics-core-2.2.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-buffer-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-codec-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-common-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-handler-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-resolver-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-transport-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/osgi-resource-locator-1.0.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/paranamer-2.8.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/reflections-0.9.12.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/rocksdbjni-5.18.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-collection-compat_2.12–2.1.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-java8-compat_2.12–0.9.0.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-library-2.12.10.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-logging_2.12–3.9.2.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/scala-reflect-2.12.10.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/slf4j-log4j12–1.7.30.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2
.5.0/bin/../libs/snappy-java-1.1.7.3.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/validation-api-2.0.1.Final.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/zookeeper-3.5.7.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/zookeeper-jute-3.5.7.jar:/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin/../libs/zstd-jni-1.4.4–7.jar (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,092] INFO Client environment:java.library.path=/Users/aditya/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,092] INFO Client environment:java.io.tmpdir=/var/folders/2q/szzs2lrj42vcdztb1nkwm4940000gr/T/ (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,092] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,092] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,092] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,092] INFO Client environment:os.version=10.13.6 (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,092] INFO Client environment:user.name=aditya (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,093] INFO Client environment:user.home=/Users/aditya (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,093] INFO Client environment:user.dir=/Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/bin (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,093] INFO Client environment:os.memory.free=976MB (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,093] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,093] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,098] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@393671df (org.apache.zookeeper.ZooKeeper)

[2020–08–16 11:56:17,106] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)

[2020–08–16 11:56:17,116] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)

[2020–08–16 11:56:17,119] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)

[2020–08–16 11:56:17,125] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)

[2020–08–16 11:56:17,146] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:58382, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)

[2020–08–16 11:56:17,214] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x10015e9af790000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)

[2020–08–16 11:56:17,236] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)

[2020–08–16 11:56:17,712] INFO Cluster ID = kpkI8BdrSkaiVyBiaUojPw (kafka.server.KafkaServer)

[2020–08–16 11:56:17,718] WARN No meta.properties file under dir /Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/data/kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)

[2020–08–16 11:56:17,802] INFO KafkaConfig values:

advertised.host.name = null

advertised.listeners = null

advertised.port = null

alter.config.policy.class.name = null

alter.log.dirs.replication.quota.window.num = 11

alter.log.dirs.replication.quota.window.size.seconds = 1

authorizer.class.name =

auto.create.topics.enable = true

auto.leader.rebalance.enable = true

background.threads = 10

broker.id = 0

broker.id.generation.enable = true

broker.rack = null

client.quota.callback.class = null

compression.type = producer

connection.failed.authentication.delay.ms = 100

connections.max.idle.ms = 600000

connections.max.reauth.ms = 0

control.plane.listener.name = null

controlled.shutdown.enable = true

controlled.shutdown.max.retries = 3

controlled.shutdown.retry.backoff.ms = 5000

controller.socket.timeout.ms = 30000

create.topic.policy.class.name = null

default.replication.factor = 1

delegation.token.expiry.check.interval.ms = 3600000

delegation.token.expiry.time.ms = 86400000

delegation.token.master.key = null

delegation.token.max.lifetime.ms = 604800000

delete.records.purgatory.purge.interval.requests = 1

delete.topic.enable = true

fetch.max.bytes = 57671680

fetch.purgatory.purge.interval.requests = 1000

group.initial.rebalance.delay.ms = 0

group.max.session.timeout.ms = 1800000

group.max.size = 2147483647

group.min.session.timeout.ms = 6000

host.name =

inter.broker.listener.name = null

inter.broker.protocol.version = 2.5-IV0

kafka.metrics.polling.interval.secs = 10

kafka.metrics.reporters = []

leader.imbalance.check.interval.seconds = 300

leader.imbalance.per.broker.percentage = 10

listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

listeners = null

log.cleaner.backoff.ms = 15000

log.cleaner.dedupe.buffer.size = 134217728

log.cleaner.delete.retention.ms = 86400000

log.cleaner.enable = true

log.cleaner.io.buffer.load.factor = 0.9

log.cleaner.io.buffer.size = 524288

log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308

log.cleaner.max.compaction.lag.ms = 9223372036854775807

log.cleaner.min.cleanable.ratio = 0.5

log.cleaner.min.compaction.lag.ms = 0

log.cleaner.threads = 1

log.cleanup.policy = [delete]

log.dir = /tmp/kafka-logs

log.dirs = /Users/aditya/Documents/INSTALLS/kafka_2.12–2.5.0/data/kafka

log.flush.interval.messages = 9223372036854775807

log.flush.interval.ms = null

log.flush.offset.checkpoint.interval.ms = 60000

log.flush.scheduler.interval.ms = 9223372036854775807

log.flush.start.offset.checkpoint.interval.ms = 60000

log.index.interval.bytes = 4096

log.index.size.max.bytes = 10485760

log.message.downconversion.enable = true

log.message.format.version = 2.5-IV0

log.message.timestamp.difference.max.ms = 9223372036854775807

log.message.timestamp.type = CreateTime

log.preallocate = false

log.retention.bytes = -1

log.retention.check.interval.ms = 300000

log.retention.hours = 168

log.retention.minutes = null

log.retention.ms = null

log.roll.hours = 168

log.roll.jitter.hours = 0

log.roll.jitter.ms = null

log.roll.ms = null

log.segment.bytes = 1073741824

log.segment.delete.delay.ms = 60000

max.connections = 2147483647

max.connections.per.ip = 2147483647

max.connections.per.ip.overrides =

max.incremental.fetch.session.cache.slots = 1000

message.max.bytes = 1048588

metric.reporters = []

metrics.num.samples = 2

metrics.recording.level = INFO

metrics.sample.window.ms = 30000

min.insync.replicas = 1

num.io.threads = 8

num.network.threads = 3

num.partitions = 1

num.recovery.threads.per.data.dir = 1

num.replica.alter.log.dirs.threads = null

num.replica.fetchers = 1

offset.metadata.max.bytes = 4096

offsets.commit.required.acks = -1

offsets.commit.timeout.ms = 5000

offsets.load.buffer.size = 5242880

offsets.retention.check.interval.ms = 600000

offsets.retention.minutes = 10080

offsets.topic.compression.codec = 0

offsets.topic.num.partitions = 50

offsets.topic.replication.factor = 1

offsets.topic.segment.bytes = 104857600

password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding

password.encoder.iterations = 4096

password.encoder.key.length = 128

password.encoder.keyfactory.algorithm = null

password.encoder.old.secret = null

password.encoder.secret = null

port = 9092

principal.builder.class = null

producer.purgatory.purge.interval.requests = 1000

queued.max.request.bytes = -1

queued.max.requests = 500

quota.consumer.default = 9223372036854775807

quota.producer.default = 9223372036854775807

quota.window.num = 11

quota.window.size.seconds = 1

replica.fetch.backoff.ms = 1000

replica.fetch.max.bytes = 1048576

replica.fetch.min.bytes = 1

replica.fetch.response.max.bytes = 10485760

replica.fetch.wait.max.ms = 500

replica.high.watermark.checkpoint.interval.ms = 5000

replica.lag.time.max.ms = 30000

replica.selector.class = null

replica.socket.receive.buffer.bytes = 65536

replica.socket.timeout.ms = 30000

replication.quota.window.num = 11

replication.quota.window.size.seconds = 1

request.timeout.ms = 30000

reserved.broker.max.id = 1000

sasl.client.callback.handler.class = null

sasl.enabled.mechanisms = [GSSAPI]

sasl.jaas.config = null

sasl.kerberos.kinit.cmd = /usr/bin/kinit

sasl.kerberos.min.time.before.relogin = 60000

sasl.kerberos.principal.to.local.rules = [DEFAULT]

sasl.kerberos.service.name = null

sasl.kerberos.ticket.renew.jitter = 0.05

sasl.kerberos.ticket.renew.window.factor = 0.8

sasl.login.callback.handler.class = null

sasl.login.class = null

sasl.login.refresh.buffer.seconds = 300

sasl.login.refresh.min.period.seconds = 60

sasl.login.refresh.window.factor = 0.8

sasl.login.refresh.window.jitter = 0.05

sasl.mechanism.inter.broker.protocol = GSSAPI

sasl.server.callback.handler.class = null

security.inter.broker.protocol = PLAINTEXT

security.providers = null

socket.receive.buffer.bytes = 102400

socket.request.max.bytes = 104857600

socket.send.buffer.bytes = 102400

ssl.cipher.suites = []

ssl.client.auth = none

ssl.enabled.protocols = [TLSv1.2]

ssl.endpoint.identification.algorithm = https

ssl.key.password = null

ssl.keymanager.algorithm = SunX509

ssl.keystore.location = null

ssl.keystore.password = null

ssl.keystore.type = JKS

ssl.principal.mapping.rules = DEFAULT

ssl.protocol = TLSv1.2

ssl.provider = null

ssl.secure.random.implementation = null

ssl.trustmanager.algorithm = PKIX

ssl.truststore.location = null

ssl.truststore.password = null

ssl.truststore.type = JKS

transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000

transaction.max.timeout.ms = 900000

transaction.remove.expired.transaction.cleanup.interval.ms = 3600000

transaction.state.log.load.buffer.size = 5242880

transaction.state.log.min.isr = 1

transaction.state.log.num.partitions = 50

transaction.state.log.replication.factor = 1

transaction.state.log.segment.bytes = 104857600

transactional.id.expiration.ms = 604800000

unclean.leader.election.enable = false

zookeeper.clientCnxnSocket = null

zookeeper.connect = localhost:2181

zookeeper.connection.timeout.ms = 18000

zookeeper.max.in.flight.requests = 10

zookeeper.session.timeout.ms = 18000

zookeeper.set.acl = false

zookeeper.ssl.cipher.suites = null

zookeeper.ssl.client.enable = false

zookeeper.ssl.crl.enable = false

zookeeper.ssl.enabled.protocols = null

zookeeper.ssl.endpoint.identification.algorithm = HTTPS

zookeeper.ssl.keystore.location = null

zookeeper.ssl.keystore.password = null

zookeeper.ssl.keystore.type = null

zookeeper.ssl.ocsp.enable = false

zookeeper.ssl.protocol = TLSv1.2

zookeeper.ssl.truststore.location = null

zookeeper.ssl.truststore.password = null

zookeeper.ssl.truststore.type = null

zookeeper.sync.time.ms = 2000

(kafka.server.KafkaConfig)
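Two of the defaults in the dump above are worth converting into human units before moving on (the raw values are copied straight from the log); a quick shell sanity check:

```shell
# Values copied from the KafkaConfig dump above.
retention_hours=168        # log.retention.hours
segment_bytes=1073741824   # log.segment.bytes
echo "log.retention.hours=${retention_hours} -> $((retention_hours / 24)) days"
echo "log.segment.bytes=${segment_bytes} -> $((segment_bytes / 1073741824)) GiB"
```

So out of the box the broker retains messages for 7 days and rolls log segments at 1 GiB; both are common first knobs to tune.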


[2020-08-16 11:56:17,861] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2020-08-16 11:56:17,861] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2020-08-16 11:56:17,866] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2020-08-16 11:56:17,922] INFO Loading logs. (kafka.log.LogManager)

[2020-08-16 11:56:17,935] INFO Logs loading complete in 13 ms. (kafka.log.LogManager)

[2020-08-16 11:56:17,957] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)

[2020-08-16 11:56:17,963] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)

[2020-08-16 11:56:18,786] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)

[2020-08-16 11:56:18,841] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)

[2020-08-16 11:56:18,843] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)

[2020-08-16 11:56:18,877] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:18,879] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:18,885] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:18,886] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:18,916] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)

[2020-08-16 11:56:18,950] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)

[2020-08-16 11:56:18,984] INFO Stat of the created znode at /brokers/ids/0 is: 24,24,1597559178974,1597559178974,1,0,0,72081687453433856,194,0,24

(kafka.zk.KafkaZkClient)

[2020-08-16 11:56:18,985] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(192.168.1.12,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 24 (kafka.zk.KafkaZkClient)

[2020-08-16 11:56:19,094] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:19,103] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:19,104] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:19,110] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)

[2020-08-16 11:56:19,155] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)

[2020-08-16 11:56:19,158] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)

[2020-08-16 11:56:19,166] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 9 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

[2020-08-16 11:56:19,203] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)

[2020-08-16 11:56:19,302] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)

[2020-08-16 11:56:19,304] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)

[2020-08-16 11:56:19,305] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)

[2020-08-16 11:56:19,402] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2020-08-16 11:56:19,459] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)

[2020-08-16 11:56:19,471] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)

[2020-08-16 11:56:19,477] INFO Kafka version: 2.5.0 (org.apache.kafka.common.utils.AppInfoParser)

[2020-08-16 11:56:19,479] INFO Kafka commitId: 66563e712b0b9f84 (org.apache.kafka.common.utils.AppInfoParser)

[2020-08-16 11:56:19,479] INFO Kafka startTimeMs: 1597559179472 (org.apache.kafka.common.utils.AppInfoParser)

[2020-08-16 11:56:19,481] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

If the Kafka server starts successfully, you should see the ‘[KafkaServer id=0] started’ statement printed in the logs.

7. Verify that a Kafka-specific directory has been created inside the data directory.

APB-LTaditya-MAC:kafka aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/data/kafka

APB-LTaditya-MAC:kafka aditya$

APB-LTaditya-MAC:kafka aditya$ ls -lhtr

total 8

-rw-r--r-- 1 aditya staff 0B Aug 16 11:56 recovery-point-offset-checkpoint

-rw-r--r-- 1 aditya staff 0B Aug 16 11:56 log-start-offset-checkpoint

-rw-r--r-- 1 aditya staff 0B Aug 16 11:56 cleaner-offset-checkpoint

-rw-r--r-- 1 aditya staff 0B Aug 16 11:56 replication-offset-checkpoint

-rw-r--r-- 1 aditya staff 88B Aug 16 11:56 meta.properties

NOTE: By the end of this step, an instance of ZooKeeper is running on port 2181 in one terminal window and a Kafka broker is running on port 9092 in a second, parallel window.
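If you want to double-check both processes without scrolling through their logs, a bash-only probe of the two default ports works (this uses bash’s built-in /dev/tcp redirection, so no extra tools are needed):

```shell
# Probe a local TCP port using bash's /dev/tcp; the subshell opens and
# closes the connection, so no file descriptors leak.
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1: open"
  else
    echo "port $1: closed"
  fi
}
check_port 2181   # ZooKeeper
check_port 9092   # Kafka broker
```

On the setup above both probes should report open; if either reports closed, re-check the corresponding terminal window.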

8. Create the first Kafka topic :-

Since we have created only a single broker / Kafka node so far, the replication factor can be at most 1. Use the below command to create the topic.

APB-LTaditya-MAC:bin aditya$ ./kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic first_topic --create --partitions 3 --replication-factor 1

WARNING: Due to limitations in metric names, topics with a period (‘.’) or underscore (‘_’) could collide. To avoid issues it is best to use either, but not both.

Created topic first_topic.

APB-LTaditya-MAC:bin aditya$
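As an aside, from Kafka 2.2 onwards kafka-topics.sh can also address the broker directly with --bootstrap-server instead of going through ZooKeeper (the --zookeeper flag was deprecated in later releases). The equivalent creation command would look like this; it is only echoed below, since actually running it needs the live broker:

```shell
# Equivalent creation via the broker endpoint (Kafka >= 2.2); echoed rather
# than executed here because it needs the broker running on 127.0.0.1:9092.
cmd='./kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --topic first_topic --create --partitions 3 --replication-factor 1'
echo "$cmd"
```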

9. List all the topics that exist so far in our system :-

APB-LTaditya-MAC:bin aditya$ ./kafka-topics.sh --zookeeper 127.0.0.1:2181 --list

first_topic

APB-LTaditya-MAC:bin aditya$

10. Describe all the information pertaining to a given kafka topic :-

APB-LTaditya-MAC:bin aditya$ ./kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic first_topic --describe

Topic: first_topic PartitionCount: 3 ReplicationFactor: 1 Configs:

Topic: first_topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0

Topic: first_topic Partition: 1 Leader: 0 Replicas: 0 Isr: 0

Topic: first_topic Partition: 2 Leader: 0 Replicas: 0 Isr: 0

APB-LTaditya-MAC:bin aditya$
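Reading this output: Partition is the partition id, while Leader, Replicas, and Isr list broker ids. With our single broker (id 0), every partition is led by broker 0 and the in-sync replica set is just {0}. A small awk pass over a copy of the output makes the mapping explicit:

```shell
# Extract the leader broker per partition; the describe lines are inlined
# here so the snippet is self-contained.
awk '/Partition:/ {print "partition " $4 " -> leader broker " $6}' <<'EOF'
Topic: first_topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: first_topic Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: first_topic Partition: 2 Leader: 0 Replicas: 0 Isr: 0
EOF
# -> partition 0 -> leader broker 0  (and likewise for partitions 1 and 2)
```

On a multi-broker cluster the Leader and Isr columns would differ per partition; here they cannot, since broker 0 is the only choice.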

11. Delete a Kafka topic through the console :-

For demo purposes, we are going to create a new topic and then mark it for deletion.

APB-LTaditya-MAC:bin aditya$

APB-LTaditya-MAC:bin aditya$ ./kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic second_topic --create --partitions 3 --replication-factor 1

WARNING: Due to limitations in metric names, topics with a period (‘.’) or underscore (‘_’) could collide. To avoid issues it is best to use either, but not both.

Created topic second_topic.

APB-LTaditya-MAC:bin aditya$

APB-LTaditya-MAC:bin aditya$ ./kafka-topics.sh --zookeeper 127.0.0.1:2181 --list

first_topic

second_topic

APB-LTaditya-MAC:bin aditya$ ./kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic second_topic --delete

Topic second_topic is marked for deletion.

Note: This will have no impact if delete.topic.enable is not set to true.

APB-LTaditya-MAC:bin aditya$
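The note above matters because deletion is governed by the broker’s delete.topic.enable flag; our startup log shows it as true (the 2.x default), so second_topic really is removed. To pin the setting explicitly you would add it to config/server.properties; the check below runs against a throwaway file just to illustrate the idea:

```shell
# Demo on a temporary file; in practice grep your real config/server.properties.
demo=$(mktemp)
printf 'broker.id=0\ndelete.topic.enable=true\n' > "$demo"
grep '^delete.topic.enable' "$demo"   # -> delete.topic.enable=true
rm -f "$demo"
```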

12. Produce some sample ‘string’ messages to the Kafka topic :-

APB-LTaditya-MAC:kafka aditya$ cd ../../bin/

APB-LTaditya-MAC:bin aditya$ pwd

/Users/aditya/Documents/INSTALLS/kafka_2.12-2.5.0/bin

APB-LTaditya-MAC:bin aditya$

APB-LTaditya-MAC:bin aditya$ ./kafka-cons

kafka-console-consumer.sh kafka-console-producer.sh kafka-consumer-groups.sh kafka-consumer-perf-test.sh


APB-LTaditya-MAC:bin aditya$ ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic first_topic --producer-property acks=all

>Hello Aditya

>Kafka is awesome technology.

>We are soon going to see streams as well !

>^C

APB-LTaditya-MAC:bin aditya$

Notes :-

i.) Above are the three messages we produced; the producer was then killed with Ctrl-C.

ii.) If the topic doesn’t exist, Kafka will create it automatically (auto.create.topics.enable = true by default).
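To read the three messages back, the console consumer (one of the scripts shown in the tab-completion listing above) can be attached to the same topic. The command is only echoed below, since it needs the live broker; --from-beginning replays the topic from the earliest offset, so all three messages should appear:

```shell
# Echoed rather than executed: requires the broker on 127.0.0.1:9092.
cmd='./kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic first_topic --from-beginning'
echo "$cmd"
```

Without --from-beginning the consumer starts at the latest offset and would only show messages produced after it attaches.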
