Fixing the Kafka error kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection

If nothing else in the setup is wrong, this error is caused by a version mismatch between the standalone ZooKeeper you are running and the ZooKeeper bundled in Kafka's libs directory.

Here both must be version 3.5.6, which is what kafka_2.12-2.4.0 ships as libs\zookeeper-3.5.6.jar.
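
To confirm whether this is your problem, compare the ZooKeeper jar that ships in Kafka's libs directory against your standalone ZooKeeper install. A minimal check from the Windows command prompt (the paths below are the ones visible in the startup log; substitute your own):

rem Version of the ZooKeeper client bundled with Kafka (prints zookeeper-3.5.6.jar here)
dir /b D:\kafka\kafka_2.12-2.4.0\libs\zookeeper-*.jar

rem The standalone install's directory name carries its version (zookeeper-3.5.6 here)
dir /b D:\zookeeper

If the two versions differ, install the standalone ZooKeeper release that matches the bundled jar, restart ZooKeeper, then start Kafka again.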

After aligning the versions, a successful startup looks like this:

D:\kafka\kafka_2.12-2.4.0>.\bin\windows\kafka-server-start.bat .\config\server.properties
[2020-01-06 15:44:04,220] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-01-06 15:44:04,754] INFO starting (kafka.server.KafkaServer)
[2020-01-06 15:44:04,755] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-01-06 15:44:04,775] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-01-06 15:44:13,787] INFO Client environment:zookeeper.version=3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,788] INFO Client environment:host.name=DESKTOP-D1O68NK (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,789] INFO Client environment:java.version=1.8.0_91 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,790] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,790] INFO Client environment:java.home=D:\Java\jdk\jre (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,790] INFO Client environment:java.class.path=D:\Java\jdk\lib\dt.jar;D:\Java\jdk\lib\tools.jar;D:\scala\bin;D:\scala\lib\dt.jar;D:\scala\lib\tools.jar;;D:\kafka\kafka_2.12-2.4.0\libs\activation-1.1.1.jar;D:\kafka\kafka_2.12-2.4.0\libs\aopalliance-repackaged-2.5.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\argparse4j-0.7.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\audience-annotations-0.5.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\commons-cli-1.4.jar;D:\kafka\kafka_2.12-2.4.0\libs\commons-lang3-3.8.1.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-api-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-basic-auth-extension-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-file-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-json-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-mirror-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-mirror-client-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-runtime-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\connect-transforms-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\guava-20.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\hk2-api-2.5.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\hk2-locator-2.5.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\hk2-utils-2.5.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-annotations-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-core-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-databind-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-dataformat-csv-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-datatype-jdk8-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-jaxrs-base-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-jaxrs-json-provider-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-module-jaxb-annotations-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-module-paranamer-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jackson-module-scala_2.12-2.10.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jakarta.activation-api-1.2.1.jar;D:\kafka\kafka_2.12-2.4.0\libs\jakarta.annotation-api-1.3.4.jar;D:\kafka\kafka_2.12-2.4.0\libs\jakarta.inject-2.5.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jakarta.ws.rs-api-2.1.5.jar;D:\kafka\kafka_2.12-2.4.0\libs\jakarta.xml.bind-api-2.3.2.jar;D:\kafka\kafka_2.12-2.4.0\libs\javassist-3.22.0-CR2.jar;D:\kafka\kafka_2.12-2.4.0\libs\javax.servlet-api-3.1.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\javax.ws.rs-api-2.1.1.jar;D:\kafka\kafka_2.12-2.4.0\libs\jaxb-api-2.3.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\jersey-client-2.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\jersey-common-2.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\jersey-container-servlet-2.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\jersey-container-servlet-core-2.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\jersey-hk2-2.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\jersey-media-jaxb-2.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\jersey-server-2.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-client-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-continuation-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-http-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-io-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-security-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-server-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-servlet-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-servlets-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jetty-util-9.4.20.v20190813.jar;D:\kafka\kafka_2.12-2.4.0\libs\jopt-simple-5.0.4.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka-clients-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka-log4j-appender-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka-streams-
2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka-streams-examples-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka-streams-scala_2.12-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka-streams-test-utils-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka-tools-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-javadoc.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-javadoc.jar.asc;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-scaladoc.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-scaladoc.jar.asc;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-sources.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-sources.jar.asc;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-test-sources.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-test-sources.jar.asc;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-test.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0-test.jar.asc;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\kafka_2.12-2.4.0.jar.asc;D:\kafka\kafka_2.12-2.4.0\libs\log4j-1.2.17.jar;D:\kafka\kafka_2.12-2.4.0\libs\lz4-java-1.6.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\maven-artifact-3.6.1.jar;D:\kafka\kafka_2.12-2.4.0\libs\metrics-core-2.2.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-buffer-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-codec-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-common-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-handler-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-resolver-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-transport-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-transport-native-epoll-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\netty-transport-native-unix-common-4.1.42.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\osgi-resource-locator-1.0.1.jar;D:\kafka\kafka_2.12-2.4.0\libs\paranamer-2.8.jar;D:\kafka\kafka_2.12-2.4.0\libs\plexus-utils-3.2.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\reflections-0.9.11.jar;D:\kafka\kafka_2.12-2.4.0\libs\rocksdbjni-5.18.3.jar;D:\kafka\kafka_2.12-2.4.0\libs\scala-collection-compat_2.12-2.1.2.jar;D:\kafka\kafka_2.12-2.4.0\libs\scala-java8-compat_2.12-0.9.0.jar;D:\kafka\kafka_2.12-2.4.0\libs\scala-library-2.12.10.jar;D:\kafka\kafka_2.12-2.4.0\libs\scala-logging_2.12-3.9.2.jar;D:\kafka\kafka_2.12-2.4.0\libs\scala-reflect-2.12.10.jar;D:\kafka\kafka_2.12-2.4.0\libs\slf4j-api-1.7.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\slf4j-log4j12-1.7.28.jar;D:\kafka\kafka_2.12-2.4.0\libs\snappy-java-1.1.7.3.jar;D:\kafka\kafka_2.12-2.4.0\libs\validation-api-2.0.1.Final.jar;D:\kafka\kafka_2.12-2.4.0\libs\zookeeper-3.5.6.jar;D:\kafka\kafka_2.12-2.4.0\libs\zookeeper-jute-3.5.6.jar;D:\kafka\kafka_2.12-2.4.0\libs\zstd-jni-1.4.3-1.jar (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,793] INFO Client environment:java.library.path=D:\Java\jdk\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\ProgramData\Oracle\Java\javapath;D:\Program Files (x86)\NetSarang\Xshell 6\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\iCLS\;C:\Program Files\Intel\Intel(R) Management Engine Components\iCLS\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;D:\phpStudy\PHPTutorial\php\php-7.1.13-nts;D:\Program Files\Python\Python37;D:\Java\jdk\bin\;D:\Program Files\TortoiseGit\bin;D:\Program Files\Git\cmd;D:\Anaconda3\condabin;D:\Go\bin;D:\hadoop\hadoop-2.6.5\bin\;D:\scala\bin;D:\scala\bin;D:\scala\jre\bin;D:\hbase\hbase-1.3.5\bin;D:\zookeeper\zookeeper-3.5.6\bin;D:\Anaconda3;D:\Anaconda3\Library\mingw-w64\bin;D:\Anaconda3\Library\usr\bin;D:\Anaconda3\Library\bin;D:\Anaconda3\Scripts;C:\Users\86188\AppData\Local\Microsoft\WindowsApps;D:\Program Files\JetBrains\PyCharm 2019.1.2\bin;D:\Program Files\Docker Toolbox;D:\Program Files\JetBrains\IntelliJ IDEA 2019.1.2\bin;D:\Program Files\Fiddler;D:\Program Files\JetBrains\GoLand 2019.1.3\bin;C:\Users\86188\AppData\Local\BypassRuntm;C:\Users\86188\AppData\Local\Microsoft\WindowsApps;;D:\Program Files\JetBrains\IntelliJ IDEA 2018.3.3\bin;;. (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,794] INFO Client environment:java.io.tmpdir=C:\Users\86188\AppData\Local\Temp\ (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,795] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,796] INFO Client environment:os.name=Windows 10 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,797] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,798] INFO Client environment:os.version=10.0 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,808] INFO Client environment:user.name=86188 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,809] INFO Client environment:user.home=C:\Users\86188 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,811] INFO Client environment:user.dir=D:\kafka\kafka_2.12-2.4.0 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,812] INFO Client environment:os.memory.free=973MB (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,813] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,814] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,822] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@619713e5 (org.apache.zookeeper.ZooKeeper)
[2020-01-06 15:44:13,844] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-01-06 15:44:13,861] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2020-01-06 15:44:13,867] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2020-01-06 15:44:13,869] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-01-06 15:44:13,877] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-01-06 15:44:13,880] INFO Socket connection established, initiating session, client: /127.0.0.1:60510, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2020-01-06 15:44:13,901] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10010aa20600000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2020-01-06 15:44:13,904] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-01-06 15:44:14,221] INFO Cluster ID = VOnMlOXJTNeiy22Fo71lmg (kafka.server.KafkaServer)
[2020-01-06 15:44:14,226] WARN No meta.properties file under dir D:\kafka\kafka_2.12-2.4.0\kafkakafka_2.12-2.4.0logs\meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2020-01-06 15:44:14,282] INFO KafkaConfig values:
        advertised.host.name = null
        advertised.listeners = null
        advertised.port = null
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name =
        auto.create.topics.enable = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 0
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        host.name =
        inter.broker.listener.name = null
        inter.broker.protocol.version = 2.4-IV1
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
        listeners = null
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = D:kafkakafka_2.12-2.4.0logs
        log.dirs = D:kafkakafka_2.12-2.4.0logs
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.downconversion.enable = true
        log.message.format.version = 2.4-IV1
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1000012
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 1
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        port = 9092
        principal.builder.class = null
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 10000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [GSSAPI]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism.inter.broker.protocol = GSSAPI
        sasl.server.callback.handler.class = null
        security.inter.broker.protocol = PLAINTEXT
        security.providers = null
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 1
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 1
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.connect = localhost:2181
        zookeeper.connection.timeout.ms = 6000
        zookeeper.max.in.flight.requests = 10
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2020-01-06 15:44:14,292] INFO KafkaConfig values:
        advertised.host.name = null
        advertised.listeners = null
        advertised.port = null
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name =
        auto.create.topics.enable = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 0
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        host.name =
        inter.broker.listener.name = null
        inter.broker.protocol.version = 2.4-IV1
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
        listeners = null
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = D:kafkakafka_2.12-2.4.0logs
        log.dirs = D:kafkakafka_2.12-2.4.0logs
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.downconversion.enable = true
        log.message.format.version = 2.4-IV1
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1000012
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 1
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        port = 9092
        principal.builder.class = null
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 10000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [GSSAPI]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism.inter.broker.protocol = GSSAPI
        sasl.server.callback.handler.class = null
        security.inter.broker.protocol = PLAINTEXT
        security.providers = null
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 1
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 1
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.connect = localhost:2181
        zookeeper.connection.timeout.ms = 6000
        zookeeper.max.in.flight.requests = 10
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2020-01-06 15:44:14,321] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-01-06 15:44:14,322] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-01-06 15:44:14,329] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-01-06 15:44:14,361] INFO Log directory D:\kafka\kafka_2.12-2.4.0\kafkakafka_2.12-2.4.0logs not found, creating it. (kafka.log.LogManager)
[2020-01-06 15:44:14,372] INFO Loading logs. (kafka.log.LogManager)
[2020-01-06 15:44:14,379] INFO Logs loading complete in 7 ms. (kafka.log.LogManager)
[2020-01-06 15:44:14,395] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-01-06 15:44:14,399] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-01-06 15:44:14,781] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-01-06 15:44:14,816] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2020-01-06 15:44:14,818] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2020-01-06 15:44:14,842] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:14,844] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:14,844] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:14,844] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:14,858] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-01-06 15:44:23,881] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-01-06 15:44:23,904] INFO Stat of the created znode at /brokers/ids/0 is: 24,24,1578296663896,1578296663896,1,0,0,72075916911575040,200,0,24
 (kafka.zk.KafkaZkClient)
[2020-01-06 15:44:23,905] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(DESKTOP-D1O68NK,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 24 (kafka.zk.KafkaZkClient)
[2020-01-06 15:44:23,970] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:23,974] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:23,981] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:23,996] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2020-01-06 15:44:24,018] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2020-01-06 15:44:24,019] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2020-01-06 15:44:24,027] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 7 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-01-06 15:44:24,037] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2020-01-06 15:44:24,067] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-01-06 15:44:24,069] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-01-06 15:44:24,075] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2020-01-06 15:44:24,108] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-01-06 15:44:24,139] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-01-06 15:44:24,174] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2020-01-06 15:44:24,183] INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-01-06 15:44:24,185] INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser)
[2020-01-06 15:44:24,189] INFO Kafka startTimeMs: 1578296664176 (org.apache.kafka.common.utils.AppInfoParser)
[2020-01-06 15:44:24,195] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
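
Once the broker reports started, one way to sanity-check it (not shown in the original log) is to create and list a topic from a second prompt; the topic name test here is just an example:

rem Create a single-partition test topic on the broker started above
.\bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --create --topic test --partitions 1 --replication-factor 1

rem List topics to confirm the broker answers requests
.\bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --list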

With the ZooKeeper versions aligned, the broker starts and runs successfully.

Copyright notice: this is an original article by anzhenxi3529, distributed under the CC 4.0 BY-SA license. When reposting, please include a link to the original article along with this notice.