Standalone and cluster ZooKeeper setup under Docker

1. Standalone setup

version: '3.1'
services:
  zoo1:
    image: zookeeper:latest
    restart: always
    ports:
      - 2181:2181
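
Once it is up, a quick way to verify the standalone node is to run the scripts bundled in the image through docker-compose (a minimal sketch; the service name zoo1 matches the compose file above):

docker-compose up -d
# should report "Mode: standalone"
docker-compose exec zoo1 bin/zkServer.sh status
# open an interactive client session against the node
docker-compose exec zoo1 bin/zkCli.sh -server localhost:2181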

2. Cluster setup

version: '3.1'
services:
  zoo1:
    image: zookeeper:latest
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo2:
    image: zookeeper:latest
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo3:
    image: zookeeper:latest
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
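
With the port mappings above, the three members are reachable from the host at 127.0.0.1:2181, 127.0.0.1:2182 and 127.0.0.1:2183, and they reach one another by service name. As a minimal sketch, a client session against the whole ensemble can be opened from inside any member (service name zoo1 as declared above):

docker-compose exec zoo1 bin/zkCli.sh -server zoo1:2181,zoo2:2181,zoo3:2181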

2.1. Cluster test

Start the cluster

docker-compose up -d

List the running zookeeper instances

docker ps

The IDs of the three zookeeper containers are:

66de1c1687e8

751a4c11bbcc

193b327a9c59

Enter each of the three zookeeper containers in interactive mode

docker exec -it 66de1c1687e8 /bin/bash
docker exec -it 751a4c11bbcc /bin/bash
docker exec -it 193b327a9c59 /bin/bash

Inside each of the three zookeeper containers, check the cluster status

./bin/zkServer.sh status

Two of the instances report Mode: follower, and one reports Mode: leader. The cluster is up and working.
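
For reference, the status output of each instance looks roughly like this (a sketch, not a verbatim capture; the header lines vary slightly by version):

ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower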

2.2. Problems encountered with the cluster setup, and problems you may run into

1. As can be seen above, the zookeeper:latest image I pulled is version 3.6.2; the cluster I built on Linux a few years back (two or three years ago) used 3.4.x.

Between these two versions the zoo.cfg format changed: the newer version requires the extra ;2181 (the client port) appended to each server entry. Without it, the server throws an error:

server.1=zoo1:2888:3888;2181 
server.2=zoo2:2888:3888;2181 
server.3=zoo3:2888:3888;2181
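
For comparison, here are the two zoo.cfg variants side by side; 3.4.x uses one global clientPort, while 3.5+ appends the client port to each server entry after a semicolon (a sketch of both forms):

# 3.4.x style: one global client port, no suffix on the server entries
clientPort=2181
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888

# 3.5+ style: client port appended per server after a semicolon
server.1=zoo1:2888:3888;2181
server.2=zoo2:2888:3888;2181
server.3=zoo3:2888:3888;2181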

2. If you set up the cluster without docker-compose and instead start the three containers with plain docker run commands, the three instances will most likely have trouble communicating with each other; see the sketch after the list below for a workaround.

This comes down to Docker's network modes. Docker has three network modes: bridge, host, and none; if you don't specify --network when creating a container, the default is bridge.

bridge: each container is assigned its own IP and attached to the docker0 virtual bridge, which it uses to communicate with the host. In other words, in this mode you cannot use the host IP plus the mapped container port for communication between Docker containers.

host: the container does not virtualize its own network interface or configure its own IP; it uses the host's IP and ports directly. With this mode, Docker containers can communicate with each other via the host IP plus the mapped container port.

none: no networking at all.
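
As mentioned above, the usual workaround without docker-compose is a user-defined bridge network, which (unlike the default bridge) gives containers DNS resolution by name. A minimal sketch, where the network name zk-net is my own choice; zoo2 and zoo3 are started the same way with their own ZOO_MY_ID and ZOO_SERVERS:

docker network create zk-net

docker run -d --name zoo1 --hostname zoo1 --network zk-net -p 2181:2181 \
    -e ZOO_MY_ID=1 \
    -e ZOO_SERVERS="server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181" \
    zookeeper:latest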

When you use docker-compose, it creates a network for the project automatically;

List all networks

docker network ls

Inspect the corresponding network's details, which produces the following output

docker network inspect e93d1c70ca25
[
    {
        "Name": "cluster_default",
        "Id": "e93d1c70ca25dfd90aa48486eace9fa8c3df58b8ceaf3141c9b1a289b538e5c1",
        "Created": "2020-09-29T07:48:17.278800683Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.24.0.0/16",
                    "Gateway": "172.24.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "193b327a9c59593d7758da2ce9f23c707b114ce5eba558201ae8743e46bd32e6": {
                "Name": "cluster_zoo2_1",
                "EndpointID": "0c7213b1bf35ecabc0d20365d3a3e1a41bbf1792cfbbfa1030f8217f589e19ff",
                "MacAddress": "02:42:ac:18:00:03",
                "IPv4Address": "172.24.0.3/16",
                "IPv6Address": ""
            },
            "66de1c1687e81a3bb6a5f37728bdea147936a3afc6e56d06891b44ae1e2fbd07": {
                "Name": "cluster_zoo1_1",
                "EndpointID": "224574b0d254e6db152048106fd9eb8d2927d691bb312285e7c5a0a555062cf3",
                "MacAddress": "02:42:ac:18:00:02",
                "IPv4Address": "172.24.0.2/16",
                "IPv6Address": ""
            },
            "751a4c11bbccd57752cda314c140dfbfd8c07b6018d239de91568d3ca7e1d832": {
                "Name": "cluster_zoo3_1",
                "EndpointID": "bab3e579e072750174a2df009384b489f934e76869cd4b6900e34bf7978904c0",
                "MacAddress": "02:42:ac:18:00:04",
                "IPv4Address": "172.24.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "cluster",
            "com.docker.compose.version": "1.26.2"
        }
    }
]

As you can see, the three zookeeper containers sit on the same network and can reach one another.
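
A quick way to confirm this is to open a client from one container against another member, for example (using the container ID of zoo1 from the docker ps output above):

docker exec -it 66de1c1687e8 bin/zkCli.sh -server zoo2:2181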

3. Configurable environment variables for zookeeper under Docker

When the container starts, you can see that the final COMMAND executes a docker-entrypoint.sh script. Its contents are below; reading it side by side with the zoo.cfg configuration file makes the mapping clear:

#!/bin/bash

set -e

# Allow the container to be started with `--user`
if [[ "$1" = 'zkServer.sh' && "$(id -u)" = '0' ]]; then
    chown -R zookeeper "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR" "$ZOO_LOG_DIR" "$ZOO_CONF_DIR"
    exec gosu zookeeper "$0" "$@"
fi

# Generate the config only if it doesn't exist
if [[ ! -f "$ZOO_CONF_DIR/zoo.cfg" ]]; then
    CONFIG="$ZOO_CONF_DIR/zoo.cfg"
    {
        echo "dataDir=$ZOO_DATA_DIR" 
        echo "dataLogDir=$ZOO_DATA_LOG_DIR"

        echo "tickTime=$ZOO_TICK_TIME"
        echo "initLimit=$ZOO_INIT_LIMIT"
        echo "syncLimit=$ZOO_SYNC_LIMIT"

        echo "autopurge.snapRetainCount=$ZOO_AUTOPURGE_SNAPRETAINCOUNT"
        echo "autopurge.purgeInterval=$ZOO_AUTOPURGE_PURGEINTERVAL"
        echo "maxClientCnxns=$ZOO_MAX_CLIENT_CNXNS"
        echo "standaloneEnabled=$ZOO_STANDALONE_ENABLED"
        echo "admin.enableServer=$ZOO_ADMINSERVER_ENABLED"
    } >> "$CONFIG"
    if [[ -z $ZOO_SERVERS ]]; then
      ZOO_SERVERS="server.1=localhost:2888:3888;2181"
    fi

    for server in $ZOO_SERVERS; do
        echo "$server" >> "$CONFIG"
    done

    if [[ -n $ZOO_4LW_COMMANDS_WHITELIST ]]; then
        echo "4lw.commands.whitelist=$ZOO_4LW_COMMANDS_WHITELIST" >> "$CONFIG"
    fi

    for cfg_extra_entry in $ZOO_CFG_EXTRA; do
        echo "$cfg_extra_entry" >> "$CONFIG"
    done
fi

# Write myid only if it doesn't exist
if [[ ! -f "$ZOO_DATA_DIR/myid" ]]; then
    echo "${ZOO_MY_ID:-1}" > "$ZOO_DATA_DIR/myid"
fi

exec "$@"
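
In short, every ZOO_* variable above maps to one zoo.cfg entry, and ZOO_CFG_EXTRA appends arbitrary key=value pairs. Note that the script only generates zoo.cfg when it does not already exist, so these variables take effect on the first start of a fresh volume. A minimal sketch of overriding a few of them with plain docker run (the container name zk-tuned is my own choice):

docker run -d --name zk-tuned -p 2181:2181 \
    -e ZOO_TICK_TIME=3000 \
    -e ZOO_MAX_CLIENT_CNXNS=100 \
    -e ZOO_CFG_EXTRA="snapCount=50000" \
    zookeeper:latest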
