This setup is mainly for having Logstash collect (input) data from MySQL and output it to Elasticsearch, with the IK analyzer enabled to make the data searchable.
Create docker-compose.yml (here under /opt/elk on the host; docker-compose version 1.24.1) and edit it with the following content:
version: '3.1'
services:
  elasticsearch:
    image: elasticsearch:6.8.12
    container_name: elasticsearch
    restart: always
    environment:
      - cluster.name=elasticsearch #cluster name is elasticsearch
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" #allocate 512 MB to the JVM
      - "discovery.type=single-node" #single-node startup
      - "TZ=Asia/Shanghai"
    volumes:
      - /opt/elasticsearch/data:/usr/share/elasticsearch/data
      - /opt/elasticsearch/logs:/usr/share/elasticsearch/logs
      - /opt/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: kibana:6.8.12
    container_name: kibana
    links:
      - elasticsearch:es #alias the elasticsearch host as "es"
    depends_on:
      - elasticsearch
    environment:
      - "elasticsearch.hosts=http://es:9200" #because of the alias above, this can be written simply as http://es:9200
    ports:
      - 5601:5601
  logstash:
    image: logstash:6.8.12
    container_name: logstash
    volumes:
      - /opt/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - /opt/logstash/mysql-connector-java-5.1.46.jar:/usr/share/logstash/app/mysql-connector-java-5.1.46.jar #MySQL JDBC driver jar for the jdbc input plugin
    depends_on:
      - elasticsearch
    links:
      - elasticsearch:es
    ports:
      - 4560:4560
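Before bringing the stack up, the file can be sanity-checked (a quick sketch, assuming docker-compose v1.24.1 as above):

```shell
# Parse and print the resolved configuration; errors here usually
# point to broken YAML indentation
docker-compose -f /opt/elk/docker-compose.yml config
```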
Side note: the spring-data-elasticsearch version matching elasticsearch:6.8.12 is 3.2.x:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
    <version>3.2.1.RELEASE</version>
</dependency>
Create the mount directories:
mkdir -p /opt/elasticsearch/data /opt/elasticsearch/logs /opt/logstash
Note: download mysql-connector-java-5.1.46.jar and place it under /opt/logstash; if you don't need the MySQL input, this can be skipped.
Grant full permissions on the Elasticsearch mount directory:
chmod -R 777 /opt/elasticsearch
Create the Elasticsearch configuration file elasticsearch.yml:
vi /opt/elasticsearch/elasticsearch.yml
Add the following content (allow cross-origin access):
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.host: 0.0.0.0
Next, create the Logstash configuration file logstash.conf (using incremental collection of MySQL data into ES as an example):
input {
  stdin { }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.0.111:3306/tensquare_article?characterEncoding=UTF8"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_driver_library => "/usr/share/logstash/app/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "5000"
    statement => "select id, title, content, state from tb_article"
    schedule => "* * * * *"   # run the query every minute
  }
}
output {
  stdout {
    codec => json_lines
  }
  elasticsearch {
    hosts => "192.168.0.111:9200"
    index => "tensquare_article"
    document_type => "article"
    document_id => "%{id}"
  }
}
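Once the pipeline is running (the `schedule` above fires every minute), the sync can be checked from the host. A hedged sketch, assuming ES is reachable at 192.168.0.111:9200:

```shell
# Count documents synced from tb_article into the tensquare_article index
curl -s 'http://192.168.0.111:9200/tensquare_article/_count?pretty'

# Fetch one document by its MySQL primary key (hypothetical id 1)
curl -s 'http://192.168.0.111:9200/tensquare_article/article/1?pretty'
```

Because `document_id => "%{id}"` maps the MySQL primary key to the ES `_id`, re-running the query updates existing documents instead of duplicating them.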
Elasticsearch stores its indices in mmapfs directories by default. The operating system's default mmap count is too low and can lead to out-of-memory errors; raise it with the following command:
sysctl -w vm.max_map_count=262144
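The setting above does not survive a reboot. A minimal sketch for persisting it, shown against a temp file for illustration; on a real host the file would be something like /etc/sysctl.d/99-elasticsearch.conf (hypothetical name) and you would run `sysctl --system` as root:

```shell
# Write the mmap setting to a sysctl drop-in file (temp path for illustration)
CONF=/tmp/99-elasticsearch.conf
echo 'vm.max_map_count=262144' > "$CONF"
cat "$CONF"
```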
When everything is ready, run the following command in the current directory to create the containers:
docker-compose -f /opt/elk/docker-compose.yml up -d
docker-compose ps
#check the container status; all containers should be in the Up state.
From a browser on Windows, visit http://ip:9200 to check Elasticsearch.
Visit http://ip:5601 to open Kibana.
Enter the container:
docker exec -it <container-name> /bin/bash
Install the plugin online inside the container (note: the IK analyzer version must match the ES version; check the IK releases for the matching version):
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.8.12/elasticsearch-analysis-ik-6.8.12.zip
Going into the plugins directory, you can see that the IK analyzer was installed successfully.
Then exit and restart the container:
exit
docker restart <container-name>
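Whether the plugin registered can also be checked without entering the container (assumes ES listening on localhost:9200):

```shell
# analysis-ik should appear in the plugin list after the restart
curl -s 'http://localhost:9200/_cat/plugins?v'
```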
Test the IK analyzer by sending a POST request with the following JSON body:
http://192.168.0.111:9200/_analyze
{
    "analyzer": "ik_max_word",
    "text": "中华人民共和国大会堂"
}
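The same request as a curl one-liner (assuming ES at 192.168.0.111:9200; ES 6.x requires the Content-Type header):

```shell
curl -s -H 'Content-Type: application/json' -X POST \
  'http://192.168.0.111:9200/_analyze' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国大会堂"}'
```

If IK is installed correctly, the response lists the individual tokens produced by `ik_max_word`.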

View logs:
docker logs -f <container-name>
Copyright notice: this is an original article by sinnamhin, released under the CC 4.0 BY-SA license; when reposting, please include a link to the original source and this notice.