Filebeat 7.3.0
kafka 1.1.0
logstash 7.3.0
elasticsearch 7.1.1
kibana 7.1.1
zookeeper
docker pull wurstmeister/kafka:1.1.0
docker pull wurstmeister/zookeeper
docker pull kibana:7.1.1
docker pull logstash:7.3.0
docker pull grafana/grafana:6.3.2
docker pull elasticsearch:7.1.1
docker pull mobz/elasticsearch-head:5-alpine
# elasticsearch-head
docker run -d --name elasticsearch-head --network host mobz/elasticsearch-head:5-alpine
# elasticsearch
docker run -d --name elasticsearch --network host -e "discovery.type=single-node" elasticsearch:7.1.1
# kibana
docker run -d --name kibana --network host kibana:7.1.1
# logstash
docker run -d --name logstash --network host logstash:7.3.0
# mysql
docker run --name mysql -e MYSQL_ROOT_PASSWORD=root --network host -d mysql:latest
# grafana
docker run -d --name grafana --network host grafana/grafana:6.3.2
# View a Docker container's logs
docker logs -f [containerID]
## Accessing the applications
http://192.168.104.102:9100/ elasticsearch-head
http://192.168.104.102:9200/ elasticsearch
http://192.168.104.102:3000/ grafana admin/admin
http://192.168.104.102:3306/ mysql root/root
Edit elasticsearch's elasticsearch.yml and append:
http.cors.enabled: true
http.cors.allow-origin: "*"
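The append above can be scripted. A minimal sketch, assuming the config inside the official image lives at /usr/share/elasticsearch/config/elasticsearch.yml (copy it out with docker cp, patch it, copy it back, then restart the container):

```shell
# Append the CORS settings to an elasticsearch.yml, but only once:
# re-running the function must not duplicate the lines.
append_cors() {
  cfg="$1"
  grep -q '^http.cors.enabled' "$cfg" || cat >> "$cfg" <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
}
```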
Reference: "Using Docker to quickly set up a Kafka development environment"
# Download Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.0-linux-x86_64.tar.gz
# Extract the Filebeat tarball
tar -zxvf filebeat-7.3.0-linux-x86_64.tar.gz
## Ways to start Filebeat
./filebeat test config -c my-filebeat.yml
./filebeat -e -c my-filebeat.yml -d "publish"
# Start Filebeat in the background and save its output to a log file
nohup ./filebeat -e -c my-filebeat.yml > /tmp/filebeat.log 2>&1 &
# Run detached in the background, redirecting stdout and stderr to /dev/null (no output at all)
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
# After Filebeat starts, watch its publish output
./filebeat -e -c filebeat.yml -d "publish"
# Flag reference
-e: log to stderr (by default Filebeat logs to syslog/the logs directory)
-c: specify the configuration file
-d: enable debug output for the given selectors (e.g. "publish")
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/test.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true
# Send output to Kafka
output.kafka:
  enabled: true
  hosts: ["kafka1:9092"]
  topic: 'stream-in'
  required_acks: 1
## Note
# kafka1 is a hostname; edit /etc/hosts and add a line mapping the Kafka cluster's IP to kafka1
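The /etc/hosts edit can be sketched as a small idempotent helper (the broker IP below is an example; substitute your Kafka host's address):

```shell
# Append an "IP hostname" line to a hosts file unless the hostname is
# already mapped there.
add_host_entry() {
  hosts_file="$1"; ip="$2"; name="$3"
  grep -qw "$name" "$hosts_file" || echo "$ip $name" >> "$hosts_file"
}
# e.g. add_host_entry /etc/hosts 192.168.104.102 kafka1
```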
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/*.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true
# Send output to Logstash
output.logstash:
  hosts: ["192.168.104.102:5045"]
Write something to the test log file: echo '321312' >> test.log
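To generate several test events at once, a hedged helper along these lines appends timestamped lines (the path and count are arguments, not values fixed by the setup above):

```shell
# Append N timestamped test lines to a log file so Filebeat has fresh
# events to pick up and ship.
write_test_lines() {
  file="$1"; count="$2"
  i=1
  while [ "$i" -le "$count" ]; do
    echo "$(date '+%Y-%m-%dT%H:%M:%S') test-line-$i" >> "$file"
    i=$((i + 1))
  done
}
# e.g. write_test_lines /opt/test.log 3
```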
The container starts with the default pipeline configuration file /usr/share/logstash/pipeline/logstash.conf
Test Logstash with a plain stdin/stdout pipeline:
bin/logstash -e 'input { stdin {} } output { stdout {} }'
Tip: if you see
Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting
append --path.data=/usr/share/logstash/jpdata to the end of the original command:
bin/logstash -f config/logstash-sample.conf
# Start an additional Logstash instance with a separate data path
bin/logstash -f config/logstash-sample.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata
# Likewise, for another pipeline configuration
bin/logstash -f config/test-logstash.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata
# Validate the configuration file before starting
bin/logstash -f logstash-sample.conf --config.test_and_exit
# Flag reference
--path.data: directory where Logstash stores its data
--config.reload.automatic: hot-reload the pipeline config, so edits take effect without a restart
test-logstash.conf
# Read from Kafka
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}
# Filter the content
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
# Print events to the console
output {
  stdout { codec => rubydebug }
}
logstash-es.conf
# Read from Kafka
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}
# Filter the content
filter {
  json {
    source => "message"
  }
}
# Send events to Elasticsearch
output {
  elasticsearch {
    hosts => ["192.168.104.102:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
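The index => "logstash-%{+YYYY.MM.dd}" pattern produces one Elasticsearch index per day. As a sketch, the index name Logstash writes to today can be mirrored with date (assuming events carry current UTC timestamps):

```shell
# Mirror the expansion of logstash-%{+YYYY.MM.dd} for the current UTC day.
todays_index() {
  date -u +"logstash-%Y.%m.%d"
}
```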
Inside the Docker container, Kafka is installed under /opt/kafka_2.12-2.3.0/bin.
# Install tree to view the directory structure
yum -y install tree
# List all Kafka topics registered in ZooKeeper
kafka-topics.sh --zookeeper 192.168.104.102:2181 --list
# This lists the group.ids across all topics; there is no built-in way to query group.ids for a single topic.
kafka-consumer-groups.sh --bootstrap-server kafka1:9092 --list
# Produce messages to a topic
kafka-console-producer.sh --broker-list kafka1:9092 --topic stream-in
# Start a Kafka consumer to inspect the topic's contents:
kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic stream-in --from-beginning
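Once a group id is known from the --list output above, its per-partition offsets and lag can be inspected with --describe (the group name below is an example, not one created by this setup):

```shell
# Describe a consumer group: shows topic, partition, current offset,
# log-end offset and lag for each partition the group consumes.
kafka-consumer-groups.sh --bootstrap-server kafka1:9092 --describe --group example-group
```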
# Open the ZooKeeper CLI
./zkCli.sh
https://blog.csdn.net/s332755645/article/details/80180008
https://blog.csdn.net/wxlchinaren/article/details/83303708
https://www.elastic.co/guide/en/beats/filebeat/current/kafka-output.html
https://www.cnblogs.com/AcAc-t/p/kafka_topic_consumer_group_command.html
https://www.cnblogs.com/xiaodf/p/6093261.html#4
MySQL error "Client does not support authentication protocol" — a client/server compatibility issue.
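This error typically appears because MySQL 8 defaults to the caching_sha2_password authentication plugin, which older client libraries do not support. A common workaround, assuming the container name mysql and the root/root credentials from the run command above, is to switch the account to the legacy plugin:

```shell
# Switch root to mysql_native_password so older clients can connect.
docker exec -it mysql mysql -uroot -proot \
  -e "ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'root'; FLUSH PRIVILEGES;"
```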
# Check the Filebeat process
ps -ef | grep filebeat
Original-work statement: this article was published on the Tencent Cloud Developer Community with the author's authorization; reproduction without permission is prohibited. For infringement concerns, contact cloudcommunity@tencent.com for removal.