Official docs: https://prometheus.io/docs/introduction/first_steps/
Chinese docs: https://prometheus.fuckcloudnative.io/di-yi-zhang-jie-shao/overview
Installation docs: https://prometheus.fuckcloudnative.io/di-san-zhang-prometheus/di-2-jie-an-zhuang/installation
Download the binary release: https://github.com/prometheus/prometheus/releases
cd ~
mkdir prometheus
cd prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.34.0/prometheus-2.34.0.linux-amd64.tar.gz
tar -xzvf prometheus*
cd prometheus-2.34.0.linux-amd64
./prometheus -h
./prometheus --config.file=prometheus.yml
Visit http://localhost:9090/targets
By default Prometheus scrapes its own metrics; the figure above shows it running normally.
Default configuration
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
Prometheus collects data using a pull model: it periodically scrapes metrics from its targets, and each group of targets is defined as a job. To have Prometheus collect our own data, we configure a corresponding job.
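For example, to scrape an additional target, append a job under scrape_configs. A minimal sketch, assuming a node_exporter is listening on localhost:9100 (the job name and target address are illustrative; substitute your own exporter):

```yaml
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # Hypothetical job: assumes a node_exporter running on localhost:9100.
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]
```

After editing prometheus.yml, restart Prometheus (or send it SIGHUP) and the new target should appear on the /targets page.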
global
Global configuration. A job can override these settings; any setting a job does not specify falls back to the global default.
scrape_interval
The interval at which Prometheus scrapes data from targets.
evaluation_interval
The interval at which Prometheus evaluates alerting rules.
alerting
Alertmanager configuration.
rule_files
Alerting rule files.
scrape_configs
Scrape job configuration.
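As noted under global, a job can override the global defaults. A sketch (the job name and the 5s interval are illustrative):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "fast-job"  # illustrative name
    scrape_interval: 5s   # overrides the global 15s for this job only
    static_configs:
      - targets: ["localhost:9090"]
```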
See the official documentation for detailed configuration options.