# **Configuration**
Prometheus is configured via command-line flags and a configuration file. While the command-line flags configure immutable system parameters (such as storage locations, the amount of data to keep on disk and in memory, etc.), the configuration file defines everything related to scraping [jobs and their instances](https://prometheus.io/docs/concepts/jobs_instances/), as well as which [rule files](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#configuring-rules) to load.
To view all available command-line flags, run `./prometheus -h`.
Prometheus can reload its configuration at runtime. If the new configuration is not well-formed, the changes will not be applied. A configuration reload is triggered by sending a SIGHUP to the Prometheus process or by sending an HTTP POST request to the `/-/reload` endpoint (this requires the `--web.enable-lifecycle` flag to be enabled). This will also reload any configured rule files.
## **Configuration file**
To specify which configuration file to load, use the `--config.file` flag.
The file is written in YAML format, defined by the scheme described below. Brackets indicate that a parameter is optional. For non-list parameters the value is set to the specified default.
Generic placeholders are defined as follows:
* `<boolean>`: a boolean that can take the values `true` or `false`
* `<duration>`: a duration matching the regular expression `[0-9]+(ms|[smhdwy])`
* `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
* `<labelvalue>`: a string of unicode characters
* `<filename>`: a valid path in the current working directory
* `<host>`: a valid string consisting of a hostname or IP followed by an optional port number
* `<path>`: a valid URL path
* `<scheme>`: a string that can take the values `http` or `https`
* `<string>`: a regular string
* `<secret>`: a regular string that is a secret, such as a password
* `<tmpl_string>`: a string which is template-expanded before usage
The other placeholders are specified separately.
A valid [example file](https://github.com/prometheus/prometheus/blob/release-2.13/config/testdata/conf.good.yml) is provided for reference.
The global configuration specifies parameters that are valid in all other configuration contexts. They also serve as defaults for other configuration sections.
~~~
global:
# How frequently to scrape targets by default.
[ scrape_interval: <duration> | default = 1m ]
# How long until a scrape request times out.
[ scrape_timeout: <duration> | default = 10s ]
# How frequently to evaluate rules.
[ evaluation_interval: <duration> | default = 1m ]
# The labels to add to any time series or alerts when communicating with
# external systems (federation, remote storage, Alertmanager).
external_labels:
[ <labelname>: <labelvalue> ... ]
# Rule files specifies a list of globs. Rules and alerts are read from
# all matching files.
rule_files:
[ - <filepath_glob> ... ]
# A list of scrape configurations.
scrape_configs:
[ - <scrape_config> ... ]
# Alerting specifies settings related to the Alertmanager.
alerting:
alert_relabel_configs:
[ - <relabel_config> ... ]
alertmanagers:
[ - <alertmanager_config> ... ]
# Settings related to the remote write feature.
remote_write:
[ - <remote_write> ... ]
# Settings related to the remote read feature.
remote_read:
[ - <remote_read> ... ]
~~~
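For orientation, here is a minimal, illustrative configuration file exercising the global settings above; the label value, file glob and target address are placeholders.
~~~
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'example'        # illustrative label attached when talking to external systems

rule_files:
  - 'rules/*.yml'             # hypothetical glob; point this at your rule files

scrape_configs:
  - job_name: 'prometheus'    # scrape Prometheus itself
    static_configs:
      - targets: ['localhost:9090']
~~~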
### `<scrape_config>`
A `scrape_config` section specifies a set of targets and parameters describing how to scrape them. In the general case, one scrape configuration specifies a single job. In advanced configurations, this may change.
Targets may be statically configured via the `static_configs` parameter or dynamically discovered using one of the supported service-discovery mechanisms.
Additionally, `relabel_configs` allow advanced modifications to any target and its labels before scraping.
~~~
# The job name assigned to scraped metrics by default.
job_name: <job_name>
# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]
# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]
# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]
# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
#
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]
# honor_timestamps controls whether Prometheus respects the timestamps present
# in scraped data.
#
# If honor_timestamps is set to "true", the timestamps of the metrics exposed
# by the target will be used.
#
# If honor_timestamps is set to "false", the timestamps of the metrics exposed
# by the target will be ignored.
[ honor_timestamps: <boolean> | default = true ]
# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]
# Optional HTTP URL parameters.
params:
[ <string>: [<string>, ...] ]
# Sets the `Authorization` header on every scrape request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
[ username: <string> ]
[ password: <secret> ]
[ password_file: <string> ]
# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]
# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]
# Configures the scrape request's TLS settings.
tls_config:
[ <tls_config> ]
# Optional proxy URL.
[ proxy_url: <string> ]
# List of Azure service discovery configurations.
azure_sd_configs:
[ - <azure_sd_config> ... ]
# List of Consul service discovery configurations.
consul_sd_configs:
[ - <consul_sd_config> ... ]
# List of DNS service discovery configurations.
dns_sd_configs:
[ - <dns_sd_config> ... ]
# List of EC2 service discovery configurations.
ec2_sd_configs:
[ - <ec2_sd_config> ... ]
# List of OpenStack service discovery configurations.
openstack_sd_configs:
[ - <openstack_sd_config> ... ]
# List of file service discovery configurations.
file_sd_configs:
[ - <file_sd_config> ... ]
# List of GCE service discovery configurations.
gce_sd_configs:
[ - <gce_sd_config> ... ]
# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
[ - <kubernetes_sd_config> ... ]
# List of Marathon service discovery configurations.
marathon_sd_configs:
[ - <marathon_sd_config> ... ]
# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
[ - <nerve_sd_config> ... ]
# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
[ - <serverset_sd_config> ... ]
# List of Triton service discovery configurations.
triton_sd_configs:
[ - <triton_sd_config> ... ]
# List of labeled statically configured targets for this job.
static_configs:
[ - <static_config> ... ]
# List of target relabel configurations.
relabel_configs:
[ - <relabel_config> ... ]
# List of metric relabel configurations.
metric_relabel_configs:
[ - <relabel_config> ... ]
# Per-scrape limit on number of scraped samples that will be accepted.
# If more than this number of samples are present after metric relabelling
# the entire scrape will be treated as failed. 0 means no limit.
[ sample_limit: <int> | default = 0 ]
~~~
Where `<job_name>` must be unique across all scrape configurations.
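As an illustration, a minimal `scrape_config` combining a few of the options above; the job name, interval, target addresses and label value are placeholders.
~~~
scrape_configs:
  - job_name: 'node'                 # must be unique across all scrape configs
    scrape_interval: 30s             # overrides the global default for this job
    metrics_path: /metrics
    static_configs:
      - targets: ['node1.example.com:9100', 'node2.example.com:9100']
        labels:
          env: 'production'          # attached to every metric scraped from these targets
~~~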
### `<tls_config>`
A `tls_config` allows configuring TLS connections.
~~~
# CA certificate to validate API server certificate with.
[ ca_file: <filename> ]
# Certificate and key files for client cert authentication to the server.
[ cert_file: <filename> ]
[ key_file: <filename> ]
# ServerName extension to indicate the name of the server.
# https://tools.ietf.org/html/rfc4366#section-3.1
[ server_name: <string> ]
# Disable validation of the server certificate.
[ insecure_skip_verify: <boolean> ]
~~~
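For example, a sketch of a `tls_config` block for client-certificate authentication; the file paths and server name are placeholders.
~~~
tls_config:
  ca_file: /etc/prometheus/ca.pem           # CA used to verify the scraped server's certificate
  cert_file: /etc/prometheus/client.pem     # client certificate presented to the server
  key_file: /etc/prometheus/client-key.pem
  server_name: example.internal             # expected name in the server certificate
~~~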
### `<azure_sd_config>`
Azure SD configurations allow retrieving scrape targets from Azure VMs.
The following meta labels are available on targets during relabeling:
* `__meta_azure_machine_id`: the machine ID
* `__meta_azure_machine_location`: the location the machine runs in
* `__meta_azure_machine_name`: the machine name
* `__meta_azure_machine_os_type`: the machine operating system
* `__meta_azure_machine_private_ip`: the machine's private IP
* `__meta_azure_machine_public_ip`: the machine's public IP if it exists
* `__meta_azure_machine_resource_group`: the machine's resource group
* `__meta_azure_machine_tag_<tagname>`: each tag value of the machine
* `__meta_azure_machine_scale_set`: the name of the scale set which the VM is part of (this value is only set if you are using a [scale set](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/))
* `__meta_azure_subscription_id`: the subscription ID
* `__meta_azure_tenant_id`: the tenant ID
See below for the configuration options for Azure discovery:
~~~
# The information to access the Azure API.
# The Azure environment.
[ environment: <string> | default = AzurePublicCloud ]
# The authentication method, either OAuth or ManagedIdentity.
# See https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
[ authentication_method: <string> | default = OAuth]
# The subscription ID. Always required.
subscription_id: <string>
# Optional tenant ID. Only required with authentication_method OAuth.
[ tenant_id: <string> ]
# Optional client ID. Only required with authentication_method OAuth.
[ client_id: <string> ]
# Optional client secret. Only required with authentication_method OAuth.
[ client_secret: <secret> ]
# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 300s ]
# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]
~~~
### `<consul_sd_config>`
Consul SD configurations allow retrieving scrape targets from [Consul's](https://www.consul.io/) Catalog API.
The following meta labels are available on targets during [relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `__meta_consul_address`: the address of the target
* `__meta_consul_dc`: the datacenter name for the target
* `__meta_consul_tagged_address_<key>`: each node tagged address key value of the target
* `__meta_consul_metadata_<key>`: each node metadata key value of the target
* `__meta_consul_node`: the node name defined for the target
* `__meta_consul_service_address`: the service address of the target
* `__meta_consul_service_id`: the service ID of the target
* `__meta_consul_service_metadata_<key>`: each service metadata key value of the target
* `__meta_consul_service_port`: the service port of the target
* `__meta_consul_service`: the name of the service the target belongs to
* `__meta_consul_tags`: the list of tags of the target joined by the tag separator
~~~
# The information to access the Consul API. It is to be defined
# as the Consul documentation requires.
[ server: <host> | default = "localhost:8500" ]
[ token: <secret> ]
[ datacenter: <string> ]
[ scheme: <string> | default = "http" ]
[ username: <string> ]
[ password: <secret> ]
tls_config:
[ <tls_config> ]
# A list of services for which targets are retrieved. If omitted, all services
# are scraped.
services:
[ - <string> ]
# See https://www.consul.io/api/catalog.html#list-nodes-for-service to know more
# about the possible filters that can be used.
# An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list.
tags:
[ - <string> ]
# Node metadata used to filter nodes for a given service.
[ node_meta:
[ <name>: <value> ... ] ]
# The string by which Consul tags are joined into the tag label.
[ tag_separator: <string> | default = , ]
# Allow stale Consul results (see https://www.consul.io/api/features/consistency.html). Will reduce load on Consul.
[ allow_stale: <bool> ]
# The time after which the provided names are refreshed.
# On large setup it might be a good idea to increase this value because the catalog will change all the time.
[ refresh_interval: <duration> | default = 30s ]
~~~
Note that the IP number and port used to scrape the targets is assembled as `<__meta_consul_address>:<__meta_consul_service_port>`. However, in some Consul setups, the relevant address is in `__meta_consul_service_address`. In those cases, you can use the [relabel](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) feature to replace the special `__address__` label.
The [relabeling phase](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) is the preferred and more powerful way to filter services or nodes for a service based on arbitrary labels. For users with thousands of services it can be more efficient to use the Consul API directly, which has basic support for filtering nodes (currently by node metadata and a single tag).
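As a sketch of both points, the following hypothetical scrape config rewrites `__address__` from the service address and keeps only services carrying a `metrics` tag (the server address and tag name are assumptions).
~~~
scrape_configs:
  - job_name: 'consul-services'
    consul_sd_configs:
      - server: 'localhost:8500'
    relabel_configs:
      # Scrape the service address instead of the node address.
      - source_labels: [__meta_consul_service_address, __meta_consul_service_port]
        separator: ':'
        target_label: __address__
      # Keep only services tagged "metrics" (tags are joined by the tag separator).
      - source_labels: [__meta_consul_tags]
        regex: '.*,metrics,.*'
        action: keep
~~~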
### `<dns_sd_config>`
A DNS-based service discovery configuration allows specifying a set of DNS domain names which are periodically queried to discover a list of targets. The DNS servers to be contacted are read from `/etc/resolv.conf`.
This service discovery method only supports basic DNS A, AAAA and SRV record queries, but not the advanced DNS-SD approach specified in [RFC 6763](https://tools.ietf.org/html/rfc6763).
During the [relabeling phase](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config), the meta label `__meta_dns_name` is available on each target and is set to the record name that produced the discovered target.
~~~
# A list of DNS domain names to be queried.
names:
[ - <domain_name> ]
# The type of DNS query to perform.
[ type: <query_type> | default = 'SRV' ]
# The port number used if the query type is not SRV.
[ port: <number>]
# The time after which the provided names are refreshed.
[ refresh_interval: <duration> | default = 30s ]
~~~
Where `<domain_name>` is a valid DNS domain name and `<query_type>` is `SRV`, `A`, or `AAAA`.
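A brief illustrative example, assuming an SRV record published under a hypothetical name:
~~~
scrape_configs:
  - job_name: 'dns-discovered'
    dns_sd_configs:
      - names:
          - '_prometheus._tcp.example.com'   # hypothetical SRV record
        type: 'SRV'
        refresh_interval: 60s
~~~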
### `<ec2_sd_config>`
EC2 SD configurations allow retrieving scrape targets from AWS EC2 instances. The private IP address is used by default, but may be changed to the public IP address with relabeling.
The following meta labels are available on targets during [relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `__meta_ec2_availability_zone`: the availability zone in which the instance is running
* `__meta_ec2_instance_id`: the EC2 instance ID
* `__meta_ec2_instance_state`: the state of the EC2 instance
* `__meta_ec2_instance_type`: the type of the EC2 instance
* `__meta_ec2_owner_id`: the ID of the AWS account that owns the EC2 instance
* `__meta_ec2_platform`: the Operating System platform, set to 'windows' on Windows servers, absent otherwise
* `__meta_ec2_primary_subnet_id`: the subnet ID of the primary network interface, if available
* `__meta_ec2_private_dns_name`: the private DNS name of the instance, if available
* `__meta_ec2_private_ip`: the private IP address of the instance, if present
* `__meta_ec2_public_dns_name`: the public DNS name of the instance, if available
* `__meta_ec2_public_ip`: the public IP address of the instance, if available
* `__meta_ec2_subnet_id`: comma separated list of subnets IDs in which the instance is running, if available
* `__meta_ec2_tag_<tagkey>`: each tag value of the instance
* `__meta_ec2_vpc_id`: the ID of the VPC in which the instance is running, if available
See below for the configuration options for EC2 discovery:
~~~
# The information to access the EC2 API.
# The AWS region. If blank, the region from the instance metadata is used.
[ region: <string> ]
# Custom endpoint to be used.
[ endpoint: <string> ]
# The AWS API keys. If blank, the environment variables `AWS_ACCESS_KEY_ID`
# and `AWS_SECRET_ACCESS_KEY` are used.
[ access_key: <string> ]
[ secret_key: <secret> ]
# Named AWS profile used to connect to the API.
[ profile: <string> ]
# AWS Role ARN, an alternative to using AWS API keys.
[ role_arn: <string> ]
# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 60s ]
# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]
# Filters can be used optionally to filter the instance list by other criteria.
# Available filter criteria can be found here:
# https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html
# Filter API documentation: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Filter.html
filters:
[ - name: <string>
values: <string>, [...] ]
~~~
The [relabeling phase](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) is the preferred and more powerful way to filter targets based on arbitrary labels. For users with thousands of instances it can be more efficient to use the EC2 API directly, which has support for filtering instances.
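For illustration, a hypothetical EC2 configuration that pre-filters instances by tag on the API side and copies the instance `Name` tag into a target label; the region, tag names and port are assumptions.
~~~
scrape_configs:
  - job_name: 'ec2-nodes'
    ec2_sd_configs:
      - region: eu-west-1
        port: 9100
        filters:
          - name: 'tag:Environment'
            values:
              - 'production'
    relabel_configs:
      # Expose the EC2 "Name" tag as a plain "name" label, if the tag is set.
      - source_labels: [__meta_ec2_tag_Name]
        target_label: name
~~~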
### `<openstack_sd_config>`
OpenStack SD configurations allow retrieving scrape targets from OpenStack Nova instances.
One of the following `<openstack_role>` types can be configured to discover targets:
#### `hypervisor`
The `hypervisor` role discovers one target per Nova hypervisor node. The target address defaults to the `host_ip` attribute of the hypervisor.
The following meta labels are available on targets during [relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `__meta_openstack_hypervisor_host_ip`: the hypervisor node's IP address.
* `__meta_openstack_hypervisor_name`: the hypervisor node's name.
* `__meta_openstack_hypervisor_state`: the hypervisor node's state.
* `__meta_openstack_hypervisor_status`: the hypervisor node's status.
* `__meta_openstack_hypervisor_type`: the hypervisor node's type.
#### `instance`
The `instance` role discovers one target per network interface of a Nova instance. The target address defaults to the private IP address of the network interface.
The following meta labels are available on targets during [relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `__meta_openstack_address_pool`: the pool of the private IP.
* `__meta_openstack_instance_flavor`: the flavor of the OpenStack instance.
* `__meta_openstack_instance_id`: the OpenStack instance ID.
* `__meta_openstack_instance_name`: the OpenStack instance name.
* `__meta_openstack_instance_status`: the status of the OpenStack instance.
* `__meta_openstack_private_ip`: the private IP of the OpenStack instance.
* `__meta_openstack_project_id`: the project (tenant) owning this instance.
* `__meta_openstack_public_ip`: the public IP of the OpenStack instance.
* `__meta_openstack_tag_<tagkey>`: each tag value of the instance.
* `__meta_openstack_user_id`: the user account owning the tenant.
See below for the configuration options for OpenStack discovery:
~~~
# The information to access the OpenStack API.
# The OpenStack role of entities that should be discovered.
role: <openstack_role>
# The OpenStack Region.
region: <string>
# identity_endpoint specifies the HTTP endpoint that is required to work with
# the Identity API of the appropriate version. While it's ultimately needed by
# all of the identity services, it will often be populated by a provider-level
# function.
[ identity_endpoint: <string> ]
# username is required if using Identity V2 API. Consult with your provider's
# control panel to discover your account's username. In Identity V3, either
# userid or a combination of username and domain_id or domain_name are needed.
[ username: <string> ]
[ userid: <string> ]
# password for the Identity V2 and V3 APIs. Consult with your provider's
# control panel to discover your account's preferred method of authentication.
[ password: <secret> ]
# At most one of domain_id and domain_name must be provided if using username
# with Identity V3. Otherwise, either are optional.
[ domain_name: <string> ]
[ domain_id: <string> ]
# The project_id and project_name fields are optional for the Identity V2 API.
# Some providers allow you to specify a project_name instead of the project_id.
# Some require both. Your provider's authentication policies will determine
# how these fields influence authentication.
[ project_name: <string> ]
[ project_id: <string> ]
# The application_credential_id or application_credential_name fields are
# required if using an application credential to authenticate. Some providers
# allow you to create an application credential to authenticate rather than a
# password.
[ application_credential_name: <string> ]
[ application_credential_id: <string> ]
# The application_credential_secret field is required if using an application
# credential to authenticate.
[ application_credential_secret: <secret> ]
# Whether the service discovery should list all instances for all projects.
# It is only relevant for the 'instance' role and usually requires admin permissions.
[ all_tenants: <boolean> | default: false ]
# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 60s ]
# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]
# TLS configuration.
tls_config:
[ <tls_config> ]
~~~
### `<file_sd_config>`
File-based service discovery provides a more generic way to configure static targets and serves as an interface to plug in custom service discovery mechanisms.
It reads a set of files containing a list of zero or more `<static_config>`s. Changes to all defined files are detected via disk watches and applied immediately. Files may be provided in YAML or JSON format. Only changes resulting in well-formed target groups are applied.
The JSON file must contain a list of static configs, using this format:
~~~
[
{
"targets": [ "<host>", ... ],
"labels": {
"<labelname>": "<labelvalue>", ...
}
},
...
]
~~~
As a fallback, the file contents are also re-read periodically at the specified refresh interval.
Each target has a meta label `__meta_filepath` during the [relabeling phase](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config). Its value is set to the filepath from which the target was extracted.
There is a list of [integrations](https://prometheus.io/docs/operating/integrations/#file-service-discovery) with this discovery mechanism.
~~~
# Patterns for files from which target groups are extracted.
files:
[ - <filename_pattern> ... ]
# Refresh interval to re-read the files.
[ refresh_interval: <duration> | default = 5m ]
~~~
Where `<filename_pattern>` may be a path ending in `.json`, `.yml` or `.yaml`. The last path segment may contain a single `*` that matches any character sequence, e.g. `my/path/tg_*.json`.
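As an illustration, the YAML form of such a target file might look like this (the path and values are placeholders); a `file_sd_configs` entry whose `files` pattern matches this path (e.g. `targets/*.yml`) would then pick these targets up.
~~~
# targets/example.yml -- hypothetical target file in YAML form
- targets:
    - 'host1.example.com:9100'
    - 'host2.example.com:9100'
  labels:
    env: 'staging'
~~~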
### `<gce_sd_config>`
[GCE](https://cloud.google.com/compute/) SD configurations allow retrieving scrape targets from GCP GCE instances. The private IP address is used by default, but may be changed to the public IP address with relabeling.
The following meta labels are available on targets during [relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `__meta_gce_instance_id`: the numeric id of the instance
* `__meta_gce_instance_name`: the name of the instance
* `__meta_gce_label_<name>`: each GCE label of the instance
* `__meta_gce_machine_type`: full or partial URL of the machine type of the instance
* `__meta_gce_metadata_<name>`: each metadata item of the instance
* `__meta_gce_network`: the network URL of the instance
* `__meta_gce_private_ip`: the private IP address of the instance
* `__meta_gce_project`: the GCP project in which the instance is running
* `__meta_gce_public_ip`: the public IP address of the instance, if present
* `__meta_gce_subnetwork`: the subnetwork URL of the instance
* `__meta_gce_tags`: comma separated list of instance tags
* `__meta_gce_zone`: the GCE zone URL in which the instance is running
See below for the configuration options for GCE discovery:
~~~
# The information to access the GCE API.
# The GCP Project
project: <string>
# The zone of the scrape targets. If you need multiple zones use multiple
# gce_sd_configs.
zone: <string>
# Filter can be used optionally to filter the instance list by other criteria
# Syntax of this filter string is described here in the filter query parameter section:
# https://cloud.google.com/compute/docs/reference/latest/instances/list
[ filter: <string> ]
# Refresh interval to re-read the instance list
[ refresh_interval: <duration> | default = 60s ]
# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]
# The tag separator is used to separate the tags on concatenation
[ tag_separator: <string> | default = , ]
~~~
Credentials are discovered by the Google Cloud SDK default client by looking in the following places, preferring the first location found:
1. a JSON file specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable
2. a JSON file in the well-known path `$HOME/.config/gcloud/application_default_credentials.json`
3. fetched from the GCE metadata server
If Prometheus is running within GCE, the service account associated with the instance it is running on should have at least read-only permissions to the compute resources. If running outside of GCE make sure to create an appropriate service account and place the credential file in one of the expected locations.
### `<kubernetes_sd_config>`
Kubernetes SD configurations allow retrieving scrape targets from [Kubernetes'](https://kubernetes.io/) REST API and always staying synchronized with the cluster state.
One of the following `role` types can be configured to discover targets:
#### `node`
The `node` role discovers one target per cluster node with the address defaulting to the Kubelet's HTTP port. The target address defaults to the first existing address of the Kubernetes node object in the address type order of `NodeInternalIP`, `NodeExternalIP`, `NodeLegacyHostIP`, and `NodeHostName`.
Available meta labels:
* `__meta_kubernetes_node_name`: The name of the node object.
* `__meta_kubernetes_node_label_<labelname>`: Each label from the node object.
* `__meta_kubernetes_node_labelpresent_<labelname>`: `true` for each label from the node object.
* `__meta_kubernetes_node_annotation_<annotationname>`: Each annotation from the node object.
* `__meta_kubernetes_node_annotationpresent_<annotationname>`: `true` for each annotation from the node object.
* `__meta_kubernetes_node_address_<address_type>`: The first address for each node address type, if it exists.
In addition, the `instance` label for the node will be set to the node name as retrieved from the API server.
#### `service`
The `service` role discovers a target for each service port for each service. This is generally useful for blackbox monitoring of a service. The address will be set to the Kubernetes DNS name of the service and respective service port.
Available meta labels:
* `__meta_kubernetes_namespace`: The namespace of the service object.
* `__meta_kubernetes_service_annotation_<annotationname>`: Each annotation from the service object.
* `__meta_kubernetes_service_annotationpresent_<annotationname>`: "true" for each annotation of the service object.
* `__meta_kubernetes_service_cluster_ip`: The cluster IP address of the service. (Does not apply to services of type ExternalName)
* `__meta_kubernetes_service_external_name`: The DNS name of the service. (Applies to services of type ExternalName)
* `__meta_kubernetes_service_label_<labelname>`: Each label from the service object.
* `__meta_kubernetes_service_labelpresent_<labelname>`: `true` for each label of the service object.
* `__meta_kubernetes_service_name`: The name of the service object.
* `__meta_kubernetes_service_port_name`: Name of the service port for the target.
* `__meta_kubernetes_service_port_protocol`: Protocol of the service port for the target.
#### `pod`
The `pod` role discovers all pods and exposes their containers as targets. For each declared port of a container, a single target is generated. If a container has no specified ports, a port-free target per container is created for manually adding a port via relabeling.
Available meta labels:
* `__meta_kubernetes_namespace`: The namespace of the pod object.
* `__meta_kubernetes_pod_name`: The name of the pod object.
* `__meta_kubernetes_pod_ip`: The pod IP of the pod object.
* `__meta_kubernetes_pod_label_<labelname>`: Each label from the pod object.
* `__meta_kubernetes_pod_labelpresent_<labelname>`: `true` for each label from the pod object.
* `__meta_kubernetes_pod_annotation_<annotationname>`: Each annotation from the pod object.
* `__meta_kubernetes_pod_annotationpresent_<annotationname>`: `true` for each annotation from the pod object.
* `__meta_kubernetes_pod_container_init`: `true` if the container is an [InitContainer](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)
* `__meta_kubernetes_pod_container_name`: Name of the container the target address points to.
* `__meta_kubernetes_pod_container_port_name`: Name of the container port.
* `__meta_kubernetes_pod_container_port_number`: Number of the container port.
* `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container port.
* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready state.
* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the [lifecycle](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase).
* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto.
* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object.
* `__meta_kubernetes_pod_uid`: The UID of the pod object.
* `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller.
* `__meta_kubernetes_pod_controller_name`: Name of the pod controller.
#### `endpoints`
The `endpoints` role discovers targets from listed endpoints of a service. For each endpoint address one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod, not bound to an endpoint port, are discovered as targets as well.
Available meta labels:
* `__meta_kubernetes_namespace`: The namespace of the endpoints object.
* `__meta_kubernetes_endpoints_name`: The names of the endpoints object.
* For all targets discovered directly from the endpoints list (those not additionally inferred from underlying pods), the following labels are attached:
* `__meta_kubernetes_endpoint_hostname`: Hostname of the endpoint.
* `__meta_kubernetes_endpoint_node_name`: Name of the node hosting the endpoint.
* `__meta_kubernetes_endpoint_ready`: Set to `true` or `false` for the endpoint's ready state.
* `__meta_kubernetes_endpoint_port_name`: Name of the endpoint port.
* `__meta_kubernetes_endpoint_port_protocol`: Protocol of the endpoint port.
* `__meta_kubernetes_endpoint_address_target_kind`: Kind of the endpoint address target.
* `__meta_kubernetes_endpoint_address_target_name`: Name of the endpoint address target.
* If the endpoints belong to a service, all labels of the `role: service` discovery are attached.
* For all targets backed by a pod, all labels of the `role: pod` discovery are attached.
#### `ingress`
The `ingress` role discovers a target for each path of each ingress. This is generally useful for blackbox monitoring of an ingress. The address will be set to the host specified in the ingress spec.
Available meta labels:
* `__meta_kubernetes_namespace`: The namespace of the ingress object.
* `__meta_kubernetes_ingress_name`: The name of the ingress object.
* `__meta_kubernetes_ingress_label_<labelname>`: Each label from the ingress object.
* `__meta_kubernetes_ingress_labelpresent_<labelname>`: `true` for each label from the ingress object.
* `__meta_kubernetes_ingress_annotation_<annotationname>`: Each annotation from the ingress object.
* `__meta_kubernetes_ingress_annotationpresent_<annotationname>`: `true` for each annotation from the ingress object.
* `__meta_kubernetes_ingress_scheme`: Protocol scheme of ingress, `https` if TLS config is set. Defaults to `http`.
* `__meta_kubernetes_ingress_path`: Path from ingress spec. Defaults to `/`.
See below for the configuration options for Kubernetes discovery:
~~~
# The information to access the Kubernetes API.
# The API server addresses. If left empty, Prometheus is assumed to run inside
# of the cluster and will discover API servers automatically and use the pod's
# CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/.
[ api_server: <host> ]
# The Kubernetes role of entities that should be discovered.
role: <role>
# Optional authentication information used to authenticate to the API server.
# Note that `basic_auth`, `bearer_token` and `bearer_token_file` options are
# mutually exclusive.
# password and password_file are mutually exclusive.
# Optional HTTP basic authentication information.
basic_auth:
[ username: <string> ]
[ password: <secret> ]
[ password_file: <string> ]
# Optional bearer token authentication information.
[ bearer_token: <secret> ]
# Optional bearer token file authentication information.
[ bearer_token_file: <filename> ]
# Optional proxy URL.
[ proxy_url: <string> ]
# TLS configuration.
tls_config:
[ <tls_config> ]
# Optional namespace discovery. If omitted, all namespaces are used.
namespaces:
names:
[ - <string> ]
~~~
Where `<role>` must be `endpoints`, `service`, `pod`, `node`, or `ingress`.
See [this example Prometheus configuration file](https://github.com/prometheus/prometheus/blob/release-2.13/documentation/examples/prometheus-kubernetes.yml) for a detailed example of configuring Prometheus for Kubernetes.
You may wish to check out the 3rd party [Prometheus Operator](https://github.com/coreos/prometheus-operator), which automates the Prometheus setup on top of Kubernetes.
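As a brief sketch (separate from the linked example), a `pod`-role configuration that only scrapes pods carrying the conventional `prometheus.io/scrape: "true"` annotation; the annotation is a common convention, not something Prometheus itself enforces.
~~~
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape=true (conventional, hypothetical).
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        regex: 'true'
        action: keep
      # Carry the namespace and pod name over as regular labels.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
~~~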
### `<marathon_sd_config>`
Marathon SD configurations allow retrieving scrape targets using the [Marathon](https://mesosphere.github.io/marathon/) REST API. Prometheus will periodically check the REST endpoint for currently running tasks and create a target group for every app that has at least one healthy task.
The following meta labels are available on targets during [relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `__meta_marathon_app`: the name of the app (with slashes replaced by dashes)
* `__meta_marathon_image`: the name of the Docker image used (if available)
* `__meta_marathon_task`: the ID of the Mesos task
* `__meta_marathon_app_label_<labelname>`: any Marathon labels attached to the app
* `__meta_marathon_port_definition_label_<labelname>`: the port definition labels
* `__meta_marathon_port_mapping_label_<labelname>`: the port mapping labels
* `__meta_marathon_port_index`: the port index number (e.g. `1` for `PORT1`)
See below for the configuration options for Marathon discovery:
~~~
# List of URLs to be used to contact Marathon servers.
# You need to provide at least one server URL.
servers:
- <string>
# Polling interval
[ refresh_interval: <duration> | default = 30s ]
# Optional authentication information for token-based authentication
# https://docs.mesosphere.com/1.11/security/ent/iam-api/#passing-an-authentication-token
# It is mutually exclusive with `auth_token_file` and other authentication mechanisms.
[ auth_token: <secret> ]
# Optional authentication information for token-based authentication
# https://docs.mesosphere.com/1.11/security/ent/iam-api/#passing-an-authentication-token
# It is mutually exclusive with `auth_token` and other authentication mechanisms.
[ auth_token_file: <filename> ]
# Sets the `Authorization` header on every request with the
# configured username and password.
# This is mutually exclusive with other authentication mechanisms.
# password and password_file are mutually exclusive.
basic_auth:
[ username: <string> ]
[ password: <string> ]
[ password_file: <string> ]
# Sets the `Authorization` header on every request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file` and other authentication mechanisms.
# NOTE: The current version of DC/OS marathon (v1.11.0) does not support standard Bearer token authentication. Use `auth_token` instead.
[ bearer_token: <string> ]
# Sets the `Authorization` header on every request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token` and other authentication mechanisms.
# NOTE: The current version of DC/OS marathon (v1.11.0) does not support standard Bearer token authentication. Use `auth_token_file` instead.
[ bearer_token_file: /path/to/bearer/token/file ]
# TLS configuration for connecting to marathon servers
tls_config:
[ <tls_config> ]
# Optional proxy URL.
[ proxy_url: <string> ]
~~~
By default every app listed in Marathon will be scraped by Prometheus. If not all of your services provide Prometheus metrics, you can use a Marathon label and Prometheus relabeling to control which instances will actually be scraped. See [the Prometheus marathon-sd configuration file](https://github.com/prometheus/prometheus/blob/release-2.13/documentation/examples/prometheus-marathon.yml) for a practical example on how to set up your Marathon app and your Prometheus configuration.
By default, all apps will show up as a single job in Prometheus (the one specified in the configuration file), which can also be changed using relabeling.
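A minimal sketch of that opt-in pattern, assuming apps opt in via a hypothetical Marathon label named `prometheus`:
~~~
scrape_configs:
  - job_name: 'marathon-apps'
    marathon_sd_configs:
      - servers:
          - 'http://marathon.example.com:8080'   # placeholder Marathon URL
    relabel_configs:
      # Only scrape apps whose Marathon label "prometheus" is set to "enabled".
      - source_labels: [__meta_marathon_app_label_prometheus]
        regex: 'enabled'
        action: keep
~~~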
### `<nerve_sd_config>`
Nerve SD configurations allow retrieving scrape targets from [AirBnB's Nerve](https://github.com/airbnb/nerve), which are stored in [Zookeeper](https://zookeeper.apache.org/).
The following meta labels are available on targets during [relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `__meta_nerve_path`: the full path to the endpoint node in Zookeeper
* `__meta_nerve_endpoint_host`: the host of the endpoint
* `__meta_nerve_endpoint_port`: the port of the endpoint
* `__meta_nerve_endpoint_name`: the name of the endpoint
~~~
# The Zookeeper servers.
servers:
- <host>
# Paths can point to a single service, or the root of a tree of services.
paths:
- <string>
[ timeout: <duration> | default = 10s ]
~~~
### `<serverset_sd_config>`
Serverset SD configurations allow retrieving scrape targets from [Serversets](https://github.com/twitter/finagle/tree/master/finagle-serversets) which are stored in [Zookeeper](https://zookeeper.apache.org/). Serversets are commonly used by [Finagle](https://twitter.github.io/finagle/) and [Aurora](https://aurora.apache.org/).
The following meta labels are available on targets during relabeling:
* `__meta_serverset_path`: the full path to the serverset member node in Zookeeper
* `__meta_serverset_endpoint_host`: the host of the default endpoint
* `__meta_serverset_endpoint_port`: the port of the default endpoint
* `__meta_serverset_endpoint_host_<endpoint>`: the host of the given endpoint
* `__meta_serverset_endpoint_port_<endpoint>`: the port of the given endpoint
* `__meta_serverset_shard`: the shard number of the member
* `__meta_serverset_status`: the status of the member
~~~
# The Zookeeper servers.
servers:
- <host>
# Paths can point to a single serverset, or the root of a tree of serversets.
paths:
- <string>
[ timeout: <duration> | default = 10s ]
~~~
Serverset data must be in the JSON format; the Thrift format is not currently supported.
### `<triton_sd_config>`
[Triton](https://github.com/joyent/triton) SD configurations allow retrieving scrape targets from [Container Monitor](https://github.com/joyent/rfd/blob/master/rfd/0027/README.md) discovery endpoints.
The following meta labels are available on targets during relabeling:
* `__meta_triton_groups`: the list of groups belonging to the target joined by a comma separator
* `__meta_triton_machine_alias`: the alias of the target container
* `__meta_triton_machine_brand`: the brand of the target container
* `__meta_triton_machine_id`: the UUID of the target container
* `__meta_triton_machine_image`: the target container's image type
* `__meta_triton_server_id`: the server UUID for the target container
~~~
# The information to access the Triton discovery API.
# The account to use for discovering new target containers.
account: <string>
# The DNS suffix which should be applied to target containers.
dns_suffix: <string>
# The Triton discovery endpoint (e.g. 'cmon.us-east-3b.triton.zone'). This is
# often the same value as dns_suffix.
endpoint: <string>
# A list of groups for which targets are retrieved. If omitted, all containers
# available to the requesting account are scraped.
groups:
[ - <string> ... ]
# The port to use for discovery and metric scraping.
[ port: <int> | default = 9163 ]
# The interval which should be used for refreshing target containers.
[ refresh_interval: <duration> | default = 60s ]
# The Triton discovery API version.
[ version: <int> | default = 1 ]
# TLS configuration.
tls_config:
[ <tls_config> ]
~~~
### `<static_config>`
A `static_config` allows specifying a list of targets and a common label set for them. It is the canonical way to specify static targets in a scrape configuration.
~~~
# The targets specified by the static config.
targets:
[ - '<host>' ]
# Labels assigned to all metrics scraped from the targets.
labels:
[ <labelname>: <labelvalue> ... ]
~~~
### `<relabel_config>`
Relabeling is a powerful tool to dynamically rewrite the label set of a target before it gets scraped. Multiple relabeling steps can be configured per scrape configuration. They are applied to the label set of each target in order of their appearance in the configuration file.
Initially, aside from the configured per-target labels, a target's `job` label is set to the `job_name` value of the respective scrape configuration. The `__address__` label is set to the `<host>:<port>` address of the target. After relabeling, the `instance` label is set to the value of `__address__` by default if it was not set during relabeling. The `__scheme__` and `__metrics_path__` labels are set to the scheme and metrics path of the target respectively. The `__param_<name>` label is set to the value of the first passed URL parameter called `<name>`.
Additional labels prefixed with `__meta_` may be available during the relabeling phase. They are set by the service discovery mechanism that provided the target and vary between mechanisms.
Labels starting with `__` will be removed from the label set after target relabeling is completed.
If a relabeling step needs to store a label value only temporarily (as the input to a subsequent relabeling step), use the `__tmp` label name prefix. This prefix is guaranteed to never be used by Prometheus itself.
~~~
# The source labels select values from existing labels. Their content is concatenated
# using the configured separator and matched against the configured regular expression
# for the replace, keep, and drop actions.
[ source_labels: '[' <labelname> [, ...] ']' ]
# Separator placed between concatenated source label values.
[ separator: <string> | default = ; ]
# Label to which the resulting value is written in a replace action.
# It is mandatory for replace actions. Regex capture groups are available.
[ target_label: <labelname> ]
# Regular expression against which the extracted value is matched.
[ regex: <regex> | default = (.*) ]
# Modulus to take of the hash of the source label values.
[ modulus: <uint64> ]
# Replacement value against which a regex replace is performed if the
# regular expression matches. Regex capture groups are available.
[ replacement: <string> | default = $1 ]
# Action to perform based on regex matching.
[ action: <relabel_action> | default = replace ]
~~~
`<regex>` is any valid [RE2 regular expression](https://github.com/google/re2/wiki/Syntax). It is required for the `replace`, `keep`, `drop`, `labelmap`, `labeldrop` and `labelkeep` actions. The regex is anchored on both ends. To un-anchor the regex, use `.*<regex>.*`.
`<relabel_action>` determines the relabeling action to take:
* `replace`: Match `regex` against the concatenated `source_labels`. Then, set `target_label` to `replacement`, with match group references (`${1}`, `${2}`, ...) in `replacement` substituted by their value. If `regex` does not match, no replacement takes place.
* `keep`: Drop targets for which `regex` does not match the concatenated `source_labels`.
* `drop`: Drop targets for which `regex` matches the concatenated `source_labels`.
* `hashmod`: Set `target_label` to the `modulus` of a hash of the concatenated `source_labels`.
* `labelmap`: Match `regex` against all label names. Then copy the values of the matching labels to label names given by `replacement` with match group references (`${1}`, `${2}`, ...) in `replacement` substituted by their value.
* `labeldrop`: Match `regex` against all label names. Any label that matches will be removed from the set of labels.
* `labelkeep`: Match `regex` against all label names. Any label that does not match will be removed from the set of labels.
Care must be taken with `labeldrop` and `labelkeep` to ensure that metrics are still uniquely labeled once the labels are removed.
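To make the actions concrete, here is a common sharding sketch: hash each target's address and keep only the targets belonging to this server's shard; the shard count and shard number are illustrative.
~~~
relabel_configs:
  # Hash the target address into a temporary label.
  - source_labels: [__address__]
    modulus: 2                   # total number of shards
    target_label: __tmp_shard
    action: hashmod
  # Keep only the targets assigned to shard 0 on this server.
  - source_labels: [__tmp_shard]
    regex: '0'
    action: keep
~~~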
### `<metric_relabel_configs>`
Metric relabeling is applied to samples as the last step before ingestion. It has the same configuration format and actions as target relabeling. Metric relabeling does not apply to automatically generated timeseries such as `up`.
One use for this is to blacklist time series that are too expensive to ingest.
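For example, a hypothetical rule that drops such an expensive metric by name before it is ingested:
~~~
metric_relabel_configs:
  # Drop every sample of this (hypothetical) high-cardinality metric.
  - source_labels: [__name__]
    regex: 'expensive_metric_bucket'
    action: drop
~~~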
### `<alert_relabel_configs>`
Alert relabeling is applied to alerts before they are sent to the Alertmanager. It has the same configuration format and actions as target relabeling. Alert relabeling is applied after external labels.
One use for this is ensuring that an HA pair of Prometheus servers with different external labels sends identical alerts.
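A sketch of that HA use case, assuming each server carries an external label named `replica` (the label name is an assumption) that distinguishes it from its peer:
~~~
alerting:
  alert_relabel_configs:
    # Strip the per-server "replica" label so both servers emit identical alerts,
    # which Alertmanager can then deduplicate.
    - regex: 'replica'
      action: labeldrop
~~~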
### `<alertmanager_config>`
An `alertmanager_config` section specifies Alertmanager instances the Prometheus server sends alerts to. It also provides parameters to configure how to communicate with these Alertmanagers.
Alertmanagers may be statically configured via the `static_configs` parameter or dynamically discovered using one of the supported service-discovery mechanisms.
Additionally, `relabel_configs` allow selecting Alertmanagers from discovered entities and provide advanced modifications to the used API path, which is exposed through the `__alerts_path__` label.
~~~
# Per-target Alertmanager timeout when pushing alerts.
[ timeout: <duration> | default = 10s ]
# The api version of Alertmanager.
[ api_version: <version> | default = v1 ]
# Prefix for the HTTP path alerts are pushed to.
[ path_prefix: <path> | default = / ]
# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]
# Sets the `Authorization` header on every request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
[ username: <string> ]
[ password: <string> ]
[ password_file: <string> ]
# Sets the `Authorization` header on every request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]
# Sets the `Authorization` header on every request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]
# Configures the scrape request's TLS settings.
tls_config:
[ <tls_config> ]
# Optional proxy URL.
[ proxy_url: <string> ]
# List of Azure service discovery configurations.
azure_sd_configs:
[ - <azure_sd_config> ... ]
# List of Consul service discovery configurations.
consul_sd_configs:
[ - <consul_sd_config> ... ]
# List of DNS service discovery configurations.
dns_sd_configs:
[ - <dns_sd_config> ... ]
# List of EC2 service discovery configurations.
ec2_sd_configs:
[ - <ec2_sd_config> ... ]
# List of file service discovery configurations.
file_sd_configs:
[ - <file_sd_config> ... ]
# List of GCE service discovery configurations.
gce_sd_configs:
[ - <gce_sd_config> ... ]
# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
[ - <kubernetes_sd_config> ... ]
# List of Marathon service discovery configurations.
marathon_sd_configs:
[ - <marathon_sd_config> ... ]
# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
[ - <nerve_sd_config> ... ]
# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
[ - <serverset_sd_config> ... ]
# List of Triton service discovery configurations.
triton_sd_configs:
[ - <triton_sd_config> ... ]
# List of labeled statically configured Alertmanagers.
static_configs:
[ - <static_config> ... ]
# List of Alertmanager relabel configurations.
relabel_configs:
[ - <relabel_config> ... ]
~~~
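A minimal static setup, with a placeholder Alertmanager address:
~~~
alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets:
            - 'alertmanager.example.com:9093'
~~~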
### `<remote_write>`
`write_relabel_configs` is relabeling applied to samples before sending them to the remote endpoint. Write relabeling is applied after external labels. This could be used to limit which samples are sent.
There is a [small demo](https://github.com/prometheus/prometheus/blob/release-2.13/documentation/examples/remote_storage) of how to use this functionality.
~~~
# The URL of the endpoint to send samples to.
url: <string>
# Timeout for requests to the remote write endpoint.
[ remote_timeout: <duration> | default = 30s ]
# List of remote write relabel configurations.
write_relabel_configs:
[ - <relabel_config> ... ]
# Sets the `Authorization` header on every remote write request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
[ username: <string> ]
[ password: <string> ]
[ password_file: <string> ]
# Sets the `Authorization` header on every remote write request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]
# Sets the `Authorization` header on every remote write request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]
# Configures the remote write request's TLS settings.
tls_config:
[ <tls_config> ]
# Optional proxy URL.
[ proxy_url: <string> ]
# Configures the queue used to write to remote storage.
queue_config:
# Number of samples to buffer per shard before we block reading of more
# samples from the WAL. It is recommended to have enough capacity in each
# shard to buffer several requests to keep throughput up while processing
# occasional slow remote requests.
[ capacity: <int> | default = 500 ]
# Maximum number of shards, i.e. amount of concurrency.
[ max_shards: <int> | default = 1000 ]
# Minimum number of shards, i.e. amount of concurrency.
[ min_shards: <int> | default = 1 ]
# Maximum number of samples per send.
[ max_samples_per_send: <int> | default = 100]
# Maximum time a sample will wait in buffer.
[ batch_send_deadline: <duration> | default = 5s ]
# Initial retry delay. Gets doubled for every retry.
[ min_backoff: <duration> | default = 30ms ]
# Maximum retry delay.
[ max_backoff: <duration> | default = 100ms ]
~~~
There is a list of [integrations](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) with this feature.
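As an illustration, a remote-write entry that only forwards aggregated recording-rule series; the URL and the `job:` naming convention are assumptions.
~~~
remote_write:
  - url: 'https://remote-storage.example.com/api/v1/write'
    remote_timeout: 30s
    write_relabel_configs:
      # Forward only series whose names follow the "job:" recording-rule convention.
      - source_labels: [__name__]
        regex: 'job:.*'
        action: keep
~~~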
### `<remote_read>`
~~~
# The URL of the endpoint to query from.
url: <string>
# An optional list of equality matchers which have to be
# present in a selector to query the remote read endpoint.
required_matchers:
[ <labelname>: <labelvalue> ... ]
# Timeout for requests to the remote read endpoint.
[ remote_timeout: <duration> | default = 1m ]
# Whether reads should be made for queries for time ranges that
# the local storage should have complete data for.
[ read_recent: <boolean> | default = false ]
# Sets the `Authorization` header on every remote read request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
[ username: <string> ]
[ password: <string> ]
[ password_file: <string> ]
# Sets the `Authorization` header on every remote read request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]
# Sets the `Authorization` header on every remote read request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]
# Configures the remote read request's TLS settings.
tls_config:
[ <tls_config> ]
# Optional proxy URL.
[ proxy_url: <string> ]
~~~
There is a list of [integrations](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) with this feature.
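A corresponding remote-read sketch, with a placeholder endpoint and an optional required matcher:
~~~
remote_read:
  - url: 'https://remote-storage.example.com/api/v1/read'
    read_recent: false
    required_matchers:
      env: 'production'    # only selectors containing env="production" hit the remote endpoint
~~~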