# Prometheus Adapter
Installs the Prometheus Adapter for the Custom Metrics API. Custom metrics are used in Kubernetes by Horizontal Pod Autoscalers to scale workloads based on your own metrics pulled from an external metrics provider such as Prometheus. This chart complements the metrics-server chart, which provides resource-only metrics.
## Prerequisites

- Kubernetes 1.14+
## Get Repo Info

```console
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```

See `helm repo` for command documentation.
## Install Chart

```console
# Helm 3
$ helm install [RELEASE_NAME] prometheus-community/prometheus-adapter

# Helm 2
$ helm install --name [RELEASE_NAME] prometheus-community/prometheus-adapter
```

See configuration below.

See `helm install` for command documentation.
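As a concrete sketch, installing with overridden values might look like this; the release name `my-adapter`, the `monitoring` namespace, and `my-values.yaml` are placeholders:

```console
# Helm 3 -- install with a custom values file (names are placeholders)
$ helm install my-adapter prometheus-community/prometheus-adapter \
    --namespace monitoring --create-namespace \
    -f my-values.yaml
```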
## Uninstall Chart

```console
# Helm 3
$ helm uninstall [RELEASE_NAME]

# Helm 2
$ helm delete --purge [RELEASE_NAME]
```

This removes all the Kubernetes components associated with the chart and deletes the release.

See `helm uninstall` for command documentation.
## Upgrading Chart

```console
# Helm 3 or 2
$ helm upgrade [RELEASE_NAME] [CHART] --install
```

See `helm upgrade` for command documentation.
## Configuration

See Customizing the Chart Before Installing. To see all configurable options with detailed comments, visit the chart's `values.yaml`, or run these configuration commands:

```console
# Helm 2
$ helm inspect values prometheus-community/prometheus-adapter

# Helm 3
$ helm show values prometheus-community/prometheus-adapter
```
### Prometheus Service Endpoint

To use the chart, ensure that `prometheus.url` and `prometheus.port` are configured with the correct Prometheus service endpoint. If Prometheus is exposed under HTTPS, the host's CA bundle must be exposed to the container using `extraVolumes` and `extraVolumeMounts`.
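For illustration, a minimal `values.yaml` sketch for an HTTPS-exposed Prometheus might look like the following; the service URL and the `prometheus-ca` secret name are assumptions for your environment:

```yaml
prometheus:
  url: https://prometheus-server.monitoring.svc   # placeholder Prometheus service URL
  port: 443

# Mount the CA bundle so the adapter can verify the HTTPS endpoint
extraVolumes:
  - name: prometheus-ca
    secret:
      secretName: prometheus-ca                   # hypothetical secret containing ca.crt
extraVolumeMounts:
  - name: prometheus-ca
    mountPath: /etc/ssl/certs/prometheus-ca.crt
    subPath: ca.crt
    readOnly: true
```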
### Adapter Rules

Additionally, the chart comes with a set of default rules out of the box, but they may pull in too many metrics or not map them correctly for your needs. Therefore, it is recommended to populate `rules.custom` with a list of rules (see the config document for the proper format).
### Horizontal Pod Autoscaler Metrics
Finally, to configure your Horizontal Pod Autoscaler to use the custom metric, see the custom metrics section of the HPA walkthrough.
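As a minimal sketch, an HPA that scales on a pod-level custom metric served by the adapter could look like this; the Deployment name `my-app`, the metric name `my_custom_metric` (matching the example rule below), and the target value are assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                     # placeholder workload
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: my_custom_metric     # exposed by the adapter via rules.custom
        target:
          type: AverageValue
          averageValue: "100"
```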
The Prometheus Adapter can serve three different metrics APIs:
### Custom Metrics

Enabling this option will cause custom metrics to be served at `/apis/custom.metrics.k8s.io/v1beta1`. Enabled by default when `rules.default` is `true`, but can be customized by populating `rules.custom`:
```yaml
rules:
  custom:
  - seriesQuery: '{__name__=~"^some_metric_count$"}'
    resources:
      template: <<.Resource>>
    name:
      matches: ""
      as: "my_custom_metric"
    metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
```
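Once the adapter is running, you can verify which custom metrics it serves by querying the API directly (piping through `jq` is optional):

```console
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
```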
### External Metrics

Enabling this option will cause external metrics to be served at `/apis/external.metrics.k8s.io/v1beta1`. Can be enabled by populating `rules.external`:
```yaml
rules:
  external:
  - seriesQuery: '{__name__=~"^some_metric_count$"}'
    resources:
      template: <<.Resource>>
    name:
      matches: ""
      as: "my_external_metric"
    metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
```
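In an HPA, external metrics are consumed through a `type: External` entry. As a sketch, only the `metrics` stanza changes relative to the custom-metrics HPA shown earlier; the metric name and target value are assumptions:

```yaml
# HPA spec fragment (sketch): External metric entry
metrics:
  - type: External
    external:
      metric:
        name: my_external_metric     # served by the adapter via rules.external
      target:
        type: Value
        value: "100"
```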
### Resource Metrics

Enabling this option will cause resource metrics to be served at `/apis/metrics.k8s.io/v1beta1`. Resource metrics allow pod CPU and memory metrics to be used in Horizontal Pod Autoscalers, as well as the `kubectl top` command. Can be enabled by populating `rules.resource`:
```yaml
rules:
  resource:
    cpu:
      containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, container!=""}[3m])) by (<<.GroupBy>>)
      nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[3m])) by (<<.GroupBy>>)
      resources:
        overrides:
          instance:
            resource: node
          namespace:
            resource: namespace
          pod:
            resource: pod
      containerLabel: container
    memory:
      containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>, container!=""}) by (<<.GroupBy>>)
      nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)
      resources:
        overrides:
          instance:
            resource: node
          namespace:
            resource: namespace
          pod:
            resource: pod
      containerLabel: container
    window: 3m
```
**NOTE:** Setting a value for `rules.resource` will also deploy the resource metrics API service, providing the same functionality as metrics-server. As such, it is not possible to deploy them both in the same cluster.
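With `rules.resource` enabled (and metrics-server not installed), the resource metrics pipeline can be verified with `kubectl top`:

```console
$ kubectl top nodes
$ kubectl top pods --all-namespaces
```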