Create Alertmanager secret in pre-install hook

pull/637/head
Arvind Iyengar 2020-09-10 17:27:40 -07:00
parent 7425a1316b
commit d03ffe81df
2 changed files with 266 additions and 26 deletions

@@ -39,3 +39,4 @@ All notable changes from the upstream Prometheus Operator chart will be added to
- Updated default Grafana `deploymentStrategy` to `Recreate` to prevent deployments from being stuck on upgrade if a PV is attached to Grafana
- Modified the default `<serviceMonitor|podMonitor|rule>SelectorNilUsesHelmValues` to default to `false`. As a result, we look for all CRs with any labels in all namespaces by default rather than just the ones tagged with the label `release: rancher-monitoring`.
- Modified the default images used by the `rancher-monitoring` chart to point to Rancher mirrors of the original images from upstream.
- Modified the behavior of the chart to create the Alertmanager Config Secret via a pre-install hook instead of managing it through the normal Helm lifecycle. The benefit of this approach is that changes made to the Config Secret on a live cluster are never overridden on a `helm upgrade`, since the secret is only created on a `helm install` and is never modified afterwards. If you would like the secret to be cleaned up on a `helm uninstall`, enable `alertmanager.secret.cleanupOnUninstall`; this is disabled by default to prevent the loss of alerting configuration on an uninstall.
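
To opt in to cleanup on uninstall, the flag can be set at install time. A minimal sketch, assuming a placeholder release name, chart path, and namespace (none of these are taken from this commit):

```bash
# Opt in to deleting the Alertmanager Config Secret on `helm uninstall`.
# `rancher-monitoring`, `./rancher-monitoring`, and `cattle-monitoring-system`
# are illustrative placeholders only.
helm install rancher-monitoring ./rancher-monitoring \
  --namespace cattle-monitoring-system \
  --set alertmanager.secret.cleanupOnUninstall=true
```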

@@ -183,7 +183,18 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/REA
| `alertmanager.alertmanagerSpec.image.tag` | Tag of Alertmanager container image to be deployed. | `v0.20.0` |
| `alertmanager.alertmanagerSpec.listenLocal` | ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. | `false` |
| `alertmanager.alertmanagerSpec.logFormat` | Log format for Alertmanager to be configured with. | `logfmt` |
@@ -465,17 +495,23 @@
@@ -415,6 +445,10 @@
| `alertmanager.podDisruptionBudget.enabled` | If true, create a pod disruption budget for Alertmanager pods. The created resource cannot be modified once created - it must be deleted to perform a change | `false` |
| `alertmanager.podDisruptionBudget.maxUnavailable` | Maximum number / percentage of pods that may be made unavailable | `""` |
| `alertmanager.podDisruptionBudget.minAvailable` | Minimum number / percentage of pods that should remain scheduled | `1` |
+| `alertmanager.secret.cleanupOnUninstall` | Whether to trigger a job that deletes the Alertmanager Config Secret on a `helm uninstall`. Disabled by default to prevent the loss of alerting configuration on an uninstall. | `false` |
+| `alertmanager.secret.image.pullPolicy` | Image pull policy for job(s) related to alertmanager config secret's lifecycle | `IfNotPresent` |
+| `alertmanager.secret.image.repository` | Repository to use for job(s) related to alertmanager config secret's lifecycle | `rancher/hyperkube` |
+| `alertmanager.secret.image.tag` | Tag to use for job(s) related to alertmanager config secret's lifecycle | `v1.18.6-rancher1` |
| `alertmanager.secret.annotations` | Alertmanager Secret annotations | `{}` |
| `alertmanager.service.annotations` | Alertmanager Service annotations | `{}` |
| `alertmanager.service.clusterIP` | Alertmanager service clusterIP IP | `""` |
@@ -465,17 +499,23 @@
| `grafana.namespaceOverride` | Override the deployment namespace of grafana | `""` (`Release.Namespace`) |
| `grafana.rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires rbac.pspEnabled) | `true` |
| `grafana.service.portName` | Allow to customize Grafana service portname. Will be used by servicemonitor as well | `service` |
@@ -207,7 +218,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/REA
### Exporters
| Parameter | Description | Default |
@@ -649,7 +685,7 @@
@@ -649,7 +689,7 @@
The Grafana chart is more feature-rich than this chart - it contains a sidecar that is able to load data sources and dashboards from configmaps deployed into the same cluster. For more information check out the [documentation for the chart](https://github.com/helm/charts/tree/master/stable/grafana)
### Coreos CRDs
@@ -706,6 +717,212 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/tem
version: {{ .Values.alertmanager.alertmanagerSpec.image.tag }}
{{- end }}
replicas: {{ .Values.alertmanager.alertmanagerSpec.replicas }}
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/templates/alertmanager/cleanupSecret.yaml packages/rancher-monitoring/charts/templates/alertmanager/cleanupSecret.yaml
--- packages/rancher-monitoring/charts-original/templates/alertmanager/cleanupSecret.yaml
+++ packages/rancher-monitoring/charts/templates/alertmanager/cleanupSecret.yaml
@@ -0,0 +1,82 @@
+{{- if and (.Values.alertmanager.enabled) (not .Values.alertmanager.alertmanagerSpec.useExistingSecret) (.Values.alertmanager.secret.cleanupOnUninstall) }}
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+ namespace: {{ template "prometheus-operator.namespace" . }}
+ labels:
+{{ include "prometheus-operator.labels" . | indent 4 }}
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": post-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "5"
+spec:
+ template:
+ metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+ labels: {{ include "prometheus-operator.labels" . | nindent 8 }}
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ spec:
+ serviceAccountName: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+ containers:
+ - name: delete-secret
+ image: {{ template "system_default_registry" . }}{{ .Values.alertmanager.secret.image.repository }}:{{ .Values.alertmanager.secret.image.tag }}
+ imagePullPolicy: {{ .Values.alertmanager.secret.image.pullPolicy }}
+ command:
+ - /bin/sh
+ - -c
+ - >
+ if kubectl get secret -n {{ template "prometheus-operator.namespace" . }} alertmanager-{{ template "prometheus-operator.fullname" . }}-alertmanager > /dev/null 2>&1; then
+ kubectl delete secret -n {{ template "prometheus-operator.namespace" . }} alertmanager-{{ template "prometheus-operator.fullname" . }}-alertmanager
+ fi;
+ restartPolicy: OnFailure
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+ labels:
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": post-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "3"
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - secrets
+ verbs: ['get', 'delete']
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+ labels:
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": post-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "3"
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+subjects:
+- kind: ServiceAccount
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+ namespace: {{ template "prometheus-operator.namespace" . }}
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-post-delete
+ namespace: {{ template "prometheus-operator.namespace" . }}
+ labels:
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": post-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "3"
+{{- end }}
\ No newline at end of file
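
With `alertmanager.secret.cleanupOnUninstall` left at its default of `false`, the hook above is never rendered and the Config Secret survives a `helm uninstall`. Removing it by hand would look roughly like the job's own command; the namespace and secret name below are placeholders following the `alertmanager-<fullname>-alertmanager` pattern:

```bash
# Manual equivalent of the post-delete Job, for the default case where the
# secret is intentionally left behind after `helm uninstall`.
# Namespace and secret name are placeholders for a real release.
kubectl delete secret -n cattle-monitoring-system \
  alertmanager-rancher-monitoring-alertmanager
```
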
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/templates/alertmanager/secret.yaml packages/rancher-monitoring/charts/templates/alertmanager/secret.yaml
--- packages/rancher-monitoring/charts-original/templates/alertmanager/secret.yaml
+++ packages/rancher-monitoring/charts/templates/alertmanager/secret.yaml
@@ -1,11 +1,19 @@
{{- if and (.Values.alertmanager.enabled) (not .Values.alertmanager.alertmanagerSpec.useExistingSecret) }}
+{{- if .Release.IsInstall }}
+{{- $secretName := (printf "alertmanager-%s-alertmanager" (include "prometheus-operator.fullname" .)) }}
+{{- if (lookup "v1" "Secret" (include "prometheus-operator.namespace" .) $secretName) }}
+{{- required (printf "Cannot overwrite existing secret %s in namespace %s." $secretName (include "prometheus-operator.namespace" .)) "" }}
+{{- end }}{{- end }}
apiVersion: v1
kind: Secret
metadata:
- name: alertmanager-{{ template "prometheus-operator.fullname" . }}-alertmanager
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
namespace: {{ template "prometheus-operator.namespace" . }}
-{{- if .Values.alertmanager.secret.annotations }}
annotations:
+ "helm.sh/hook": pre-install
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "4"
+{{- if .Values.alertmanager.secret.annotations }}
{{ toYaml .Values.alertmanager.secret.annotations | indent 4 }}
{{- end }}
labels:
@@ -20,4 +28,93 @@
{{- range $key, $val := .Values.alertmanager.templateFiles }}
{{ $key }}: {{ $val | b64enc | quote }}
{{- end }}
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ namespace: {{ template "prometheus-operator.namespace" . }}
+ labels:
+{{ include "prometheus-operator.labels" . | indent 4 }}
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": pre-install
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "5"
+spec:
+ template:
+ metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ labels: {{ include "prometheus-operator.labels" . | nindent 8 }}
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ spec:
+ serviceAccountName: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ containers:
+ - name: copy-pre-install-secret
+ image: {{ template "system_default_registry" . }}{{ .Values.alertmanager.secret.image.repository }}:{{ .Values.alertmanager.secret.image.tag }}
+ imagePullPolicy: {{ .Values.alertmanager.secret.image.pullPolicy }}
+ command:
+ - /bin/sh
+ - -c
+ - >
+ if kubectl get secret -n {{ template "prometheus-operator.namespace" . }} alertmanager-{{ template "prometheus-operator.fullname" . }}-alertmanager > /dev/null 2>&1; then
+ echo "Secret already exists"
+ exit 1
+ fi;
+ kubectl patch secret -n {{ template "prometheus-operator.namespace" . }} --dry-run -o yaml
+ alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ -p '{{ printf "{\"metadata\":{\"name\": \"alertmanager-%s-alertmanager\"}}" (include "prometheus-operator.fullname" .) }}'
+ | kubectl apply -f -;
+ kubectl annotate secret -n {{ template "prometheus-operator.namespace" . }}
+ alertmanager-{{ template "prometheus-operator.fullname" . }}-alertmanager
+ helm.sh/hook- helm.sh/hook-delete-policy- helm.sh/hook-weight-;
+ restartPolicy: OnFailure
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ labels:
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": pre-install
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "3"
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - secrets
+ verbs: ['create', 'get', 'patch']
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ labels:
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": pre-install
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "3"
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+subjects:
+- kind: ServiceAccount
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ namespace: {{ template "prometheus-operator.namespace" . }}
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: alertmanager-{{ template "prometheus-operator.fullname" . }}-pre-install
+ namespace: {{ template "prometheus-operator.namespace" . }}
+ labels:
+ app: {{ template "prometheus-operator.name" . }}-alertmanager
+ annotations:
+ "helm.sh/hook": pre-install
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+ "helm.sh/hook-weight": "3"
{{- end }}
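
Taken together, the pre-install Job above amounts to a copy-and-rename: it refuses to clobber an existing Config Secret, re-emits the hook-created `-pre-install` secret under the final `-alertmanager` name, and strips the `helm.sh/hook*` annotations so Helm never treats the copy as a hook resource. A rough shell equivalent, with placeholder namespace and names:

```bash
# Sketch of what the pre-install Job does; namespace and names are placeholders.
NS=cattle-monitoring-system
SRC=alertmanager-rancher-monitoring-pre-install
DST=alertmanager-rancher-monitoring-alertmanager

# Refuse to overwrite an existing Alertmanager Config Secret.
if kubectl get secret -n "$NS" "$DST" > /dev/null 2>&1; then
  echo "Secret already exists"; exit 1
fi

# Re-emit the pre-install secret under its final name and create it.
# (Newer kubectl releases expect --dry-run=client rather than the bare flag.)
kubectl patch secret -n "$NS" "$SRC" --dry-run -o yaml \
  -p '{"metadata":{"name":"'"$DST"'"}}' | kubectl apply -f -

# Drop the hook annotations so Helm does not manage the copy as a hook resource.
kubectl annotate secret -n "$NS" "$DST" \
  helm.sh/hook- helm.sh/hook-delete-policy- helm.sh/hook-weight-
```
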
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/templates/exporters/core-dns/servicemonitor.yaml packages/rancher-monitoring/charts/templates/exporters/core-dns/servicemonitor.yaml
--- packages/rancher-monitoring/charts-original/templates/exporters/core-dns/servicemonitor.yaml
+++ packages/rancher-monitoring/charts/templates/exporters/core-dns/servicemonitor.yaml
@@ -2199,7 +2416,29 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
ingress:
enabled: false
@@ -334,7 +672,7 @@
@@ -208,6 +546,21 @@
## Configuration for Alertmanager secret
##
secret:
+
+ # Should the Alertmanager Config Secret be cleaned up on an uninstall?
+ # This is set to false by default to prevent the loss of alerting configuration on an uninstall
+ # Only used if Alertmanager is deployed and alertmanager.alertmanagerSpec.useExistingSecret=false
+ #
+ cleanupOnUninstall: false
+
+ # The image used to manage the Alertmanager Config Secret's lifecycle
+ # Only used if Alertmanager is deployed and alertmanager.alertmanagerSpec.useExistingSecret=false
+ #
+ image:
+ repository: rancher/rancher-agent
+ tag: v2.4.8
+ pullPolicy: IfNotPresent
+
annotations: {}
## Configuration for creating an Ingress that will map to each Alertmanager replica service
@@ -334,7 +687,7 @@
## Image of Alertmanager
##
image:
@@ -2208,7 +2447,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
tag: v0.20.0
## If true then the user will be responsible to provide a secret with alertmanager configuration
@@ -409,9 +747,13 @@
@@ -409,9 +762,13 @@
## Define resources requests and limits for single Pods.
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
@@ -2225,7 +2464,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node.
## The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided.
@@ -486,6 +828,9 @@
@@ -486,6 +843,9 @@
enabled: true
namespaceOverride: ""
@@ -2235,7 +2474,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Deploy default dashboards.
##
defaultDashboardsEnabled: true
@@ -529,6 +874,7 @@
@@ -529,6 +889,7 @@
dashboards:
enabled: true
label: grafana_dashboard
@@ -2243,7 +2482,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Annotations for Grafana dashboard configmaps
##
@@ -547,6 +893,7 @@
@@ -547,6 +908,7 @@
## ref: https://git.io/fjaBS
createPrometheusReplicasDatasources: false
label: grafana_datasource
@@ -2251,7 +2490,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
extraConfigmapMounts: []
# - name: certs-configmap
@@ -574,6 +921,19 @@
@@ -574,6 +936,19 @@
##
service:
portName: service
@@ -2271,7 +2510,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## If true, create a serviceMonitor for grafana
##
@@ -599,6 +959,14 @@
@@ -599,6 +974,14 @@
# targetLabel: nodename
# replacement: $1
# action: replace
@@ -2286,7 +2525,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Component scraping the kube api server
##
@@ -755,7 +1123,7 @@
@@ -755,7 +1138,7 @@
## Component scraping the kube controller manager
##
kubeControllerManager:
@@ -2295,7 +2534,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## If your kube controller manager is not deployed as a pod, specify IPs it can be found on
##
@@ -888,7 +1256,7 @@
@@ -888,7 +1271,7 @@
## Component scraping etcd
##
kubeEtcd:
@@ -2304,7 +2543,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## If your etcd is not deployed as a pod, specify IPs it can be found on
##
@@ -948,7 +1316,7 @@
@@ -948,7 +1331,7 @@
## Component scraping kube scheduler
##
kubeScheduler:
@@ -2313,7 +2552,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## If your kube scheduler is not deployed as a pod, specify IPs it can be found on
##
@@ -1001,7 +1369,7 @@
@@ -1001,7 +1384,7 @@
## Component scraping kube proxy
##
kubeProxy:
@@ -2322,7 +2561,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## If your kube proxy is not deployed as a pod, specify IPs it can be found on
##
@@ -1075,6 +1443,13 @@
@@ -1075,6 +1458,13 @@
create: true
podSecurityPolicy:
enabled: true
@@ -2336,7 +2575,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Deploy node exporter as a daemonset to all nodes
##
@@ -1124,6 +1499,16 @@
@@ -1124,6 +1514,16 @@
extraArgs:
- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
- --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
@@ -2353,7 +2592,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Manages Prometheus and Alertmanager components
##
@@ -1137,7 +1522,7 @@
@@ -1137,7 +1537,7 @@
tlsProxy:
enabled: true
image:
@@ -2362,7 +2601,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
tag: v1.5.2
pullPolicy: IfNotPresent
resources: {}
@@ -1154,7 +1539,7 @@
@@ -1154,7 +1554,7 @@
patch:
enabled: true
image:
@@ -2371,7 +2610,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
tag: v1.2.1
pullPolicy: IfNotPresent
resources: {}
@@ -1280,13 +1665,13 @@
@@ -1280,13 +1680,13 @@
## Resource limits & requests
##
@@ -2392,7 +2631,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
# Required for use in managed kubernetes clusters (such as AWS EKS) with custom CNI (such as calico),
# because control-plane managed by AWS cannot communicate with pods' IP CIDR and admission webhooks are not working
@@ -1330,20 +1715,20 @@
@@ -1330,20 +1730,20 @@
## Prometheus-operator image
##
image:
@@ -2416,7 +2655,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
tag: v0.38.1
## Set the prometheus config reloader side-car CPU limit
@@ -1354,13 +1739,6 @@
@@ -1354,13 +1754,6 @@
##
configReloaderMemory: 25Mi
@@ -2430,7 +2669,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Deploy a Prometheus instance
##
prometheus:
@@ -1577,7 +1955,7 @@
@@ -1577,7 +1970,7 @@
## Image of Prometheus.
##
image:
@@ -2439,7 +2678,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
tag: v2.18.1
## Tolerations for use with node taints
@@ -1628,6 +2006,11 @@
@@ -1628,6 +2021,11 @@
##
externalUrl: ""
@@ -2451,7 +2690,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## Define which Nodes the Pods are scheduled on.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
@@ -1660,7 +2043,7 @@
@@ -1660,7 +2058,7 @@
## prometheus resource to be created with selectors based on values in the helm deployment,
## which will also match the PrometheusRule resources created
##
@@ -2460,7 +2699,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## PrometheusRules to be selected for target discovery.
## If {}, select all ServiceMonitors
@@ -1685,7 +2068,7 @@
@@ -1685,7 +2083,7 @@
## prometheus resource to be created with selectors based on values in the helm deployment,
## which will also match the servicemonitors created
##
@@ -2469,7 +2708,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## ServiceMonitors to be selected for target discovery.
## If {}, select all ServiceMonitors
@@ -1705,7 +2088,7 @@
@@ -1705,7 +2103,7 @@
## prometheus resource to be created with selectors based on values in the helm deployment,
## which will also match the podmonitors created
##
@@ -2478,7 +2717,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
## PodMonitors to be selected for target discovery.
## If {}, select all PodMonitors
@@ -1802,9 +2185,13 @@
@@ -1802,9 +2200,13 @@
## Resource limits & requests
##