mirror of https://git.rancher.io/charts
Merge pull request #507 from rancher/fix_pushprox_image
Minor updates to rancher-monitoring and rancher-pushprox chart
commit 2c23c16695
@@ -11,8 +11,8 @@ All notable changes from the upstream Prometheus Operator chart will be added to
 - Exposed `prometheus.prometheusSpec.ignoreNamespaceSelectors` in values.yaml and set it to `true` by default. This value instructs the default Prometheus server deployed with this chart to ignore the `namespaceSelector` field within any created ServiceMonitor or PodMonitor CRs that it selects. This prevents ServiceMonitors and PodMonitors from configuring the Prometheus scrape configuration to monitor resources outside the namespace that they are deployed in; if a user needs to have one ServiceMonitor / PodMonitor monitor resources within several namespaces, they will need to either disable this default option or create one ServiceMonitor / PodMonitor CR per namespace that they would like to monitor. Relevant fields were also updated in the default README.md
 - Added `grafana.sidecar.dashboards.searchNamespace` to values.yaml with a default value of `grafana-dashboards`. The namespace provided should contain all ConfigMaps with the label `grafana_dashboard` and will be searched by the Grafana Dashboards sidecar for updates. The namespace specified is also created along with this deployment. All default dashboard ConfigMaps have been relocated from the deployment namespace to the namespace specified
 - Added `grafana.sidecar.datasources.searchNamespace` to values.yaml with a default value of `grafana-datasources`. The namespace provided should contain all ConfigMaps with the label `grafana_datasource` and will be searched by the Grafana Datasources sidecar for updates. The namespace specified is also created along with this deployment. All default datasource ConfigMaps have been relocated from the deployment namespace to the namespace specified
-- Added `monitoring-admin`, `monitoring-edit`, and `monitoring-view` default `ClusterRoles` to allow admins to assign roles to users to interact with Prometheus Operator CRs. In a typical RBAC setup, you might want to assign specific users `monitoring-edit` or `monitoring-view` within a specific namespace to allow them to set up `ServiceMonitors` / `PodMonitors` that only monitor resources within that namespace. If `.Values.monitoringRoles.aggregateRolesForRBAC` is enabled, these ClusterRoles will aggregate into the respective default ClusterRoles provided by Kubernetes
-- Added `grafana-config-edit` and `grafana-config-view` default `ClusterRoles` to allow admins to assign roles to users to interact with Secrets or ConfigMaps utilized by Grafana. In a typical RBAC setup, you might want to assign the following users with these permissions:
+- Added `monitoring-admin`, `monitoring-edit`, and `monitoring-view` default `ClusterRoles` to allow admins to assign roles to users to interact with Prometheus Operator CRs. These can be enabled by setting `.Values.global.rbac.userRoles.create` (default: `true`). In a typical RBAC setup, you might want to assign specific users `monitoring-edit` or `monitoring-view` within a specific namespace to allow them to set up `ServiceMonitors` / `PodMonitors` that only monitor resources within that namespace. If `.Values.global.rbac.userRoles.aggregateToDefaultRoles` is enabled, these ClusterRoles will aggregate into the respective default ClusterRoles provided by Kubernetes
+- Added `grafana-config-edit` and `grafana-config-view` default `ClusterRoles` to allow admins to assign roles to users to interact with Secrets or ConfigMaps utilized by Grafana. These can be enabled by setting `.Values.global.rbac.userRoles.create` (default: `true`). In a typical RBAC setup, you might want to assign the following users with these permissions:
   - User who needs to be able to persist custom Grafana dashboards from the Grafana UI but does not need to be able to interact with Prometheus CRs: `grafana-config-edit` within the `.Values.grafana.sidecar.dashboards.searchNamespace` (default `grafana-dashboards`) namespace
   - User who needs to be able to persist new Grafana datasources but does not need to be able to interact with Prometheus CRs: `grafana-config-edit` within the `.Values.grafana.sidecar.datasources.searchNamespace` (default `grafana-datasources`) namespace
 - Added default resource limits for `Prometheus Operator`, `Prometheus`, `AlertManager`, `Grafana`, `kube-state-metrics`, `node-exporter`
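As an illustration of the two sidecar settings above, a dashboard ConfigMap now lives in the search namespace rather than the deployment namespace. A minimal sketch (the ConfigMap name and dashboard JSON are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Must be created in .Values.grafana.sidecar.dashboards.searchNamespace
  name: my-custom-dashboard        # placeholder name
  namespace: grafana-dashboards
  labels:
    grafana_dashboard: "1"         # the label the dashboards sidecar watches for
data:
  my-dashboard.json: |-
    {"title": "My Dashboard", "panels": [], "schemaVersion": 16}
```

A datasource ConfigMap works the same way, using the `grafana_datasource` label in the `grafana-datasources` namespace.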
@@ -20,7 +20,7 @@ All notable changes from the upstream Prometheus Operator chart will be added to
 - Updated the chart name from `prometheus-operator` to `rancher-monitoring` and added the `io.rancher.certified: rancher` annotation to `Chart.yaml`
 - Modified the default `node-exporter` port from `9100` to `9796`
 - Modified the default `nameOverride` to `rancher-monitoring`. This change is necessary as the Prometheus Adapter's default URL (`http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc`) is based off of the value used here; if modified, the default Adapter URL must also be modified
-- Modified the default `namespaceOverride` to `monitoring-system`. This change is necessary as the Prometheus Adapter's default URL (`http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc`) is based off of the value used here; if modified, the default Adapter URL must also be modified
+- Modified the default `namespaceOverride` to `cattle-monitoring-system`. This change is necessary as the Prometheus Adapter's default URL (`http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc`) is based off of the value used here; if modified, the default Adapter URL must also be modified
 - Configured some default values for `grafana.service` values and exposed them in the default README.md
 - The default namespaces for the following ServiceMonitors were changed from the deployment namespace to allow them to continue to monitor metrics when `prometheus.prometheusSpec.ignoreNamespaceSelectors` is enabled:
   - `core-dns`: `kube-system`
@@ -32,4 +32,6 @@ All notable changes from the upstream Prometheus Operator chart will be added to
 - `kube-controller-manager` metrics exporter
 - `kube-etcd` metrics exporter
 - `kube-scheduler` metrics exporter
 - `kube-proxy` metrics exporter
+- Updated the default Grafana `deploymentStrategy` to `Recreate` to prevent deployments from being stuck on upgrade if a PV is attached to Grafana
+- Modified `<serviceMonitor|podMonitor|rule>SelectorNilUsesHelmValues` to default to `false`. As a result, we look for all CRs with any labels in all namespaces by default, rather than just the ones tagged with the label `release: rancher-monitoring`.
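As a sketch of what the `SelectorNilUsesHelmValues` change above means in practice: a ServiceMonitor like the following is now discovered even though it carries no `release: rancher-monitoring` label, and, per the `ignoreNamespaceSelectors` change earlier in this changelog, it can only scrape targets in its own namespace (all names below are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                     # illustrative name
  namespace: my-app-namespace      # discovered from any namespace
  labels:
    team: my-team                  # no release label required anymore
spec:
  # Any namespaceSelector set here is ignored by the default Prometheus,
  # so only Services in my-app-namespace are scraped.
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
```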
@@ -8,7 +8,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/Cha
- and management of Prometheus instances.
+annotations:
+  catalog.cattle.io/certified: rancher
+  catalog.cattle.io/namespace: monitoring-system
+  catalog.cattle.io/namespace: cattle-monitoring-system
+  catalog.cattle.io/release-name: rancher-monitoring
+  catalog.cattle.io/ui-component: monitoring
+description: A Rancher chart that modifies the upstream Prometheus Operator chart, which provides easy monitoring definitions for Kubernetes services and the deployment and management of Prometheus instances, and enables Prometheus Adapter on a default Prometheus instance.
@@ -43,62 +43,71 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/Cha
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/README.md packages/rancher-monitoring/charts/README.md
--- packages/rancher-monitoring/charts-original/README.md
+++ packages/rancher-monitoring/charts/README.md
@@ -33,7 +33,7 @@
@@ -2,6 +2,8 @@

## Prerequisites
- Kubernetes 1.10+ with Beta APIs
- - Helm 2.12+ (If using Helm < 2.14, [see below for CRD workaround](#Helm-fails-to-create-CRDs))
+ - Helm 2.12+
Installs [prometheus-operator](https://github.com/coreos/prometheus-operator) to create/configure/manage Prometheus clusters atop Kubernetes. This chart includes multiple components and is suitable for a variety of use-cases.

## Installing the Chart
+You must install the Prometheus Operator CRDs first using the `rancher-monitoring-crd` chart before installing this chart.
+
The default installation is intended to suit monitoring a kubernetes cluster the chart is deployed onto. It closely matches the kube-prometheus project.
- [prometheus-operator](https://github.com/coreos/prometheus-operator)
- [prometheus](https://prometheus.io/)
@@ -9,6 +11,12 @@
- [node-exporter](https://github.com/helm/charts/tree/master/stable/prometheus-node-exporter)
- [kube-state-metrics](https://github.com/helm/charts/tree/master/stable/kube-state-metrics)
- [grafana](https://github.com/helm/charts/tree/master/stable/grafana)
+- [prometheus-adapter](https://github.com/helm/charts/tree/master/stable/prometheus-adapter)
+- [rancher-pushprox](https://github.com/rancher/dev-charts/tree/master/packages/rancher-pushprox) charts to monitor internal kubernetes components for k3s, rke, and kubeAdm clusters
+  - kube-scheduler
+  - kube-controller-manager
+  - kube-proxy
+  - kube-etcd (only rke and kubeAdm)
- service monitors to scrape internal kubernetes components
  - kube-apiserver
  - kube-scheduler
@@ -136,6 +144,30 @@
@@ -57,17 +57,6 @@
The following tables list the configurable parameters of the prometheus-operator chart and their default values.

The command removes all the Kubernetes components associated with the chart and deletes the release.

-CRDs created by this chart are not removed by default and should be manually cleaned up:
-
-```console
-kubectl delete crd prometheuses.monitoring.coreos.com
-kubectl delete crd prometheusrules.monitoring.coreos.com
-kubectl delete crd servicemonitors.monitoring.coreos.com
-kubectl delete crd podmonitors.monitoring.coreos.com
-kubectl delete crd alertmanagers.monitoring.coreos.com
-kubectl delete crd thanosrulers.monitoring.coreos.com
-```
-
## Work-Arounds for Known Issues
### Running on private GKE clusters
@@ -77,27 +66,6 @@

Alternatively, you can disable the hooks by setting `prometheusOperator.admissionWebhooks.enabled=false`.

-### Helm fails to create CRDs
-You should upgrade to Helm 2.14+ in order to avoid this issue. However, if you are stuck with an earlier Helm release you should instead use the following approach: Due to a bug in helm, it is possible for the 5 CRDs that are created by this chart to fail to get fully deployed before Helm attempts to create resources that require them. This affects all versions of Helm with a [potential fix pending](https://github.com/helm/helm/pull/5112). In order to work around this issue when installing the chart you will need to make sure all 5 CRDs exist in the cluster first and disable their provisioning by the chart:
-
-1. Create CRDs
-```console
-kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
-kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
-kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
-kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
-kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
-kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
-
-```
-
-2. Wait for CRDs to be created, which should only take a few seconds
-
-3. Install the chart, but disable the CRD provisioning by setting `prometheusOperator.createCustomResource=false`
-```console
-$ helm install --name my-release stable/prometheus-operator --set prometheusOperator.createCustomResource=false
-```
-
## Upgrading an existing Release to a new major version

A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an
@@ -195,16 +163,14 @@
+### Rancher Monitoring Configuration
+| Parameter | Description | Default |
+| ----- | ----------- | ------ |
+| `prometheus-adapter.enabled` | Whether to install [prometheus-adapter](https://github.com/helm/charts/tree/master/stable/prometheus-adapter) within the cluster | `true` |
+| `prometheus-adapter.prometheus.url` | A URL pointing to the Prometheus deployment within your cluster. The default value is set based on the assumption that you plan to deploy the default Prometheus instance from this chart, where `.Values.namespaceOverride=cattle-monitoring-system` and `.Values.nameOverride=rancher-monitoring` | `http://rancher-monitoring-prometheus.cattle-monitoring-system.svc` |
+| `prometheus-adapter.prometheus.port` | The port on the Prometheus deployment that Prometheus Adapter can make requests to | `9090` |
+
+The following values are enabled for different distributions via [rancher-pushprox](https://github.com/rancher/dev-charts/tree/master/packages/rancher-pushprox). See the rancher-pushprox `README.md` for more information on all the values that can be configured for the PushProx chart.
+
+| Parameter | Description | Default |
+| ----- | ----------- | ------ |
+| `rkeControllerManager.enabled` | Create a PushProx installation for monitoring kube-controller-manager metrics in RKE clusters | `false` |
+| `rkeScheduler.enabled` | Create a PushProx installation for monitoring kube-scheduler metrics in RKE clusters | `false` |
+| `rkeProxy.enabled` | Create a PushProx installation for monitoring kube-proxy metrics in RKE clusters | `false` |
+| `rkeEtcd.enabled` | Create a PushProx installation for monitoring etcd metrics in RKE clusters | `false` |
+| `k3sControllerManager.enabled` | Create a PushProx installation for monitoring kube-controller-manager metrics in k3s clusters | `false` |
+| `k3sScheduler.enabled` | Create a PushProx installation for monitoring kube-scheduler metrics in k3s clusters | `false` |
+| `k3sProxy.enabled` | Create a PushProx installation for monitoring kube-proxy metrics in k3s clusters | `false` |
+| `kubeAdmControllerManager.enabled` | Create a PushProx installation for monitoring kube-controller-manager metrics in kubeAdm clusters | `false` |
+| `kubeAdmScheduler.enabled` | Create a PushProx installation for monitoring kube-scheduler metrics in kubeAdm clusters | `false` |
+| `kubeAdmProxy.enabled` | Create a PushProx installation for monitoring kube-proxy metrics in kubeAdm clusters | `false` |
+| `kubeAdmEtcd.enabled` | Create a PushProx installation for monitoring etcd metrics in kubeAdm clusters | `false` |
+
+
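For example, on an RKE cluster the four RKE exporters in the table above could be enabled together with a values snippet along these lines (a sketch; the remaining PushProx options keep their defaults):

```yaml
rkeControllerManager:
  enabled: true
rkeScheduler:
  enabled: true
rkeProxy:
  enabled: true
rkeEtcd:
  enabled: true
```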
### General
| Parameter | Description | Default |
| ----- | ----------- | ------ |
@@ -173,7 +205,9 @@
| `defaultRules.rules.time` | Create time default rules | `true` |
| `fullnameOverride` | Provide a name to substitute for the full names of resources |`""`|
| `global.imagePullSecrets` | Reference to one or more secrets to be used when pulling images | `[]` |
-| `global.rbac.create` | Create RBAC resources | `true` |
+| `global.rbac.create` | Create RBAC resources for ServiceAccounts and users | `true` |
+| `global.rbac.userRoles.create` | Create default user ClusterRoles to allow users to interact with Prometheus CRs, ConfigMaps, and Secrets | `true` |
+| `global.rbac.userRoles.aggregateToDefaultRoles` | Aggregate default user ClusterRoles into default k8s ClusterRoles | `true` |
| `global.rbac.pspEnabled` | Create pod security policy resources | `true` |
| `global.rbac.pspAnnotations` | Add annotations to the PSP configurations | `{}` |
| `kubeTargetVersionOverride` | Provide a target gitVersion of K8S, in case .Capabilities.KubeVersion is not available (e.g. `helm template`) |`""`|
@@ -195,16 +229,14 @@
| `prometheusOperator.admissionWebhooks.patch.podAnnotations` | Annotations for the webhook job pods | `nil` |
| `prometheusOperator.admissionWebhooks.patch.priorityClassName` | Priority class for the webhook integration jobs | `nil` |
| `prometheusOperator.affinity` | Assign custom affinity rules to the prometheus operator https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
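As an example of the `global.rbac.userRoles.*` parameters above in use, a namespaced RoleBinding can grant one user `monitoring-edit` in a single namespace only — a sketch with placeholder user and namespace names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-monitoring-edit      # placeholder name
  namespace: my-app-namespace      # the only namespace alice may configure
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: monitoring-edit            # ClusterRole shipped by this chart
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice                      # placeholder user
```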
@@ -116,7 +125,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/REA
| `prometheusOperator.denyNamespaces` | Namespaces not to scope the interaction of the Prometheus Operator (deny list). This is mutually exclusive with `namespaces` | `[]` |
| `prometheusOperator.enabled` | Deploy Prometheus Operator. Only one of these should be deployed into the cluster | `true` |
| `prometheusOperator.hyperkubeImage.pullPolicy` | Image pull policy for hyperkube image used to perform maintenance tasks | `IfNotPresent` |
@@ -291,6 +257,7 @@
@@ -291,6 +323,7 @@
| `prometheus.prometheusSpec.evaluationInterval` | Interval between consecutive evaluations. | `""` |
| `prometheus.prometheusSpec.externalLabels` | The labels to add to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). | `{}` |
| `prometheus.prometheusSpec.externalUrl` | The external URL the Prometheus instances will be available under. This is necessary to generate correct URLs. This is necessary if Prometheus is not served from root of a DNS name. | `""` |
@@ -124,7 +133,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/REA
| `prometheus.prometheusSpec.image.repository` | Base image to use for a Prometheus deployment. | `quay.io/prometheus/prometheus` |
| `prometheus.prometheusSpec.image.tag` | Tag of Prometheus container image to be deployed. | `v2.18.1` |
| `prometheus.prometheusSpec.listenLocal` | ListenLocal makes the Prometheus server listen on loopback, so that it does not bind against the Pod IP. | `false` |
@@ -465,17 +432,23 @@
@@ -465,17 +498,23 @@
| `grafana.namespaceOverride` | Override the deployment namespace of grafana | `""` (`Release.Namespace`) |
| `grafana.rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires rbac.pspEnabled) | `true` |
| `grafana.service.portName` | Allow to customize Grafana service portname. Will be used by servicemonitor as well | `service` |
@@ -148,7 +157,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/REA

### Exporters
| Parameter | Description | Default |
@@ -649,7 +622,7 @@
@@ -649,7 +688,7 @@
The Grafana chart is more feature-rich than this chart - it contains a sidecar that is able to load data sources and dashboards from configmaps deployed into the same cluster. For more information check out the [documentation for the chart](https://github.com/helm/charts/tree/master/stable/grafana)

### Coreos CRDs
@@ -1152,21 +1161,140 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/tem
----
-{{- end }}
-{{- end }}
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/templates/rancher-monitoring/grafana-configmap-roles.yaml packages/rancher-monitoring/charts/templates/rancher-monitoring/grafana-configmap-roles.yaml
--- packages/rancher-monitoring/charts-original/templates/rancher-monitoring/grafana-configmap-roles.yaml
+++ packages/rancher-monitoring/charts/templates/rancher-monitoring/grafana-configmap-roles.yaml
@@ -0,0 +1,22 @@
+{{- if .Values.monitoringRoles }}
+{{- if and .Values.monitoringRoles.create .Values.grafana.enabled }}
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/templates/rancher-monitoring/clusterrole.yaml packages/rancher-monitoring/charts/templates/rancher-monitoring/clusterrole.yaml
--- packages/rancher-monitoring/charts-original/templates/rancher-monitoring/clusterrole.yaml
+++ packages/rancher-monitoring/charts/templates/rancher-monitoring/clusterrole.yaml
@@ -0,0 +1,148 @@
+{{- if and .Values.global.rbac.create .Values.global.rbac.userRoles.create }}
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: monitoring-admin
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+  {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
+    rbac.authorization.k8s.io/aggregate-to-admin: "true"
+  {{- end }}
+rules:
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - alertmanagers
+  - prometheuses
+  - prometheuses/finalizers
+  - alertmanagers/finalizers
+  verbs:
+  - 'get'
+  - 'list'
+  - 'watch'
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - thanosrulers
+  - thanosrulers/finalizers
+  - servicemonitors
+  - podmonitors
+  - prometheusrules
+  verbs:
+  - '*'
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  - secrets
+  verbs:
+  - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: monitoring-edit
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+  {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
+    rbac.authorization.k8s.io/aggregate-to-edit: "true"
+  {{- end }}
+rules:
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - alertmanagers
+  - prometheuses
+  - prometheuses/finalizers
+  - alertmanagers/finalizers
+  verbs:
+  - 'get'
+  - 'list'
+  - 'watch'
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - thanosrulers
+  - thanosrulers/finalizers
+  - servicemonitors
+  - podmonitors
+  - prometheusrules
+  verbs:
+  - '*'
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  - secrets
+  verbs:
+  - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: monitoring-view
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+  {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
+    rbac.authorization.k8s.io/aggregate-to-view: "true"
+  {{- end }}
+rules:
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - alertmanagers
+  - prometheuses
+  - prometheuses/finalizers
+  - alertmanagers/finalizers
+  - thanosrulers
+  - thanosrulers/finalizers
+  - servicemonitors
+  - podmonitors
+  - prometheusrules
+  verbs:
+  - 'get'
+  - 'list'
+  - 'watch'
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  - secrets
+  verbs:
+  - 'get'
+  - 'list'
+  - 'watch'
+{{- if .Values.grafana.enabled }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: grafana-config-edit
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+rules:
+- apiGroups: [""] # "" indicates the core API group
+  resources: ["configmaps", "secrets"]
+  verbs: ["*"]
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  - secrets
+  verbs:
+  - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
@@ -1174,116 +1302,27 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/tem
+  name: grafana-config-view
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+rules:
+- apiGroups: [""] # "" indicates the core API group
+  resources: ["configmaps", "secrets"]
+  verbs: ["get", "watch", "list"]
+{{- end }}{{- end }}
\ No newline at end of file
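Analogously to the monitoring roles, a RoleBinding in the dashboard search namespace lets a user persist Grafana dashboards without touching Prometheus CRs — a sketch with placeholder names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bob-grafana-config-edit    # placeholder name
  namespace: grafana-dashboards    # .Values.grafana.sidecar.dashboards.searchNamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana-config-edit        # ClusterRole shipped by this chart
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: bob                        # placeholder user
```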
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/templates/rancher-monitoring/monitoring-roles.yaml packages/rancher-monitoring/charts/templates/rancher-monitoring/monitoring-roles.yaml
--- packages/rancher-monitoring/charts-original/templates/rancher-monitoring/monitoring-roles.yaml
+++ packages/rancher-monitoring/charts/templates/rancher-monitoring/monitoring-roles.yaml
@@ -0,0 +1,85 @@
+{{- if .Values.monitoringRoles }}{{- if .Values.monitoringRoles.create }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: monitoring-admin
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+  {{- if .Values.monitoringRoles.aggregateRolesForRBAC }}
+    rbac.authorization.k8s.io/aggregate-to-admin: "true"
+  {{- end }}
+rules:
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["prometheuses"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["alertmanagers"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["servicemonitors"]
+  verbs: ["*"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["podmonitors"]
+  verbs: ["*"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["prometheusrules"]
+  verbs: ["*"]
+- apiGroups: [""] # "" indicates the core API group
+  resources: ["configmaps", "secrets"]
+  verbs: ["*"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: monitoring-edit
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+  {{- if .Values.monitoringRoles.aggregateRolesForRBAC }}
+    rbac.authorization.k8s.io/aggregate-to-edit: "true"
+  {{- end }}
+rules:
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["prometheuses"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["alertmanagers"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["servicemonitors"]
+  verbs: ["*"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["podmonitors"]
+  verbs: ["*"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["prometheusrules"]
+  verbs: ["*"]
+- apiGroups: [""] # "" indicates the core API group
+  resources: ["configmaps", "secrets"]
+  verbs: ["*"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: monitoring-view
+  labels: {{ include "prometheus-operator.labels" . | nindent 4 }}
+  {{- if .Values.monitoringRoles.aggregateRolesForRBAC }}
+    rbac.authorization.k8s.io/aggregate-to-view: "true"
+  {{- end }}
+rules:
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["prometheuses"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["alertmanagers"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["servicemonitors"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["podmonitors"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: ["monitoring.coreos.com"]
+  resources: ["prometheusrules"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: [""] # "" indicates the core API group
+  resources: ["configmaps", "secrets"]
+  verbs: ["get", "watch", "list"]
+{{- end }}{{- end }}
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  - secrets
+  verbs:
+  - 'get'
+  - 'list'
+  - 'watch'
+{{- end }}
+{{- end }}
\ No newline at end of file
diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/values.yaml packages/rancher-monitoring/charts/values.yaml
--- packages/rancher-monitoring/charts-original/values.yaml
+++ packages/rancher-monitoring/charts/values.yaml
@@ -2,13 +2,186 @@
@@ -2,13 +2,207 @@
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

+# Rancher Monitoring Configuration
+
+## Deploy some default ClusterRoles to allow users to interact with Prometheus CRs, ConfigMaps, and Secrets
+##
+monitoringRoles:
+  create: true
+  aggregateRolesForRBAC: false
+
+## Configuration for prometheus-adapter
+## ref: https://github.com/helm/charts/tree/master/stable/prometheus-adapter
+##
@@ -1291,7 +1330,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+  enabled: true
+  prometheus:
+    # Change this if you change the namespaceOverride or nameOverride of prometheus-operator
+    url: http://rancher-monitoring-prometheus.monitoring-system.svc
+    url: http://rancher-monitoring-prometheus.cattle-monitoring-system.svc
+    port: 9090
+
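Since the adapter URL is derived from `http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc`, any override of those two values has to be mirrored here. A sketch with hypothetical overrides:

```yaml
nameOverride: "my-monitoring"                # hypothetical override
namespaceOverride: "my-monitoring-system"    # hypothetical override

prometheus-adapter:
  prometheus:
    # http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc
    url: http://my-monitoring-prometheus.my-monitoring-system.svc
    port: 9090
```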
+## RKE PushProx Monitoring
@@ -1306,6 +1345,8 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    nodeSelector:
+      node-role.kubernetes.io/controlplane: "true"
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        operator: "Exists"
+
@@ -1318,6 +1359,8 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    nodeSelector:
+      node-role.kubernetes.io/controlplane: "true"
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        operator: "Exists"
+
@@ -1329,6 +1372,8 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    port: 10013
+    useLocalhost: true
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        operator: "Exists"
+
@@ -1349,6 +1394,8 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        operator: "Exists"
+
+## k3s PushProx Monitoring
+## ref: https://github.com/rancher/charts/tree/master/packages/rancher-pushprox
@@ -1361,6 +1408,11 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    port: 10011
+    nodeSelector:
+      node-role.kubernetes.io/master: "true"
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        operator: "Exists"
+
+k3sScheduler:
+  enabled: false
@@ -1370,6 +1422,11 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    port: 10012
+    nodeSelector:
+      node-role.kubernetes.io/master: "true"
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        operator: "Exists"
+
+k3sProxy:
+  enabled: false
@@ -1378,6 +1435,11 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+  clients:
+    port: 10013
+    useLocalhost: true
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        operator: "Exists"
+
+## KubeADM PushProx Monitoring
+## ref: https://github.com/rancher/charts/tree/master/packages/rancher-pushprox
@@ -1396,8 +1458,9 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    nodeSelector:
+      node-role.kubernetes.io/master: ""
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        key: node-role.kubernetes.io/master
+        operator: "Exists"
+
+kubeAdmScheduler:
@@ -1414,9 +1477,10 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    nodeSelector:
+      node-role.kubernetes.io/master: ""
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        key: node-role.kubernetes.io/master
+        operator: "Equal"
+        operator: "Exists"
+
+kubeAdmProxy:
+  enabled: false
@@ -1426,9 +1490,10 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    port: 10013
+    useLocalhost: true
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        key: node-role.kubernetes.io/master
+        operator: "Equal"
+        operator: "Exists"
+
+kubeAdmEtcd:
+  enabled: false
@@ -1440,9 +1505,10 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+    nodeSelector:
+      node-role.kubernetes.io/master: ""
+    tolerations:
+      - effect: "NoExecute"
+        operator: "Exists"
+      - effect: "NoSchedule"
+        key: node-role.kubernetes.io/master
+        operator: "Equal"
+        operator: "Exists"
+
+
+# Prometheus Operator Configuration
@@ -1457,20 +1523,28 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
+## NOTE: If you change this value, you must update the prometheus-adapter.prometheus.url
##
-namespaceOverride: ""
+namespaceOverride: "monitoring-system"
+namespaceOverride: "cattle-monitoring-system"

## Provide a k8s version to auto dashboard import script example: kubeTargetVersionOverride: 1.16.6
##
@@ -102,7 +275,7 @@

## Deploy alertmanager
##
-  enabled: true
+  enabled: false

## Api that prometheus will use to communicate with alertmanager. Possible values are v1, v2
##
@@ -409,9 +582,13 @@
@@ -77,7 +271,16 @@
##
global:
  rbac:
+    ## Create RBAC resources for ServiceAccounts and users
+    ##
    create: true
+
+    userRoles:
+      ## Create default user ClusterRoles to allow users to interact with Prometheus CRs, ConfigMaps, and Secrets
+      create: true
+      ## Aggregate default user ClusterRoles into default k8s ClusterRoles
+      aggregateToDefaultRoles: true
+
    pspEnabled: true
    pspAnnotations: {}
## Specify pod annotations
@@ -409,9 +612,13 @@
## Define resources requests and limits for single Pods.
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
@@ -1487,7 +1561,17 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

## Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node.
## The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided.
@@ -529,6 +706,7 @@
@@ -486,6 +693,9 @@
  enabled: true
  namespaceOverride: ""

+  deploymentStrategy:
+    type: Recreate
+
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
@@ -529,6 +739,7 @@
  dashboards:
    enabled: true
    label: grafana_dashboard
@@ -1495,7 +1579,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  ## Annotations for Grafana dashboard configmaps
  ##
@@ -547,6 +725,7 @@
@@ -547,6 +758,7 @@
    ## ref: https://git.io/fjaBS
    createPrometheusReplicasDatasources: false
    label: grafana_datasource
@@ -1503,7 +1587,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  extraConfigmapMounts: []
  # - name: certs-configmap
@@ -574,6 +753,19 @@
@@ -574,6 +786,19 @@
  ##
  service:
    portName: service
@@ -1523,7 +1607,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  ## If true, create a serviceMonitor for grafana
  ##
@@ -599,6 +791,14 @@
@@ -599,6 +824,14 @@
  #       targetLabel: nodename
  #       replacement: $1
  #       action: replace
@@ -1538,7 +1622,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

## Component scraping the kube api server
##
@@ -755,7 +955,7 @@
@@ -755,7 +988,7 @@
## Component scraping the kube controller manager
##
kubeControllerManager:
@@ -1547,7 +1631,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  ## If your kube controller manager is not deployed as a pod, specify IPs it can be found on
  ##
@@ -888,7 +1088,7 @@
@@ -888,7 +1121,7 @@
## Component scraping etcd
##
kubeEtcd:
@@ -1556,7 +1640,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  ## If your etcd is not deployed as a pod, specify IPs it can be found on
  ##
@@ -948,7 +1148,7 @@
@@ -948,7 +1181,7 @@
## Component scraping kube scheduler
##
kubeScheduler:
@@ -1565,7 +1649,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  ## If your kube scheduler is not deployed as a pod, specify IPs it can be found on
  ##
@@ -1001,7 +1201,7 @@
@@ -1001,7 +1234,7 @@
## Component scraping kube proxy
##
kubeProxy:
@@ -1574,7 +1658,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  ## If your kube proxy is not deployed as a pod, specify IPs it can be found on
  ##
@@ -1075,6 +1275,13 @@
@@ -1075,6 +1308,13 @@
  create: true
podSecurityPolicy:
  enabled: true
@@ -1588,7 +1672,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

## Deploy node exporter as a daemonset to all nodes
##
@@ -1124,13 +1331,23 @@
@@ -1124,6 +1364,16 @@
  extraArgs:
    - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
    - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
@@ -1605,30 +1689,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

## Manages Prometheus and Alertmanager components
##
prometheusOperator:
  enabled: true

-  # If true prometheus operator will create and update its CRDs on startup
+  # If true prometheus operator will create and update its CRs on startup
  # Only for prometheusOperator.image.tag < v0.39.0
  manageCrds: true

@@ -1220,14 +1437,6 @@
  ##
  externalIPs: []

-  ## Deploy CRDs used by Prometheus Operator.
-  ##
-  createCustomResource: true
-
-  ## Attempt to clean up CRDs created by Prometheus Operator.
-  ##
-  cleanupCustomResource: false
-
  ## Labels to add to the operator pod
  ##
  podLabels: {}
@@ -1280,13 +1489,13 @@
@@ -1280,13 +1530,13 @@

## Resource limits & requests
##
@@ -1649,7 +1710,7 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val

  # Required for use in managed kubernetes clusters (such as AWS EKS) with custom CNI (such as calico),
  # because control-plane managed by AWS cannot communicate with pods' IP CIDR and admission webhooks are not working
@@ -1628,6 +1837,11 @@
@@ -1628,6 +1878,11 @@
  ##
  externalUrl: ""

@@ -1661,7 +1722,34 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
  ## Define which Nodes the Pods are scheduled on.
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
@@ -1802,9 +2016,13 @@
@@ -1660,7 +1915,7 @@
  ## prometheus resource to be created with selectors based on values in the helm deployment,
  ## which will also match the PrometheusRule resources created
  ##
-  ruleSelectorNilUsesHelmValues: true
+  ruleSelectorNilUsesHelmValues: false

  ## PrometheusRules to be selected for target discovery.
  ## If {}, select all ServiceMonitors
@@ -1685,7 +1940,7 @@
  ## prometheus resource to be created with selectors based on values in the helm deployment,
  ## which will also match the servicemonitors created
  ##
-  serviceMonitorSelectorNilUsesHelmValues: true
+  serviceMonitorSelectorNilUsesHelmValues: false

  ## ServiceMonitors to be selected for target discovery.
  ## If {}, select all ServiceMonitors
@@ -1705,7 +1960,7 @@
  ## prometheus resource to be created with selectors based on values in the helm deployment,
  ## which will also match the podmonitors created
  ##
-  podMonitorSelectorNilUsesHelmValues: true
+  podMonitorSelectorNilUsesHelmValues: false

  ## PodMonitors to be selected for target discovery.
  ## If {}, select all PodMonitors
@@ -1802,9 +2057,13 @@

  ## Resource limits & requests
  ##
@@ -1670,11 +1758,11 @@ diff -x '*.tgz' -x '*.lock' -uNr packages/rancher-monitoring/charts-original/val
-  #   memory: 400Mi
+  resources:
+    limits:
+      memory: 500Mi
+      memory: 1500Mi
+      cpu: 1000m
+    requests:
+      memory: 100Mi
+      cpu: 100m
+      memory: 750Mi
+      cpu: 750m

  ## Prometheus StorageSpec for persistent data
  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md
@@ -3,7 +3,7 @@ version: 0.1.0
appVersion: 0.1.0
annotations:
  catalog.rancher.io/certified: rancher
  catalog.rancher.io/namespace: monitoring-system
  catalog.rancher.io/namespace: cattle-monitoring-system
  catalog.rancher.io/release-name: rancher-pushprox
description: A Rancher chart based on PushProx that sets up a deployment of the PushProx proxy and a DaemonSet of PushProx clients on a Kubernetes cluster.
name: rancher-pushprox
@@ -26,8 +26,11 @@ spec:
       {{- end }}
       containers:
       - name: pushprox-client
-        image: arvindiyengar/pushprox-linux-amd64:add_flag_for_token_path
-        command: [ "/app/pushprox-client" ]
+        image: {{ .Values.clients.image.repository }}:{{ .Values.clients.image.tag }}
+        command:
+        {{- range .Values.clients.command }}
+        - {{ . | quote }}
+        {{- end }}
         args:
         - --fqdn=$(HOST_IP)
         - --proxy-url=$(PROXY_URL)
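With the default `clients.image.*` and `clients.command` values from this chart's values.yaml, the templated container above renders to roughly the following (a sketch of the rendered output, not chart source):

```yaml
containers:
- name: pushprox-client
  image: rancher/pushprox-client:v0.1.0-rancher1-client
  command:
  - "pushprox-client"
  args:
  - --fqdn=$(HOST_IP)
  - --proxy-url=$(PROXY_URL)
```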
@@ -21,8 +21,11 @@ spec:
       {{- end }}
       containers:
       - name: pushprox-proxy
-        image: arvindiyengar/pushprox-linux-amd64:add_flag_for_token_path
-        command: [ "/app/pushprox-proxy" ]
+        image: {{ .Values.proxy.image.repository }}:{{ .Values.proxy.image.tag }}
+        command:
+        {{- range .Values.proxy.command }}
+        - {{ . | quote }}
+        {{- end }}
         {{- if .Values.proxy.resources }}
         resources: {{ toYaml .Values.proxy.resources | nindent 10 }}
         {{- end }}
@@ -55,6 +55,11 @@ clients:
   nodeSelector: {}
   tolerations: []
 
+  image:
+    repository: rancher/pushprox-client
+    tag: v0.1.0-rancher1-client
+  command: ["pushprox-client"]
+
 proxy:
   enabled: true
   # The port through which PushProx clients will communicate to the proxy
@@ -65,4 +70,9 @@ proxy:
 
   # Options to select a node to run a single proxy deployment on
   nodeSelector: {}
   tolerations: []
+
+  image:
+    repository: rancher/pushprox-proxy
+    tag: v0.1.0-rancher1-proxy
+  command: ["pushprox-proxy"]