make charts

pull/2411/head
Arvind Iyengar 2023-02-10 15:47:53 -08:00
parent 39242d51de
commit e26befc92a
131 changed files with 36244 additions and 0 deletions

@@ -0,0 +1,20 @@
annotations:
  catalog.cattle.io/certified: rancher
  catalog.cattle.io/display-name: Prometheus Federator
  catalog.cattle.io/kube-version: '>=1.16.0-0'
  catalog.cattle.io/namespace: cattle-monitoring-system
  catalog.cattle.io/os: linux,windows
  catalog.cattle.io/permits-os: linux,windows
  catalog.cattle.io/provides-gvr: helm.cattle.io.projecthelmchart/v1alpha1
  catalog.cattle.io/rancher-version: '>= 2.7.0-0 < 2.8.0-0'
  catalog.cattle.io/release-name: prometheus-federator
apiVersion: v2
appVersion: 0.2.0
dependencies:
- condition: helmProjectOperator.enabled
  name: helmProjectOperator
  repository: file://./charts/helmProjectOperator
description: Prometheus Federator
icon: https://raw.githubusercontent.com/rancher/prometheus-federator/main/assets/logos/prometheus-federator.svg
name: prometheus-federator
version: 2.0.0+up0.2.0

@@ -0,0 +1,120 @@
# Prometheus Federator
This chart deploys a Helm Project Operator (based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator)), an operator that manages deploying Helm charts, each containing a Project Monitoring Stack, where each stack contains:
- [Prometheus](https://prometheus.io/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator))
- [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator))
- [Grafana](https://github.com/helm/charts/tree/master/stable/grafana) (deployed via an embedded Helm chart)
- Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/)
- Default ServiceMonitors that watch the deployed resources
> **Important Note: Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs.**
By default, the chart is configured and intended to be deployed alongside [rancher-monitoring](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/), which deploys Prometheus Operator alongside a Cluster Prometheus that each Project Monitoring Stack is configured to federate namespace-scoped metrics from by default.
## Pre-Installation: Using Prometheus Federator with Rancher and rancher-monitoring
If you are running your cluster on [Rancher](https://rancher.com/) and already have [rancher-monitoring](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/) deployed onto your cluster, Prometheus Federator's default configuration should already work with your existing Cluster Monitoring Stack. However, here are some notes on how we recommend you configure rancher-monitoring to optimize the security and usability of Prometheus Federator in your cluster:
### Ensure the cattle-monitoring-system namespace is placed into the System Project (or a similarly locked down Project that has access to other Projects in the cluster)
Prometheus Operator's security model expects that the namespace it is deployed into (`cattle-monitoring-system`) has limited access for anyone except Cluster Admins, to avoid privilege escalation via exec'ing into Pods (such as the Jobs executing Helm operations). In addition, deploying Prometheus Federator and all Project Prometheus stacks into the System Project ensures that each Project Prometheus is able to reach out and scrape workloads across all Projects (even if Network Policies are defined via Project Network Isolation), while keeping limited access for Project Owners, Project Members, and other users so that they cannot access data they shouldn't have access to (i.e. being allowed to exec into pods, set up the ability to scrape namespaces outside of a given Project, etc.).
### Configure rancher-monitoring to only watch for resources created by the Helm chart itself
Since each Project Monitoring Stack will watch the other namespaces and collect additional custom workload metrics or dashboards already, it's recommended to configure the following settings on all selectors to ensure that the Cluster Prometheus Stack only monitors resources created by the Helm Chart itself:
```
matchLabels:
  release: "rancher-monitoring"
```
The following selector fields are recommended to have this value:
- `.Values.alertmanager.alertmanagerSpec.alertmanagerConfigSelector`
- `.Values.prometheus.prometheusSpec.serviceMonitorSelector`
- `.Values.prometheus.prometheusSpec.podMonitorSelector`
- `.Values.prometheus.prometheusSpec.ruleSelector`
- `.Values.prometheus.prometheusSpec.probeSelector`
Once this setting is turned on, you can always create ServiceMonitors or PodMonitors that are picked up by the Cluster Prometheus by adding the label `release: "rancher-monitoring"` to them (in which case they will be ignored by Project Monitoring Stacks automatically by default, even if the namespaces in which those ServiceMonitors or PodMonitors reside are not system namespaces).
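For instance, a minimal sketch of a ServiceMonitor that the Cluster Prometheus would pick up under this configuration might look like the following (the name, namespace, selector, and port are hypothetical):
```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                      # hypothetical name
  namespace: my-app-namespace       # hypothetical (non-system) namespace
  labels:
    release: "rancher-monitoring"   # aggregates this monitor into the Cluster Prometheus
spec:
  selector:
    matchLabels:
      app: my-app                   # hypothetical label on the target Service
  endpoints:
  - port: metrics                   # hypothetical port name on the target Service
```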
> Note: If you don't want to allow users to create ServiceMonitors and PodMonitors that aggregate into the Cluster Prometheus in Project namespaces, you can additionally set the namespaceSelectors on the chart to only target system namespaces; this limits the Cluster Prometheus from picking up other Prometheus Operator CRs.
> The system namespaces must contain `cattle-monitoring-system` and `cattle-dashboards`, where rancher-monitoring deploys its resources by default; you will also need to monitor the `default` namespace to get apiserver metrics, or create a custom ServiceMonitor to scrape apiserver metrics from the Service residing in the `default` namespace.
> In that case, it is recommended to set `.Values.prometheus.prometheusSpec.ignoreNamespaceSelectors=true` to allow you to define ServiceMonitors that can monitor non-system namespaces from within a system namespace.
In addition, if you modified the default `.Values.grafana.sidecar.*.searchNamespace` values on the Grafana Helm subchart for Monitoring V2, it is also recommended to remove the overrides or ensure that your defaults are scoped to only system namespaces for the following values:
- `.Values.grafana.sidecar.dashboards.searchNamespace` (default `cattle-dashboards`)
- `.Values.grafana.sidecar.datasources.searchNamespace` (default `null`, which means it uses the release namespace `cattle-monitoring-system`)
- `.Values.grafana.sidecar.plugins.searchNamespace` (default `null`, which means it uses the release namespace `cattle-monitoring-system`)
- `.Values.grafana.sidecar.notifiers.searchNamespace` (default `null`, which means it uses the release namespace `cattle-monitoring-system`)
### Increase the CPU / memory limits of the Cluster Prometheus
Depending on a cluster's setup, it's generally recommended to give a large amount of dedicated memory to the Cluster Prometheus to avoid restarts due to out-of-memory errors (OOMKilled), usually caused by churn in the cluster that generates a large number of high-cardinality metrics to be ingested by Prometheus within one block of time; this is one of the reasons why the default Rancher Monitoring stack expects around 4GB of RAM to be able to operate in a normal-sized cluster. However, when introducing Project Monitoring Stacks that are all sending `/federate` requests to the same Cluster Prometheus and are reliant on the Cluster Prometheus being "up" to federate that system data on their namespaces, it's even more important that the Cluster Prometheus has an ample amount of CPU / memory assigned to it to prevent an outage that could cause data gaps across all Project Prometheus instances in the cluster.
> Note: There are no specific recommendations on how much memory the Cluster Prometheus should be configured with since it depends entirely on the user's setup (namely the likelihood of encountering a high churn rate and the scale of metrics that could be generated at that time); it generally varies per setup.
## How does the operator work?
1. On deploying this chart, users can create ProjectHelmChart CRs with `spec.helmApiVersion` set to `monitoring.cattle.io/v1alpha1` (also known as "Project Monitors" in the Rancher UI) in a **Project Registration Namespace (`cattle-project-<id>`)**; a minimal example follows this list.
2. On seeing each ProjectHelmChart CR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project-<id>-monitoring`)**, based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**.
3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the deployed Project Monitoring Stack; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) (see below for more information about configuring RBAC).
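For example, a minimal sketch of such a ProjectHelmChart might look like the following (the resource name and project ID `p-example` are hypothetical; the `helm.cattle.io/v1alpha1` API group comes from the chart's `provides-gvr` annotation):
```
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring             # hypothetical name
  namespace: cattle-project-p-example  # hypothetical Project Registration Namespace
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1
  values: {}                           # overrides for the underlying rancher-project-monitoring chart
```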
### What is a Project?
In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`; by default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given [Rancher](https://rancher.com/) Project.
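For example, under the default label, a namespace like the following hypothetical one would be considered part of project `p-example`:
```
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-namespace                  # hypothetical name
  labels:
    field.cattle.io/projectId: p-example  # identifies the Rancher Project containing this namespace
```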
### Configuring the Helm release created by a ProjectHelmChart
The `spec.values` of a ProjectHelmChart resource corresponds to the `values.yaml` overrides to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either:
- View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator)
- Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that were used to configure the chart (both of which were embedded directly into the `prometheus-federator` binary); a sample inspection command follows this list.
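As a sketch, the embedded `values.yaml` could be inspected with a command along these lines (the project ID is hypothetical, and it is an assumption that the ConfigMap's data keys are named `values.yaml` and `questions.yaml`):
```
kubectl get configmap monitoring.cattle.io.v1alpha1 \
  -n cattle-project-p-example \
  -o jsonpath='{.data.values\.yaml}'
```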
### Namespaces
As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for:
1. **Operator / System Namespace**: this is the namespace that the operator is deployed into (e.g. `cattle-monitoring-system`). This namespace will contain all HelmCharts and HelmReleases for all ProjectHelmCharts watched by this operator. **Only Cluster Admins should have access to this namespace.**
2. **Project Registration Namespace (`cattle-project-<id>`)**: this is the set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace (see more details below). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace**.
> Note: Project Registration Namespaces will be auto-generated by the operator and imported into the Project they are tied to if `.Values.global.cattle.projectLabel` is provided (which is set to `field.cattle.io/projectId` by default); this indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g. this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted) or it is no longer watching that project (because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects). These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually on creating or deleting a project if desired.
> Note: if `.Values.global.cattle.projectLabel` is not provided, the Operator / System Namespace will also be the Project Registration Namespace
3. **Project Release Namespace (`cattle-project-<id>-monitoring`)**: this is the set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.**
> Note: Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue` (which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified) whenever a ProjectHelmChart is specified in a Project Registration Namespace
> Note: Project Release Namespaces follow the same orphaning conventions as Project Registration Namespaces (see note above)
> Note: if `.Values.projectReleaseNamespaces.enabled` is false, the Project Release Namespace will be the same as the Project Registration Namespace
### Helm Resources (HelmChart, HelmRelease)
On deploying a ProjectHelmChart, the Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn:
- A HelmChart CR (managed via an embedded [k3s-io/helm-controller](https://github.com/k3s-io/helm-controller) in the operator): this custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR; this CR is automatically updated on changes to the ProjectHelmChart (e.g. modifying the values.yaml) or changes to the underlying Project definition (e.g. adding or removing namespaces from a project).
> **Important Note: If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation; however, this is generally only accessible by a Cluster Admin.**
- A HelmRelease CR (managed via an embedded [rancher/helm-locker](https://github.com/rancher/helm-locker) in the operator): this custom resource automatically locks a deployed Helm release in place and automatically overwrites updates to underlying resources unless the change happens via a Helm operation (`helm install`, `helm upgrade`, or `helm uninstall` performed by the HelmChart CR).
> Note: HelmRelease CRs emit Kubernetes Events whenever a modification to the underlying Helm release is detected and locked back into place; to view these events, you can use `kubectl describe helmrelease <helm-release-name> -n <operator/system-namespace>`. You can also view the logs on this operator to see when changes are detected and which resources were attempted to be modified.
Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users.
### RBAC
As described in the section on namespaces above, Prometheus Federator expects that Project Owners, Project Members, and other users in the cluster with Project-level permissions (e.g. permissions in a certain set of namespaces identified by a single label selector) have minimal permissions in any namespaces except the Project Registration Namespace (which is imported into the project by default) and those that already comprise their projects. Therefore, in order to allow Project Owners to assign specific chart permissions to other users in their Project namespaces, the Helm Project Operator will automatically watch the following bindings:
- ClusterRoleBindings
- RoleBindings in the Project Release Namespace
On observing a change to one of those types of bindings, the Helm Project Operator will check whether the `roleRef` that the binding points to matches a ClusterRole with the name provided under `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.admin`, `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.edit`, or `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.view`; by default, these roleRefs will correspond to `admin`, `edit`, and `view` respectively, which are the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).
> Note: for Rancher RBAC users, these [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) directly correlate to the `Project Owner`, `Project Member`, and `Read-Only` default Project Role Templates.
If the `roleRef` matches, the Helm Project Operator will filter the `subjects` of the binding for all Users and Groups and use that to automatically construct a RoleBinding for each Role in the Project Release Namespace with the same name as the role and the following labels:
- `helm.cattle.io/project-helm-chart-role: {{ .Release.Name }}`
- `helm.cattle.io/project-helm-chart-role-aggregate-from: <admin|edit|view>`
By default, the `rancher-project-monitoring` chart (the underlying chart deployed by Prometheus Federator) creates three default Roles per Project Release Namespace that provide `admin`, `edit`, and `view` users with permissions to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack, in keeping with least privilege; however, if a Cluster Admin would like to assign additional permissions to certain users, they can either directly assign RoleBindings in the Project Release Namespace to certain users or create Roles with the above two labels on them to allow Project Owners to control assigning those RBAC roles to users in their Project Registration namespaces.
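For instance, a Cluster Admin could create a Role like the following hypothetical one in a Project Release Namespace, so that it is automatically handed out to the users bound to `admin` in the corresponding Project Registration Namespace (the Role name, namespace, release name, and rules are assumptions):
```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: extra-secret-reader                       # hypothetical Role name
  namespace: cattle-project-p-example-monitoring  # hypothetical Project Release Namespace
  labels:
    helm.cattle.io/project-helm-chart-role: cattle-project-p-example-monitoring  # assumed release name
    helm.cattle.io/project-helm-chart-role-aggregate-from: admin                 # hand out to Project Owners
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
```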
### Advanced Helm Project Operator Configuration
|Value|Configuration|
|---|---------------------------|
|`helmProjectOperator.valuesOverride`| Allows an Operator to override values that are set on each ProjectHelmChart deployment at the operator level; user-provided options (specified on the `spec.values` of the ProjectHelmChart) are automatically overridden if operator-level values are provided. For an example, see how the default value overrides `federate.targets` (note: when overriding list values like `federate.targets`, user-provided list values will **not** be concatenated; a sketch follows below this table) |
|`helmProjectOperator.projectReleaseNamespaces.labelValue`| The label value of the Project that all Project Release Namespaces should be auto-imported into (via label and annotation). Not recommended to be overridden on a Rancher setup. |
|`helmProjectOperator.otherSystemProjectLabelValues`| Other namespaces that the operator should treat as system namespaces that should not be monitored. By default, all namespaces that match `global.cattle.systemProjectId` will not be matched. `cattle-monitoring-system`, `cattle-dashboards`, and `kube-system` are explicitly marked as system namespaces as well, regardless of label or annotation. |
|`helmProjectOperator.releaseRoleBindings.aggregate`| Whether to automatically create RBAC resources in Project Release namespaces |
|`helmProjectOperator.releaseRoleBindings.clusterRoleRefs.<admin\|edit\|view>`| ClusterRoles to reference to discover subjects to create RoleBindings for in the Project Release Namespace for all corresponding Project Release Roles. See RBAC above for more information |
|`helmProjectOperator.hardenedNamespaces.enabled`| Whether to automatically patch the default ServiceAccount with `automountServiceAccountToken: false` and create a default NetworkPolicy in all managed namespaces in the cluster; the default values ensure that the creation of the namespace does not break a CIS 1.16 hardened scan |
|`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace |
|`helmProjectOperator.helmController.enabled`| Whether to enable an embedded k3s-io/helm-controller instance within the Helm Project Operator. Should be disabled for RKE2/K3s clusters before v1.23.14 / v1.24.8 / v1.25.4 since RKE2/K3s clusters already run Helm Controller at a cluster-wide level to manage internal Kubernetes components |
|`helmProjectOperator.helmLocker.enabled`| Whether to enable an embedded rancher/helm-locker instance within the Helm Project Operator. |
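As a sketch of how `helmProjectOperator.valuesOverride` could be used, an operator-level override might look like the following (the federate target below is hypothetical and only mirrors the kind of default described above):
```
helmProjectOperator:
  valuesOverride:
    federate:
      targets:
      # hypothetical Cluster Prometheus endpoint; replaces (does not concatenate with)
      # any user-provided federate.targets list
      - rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090
```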

@@ -0,0 +1,27 @@
# Prometheus Federator
This chart deploys an operator that manages Project Monitoring Stacks composed of the following set of resources that are scoped to project namespaces:
- [Prometheus](https://prometheus.io/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator))
- [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator))
- [Grafana](https://github.com/helm/charts/tree/master/stable/grafana) (deployed via an embedded Helm chart)
- Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/)
- Default ServiceMonitors that watch the deployed Prometheus, Grafana, and Alertmanager
Since this Project Monitoring Stack deploys Prometheus Operator CRs, an existing Prometheus Operator instance must already be deployed in the cluster for Prometheus Federator to successfully be able to deploy Project Monitoring Stacks. It is recommended to use [`rancher-monitoring`](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/) for this. For more information on how the chart works or advanced configurations, please read the `README.md`.
## Upgrading to Kubernetes v1.25+
Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.
As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `global.cattle.psp.enabled` set to `false` if it has been previously set to `true`.
> **Note:**
> In this chart release, all previous fields that were associated with PSP resources have been removed in favor of a single global field: `global.cattle.psp.enabled`.
> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, please consult the Rancher docs on how to clean up your Helm release secrets.
Upon setting `global.cattle.psp.enabled` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.
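For example, the in-place upgrade described above might look like the following (the release name, chart repository, and namespace are assumptions about your setup):
```
helm upgrade prometheus-federator rancher-charts/prometheus-federator \
  -n cattle-monitoring-system \
  --set global.cattle.psp.enabled=false
```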
As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Rancher docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.

@@ -0,0 +1,15 @@
annotations:
  catalog.cattle.io/certified: rancher
  catalog.cattle.io/display-name: Helm Project Operator
  catalog.cattle.io/kube-version: '>=1.16.0-0'
  catalog.cattle.io/namespace: cattle-helm-system
  catalog.cattle.io/os: linux,windows
  catalog.cattle.io/permits-os: linux,windows
  catalog.cattle.io/provides-gvr: helm.cattle.io.projecthelmchart/v1alpha1
  catalog.cattle.io/rancher-version: '>= 2.6.0-0'
  catalog.cattle.io/release-name: helm-project-operator
apiVersion: v2
appVersion: 0.1.0
description: Helm Project Operator
name: helmProjectOperator
version: 0.1.0

@@ -0,0 +1,77 @@
# Helm Project Operator
## How does the operator work?
1. On deploying a Helm Project Operator, users can create ProjectHelmChart CRs with `spec.helmApiVersion` set to `dummy.cattle.io/v1alpha1` in a **Project Registration Namespace (`cattle-project-<id>`)**.
2. On seeing each ProjectHelmChart CR, the operator will automatically deploy the embedded Helm chart on the Project Owner's behalf in the **Project Release Namespace (`cattle-project-<id>-dummy`)**, based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**.
3. RBAC will automatically be assigned in the Project Release Namespace to allow users access based on Roles created in the Project Release Namespace with a given set of labels; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) (see below for more information about configuring RBAC).
### What is a Project?
In Helm Project Operator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`; by default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given [Rancher](https://rancher.com/) Project.
### What is a ProjectHelmChart?
A ProjectHelmChart is an instance of a (project-scoped) Helm chart deployed on behalf of a user who has permissions to create ProjectHelmChart resources in a Project Registration namespace.
Generally, the best way to think about the ProjectHelmChart model is by comparing it to two other models:
1. Managed Kubernetes providers (EKS, GKE, AKS, etc.): in this model, a user has the ability to say "I want a Kubernetes cluster" but the underlying cloud provider is responsible for provisioning the infrastructure and offering **limited view and access** of the underlying resources created on their behalf; similarly, Helm Project Operator allows a Project Owner to say "I want this Helm chart deployed", but the underlying Operator is responsible for "provisioning" (deploying) the Helm chart and offering **limited view and access** of the underlying Kubernetes resources created on their behalf (based on configuring "least-privilege" Kubernetes RBAC for the Project Owners / Members in the newly created Project Release Namespace).
2. Dynamically-provisioned Persistent Volumes: in this model, a single resource (PersistentVolume) exists that allows you to specify a Storage Class that actually implements provisioning the underlying storage via a Storage Class Provisioner (e.g. Longhorn). Similarly, a ProjectHelmChart allows you to specify a `spec.helmApiVersion` (the "storage class") that actually implements deploying the underlying Helm chart via a Helm Project Operator (e.g. [`rancher/prometheus-federator`](https://github.com/rancher/prometheus-federator)); a minimal example follows.
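A minimal sketch of a ProjectHelmChart for this operator's example chart (the resource name and project ID are hypothetical):
```
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-example-chart          # hypothetical name
  namespace: cattle-project-p-example  # hypothetical Project Registration Namespace
spec:
  helmApiVersion: dummy.cattle.io/v1alpha1
  values: {}
```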
### Configuring the Helm release created by a ProjectHelmChart
The `spec.values` of a ProjectHelmChart resource corresponds to the `values.yaml` overrides to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either:
- View the chart's definition located at [`rancher/helm-project-operator` under `charts/example-chart`](https://github.com/rancher/helm-project-operator/blob/main/charts/example-chart) (where the chart version will be tied to the version of this operator)
- Look for the ConfigMap named `dummy.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that were used to configure the chart (both of which were embedded directly into the `helm-project-operator` binary).
### Namespaces
All Helm Project Operators have three different classifications of namespaces that the operator looks out for:
1. **Operator / System Namespace**: this is the namespace that the operator is deployed into (e.g. `cattle-helm-system`). This namespace will contain all HelmCharts and HelmReleases for all ProjectHelmCharts watched by this operator. **Only Cluster Admins should have access to this namespace.**
2. **Project Registration Namespace (`cattle-project-<id>`)**: this is the set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace (see more details below). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace**.
> Note: Project Registration Namespaces will be auto-generated by the operator and imported into the Project they are tied to if `.Values.global.cattle.projectLabel` is provided (which is set to `field.cattle.io/projectId` by default); this indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g. this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted) or it is no longer watching that project (because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects). These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually on creating or deleting a project if desired.
> Note: if `.Values.global.cattle.projectLabel` is not provided, the Operator / System Namespace will also be the Project Registration Namespace
3. **Project Release Namespace (`cattle-project-<id>-dummy`)**: this is the set of namespaces that the operator deploys Helm charts within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Helm charts based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Helm Project Operator.**
> Note: Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue` (which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified) whenever a ProjectHelmChart is specified in a Project Registration Namespace
> Note: Project Release Namespaces follow the same orphaning conventions as Project Registration Namespaces (see note above)
> Note: if `.Values.projectReleaseNamespaces.enabled` is false, the Project Release Namespace will be the same as the Project Registration Namespace
### Helm Resources (HelmChart, HelmRelease)
On deploying a ProjectHelmChart, the Helm Project Operator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn:
- A HelmChart CR (managed via an embedded [k3s-io/helm-controller](https://github.com/k3s-io/helm-controller) in the operator): this custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR; this CR is automatically updated on changes to the ProjectHelmChart (e.g. modifying the values.yaml) or changes to the underlying Project definition (e.g. adding or removing namespaces from a project).
> **Important Note: If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation; however, this is generally only accessible by a Cluster Admin.**
- A HelmRelease CR (managed via an embedded [rancher/helm-locker](https://github.com/rancher/helm-locker) in the operator): this custom resource automatically locks a deployed Helm release in place and automatically overwrites updates to underlying resources unless the change happens via a Helm operation (`helm install`, `helm upgrade`, or `helm uninstall` performed by the HelmChart CR).
> Note: HelmRelease CRs emit Kubernetes Events whenever a modification to the underlying Helm release is detected and locked back into place; to view these events, you can use `kubectl describe helmrelease <helm-release-name> -n <operator/system-namespace>`. You can also view the logs on this operator to see when changes are detected and which resources were attempted to be modified.
Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users.
### RBAC
As described in the section on namespaces above, Helm Project Operator expects that Project Owners, Project Members, and other users in the cluster with Project-level permissions (e.g. permissions in a certain set of namespaces identified by a single label selector) have minimal permissions in any namespaces except the Project Registration Namespace (which is imported into the project by default) and those that already comprise their projects. Therefore, in order to allow Project Owners to assign specific chart permissions to other users in their Project namespaces, the Helm Project Operator will automatically watch the following bindings:
- ClusterRoleBindings
- RoleBindings in the Project Release Namespace
On observing a change to one of those types of bindings, the Helm Project Operator will check whether the `roleRef` that the binding points to matches a ClusterRole with the name provided under `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.admin`, `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.edit`, or `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.view`; by default, these roleRefs will correspond to `admin`, `edit`, and `view` respectively, which are the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).
> Note: for Rancher RBAC users, these [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) directly correlate to the `Project Owner`, `Project Member`, and `Read-Only` default Project Role Templates.
If the `roleRef` matches, the Helm Project Operator will filter the `subjects` of the binding for all Users and Groups and use that to automatically construct a RoleBinding for each Role in the Project Release Namespace with the same name as the role and the following labels:
- `helm.cattle.io/project-helm-chart-role: {{ .Release.Name }}`
- `helm.cattle.io/project-helm-chart-role-aggregate-from: <admin|edit|view>`
By default, the `example-chart` (the underlying chart deployed by Helm Project Operator) does not create any default Roles; however, if a Cluster Admin would like to assign additional permissions to certain users, they can either directly assign RoleBindings in the Project Release Namespace to certain users or create Roles with the above two labels on them to allow Project Owners to control assigning those RBAC roles to users in their Project Registration namespaces.
### Advanced Helm Project Operator Configuration
|Value|Configuration|
|---|---------------------------|
|`valuesOverride`| Allows an Operator to override values that are set on each ProjectHelmChart deployment at the operator level; user-provided options (specified on the `spec.values` of the ProjectHelmChart) are automatically overridden if operator-level values are provided. For an example, see how the default value overrides `federate.targets` (note: when overriding list values like `federate.targets`, user-provided list values will **not** be concatenated) |
|`projectReleaseNamespaces.labelValue`| The label value of the Project that all Project Release Namespaces should be auto-imported into (via label and annotation). Not recommended to be overridden on a Rancher setup. |
|`otherSystemProjectLabelValues`| Other namespaces that the operator should treat as system namespaces that should not be monitored. By default, all namespaces that match `global.cattle.systemProjectId` will not be matched. `kube-system` is explicitly marked as a system namespace as well, regardless of label or annotation. |
|`releaseRoleBindings.aggregate`| Whether to automatically create RBAC resources in Project Release namespaces |
|`releaseRoleBindings.clusterRoleRefs.<admin\|edit\|view>`| ClusterRoles to reference to discover subjects to create RoleBindings for in the Project Release Namespace for all corresponding Project Release Roles. See RBAC above for more information |
|`hardenedNamespaces.enabled`| Whether to automatically patch the default ServiceAccount with `automountServiceAccountToken: false` and create a default NetworkPolicy in all managed namespaces in the cluster; the default values ensure that the creation of the namespace does not break a CIS 1.16 hardened scan |
|`hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace |
|`helmController.enabled`| Whether to enable an embedded k3s-io/helm-controller instance within the Helm Project Operator. Should be disabled for RKE2 clusters since RKE2 clusters already run Helm Controller to manage internal Kubernetes components |
|`helmLocker.enabled`| Whether to enable an embedded rancher/helm-locker instance within the Helm Project Operator. |

@@ -0,0 +1,20 @@
# Helm Project Operator
This chart installs the example [Helm Project Operator](https://github.com/rancher/helm-project-operator) onto your cluster.
## Upgrading to Kubernetes v1.25+
Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.
As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `global.cattle.psp.enabled` set to `false` if it has been previously set to `true`.
> **Note:**
> In this chart release, all previous fields that were associated with PSP resources have been removed in favor of a single global field: `global.cattle.psp.enabled`.
> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, please consult the Rancher docs on how to clean up your Helm release secrets.
Upon setting `global.cattle.psp.enabled` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.
As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Rancher docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.

@@ -0,0 +1,43 @@
questions:
- variable: global.cattle.psp.enabled
  default: "false"
  description: "Flag to enable or disable the installation of PodSecurityPolicies by this chart in the target cluster. If the cluster is running Kubernetes 1.25+, you must update this value to false."
  label: "Enable PodSecurityPolicies"
  type: boolean
  group: "Security Settings"
- variable: helmController.enabled
  label: Enable Embedded Helm Controller
  description: 'Note: If you are running this chart in an RKE2 cluster, this should be disabled.'
  type: boolean
  group: Helm Controller
- variable: helmLocker.enabled
  label: Enable Embedded Helm Locker
  type: boolean
  group: Helm Locker
- variable: projectReleaseNamespaces.labelValue
  label: Project Release Namespace Project ID
  description: By default, the System Project is selected. This can be overridden to a different Project (e.g. p-xxxxx)
  type: string
  required: false
  group: Namespaces
- variable: releaseRoleBindings.clusterRoleRefs.admin
  label: Admin ClusterRole
  description: By default, admin selects Project Owners. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
  type: string
  default: admin
  required: false
  group: RBAC
- variable: releaseRoleBindings.clusterRoleRefs.edit
  label: Edit ClusterRole
  description: By default, edit selects Project Members. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
  type: string
  default: edit
  required: false
  group: RBAC
- variable: releaseRoleBindings.clusterRoleRefs.view
  label: View ClusterRole
  description: By default, view selects Read-Only users. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
  type: string
  default: view
  required: false
  group: RBAC

@@ -0,0 +1,2 @@
{{ $.Chart.Name }} has been installed. Check its status by running:
  kubectl --namespace {{ template "helm-project-operator.namespace" . }} get pods -l "release={{ $.Release.Name }}"

@@ -0,0 +1,66 @@
# Rancher
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- end -}}
{{- end -}}
# Windows Support
{{/*
Windows clusters add a default taint to Linux nodes;
add the Linux tolerations below so that workloads can be scheduled onto those Linux nodes.
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
beta.kubernetes.io/os: linux
{{- else -}}
kubernetes.io/os: linux
{{- end -}}
{{- end -}}
# Helm Project Operator
{{/* vim: set filetype=mustache: */}}
{{/* Expand the name of the chart. This is suffixed with -alertmanager, which means subtract 13 from longest 63 available */}}
{{- define "helm-project-operator.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 50 | trimSuffix "-" -}}
{{- end }}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "helm-project-operator.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/* Create chart name and version as used by the chart label. */}}
{{- define "helm-project-operator.chartref" -}}
{{- replace "+" "_" .Chart.Version | printf "%s-%s" .Chart.Name -}}
{{- end }}
{{/* Generate basic labels */}}
{{- define "helm-project-operator.labels" -}}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: "{{ replace "+" "_" .Chart.Version }}"
app.kubernetes.io/part-of: {{ template "helm-project-operator.name" . }}
chart: {{ template "helm-project-operator.chartref" . }}
release: {{ $.Release.Name | quote }}
heritage: {{ $.Release.Service | quote }}
{{- if .Values.commonLabels}}
{{ toYaml .Values.commonLabels }}
{{- end }}
{{- end -}}

@@ -0,0 +1,82 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "helm-project-operator.name" . }}-cleanup
  namespace: {{ template "helm-project-operator.namespace" . }}
  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
    app: {{ template "helm-project-operator.name" . }}
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded, hook-failed
spec:
  template:
    metadata:
      name: {{ template "helm-project-operator.name" . }}-cleanup
      labels: {{ include "helm-project-operator.labels" . | nindent 8 }}
        app: {{ template "helm-project-operator.name" . }}
    spec:
      serviceAccountName: {{ template "helm-project-operator.name" . }}
      {{- if .Values.cleanup.securityContext }}
      securityContext: {{ toYaml .Values.cleanup.securityContext | nindent 8 }}
      {{- end }}
      initContainers:
      - name: add-cleanup-annotations
        image: {{ template "system_default_registry" . }}{{ .Values.cleanup.image.repository }}:{{ .Values.cleanup.image.tag }}
        imagePullPolicy: "{{ .Values.image.pullPolicy }}"
        command:
        - /bin/sh
        - -c
        - >
          echo "Labeling all ProjectHelmCharts with helm.cattle.io/helm-project-operator-cleanup=true";
          EXPECTED_HELM_API_VERSION={{ .Values.helmApiVersion }};
          IFS=$'\n';
          for namespace in $(kubectl get namespaces -l helm.cattle.io/helm-project-operated=true --no-headers -o=custom-columns=NAME:.metadata.name); do
            for projectHelmChartAndHelmApiVersion in $(kubectl get projecthelmcharts -n ${namespace} --no-headers -o=custom-columns=NAME:.metadata.name,HELMAPIVERSION:.spec.helmApiVersion); do
              projectHelmChartAndHelmApiVersion=$(echo ${projectHelmChartAndHelmApiVersion} | xargs);
              projectHelmChart=$(echo ${projectHelmChartAndHelmApiVersion} | cut -d' ' -f1);
              helmApiVersion=$(echo ${projectHelmChartAndHelmApiVersion} | cut -d' ' -f2);
              if [[ ${helmApiVersion} != ${EXPECTED_HELM_API_VERSION} ]]; then
                echo "Skipping marking ${namespace}/${projectHelmChart} with cleanup annotation since spec.helmApiVersion: ${helmApiVersion} is not ${EXPECTED_HELM_API_VERSION}";
                continue;
              fi;
              kubectl label projecthelmcharts -n ${namespace} ${projectHelmChart} helm.cattle.io/helm-project-operator-cleanup=true --overwrite;
            done;
          done;
        {{- if .Values.cleanup.resources }}
        resources: {{ toYaml .Values.cleanup.resources | nindent 12 }}
        {{- end }}
        {{- if .Values.cleanup.containerSecurityContext }}
        securityContext: {{ toYaml .Values.cleanup.containerSecurityContext | nindent 12 }}
        {{- end }}
      containers:
      - name: ensure-subresources-deleted
        image: {{ template "system_default_registry" . }}{{ .Values.cleanup.image.repository }}:{{ .Values.cleanup.image.tag }}
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - >
          SYSTEM_NAMESPACE={{ .Release.Namespace }};
          EXPECTED_HELM_API_VERSION={{ .Values.helmApiVersion }};
          HELM_API_VERSION_TRUNCATED=$(echo ${EXPECTED_HELM_API_VERSION} | cut -d'/' -f1);
          echo "Ensuring HelmCharts and HelmReleases are deleted from ${SYSTEM_NAMESPACE}...";
          while [[ "$(kubectl get helmcharts,helmreleases -l helm.cattle.io/helm-api-version=${HELM_API_VERSION_TRUNCATED} -n ${SYSTEM_NAMESPACE} 2>&1)" != "No resources found in ${SYSTEM_NAMESPACE} namespace." ]]; do
            echo "waiting for HelmCharts and HelmReleases to be deleted from ${SYSTEM_NAMESPACE}... sleeping 3 seconds";
            sleep 3;
          done;
          echo "Successfully deleted all HelmCharts and HelmReleases in ${SYSTEM_NAMESPACE}!";
        {{- if .Values.cleanup.resources }}
        resources: {{ toYaml .Values.cleanup.resources | nindent 12 }}
        {{- end }}
        {{- if .Values.cleanup.containerSecurityContext }}
        securityContext: {{ toYaml .Values.cleanup.containerSecurityContext | nindent 12 }}
        {{- end }}
      restartPolicy: OnFailure
      nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
      {{- if .Values.cleanup.nodeSelector }}
      {{- toYaml .Values.cleanup.nodeSelector | nindent 8 }}
      {{- end }}
      tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
      {{- if .Values.cleanup.tolerations }}
      {{- toYaml .Values.cleanup.tolerations | nindent 8 }}
      {{- end }}

@@ -0,0 +1,57 @@
{{- if and .Values.global.rbac.create .Values.global.rbac.userRoles.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "helm-project-operator.name" . }}-admin
  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
    {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    {{- end }}
rules:
- apiGroups:
  - helm.cattle.io
  resources:
  - projecthelmcharts
  - projecthelmcharts/finalizers
  - projecthelmcharts/status
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "helm-project-operator.name" . }}-edit
  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
    {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    {{- end }}
rules:
- apiGroups:
  - helm.cattle.io
  resources:
  - projecthelmcharts
  - projecthelmcharts/status
  verbs:
  - 'get'
  - 'list'
  - 'watch'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "helm-project-operator.name" . }}-view
  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
    {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    {{- end }}
rules:
- apiGroups:
  - helm.cattle.io
  resources:
  - projecthelmcharts
  - projecthelmcharts/status
  verbs:
  - 'get'
  - 'list'
  - 'watch'
{{- end }}

@@ -0,0 +1,14 @@
## Note: If you add another entry to this ConfigMap, make sure a corresponding env var is set
## in the deployment of the operator to ensure that a Helm upgrade will force the operator
## to reload the values in the ConfigMap and redeploy
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "helm-project-operator.name" . }}-config
  namespace: {{ template "helm-project-operator.namespace" . }}
  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
data:
  hardened.yaml: |-
{{ .Values.hardenedNamespaces.configuration | toYaml | indent 4 }}
  values.yaml: |-
{{ .Values.valuesOverride | toYaml | indent 4 }}

@@ -0,0 +1,124 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-project-operator.name" . }}
  namespace: {{ template "helm-project-operator.namespace" . }}
  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
    app: {{ template "helm-project-operator.name" . }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ template "helm-project-operator.name" . }}
      release: {{ $.Release.Name | quote }}
  template:
    metadata:
      labels: {{ include "helm-project-operator.labels" . | nindent 8 }}
        app: {{ template "helm-project-operator.name" . }}
    spec:
      containers:
      - name: {{ template "helm-project-operator.name" . }}
        image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: "{{ .Values.image.pullPolicy }}"
        args:
          - {{ template "helm-project-operator.name" . }}
          - --namespace={{ template "helm-project-operator.namespace" . }}
          - --controller-name={{ template "helm-project-operator.name" . }}
          - --values-override-file=/etc/helmprojectoperator/config/values.yaml
          {{- if .Values.global.cattle.systemDefaultRegistry }}
          - --system-default-registry={{ .Values.global.cattle.systemDefaultRegistry }}
          {{- end }}
          {{- if .Values.global.cattle.url }}
          - --cattle-url={{ .Values.global.cattle.url }}
          {{- end }}
          {{- if .Values.global.cattle.projectLabel }}
          - --project-label={{ .Values.global.cattle.projectLabel }}
          {{- end }}
          {{- if not .Values.projectReleaseNamespaces.enabled }}
          - --system-project-label-values={{ join "," (append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId) }}
          {{- else if and (ne (len .Values.global.cattle.systemProjectId) 0) (ne (len .Values.projectReleaseNamespaces.labelValue) 0) (ne .Values.projectReleaseNamespaces.labelValue .Values.global.cattle.systemProjectId) }}
          - --system-project-label-values={{ join "," (append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId) }}
          {{- else if len .Values.otherSystemProjectLabelValues }}
          - --system-project-label-values={{ join "," .Values.otherSystemProjectLabelValues }}
          {{- end }}
          {{- if .Values.projectReleaseNamespaces.enabled }}
          {{- if .Values.projectReleaseNamespaces.labelValue }}
          - --project-release-label-value={{ .Values.projectReleaseNamespaces.labelValue }}
          {{- else if .Values.global.cattle.systemProjectId }}
          - --project-release-label-value={{ .Values.global.cattle.systemProjectId }}
          {{- end }}
          {{- end }}
          {{- if .Values.global.cattle.clusterId }}
          - --cluster-id={{ .Values.global.cattle.clusterId }}
          {{- end }}
          {{- if .Values.releaseRoleBindings.aggregate }}
          {{- if .Values.releaseRoleBindings.clusterRoleRefs }}
          {{- if .Values.releaseRoleBindings.clusterRoleRefs.admin }}
          - --admin-cluster-role={{ .Values.releaseRoleBindings.clusterRoleRefs.admin }}
          {{- end }}
          {{- if .Values.releaseRoleBindings.clusterRoleRefs.edit }}
          - --edit-cluster-role={{ .Values.releaseRoleBindings.clusterRoleRefs.edit }}
          {{- end }}
          {{- if .Values.releaseRoleBindings.clusterRoleRefs.view }}
          - --view-cluster-role={{ .Values.releaseRoleBindings.clusterRoleRefs.view }}
          {{- end }}
          {{- end }}
          {{- end }}
          {{- if .Values.hardenedNamespaces.enabled }}
          - --hardening-options-file=/etc/helmprojectoperator/config/hardening.yaml
          {{- else }}
          - --disable-hardening
          {{- end }}
          {{- if .Values.debug }}
          - --debug
          - --debug-level={{ .Values.debugLevel }}
          {{- end }}
          {{- if not .Values.helmController.enabled }}
          - --disable-embedded-helm-controller
          {{- else }}
          - --helm-job-image={{ template "system_default_registry" . }}{{ .Values.helmController.job.image.repository }}:{{ .Values.helmController.job.image.tag }}
          {{- end }}
          {{- if not .Values.helmLocker.enabled }}
          - --disable-embedded-helm-locker
          {{- end }}
          {{- if .Values.additionalArgs }}
          {{- toYaml .Values.additionalArgs | nindent 10 }}
          {{- end }}
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ## Note: The below two values only exist to force Helm to upgrade the deployment on
        ## a change to the contents of the ConfigMap during an upgrade. Neither serves
        ## any practical purpose and can be removed and replaced with a configmap reloader
        ## in a future change if dynamic updates are required.
        - name: HARDENING_OPTIONS_SHA_256_HASH
          value: {{ .Values.hardenedNamespaces.configuration | toYaml | sha256sum }}
        - name: VALUES_OVERRIDE_SHA_256_HASH
          value: {{ .Values.valuesOverride | toYaml | sha256sum }}
        {{- if .Values.resources }}
        resources: {{ toYaml .Values.resources | nindent 12 }}
        {{- end }}
        {{- if .Values.containerSecurityContext }}
        securityContext: {{ toYaml .Values.containerSecurityContext | nindent 12 }}
        {{- end }}
        volumeMounts:
        - name: config
          mountPath: "/etc/helmprojectoperator/config"
      serviceAccountName: {{ template "helm-project-operator.name" . }}
      {{- if .Values.securityContext }}
      securityContext: {{ toYaml .Values.securityContext | nindent 8 }}
      {{- end }}
      nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
      {{- if .Values.nodeSelector }}
      {{- toYaml .Values.nodeSelector | nindent 8 }}
      {{- end }}
      tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
      {{- if .Values.tolerations }}
      {{- toYaml .Values.tolerations | nindent 8 }}
      {{- end }}
      volumes:
      - name: config
        configMap:
          name: {{ template "helm-project-operator.name" . }}-config

@@ -0,0 +1,68 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "helm-project-operator.name" . }}-psp
namespace: {{ template "helm-project-operator.namespace" . }}
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
app: {{ template "helm-project-operator.name" . }}
{{- if .Values.global.rbac.pspAnnotations }}
annotations: {{ toYaml .Values.global.rbac.pspAnnotations | nindent 4 }}
{{- end }}
spec:
privileged: false
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Permits the container to run with root privileges as well.
rule: 'RunAsAny'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 0
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Note: a min of 0 also permits the root group (gid 0).
- min: 0
max: 65535
readOnlyRootFilesystem: false
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "helm-project-operator.name" . }}-psp
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
app: {{ template "helm-project-operator.name" . }}
rules:
{{- if semverCompare "> 1.15.0-0" .Capabilities.KubeVersion.GitVersion }}
- apiGroups: ['policy']
{{- else }}
- apiGroups: ['extensions']
{{- end }}
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "helm-project-operator.name" . }}-psp
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "helm-project-operator.name" . }}-psp
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
app: {{ template "helm-project-operator.name" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "helm-project-operator.name" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ template "helm-project-operator.name" . }}
namespace: {{ template "helm-project-operator.namespace" . }}
{{- end }}

@@ -0,0 +1,32 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "helm-project-operator.name" . }}
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
app: {{ template "helm-project-operator.name" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: "cluster-admin" # see note below
subjects:
- kind: ServiceAccount
name: {{ template "helm-project-operator.name" . }}
namespace: {{ template "helm-project-operator.namespace" . }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "helm-project-operator.name" . }}
namespace: {{ template "helm-project-operator.namespace" . }}
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
app: {{ template "helm-project-operator.name" . }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.global.imagePullSecrets | nindent 2 }}
{{- end }}
# ---
# NOTE:
# At this time, the k3s-io/helm-controller can only deploy Jobs that are bound to the cluster-admin
# ClusterRole, so the only way for this operator to create that binding is to be bound to the cluster-admin ClusterRole itself.
#
# As a result, this ClusterRoleBinding will be left as a work-in-progress until changes are made in k3s-io/helm-controller
# that allow us to grant only scoped-down permissions to the deployed Job.

@@ -0,0 +1,62 @@
{{- if .Values.systemNamespacesConfigMap.create }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "helm-project-operator.name" . }}-system-namespaces
namespace: {{ template "helm-project-operator.namespace" . }}
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
data:
system-namespaces.json: |-
{
{{- if .Values.projectReleaseNamespaces.enabled }}
{{- if .Values.projectReleaseNamespaces.labelValue }}
"projectReleaseLabelValue": {{ .Values.projectReleaseNamespaces.labelValue | quote }},
{{- else if .Values.global.cattle.systemProjectId }}
"projectReleaseLabelValue": {{ .Values.global.cattle.systemProjectId | quote }},
{{- else }}
"projectReleaseLabelValue": "",
{{- end }}
{{- else }}
"projectReleaseLabelValue": "",
{{- end }}
{{- if not .Values.projectReleaseNamespaces.enabled }}
"systemProjectLabelValues": {{ append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId | toJson }}
{{- else if and (ne (len .Values.global.cattle.systemProjectId) 0) (ne (len .Values.projectReleaseNamespaces.labelValue) 0) (ne .Values.projectReleaseNamespaces.labelValue .Values.global.cattle.systemProjectId) }}
"systemProjectLabelValues": {{ append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId | toJson }}
{{- else if len .Values.otherSystemProjectLabelValues }}
"systemProjectLabelValues": {{ .Values.otherSystemProjectLabelValues | toJson }}
{{- else }}
"systemProjectLabelValues": []
{{- end }}
}
---
{{- if (and .Values.systemNamespacesConfigMap.rbac.enabled .Values.systemNamespacesConfigMap.rbac.subjects) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "helm-project-operator.name" . }}-system-namespaces
namespace: {{ template "helm-project-operator.namespace" . }}
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- "{{ template "helm-project-operator.name" . }}-system-namespaces"
verbs:
- 'get'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "helm-project-operator.name" . }}-system-namespaces
namespace: {{ template "helm-project-operator.namespace" . }}
labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "helm-project-operator.name" . }}-system-namespaces
subjects: {{ .Values.systemNamespacesConfigMap.rbac.subjects | toYaml | nindent 2 }}
{{- end }}
{{- end }}

@@ -0,0 +1,7 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.global.cattle.psp.enabled }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}

@@ -0,0 +1,226 @@
# Default values for helm-project-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Helm Project Operator Configuration
global:
cattle:
clusterId: ""
psp:
enabled: false
projectLabel: field.cattle.io/projectId
systemDefaultRegistry: ""
systemProjectId: ""
url: ""
rbac:
## Create RBAC resources for ServiceAccounts and users
##
create: true
userRoles:
## Create default user ClusterRoles to allow users to interact with ProjectHelmCharts
create: true
## Aggregate default user ClusterRoles into default k8s ClusterRoles
aggregateToDefaultRoles: true
pspAnnotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - name: "image-pull-secret"
helmApiVersion: dummy.cattle.io/v1alpha1
## valuesOverride overrides values that are set on each ProjectHelmChart deployment on an operator-level
## User-provided values will be overwritten based on the values provided here
valuesOverride: {}
## projectReleaseNamespaces are auto-generated namespaces that are created to host Helm Releases
## managed by this operator on behalf of a ProjectHelmChart
projectReleaseNamespaces:
## Enabled determines whether Project Release Namespaces should be created. If false, the underlying
## Helm release will be deployed in the Project Registration Namespace
enabled: true
## labelValue is the ID of the Project that the projectReleaseNamespace should be created within
## If empty, this will be set to the value of global.cattle.systemProjectId
## If global.cattle.systemProjectId is also empty, project release namespaces will be disabled
labelValue: ""
## otherSystemProjectLabelValues are additional project label values that identify namespaces as ones that
## should be treated as part of a system project, i.e. they will be entirely ignored by the operator
## By default, the global.cattle.systemProjectId will be in this list
otherSystemProjectLabelValues: []
## releaseRoleBindings configures RoleBindings automatically created by the Helm Project Operator
## in Project Release Namespaces where underlying Helm charts are deployed
releaseRoleBindings:
## aggregate enables creating these RoleBindings by aggregating the RoleBindings in the
## Project Registration Namespace or the ClusterRoleBindings that bind users to the ClusterRoles
## specified under clusterRoleRefs
aggregate: true
## clusterRoleRefs are the ClusterRoles whose RoleBindings or ClusterRoleBindings should determine
## the RoleBindings created in the Project Release Namespace
##
## By default, these are set to create RoleBindings based on the RoleBindings / ClusterRoleBindings
## attached to the default K8s user-facing ClusterRoles of admin, edit, and view.
## ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
##
clusterRoleRefs:
admin: admin
edit: edit
view: view
hardenedNamespaces:
# Whether to automatically manage the configuration of the default ServiceAccount and
# auto-create a NetworkPolicy for each namespace created by this operator
enabled: true
configuration:
# Values to be applied to each default ServiceAccount created in a managed namespace
serviceAccountSpec:
secrets: []
imagePullSecrets: []
automountServiceAccountToken: false
# Values to be applied to each default generated NetworkPolicy created in a managed namespace
networkPolicySpec:
podSelector: {}
egress: []
ingress: []
policyTypes: ["Ingress", "Egress"]
## systemNamespacesConfigMap is a ConfigMap created to allow users to see valid entries
## for registering a ProjectHelmChart for a given Project on the Rancher Dashboard UI.
## It does not need to be enabled for a non-Rancher use case.
systemNamespacesConfigMap:
## Create indicates whether the system namespaces configmap should be created
## This is a required value for integration with Rancher Dashboard
create: true
## RBAC provides options around the RBAC created to allow users to be able to view
## the systemNamespacesConfigMap; if not specified, only users with the ability to
## view ConfigMaps in the namespace where this chart is deployed will be able to
## properly view the system namespaces on the Rancher Dashboard UI
rbac:
## enabled indicates that we should deploy a RoleBinding and Role to view this ConfigMap
enabled: true
## subjects are the subjects that should be bound to this default RoleBinding
## By default, we allow anyone who is authenticated to the system to be able to view
## this ConfigMap in the deployment namespace
subjects:
- kind: Group
name: system:authenticated
nameOverride: ""
namespaceOverride: ""
image:
repository: rancher/helm-project-operator
tag: v0.1.0
pullPolicy: IfNotPresent
helmController:
# Note: should be disabled for RKE2 / K3s clusters, since they already run Helm Controller to manage internal Kubernetes components
enabled: true
job:
image:
repository: rancher/klipper-helm
tag: v0.7.0-build20220315
helmLocker:
enabled: true
# Additional arguments to be passed into the Helm Project Operator image
additionalArgs: []
## Define which Nodes the Pods are scheduled on.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for use with node taints
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
resources: {}
# limits:
# memory: 500Mi
# cpu: 1000m
# requests:
# memory: 100Mi
# cpu: 100m
containerSecurityContext: {}
# allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# privileged: false
# readOnlyRootFilesystem: true
securityContext: {}
# runAsGroup: 1000
# runAsUser: 1000
# supplementalGroups:
# - 1000
debug: false
debugLevel: 0
cleanup:
image:
repository: rancher/shell
tag: v0.1.19-rc8
pullPolicy: IfNotPresent
## Define which Nodes the Pods are scheduled on.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for use with node taints
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
containerSecurityContext: {}
# allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# privileged: false
# readOnlyRootFilesystem: true
securityContext:
runAsNonRoot: false
runAsUser: 0
resources: {}
# limits:
# memory: 500Mi
# cpu: 1000m
# requests:
# memory: 100Mi
# cpu: 100m

@@ -0,0 +1,43 @@
questions:
- variable: global.cattle.psp.enabled
default: "false"
description: "Flag to enable or disable the installation of PodSecurityPolicies by this chart in the target cluster. If the cluster is running Kubernetes 1.25+, you must update this value to false."
label: "Enable PodSecurityPolicies"
type: boolean
group: "Security Settings"
- variable: helmProjectOperator.helmController.enabled
label: Enable Embedded Helm Controller
description: 'Note: If you are running Prometheus Federator in an RKE2 / K3s cluster before v1.23.14 / v1.24.8 / v1.25.4, this should be disabled.'
type: boolean
group: Helm Controller
- variable: helmProjectOperator.helmLocker.enabled
label: Enable Embedded Helm Locker
type: boolean
group: Helm Locker
- variable: helmProjectOperator.projectReleaseNamespaces.labelValue
label: Project Release Namespace Project ID
description: By default, the System Project is selected. This can be overridden to a different Project (e.g. p-xxxxx)
type: string
required: false
group: Namespaces
- variable: helmProjectOperator.releaseRoleBindings.clusterRoleRefs.admin
label: Admin ClusterRole
description: By default, admin selects Project Owners. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
type: string
default: admin
required: false
group: RBAC
- variable: helmProjectOperator.releaseRoleBindings.clusterRoleRefs.edit
label: Edit ClusterRole
description: By default, edit selects Project Members. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
type: string
default: edit
required: false
group: RBAC
- variable: helmProjectOperator.releaseRoleBindings.clusterRoleRefs.view
label: View ClusterRole
description: By default, view selects Read-Only users. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
type: string
default: view
required: false
group: RBAC

@@ -0,0 +1,3 @@
{{ $.Chart.Name }} has been installed. Check its status by running:
kubectl --namespace {{ template "prometheus-federator.namespace" . }} get pods -l "release={{ $.Release.Name }}"

@@ -0,0 +1,66 @@
# Rancher
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- end -}}
{{- end -}}
# Windows Support
{{/*
Windows clusters add a default taint to Linux nodes,
so add the Linux tolerations below to allow workloads to be scheduled onto those Linux nodes
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
beta.kubernetes.io/os: linux
{{- else -}}
kubernetes.io/os: linux
{{- end -}}
{{- end -}}
# Helm Project Operator
{{/* vim: set filetype=mustache: */}}
{{/* Expand the name of the chart. The name may be suffixed with -alertmanager (13 characters), so subtract 13 from the 63 characters available, leaving 50 */}}
{{- define "prometheus-federator.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 50 | trimSuffix "-" -}}
{{- end }}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "prometheus-federator.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/* Create chart name and version as used by the chart label. */}}
{{- define "prometheus-federator.chartref" -}}
{{- replace "+" "_" .Chart.Version | printf "%s-%s" .Chart.Name -}}
{{- end }}
{{/* Generate basic labels */}}
{{- define "prometheus-federator.labels" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: "{{ replace "+" "_" .Chart.Version }}"
app.kubernetes.io/part-of: {{ template "prometheus-federator.name" . }}
chart: {{ template "prometheus-federator.chartref" . }}
release: {{ $.Release.Name | quote }}
heritage: {{ $.Release.Service | quote }}
{{- if .Values.commonLabels}}
{{ toYaml .Values.commonLabels }}
{{- end }}
{{- end }}

@@ -0,0 +1,94 @@
# Default values for helm-project-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Prometheus Federator Configuration
global:
cattle:
psp:
enabled: false
systemDefaultRegistry: ""
projectLabel: field.cattle.io/projectId
clusterId: ""
systemProjectId: ""
url: ""
rbac:
pspEnabled: true
pspAnnotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - name: "image-pull-secret"
helmProjectOperator:
enabled: true
# ensures that all resources created by the subchart show up as prometheus-federator
helmApiVersion: monitoring.cattle.io/v1alpha1
nameOverride: prometheus-federator
helmController:
# Note: should be disabled for RKE2 / K3s clusters, since they already run Helm Controller to manage internal Kubernetes components
enabled: true
helmLocker:
enabled: true
## valuesOverride overrides values that are set on each Project Prometheus Stack Helm Chart deployment on an operator level
## all values provided here will override any user-provided values automatically
valuesOverride:
federate:
# Change this to point at all Prometheuses you want all your Project Prometheus Stacks to federate from
# By default, this matches the default deployment of Rancher Monitoring
targets:
- rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090
image:
repository: rancher/prometheus-federator
tag: v0.2.0
pullPolicy: IfNotPresent
# Additional arguments to be passed into the Prometheus Federator image
additionalArgs: []
## Define which Nodes the Pods are scheduled on.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for use with node taints
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
resources: {}
# limits:
# memory: 500Mi
# cpu: 1000m
# requests:
# memory: 100Mi
# cpu: 100m
securityContext: {}
# allowPrivilegeEscalation: false
# readOnlyRootFilesystem: true
debug: false
debugLevel: 0

@@ -0,0 +1,33 @@
annotations:
catalog.cattle.io/certified: rancher
catalog.cattle.io/display-name: Project Monitoring
catalog.cattle.io/hidden: "true"
catalog.cattle.io/kube-version: '>= 1.16.0-0 < 1.26.0-0'
catalog.cattle.io/permits-os: linux,windows
catalog.cattle.io/rancher-version: '>= 2.6.5-0 <= 2.8.0-0'
catalog.cattle.io/release-name: rancher-project-monitoring
apiVersion: v2
appVersion: 0.59.1
dependencies:
- condition: grafana.enabled
name: grafana
repository: file://./charts/grafana
description: Collects several related Helm charts, Grafana dashboards, and Prometheus
rules combined with documentation and scripts to provide easy to operate end-to-end
Kubernetes cluster monitoring with Prometheus. Depends on the existence of a Cluster
Prometheus deployed via Prometheus Operator
home: https://github.com/prometheus-operator/kube-prometheus
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
keywords:
- prometheus
- monitoring
kubeVersion: '>=1.16.0-0'
maintainers:
- email: arvind.iyengar@suse.com
name: Arvind
- email: amangeet.samra@suse.com
name: Geet
url: https://github.com/geethub97
name: rancher-project-monitoring
type: application
version: 2.0.0+up0.2.0

@@ -0,0 +1,29 @@
# rancher-project-monitoring
This chart installs a project-scoped version of [`rancher-monitoring`](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting), a Helm chart based on [`kube-prometheus-stack`](https://github.com/prometheus-operator/kube-prometheus). It deploys a collection of Kubernetes manifests, [Grafana](http://grafana.com/) dashboards, and [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) combined with documentation and scripts to provide easy-to-operate, end-to-end Kubernetes project monitoring with [Prometheus](https://prometheus.io/) using the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator). See the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) README for details about components, dashboards, and alerts.
## Prerequisites
- Kubernetes 1.16+
- Helm 3+
## Install Chart
This chart is not intended for standalone use; it's intended to be deployed via [Prometheus Federator](https://github.com/rancher/prometheus-federator). For a Prometheus Stack intended to be deployed standalone, please use [rancher-monitoring](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/) or the upstream [`kube-prometheus-stack`](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) project.
## Dependencies
This chart is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. Specifically, the chart is configured and intended to be deployed alongside [`rancher-monitoring`](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/), which deploys Prometheus Operator alongside a Cluster Prometheus that `rancher-project-monitoring` is configured to federate namespace-scoped metrics from by default.
### Configuration
Since this chart installs a project-scoped version of [`rancher-monitoring`](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/), a Helm chart based on [`kube-prometheus-stack`](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack), most of the options that apply to either of those charts apply to this chart as well (e.g. support for configuring persistent volumes, ingresses, etc.) and can be passed in as part of the `spec.values` of the ProjectHelmChart that deploys this chart, as shown in the example below. However, certain advanced functionality (such as Thanos support) and options that pose security risks in Project environments (e.g. the ability to `ignoreNamespaceSelectors` or modify the existing namespaceSelectors of the Cluster Prometheus, the ability to mount additional scrape configs, etc.) have been removed from the `values.yaml` of the chart. For more information on how to configure values and what they mean, please see the comments and options provided on the `values.yaml` packaged with this chart.
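As a minimal sketch (the namespace and resource values below are illustrative assumptions, not defaults), a ProjectHelmChart passing values through to this chart might look like:
```yaml
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  # hypothetical Project Registration Namespace for a Project with ID p-example
  namespace: cattle-project-p-example
spec:
  # must match the helmApiVersion that Prometheus Federator watches for
  helmApiVersion: monitoring.cattle.io/v1alpha1
  values:
    # passed through as user-provided values for rancher-project-monitoring
    grafana:
      resources:
        limits:
          memory: 500Mi
          cpu: 1000m
```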
## Further Information
For more in-depth documentation on what configuration options mean, please see
- [`rancher-monitoring`](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/)
- [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)
- [Prometheus](https://prometheus.io/docs/introduction/overview/)
- [Grafana](https://github.com/grafana/helm-charts/tree/main/charts/grafana#grafana-helm-chart)

@@ -0,0 +1,10 @@
# Rancher Project Monitoring and Alerting
The chart installs a Project Monitoring Stack, which contains:
- [Prometheus](https://prometheus.io/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator))
- [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator))
- [Grafana](https://github.com/helm/charts/tree/master/stable/grafana) (deployed via an embedded Helm chart)
- Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/)
- Default ServiceMonitors that watch the deployed resources
Note: This chart is not intended for standalone use; it's intended to be deployed via [Prometheus Federator](https://github.com/rancher/prometheus-federator).

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.vscode
.project
.idea/
*.tmproj
OWNERS

@@ -0,0 +1,29 @@
annotations:
catalog.cattle.io/hidden: "true"
catalog.cattle.io/kube-version: '>= 1.16.0-0 < 1.25.0-0'
catalog.cattle.io/os: linux
catalog.rancher.io/certified: rancher
catalog.rancher.io/namespace: cattle-monitoring-system
catalog.rancher.io/release-name: rancher-grafana
apiVersion: v2
appVersion: 9.1.5
description: The leading tool for querying and visualizing time series and metrics.
home: https://grafana.net
icon: https://raw.githubusercontent.com/grafana/grafana/master/public/img/logo_transparent_400x.png
kubeVersion: ^1.8.0-0
maintainers:
- email: zanhsieh@gmail.com
name: zanhsieh
- email: rluckie@cisco.com
name: rtluckie
- email: maor.friedman@redhat.com
name: maorfr
- email: miroslav.hadzhiev@gmail.com
name: Xtigyro
- email: mail@torstenwalter.de
name: torstenwalter
name: grafana
sources:
- https://github.com/grafana/grafana
type: application
version: 6.38.6

@@ -0,0 +1,571 @@
# Grafana Helm Chart
* Installs the web dashboarding system [Grafana](http://grafana.org/)
## Get Repo Info
```console
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```
_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
## Installing the Chart
To install the chart with the release name `my-release`:
```console
helm install my-release grafana/grafana
```
## Uninstalling the Chart
To uninstall/delete the my-release deployment:
```console
helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Upgrading an existing Release to a new major version
A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an
incompatible breaking change needing manual actions.
### To 4.0.0 (And 3.12.1)
This version requires Helm >= 2.12.0.
### To 5.0.0
You have to add `--force` to your `helm upgrade` command, as the labels of the chart have changed.
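For example, upgrading the `my-release` deployment from above:
```console
helm upgrade --force my-release grafana/grafana
```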
### To 6.0.0
This version requires Helm >= 3.1.0.
## Configuration
| Parameter | Description | Default |
|-------------------------------------------|-----------------------------------------------|---------------------------------------------------------|
| `replicas` | Number of nodes | `1` |
| `podDisruptionBudget.minAvailable` | Pod disruption minimum available | `nil` |
| `podDisruptionBudget.maxUnavailable` | Pod disruption maximum unavailable | `nil` |
| `deploymentStrategy` | Deployment strategy | `{ "type": "RollingUpdate" }` |
| `livenessProbe`                           | Liveness Probe settings                        | `{ "httpGet": { "path": "/api/health", "port": 3000 }, "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }` |
| `readinessProbe` | Readiness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 } }`|
| `securityContext` | Deployment securityContext | `{"runAsUser": 472, "runAsGroup": 472, "fsGroup": 472}` |
| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
| `image.repository` | Image repository | `grafana/grafana` |
| `image.tag` | Overrides the Grafana image tag whose default is the chart appVersion (`Must be >= 5.0.0`) | `` |
| `image.sha` | Image sha (optional) | `` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Image pull secrets (can be templated) | `[]` |
| `service.enabled` | Enable grafana service | `true` |
| `service.type` | Kubernetes service type | `ClusterIP` |
| `service.port` | Kubernetes port where service is exposed | `80` |
| `service.portName` | Name of the port on the service | `service` |
| `service.appProtocol` | Adds the appProtocol field to the service | `` |
| `service.targetPort`                      | Internal port targeted by the service         | `3000`                                                   |
| `service.nodePort` | Kubernetes service nodePort | `nil` |
| `service.annotations` | Service annotations (can be templated) | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.clusterIP` | internal cluster service IP | `nil` |
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `nil` |
| `service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to lb (if supported) | `[]` |
| `service.externalIPs` | service external IP addresses | `[]` |
| `headlessService` | Create a headless service | `false` |
| `extraExposePorts` | Additional service ports for sidecar containers| `[]` |
| `hostAliases` | adds rules to the pod's /etc/hosts | `[]` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations (values are templated) | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress accepted path | `/` |
| `ingress.pathType` | Ingress type of path | `Prefix` |
| `ingress.hosts` | Ingress accepted hostnames | `["chart-example.local"]` |
| `ingress.extraPaths` | Ingress extra paths to prepend to every host configuration. Useful when configuring [custom actions with AWS ALB Ingress Controller](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/#actions). Requires `ingress.hosts` to have one or more host entries. | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `extraInitContainers` | Init containers to add to the grafana pod | `{}` |
| `extraContainers` | Sidecar containers to add to the grafana pod | `""` |
| `extraContainerVolumes` | Volumes that can be mounted in sidecar containers | `[]` |
| `extraLabels` | Custom labels for all manifests | `{}` |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `persistence.enabled` | Use persistent volume to store data | `false` |
| `persistence.type` | Type of persistence (`pvc` or `statefulset`) | `pvc` |
| `persistence.size` | Size of persistent volume claim | `10Gi` |
| `persistence.existingClaim` | Use an existing PVC to persist data (can be templated) | `nil` |
| `persistence.storageClassName` | Type of persistent volume claim | `nil` |
| `persistence.accessModes` | Persistence access modes | `[ReadWriteOnce]` |
| `persistence.annotations` | PersistentVolumeClaim annotations | `{}` |
| `persistence.finalizers` | PersistentVolumeClaim finalizers | `[ "kubernetes.io/pvc-protection" ]` |
| `persistence.subPath` | Mount a sub dir of the persistent volume (can be templated) | `nil` |
| `persistence.inMemory.enabled` | If persistence is not enabled, whether to mount the local storage in-memory to improve performance | `false` |
| `persistence.inMemory.sizeLimit` | SizeLimit for the in-memory local storage | `nil` |
| `initChownData.enabled` | If false, don't reset data ownership at startup | true |
| `initChownData.image.repository` | init-chown-data container image repository | `busybox` |
| `initChownData.image.tag` | init-chown-data container image tag | `1.31.1` |
| `initChownData.image.sha` | init-chown-data container image sha (optional)| `""` |
| `initChownData.image.pullPolicy` | init-chown-data container image pull policy | `IfNotPresent` |
| `initChownData.resources` | init-chown-data pod resource requests & limits | `{}` |
| `schedulerName` | Alternate scheduler name | `nil` |
| `env` | Extra environment variables passed to pods | `{}` |
| `envValueFrom` | Environment variables from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. Can be templated | `{}` |
| `envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
| `envFromSecrets` | List of Kubernetes secrets (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `[]` |
| `envFromConfigMaps` | List of Kubernetes ConfigMaps (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `[]` |
| `envRenderSecret`                         | Sensitive environment variables passed to pods and stored as secret | `{}`                                |
| `enableServiceLinks` | Inject Kubernetes services as environment variables. | `true` |
| `extraSecretMounts` | Additional grafana server secret mounts | `[]` |
| `extraVolumeMounts` | Additional grafana server volume mounts | `[]` |
| `createConfigmap` | Enable creating the grafana configmap | `true` |
| `extraConfigmapMounts` | Additional grafana server configMap volume mounts (values are templated) | `[]` |
| `extraEmptyDirMounts` | Additional grafana server emptyDir volume mounts | `[]` |
| `plugins` | Plugins to be loaded along with Grafana | `[]` |
| `datasources` | Configure grafana datasources (passed through tpl) | `{}` |
| `alerting` | Configure grafana alerting (passed through tpl) | `{}` |
| `notifiers` | Configure grafana notifiers | `{}` |
| `dashboardProviders` | Configure grafana dashboard providers | `{}` |
| `dashboards` | Dashboards to import | `{}` |
| `dashboardsConfigMaps` | ConfigMaps reference that contains dashboards | `{}` |
| `grafana.ini` | Grafana's primary configuration | `{}` |
| `ldap.enabled` | Enable LDAP authentication | `false` |
| `ldap.existingSecret` | The name of an existing secret containing the `ldap.toml` file, this must have the key `ldap-toml`. | `""` |
| `ldap.config` | Grafana's LDAP configuration | `""` |
| `annotations` | Deployment annotations | `{}` |
| `labels` | Deployment labels | `{}` |
| `podAnnotations` | Pod annotations | `{}` |
| `podLabels` | Pod labels | `{}` |
| `podPortName` | Name of the grafana port on the pod | `grafana` |
| `lifecycleHooks`                          | Lifecycle hooks for postStart and preStop [Example](https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers)     | `{}`                           |
| `sidecar.image.repository` | Sidecar image repository | `quay.io/kiwigrid/k8s-sidecar` |
| `sidecar.image.tag` | Sidecar image tag | `1.19.2` |
| `sidecar.image.sha` | Sidecar image sha (optional) | `""` |
| `sidecar.imagePullPolicy` | Sidecar image pull policy | `IfNotPresent` |
| `sidecar.resources` | Sidecar resources | `{}` |
| `sidecar.securityContext` | Sidecar securityContext | `{}` |
| `sidecar.enableUniqueFilenames` | Sets the kiwigrid/k8s-sidecar UNIQUE_FILENAMES environment variable. If set to `true` the sidecar will create unique filenames where duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces. | `false` |
| `sidecar.dashboards.enabled` | Enables the cluster wide search for dashboards and adds/updates/deletes them in grafana | `false` |
| `sidecar.dashboards.SCProvider` | Enables creation of sidecar provider | `true` |
| `sidecar.dashboards.provider.name` | Unique name of the grafana provider | `sidecarProvider` |
| `sidecar.dashboards.provider.orgid` | Id of the organisation, to which the dashboards should be added | `1` |
| `sidecar.dashboards.provider.folder` | Logical folder in which grafana groups dashboards | `""` |
| `sidecar.dashboards.provider.disableDelete` | Activate to avoid the deletion of imported dashboards | `false` |
| `sidecar.dashboards.provider.allowUiUpdates` | Allow updating provisioned dashboards from the UI | `false` |
| `sidecar.dashboards.provider.type` | Provider type | `file` |
| `sidecar.dashboards.provider.foldersFromFilesStructure` | Allow Grafana to replicate dashboard structure from filesystem. | `false` |
| `sidecar.dashboards.watchMethod`          | Method to use to detect ConfigMap changes. With WATCH the sidecar will do WATCH requests; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
| `sidecar.skipTlsVerify` | Set to true to skip tls verification for kube api calls | `nil` |
| `sidecar.dashboards.label` | Label that config maps with dashboards should have to be added | `grafana_dashboard` |
| `sidecar.dashboards.labelValue` | Label value that config maps with dashboards should have to be added | `""` |
| `sidecar.dashboards.folder` | Folder in the pod that should hold the collected dashboards (unless `sidecar.dashboards.defaultFolderName` is set). This path will be mounted. | `/tmp/dashboards` |
| `sidecar.dashboards.folderAnnotation` | The annotation the sidecar will look for in configmaps to override the destination folder for files | `nil` |
| `sidecar.dashboards.defaultFolderName` | The default folder name, it will create a subfolder under the `sidecar.dashboards.folder` and put dashboards in there instead | `nil` |
| `sidecar.dashboards.script` | Absolute path to shell script to execute after a configmap got reloaded. | `nil` |
| `sidecar.dashboards.resource`             | Whether the sidecar should look into secrets, configmaps, or both. | `both`                               |
| `sidecar.dashboards.extraMounts` | Additional dashboard sidecar volume mounts. | `[]` |
| `sidecar.datasources.enabled` | Enables the cluster wide search for datasources and adds/updates/deletes them in grafana |`false` |
| `sidecar.datasources.label` | Label that config maps with datasources should have to be added | `grafana_datasource` |
| `sidecar.datasources.labelValue` | Label value that config maps with datasources should have to be added | `""` |
| `sidecar.datasources.resource`            | Whether the sidecar should look into secrets, configmaps, or both. | `both`                               |
| `sidecar.datasources.reloadURL` | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/datasources/reload"` |
| `sidecar.datasources.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
| `sidecar.notifiers.enabled` | Enables the cluster wide search for notifiers and adds/updates/deletes them in grafana | `false` |
| `sidecar.notifiers.label` | Label that config maps with notifiers should have to be added | `grafana_notifier` |
| `sidecar.notifiers.resource`              | Whether the sidecar should look into secrets, configmaps, or both. | `both`                               |
| `smtp.existingSecret` | The name of an existing secret containing the SMTP credentials. | `""` |
| `smtp.userKey` | The key in the existing SMTP secret containing the username. | `"user"` |
| `smtp.passwordKey` | The key in the existing SMTP secret containing the password. | `"password"` |
| `admin.existingSecret` | The name of an existing secret containing the admin credentials (can be templated). | `""` |
| `admin.userKey` | The key in the existing admin secret containing the username. | `"admin-user"` |
| `admin.passwordKey` | The key in the existing admin secret containing the password. | `"admin-password"` |
| `serviceAccount.autoMount` | Automount the service account token in the pod| `true` |
| `serviceAccount.annotations` | ServiceAccount annotations | |
| `serviceAccount.create` | Create service account | `true` |
| `serviceAccount.name` | Service account name to use, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `` |
| `serviceAccount.nameTest` | Service account name to use for test, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `nil` |
| `rbac.create` | Create and use RBAC resources | `true` |
| `rbac.namespaced`                         | Creates Role and RoleBinding instead of the default ClusterRole and ClusterRoleBindings for the grafana instance  | `false` |
| `rbac.useExistingRole`                    | Set to a role name to use an existing role (skipping role creation), while still creating the serviceaccount and a rolebinding to the role name set here. | `nil` |
| `rbac.pspEnabled` | Create PodSecurityPolicy (with `rbac.create`, grant roles permissions as well) | `true` |
| `rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires `rbac.pspEnabled`) | `true` |
| `rbac.extraRoleRules` | Additional rules to add to the Role | [] |
| `rbac.extraClusterRoleRules` | Additional rules to add to the ClusterRole | [] |
| `command` | Define command to be executed by grafana container at startup | `nil` |
| `testFramework.enabled` | Whether to create test-related resources | `true` |
| `testFramework.image` | `test-framework` image repository. | `bats/bats` |
| `testFramework.tag` | `test-framework` image tag. | `v1.4.1` |
| `testFramework.imagePullPolicy` | `test-framework` image pull policy. | `IfNotPresent` |
| `testFramework.securityContext` | `test-framework` securityContext | `{}` |
| `downloadDashboards.env` | Environment variables to be passed to the `download-dashboards` container | `{}` |
| `downloadDashboards.envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
| `downloadDashboards.resources` | Resources of `download-dashboards` container | `{}` |
| `downloadDashboardsImage.repository` | Curl docker image repo | `curlimages/curl` |
| `downloadDashboardsImage.tag` | Curl docker image tag | `7.73.0` |
| `downloadDashboardsImage.sha` | Curl docker image sha (optional) | `""` |
| `downloadDashboardsImage.pullPolicy` | Curl docker image pull policy | `IfNotPresent` |
| `namespaceOverride` | Override the deployment namespace | `""` (`Release.Namespace`) |
| `serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` |
| `serviceMonitor.namespace` | Namespace this servicemonitor is installed in | |
| `serviceMonitor.interval` | How frequently Prometheus should scrape | `1m` |
| `serviceMonitor.path` | Path to scrape | `/metrics` |
| `serviceMonitor.scheme` | Scheme to use for metrics scraping | `http` |
| `serviceMonitor.tlsConfig` | TLS configuration block for the endpoint | `{}` |
| `serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` |
| `serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `30s` |
| `serviceMonitor.relabelings` | MetricRelabelConfigs to apply to samples before ingestion. | `[]` |
| `revisionHistoryLimit` | Number of old ReplicaSets to retain | `10` |
| `imageRenderer.enabled` | Enable the image-renderer deployment & service | `false` |
| `imageRenderer.image.repository` | image-renderer Image repository | `grafana/grafana-image-renderer` |
| `imageRenderer.image.tag` | image-renderer Image tag | `latest` |
| `imageRenderer.image.sha` | image-renderer Image sha (optional) | `""` |
| `imageRenderer.image.pullPolicy` | image-renderer ImagePullPolicy | `Always` |
| `imageRenderer.env` | extra env-vars for image-renderer | `{}` |
| `imageRenderer.serviceAccountName` | image-renderer deployment serviceAccountName | `""` |
| `imageRenderer.securityContext` | image-renderer deployment securityContext | `{}` |
| `imageRenderer.hostAliases` | image-renderer deployment Host Aliases | `[]` |
| `imageRenderer.priorityClassName` | image-renderer deployment priority class | `''` |
| `imageRenderer.service.enabled` | Enable the image-renderer service | `true` |
| `imageRenderer.service.portName` | image-renderer service port name | `http` |
| `imageRenderer.service.port` | image-renderer port used by deployment | `8081` |
| `imageRenderer.service.targetPort` | image-renderer service port used by service | `8081` |
| `imageRenderer.appProtocol` | Adds the appProtocol field to the service | `` |
| `imageRenderer.grafanaSubPath` | Grafana sub path to use for image renderer callback url | `''` |
| `imageRenderer.podPortName` | name of the image-renderer port on the pod | `http` |
| `imageRenderer.revisionHistoryLimit` | number of image-renderer replica sets to keep | `10` |
| `imageRenderer.networkPolicy.limitIngress` | Enable a NetworkPolicy to limit inbound traffic from only the created grafana pods | `true` |
| `imageRenderer.networkPolicy.limitEgress` | Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods | `false` |
| `imageRenderer.resources`                 | Set resource limits for image-renderer pods   | `{}`                                                     |
| `imageRenderer.nodeSelector` | Node labels for pod assignment | `{}` |
| `imageRenderer.tolerations` | Toleration labels for pod assignment | `[]` |
| `imageRenderer.affinity` | Affinity settings for pod assignment | `{}` |
| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources. | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed | `{}` |
| `networkPolicy.ingress` | Enable the creation of an ingress network policy | `true` |
| `networkPolicy.egress.enabled` | Enable the creation of an egress network policy | `false` |
| `networkPolicy.egress.ports` | An array of ports to allow for the egress | `[]` |
| `enableKubeBackwardCompatibility`         | Enable backward compatibility with Kubernetes versions below 1.13, whose pod definitions don't have the enableServiceLinks option | `false`     |
### Example ingress with path
With grafana 6.3 and above
```yaml
grafana.ini:
server:
domain: monitoring.example.com
root_url: "%(protocol)s://%(domain)s/grafana"
serve_from_sub_path: true
ingress:
enabled: true
hosts:
- "monitoring.example.com"
path: "/grafana"
```
### Example of extraVolumeMounts
A volume can be of type persistentVolumeClaim or hostPath, but not both at the same time.
If neither the existingClaim nor the hostPath argument is given, the type defaults to emptyDir.
```yaml
- extraVolumeMounts:
- name: plugins
mountPath: /var/lib/grafana/plugins
subPath: configs/grafana/plugins
existingClaim: existing-grafana-claim
readOnly: false
- name: dashboards
mountPath: /var/lib/grafana/dashboards
hostPath: /usr/shared/grafana/dashboards
readOnly: false
```
## Import dashboards
There are a few methods to import dashboards to Grafana. Below are some examples and explanations as to how to use each method:
```yaml
dashboards:
default:
some-dashboard:
json: |
{
"annotations":
...
# Complete json file here
...
"title": "Some Dashboard",
"uid": "abcd1234",
"version": 1
}
custom-dashboard:
# This is a path to a file inside the dashboards directory inside the chart directory
file: dashboards/custom-dashboard.json
prometheus-stats:
# Ref: https://grafana.com/dashboards/2
gnetId: 2
revision: 2
datasource: Prometheus
local-dashboard:
url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
```
## BASE64 dashboards
Dashboards can be stored on a server that does not return JSON directly and instead returns a Base64-encoded file (e.g. Gerrit).
A `b64content` parameter has been added to the url use case: if you set it to true alongside the url entry, the content is Base64-decoded before the file is saved to disk.
If this entry is not set, or is set to false, no decoding is applied to the file before saving it to disk.
### Gerrit use case
The Gerrit API for downloading files has the following schema: <https://yourgerritserver/a/{project-name}/branches/{branch-id}/files/{file-id}/content>, where {project-name} and
{file-id} usually contain '/' in their values, which MUST be replaced by %2F. For example, if project-name is user/repo, branch-id is master, and file-id is dir1/dir2/dashboard,
the url value is <https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content>
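For instance, a dashboard entry combining the Gerrit URL above with `b64content` might look like the following (the dashboard name here is illustrative):
```yaml
dashboards:
  default:
    gerrit-dashboard:
      url: https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
      b64content: true
```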
## Sidecar for dashboards
If the parameter `sidecar.dashboards.enabled` is set, a sidecar container is deployed in the grafana
pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with
a label as defined in `sidecar.dashboards.label`. The files defined in those configmaps are written
to a folder and accessed by grafana. Changes to the configmaps are monitored and the imported
dashboards are deleted/updated.
It is recommended to use one configmap per dashboard; removing one of several dashboards stored in a single
configmap is currently not properly mirrored in grafana.
Example dashboard config:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: sample-grafana-dashboard
labels:
grafana_dashboard: "1"
data:
k8s-dashboard.json: |-
[...]
```
## Sidecar for datasources
If the parameter `sidecar.datasources.enabled` is set, an init container is deployed in the grafana
pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
filters out the ones with a label as defined in `sidecar.datasources.label`. The files defined in
those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
the data sources in grafana can be imported.
Secrets are recommended over configmaps for this use case because datasources usually contain private
data like usernames and passwords. Secrets are the more appropriate cluster resource to manage those.
Example values to add a datasource adapted from [Grafana](http://docs.grafana.org/administration/provisioning/#example-datasource-config-file):
```yaml
datasources:
datasources.yaml:
apiVersion: 1
datasources:
# <string, required> name of the datasource. Required
- name: Graphite
# <string, required> datasource type. Required
type: graphite
# <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
access: proxy
# <int> org id. will default to orgId 1 if not specified
orgId: 1
# <string> url
url: http://localhost:8080
# <string> database password, if used
password:
# <string> database user, if used
user:
# <string> database name, if used
database:
# <bool> enable/disable basic auth
basicAuth:
# <string> basic auth username
basicAuthUser:
# <string> basic auth password
basicAuthPassword:
# <bool> enable/disable with credentials headers
withCredentials:
# <bool> mark as default datasource. Max one per org
isDefault:
# <map> fields that will be converted to json and stored in json_data
jsonData:
graphiteVersion: "1.1"
tlsAuth: true
tlsAuthWithCACert: true
# <string> json object of data that will be encrypted.
secureJsonData:
tlsCACert: "..."
tlsClientCert: "..."
tlsClientKey: "..."
version: 1
# <bool> allow users to edit datasources from the UI.
editable: false
```
## Sidecar for notifiers
If the parameter `sidecar.notifiers.enabled` is set, an init container is deployed in the grafana
pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
filters out the ones with a label as defined in `sidecar.notifiers.label`. The files defined in
those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
the notification channels in grafana can be imported. The secrets must be created before
`helm install` so that the notifiers init container can list the secrets.
Secrets are recommended over configmaps for this use case because alert notification channels usually contain
private data like SMTP usernames and passwords. Secrets are the more appropriate cluster resource to manage those.
Example notifiers config adapted from [Grafana](https://grafana.com/docs/grafana/latest/administration/provisioning/#alert-notification-channels):
```yaml
notifiers:
- name: notification-channel-1
type: slack
uid: notifier1
# either
org_id: 2
# or
org_name: Main Org.
is_default: true
send_reminder: true
frequency: 1h
disable_resolve_message: false
# See `Supported Settings` section for settings supporter for each
# alert notification type.
settings:
recipient: 'XXX'
token: 'xoxb'
uploadImage: true
url: https://slack.com
delete_notifiers:
- name: notification-channel-1
uid: notifier1
org_id: 2
- name: notification-channel-2
# default org_id: 1
```
## How to serve Grafana with a path prefix (/grafana)
In order to serve Grafana with a prefix (e.g., <http://example.com/grafana>), add the following to your values.yaml.
```yaml
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
path: /grafana/?(.*)
hosts:
- k8s.example.dev
grafana.ini:
server:
root_url: http://localhost:3000/grafana # this host can be localhost
```
## How to securely reference secrets in grafana.ini
This example uses Grafana [file providers](https://grafana.com/docs/grafana/latest/administration/configuration/#file-provider) for secret values and the `extraSecretMounts` configuration flag (Additional grafana server secret mounts) to mount the secrets.
In grafana.ini:
```yaml
grafana.ini:
[auth.generic_oauth]
enabled = true
client_id = $__file{/etc/secrets/auth_generic_oauth/client_id}
client_secret = $__file{/etc/secrets/auth_generic_oauth/client_secret}
```
Existing secret, or created along with helm:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
name: auth-generic-oauth-secret
type: Opaque
stringData:
client_id: <value>
client_secret: <value>
```
Include in the `extraSecretMounts` configuration flag:
```yaml
- extraSecretMounts:
- name: auth-generic-oauth-secret-mount
secretName: auth-generic-oauth-secret
defaultMode: 0440
mountPath: /etc/secrets/auth_generic_oauth
readOnly: true
```
### extraSecretMounts using a Container Storage Interface (CSI) provider
This example uses a CSI driver e.g. retrieving secrets using [Azure Key Vault Provider](https://github.com/Azure/secrets-store-csi-driver-provider-azure)
```yaml
- extraSecretMounts:
- name: secrets-store-inline
mountPath: /run/secrets
readOnly: true
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "my-provider"
nodePublishSecretRef:
name: akv-creds
```
## Image Renderer Plug-In
This chart supports enabling [remote image rendering](https://github.com/grafana/grafana-image-renderer/blob/master/README.md#run-in-docker)
```yaml
imageRenderer:
enabled: true
```
### Image Renderer NetworkPolicy
By default the image-renderer pods will have a network policy which only allows ingress traffic from the created grafana instance
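As a sketch, that default can be made explicit (and egress optionally restricted as well) using the `imageRenderer.networkPolicy` values documented in the table above:
```yaml
imageRenderer:
  enabled: true
  networkPolicy:
    # default: only ingress traffic from the created grafana pods is allowed
    limitIngress: true
    # optionally also limit the renderer's outbound traffic to the grafana pods
    limitEgress: true
```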
### High Availability for unified alerting
If you want to run Grafana in a high availability cluster you need to enable
the headless service by setting `headlessService: true` in your `values.yaml`
file.
As a next step, you have to set up `grafana.ini` in your `values.yaml` so that it
makes use of the headless service to obtain all the IPs of the
cluster. Replace ``{{ Name }}`` with the name of your helm deployment.
```yaml
grafana.ini:
...
unified_alerting:
enabled: true
ha_peers: {{ Name }}-headless:9094
alerting:
enabled: false
```

@@ -0,0 +1,54 @@
1. Get your '{{ .Values.adminUser }}' user password by running:
kubectl get secret --namespace {{ template "grafana.namespace" . }} {{ template "grafana.fullname" . }} -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port {{ .Values.service.port }} on the following DNS name from within your cluster:
{{ template "grafana.fullname" . }}.{{ template "grafana.namespace" . }}.svc.cluster.local
{{ if .Values.ingress.enabled }}
If you bind grafana to 80, please update values in values.yaml and reinstall:
```
securityContext:
runAsUser: 0
runAsGroup: 0
fsGroup: 0
command:
- "setcap"
- "'cap_net_bind_service=+ep'"
- "/usr/sbin/grafana-server &&"
- "sh"
- "/run.sh"
```
For details, refer to https://grafana.com/docs/installation/configuration/#http-port.
Otherwise, grafana will crash on startup.
From outside the cluster, the server URL(s) are:
{{- range .Values.ingress.hosts }}
http://{{ . }}
{{- end }}
{{ else }}
Get the Grafana URL to visit by running these commands in the same shell:
{{ if contains "NodePort" .Values.service.type -}}
export NODE_PORT=$(kubectl get --namespace {{ template "grafana.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "grafana.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ template "grafana.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{ else if contains "LoadBalancer" .Values.service.type -}}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get svc --namespace {{ template "grafana.namespace" . }} -w {{ template "grafana.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ template "grafana.namespace" . }} {{ template "grafana.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port -}}
{{ else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ template "grafana.namespace" . }} -l "app.kubernetes.io/name={{ template "grafana.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace {{ template "grafana.namespace" . }} port-forward $POD_NAME 3000
{{- end }}
{{- end }}
3. Login with the password from step 1 and the username: {{ .Values.adminUser }}
{{- if not .Values.persistence.enabled }}
#################################################################################
###### WARNING: Persistence is disabled!!! You will lose your data when #####
###### the Grafana pod is terminated. #####
#################################################################################
{{- end }}


@ -0,0 +1,214 @@
# Rancher
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- end -}}
{{- end -}}
# Windows Support
{{/*
Windows clusters add a default taint to Linux nodes; add the Linux tolerations
below so that workloads can be scheduled onto those Linux nodes.
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
beta.kubernetes.io/os: linux
{{- else -}}
kubernetes.io/os: linux
{{- end -}}
{{- end -}}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "grafana.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If the release name contains the chart name, it will be used as the full name.
*/}}
{{- define "grafana.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "grafana.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account
*/}}
{{- define "grafana.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "grafana.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
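{{/*
Create the name of the service account used by the test pods
*/}}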
{{- define "grafana.serviceAccountNameTest" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (print (include "grafana.fullname" .) "-test") .Values.serviceAccount.nameTest }}
{{- else -}}
{{ default "default" .Values.serviceAccount.nameTest }}
{{- end -}}
{{- end -}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "grafana.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "grafana.labels" -}}
helm.sh/chart: {{ include "grafana.chart" . }}
{{ include "grafana.selectorLabels" . }}
{{- if or .Chart.AppVersion .Values.image.tag }}
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels }}
{{- end }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "grafana.selectorLabels" -}}
app.kubernetes.io/name: {{ include "grafana.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Common labels for the image renderer
*/}}
{{- define "grafana.imageRenderer.labels" -}}
helm.sh/chart: {{ include "grafana.chart" . }}
{{ include "grafana.imageRenderer.selectorLabels" . }}
{{- if or .Chart.AppVersion .Values.image.tag }}
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels ImageRenderer
*/}}
{{- define "grafana.imageRenderer.selectorLabels" -}}
app.kubernetes.io/name: {{ include "grafana.name" . }}-image-renderer
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Looks up an existing secret and reuses its password. If none exists, it
generates a new password and uses that.
*/}}
{{- define "grafana.password" -}}
{{- $secret := (lookup "v1" "Secret" (include "grafana.namespace" .) (include "grafana.fullname" .) ) -}}
{{- if $secret -}}
{{- index $secret "data" "admin-password" -}}
{{- else -}}
{{- (randAlphaNum 40) | b64enc | quote -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for rbac.
*/}}
{{- define "grafana.rbac.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }}
{{- print "rbac.authorization.k8s.io/v1" -}}
{{- else -}}
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for HorizontalPodAutoscaler.
*/}}
{{- define "grafana.hpa.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
{{- print "autoscaling/v2" -}}
{{- else if .Capabilities.APIVersions.Has "autoscaling/v1" }}
{{- print "autoscaling/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "grafana.ingress.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" .Capabilities.KubeVersion.Version) -}}
{{- print "networking.k8s.io/v1" -}}
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "extensions/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for podDisruptionBudget.
*/}}
{{- define "grafana.podDisruptionBudget.apiVersion" -}}
{{- if $.Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" -}}
{{- print "policy/v1" -}}
{{- else -}}
{{- print "policy/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/*
Return if ingress is stable.
*/}}
{{- define "grafana.ingress.isStable" -}}
{{- eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1" -}}
{{- end -}}
{{/*
Return if ingress supports ingressClassName.
*/}}
{{- define "grafana.ingress.supportsIngressClassName" -}}
{{- or (eq (include "grafana.ingress.isStable" .) "true") (and (eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) -}}
{{- end -}}
{{/*
Return if ingress supports pathType.
*/}}
{{- define "grafana.ingress.supportsPathType" -}}
{{- or (eq (include "grafana.ingress.isStable" .) "true") (and (eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) -}}
{{- end -}}

View File

@ -0,0 +1,885 @@
{{- define "grafana.pod" -}}
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
serviceAccountName: {{ template "grafana.serviceAccountName" . }}
automountServiceAccountToken: {{ .Values.serviceAccount.autoMount }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- with .Values.hostAliases }}
hostAliases:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- if ( or .Values.persistence.enabled .Values.dashboards .Values.sidecar.notifiers.enabled .Values.extraInitContainers (and .Values.sidecar.datasources.enabled .Values.sidecar.datasources.initDatasources)) }}
initContainers:
{{- end }}
{{- if ( and .Values.persistence.enabled .Values.initChownData.enabled ) }}
- name: init-chown-data
{{- if .Values.initChownData.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.initChownData.image.repository }}:{{ .Values.initChownData.image.tag }}@sha256:{{ .Values.initChownData.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.initChownData.image.repository }}:{{ .Values.initChownData.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.initChownData.image.pullPolicy }}
securityContext:
runAsNonRoot: false
runAsUser: 0
command: ["chown", "-R", "{{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.runAsGroup }}", "/var/lib/grafana"]
{{- with .Values.initChownData.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
volumeMounts:
- name: storage
mountPath: "/var/lib/grafana"
{{- if .Values.persistence.subPath }}
subPath: {{ tpl .Values.persistence.subPath . }}
{{- end }}
{{- end }}
{{- if .Values.dashboards }}
- name: download-dashboards
{{- if .Values.downloadDashboardsImage.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.downloadDashboardsImage.repository }}:{{ .Values.downloadDashboardsImage.tag }}@sha256:{{ .Values.downloadDashboardsImage.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.downloadDashboardsImage.repository }}:{{ .Values.downloadDashboardsImage.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.downloadDashboardsImage.pullPolicy }}
command: ["/bin/sh"]
args: [ "-c", "mkdir -p /var/lib/grafana/dashboards/default && /bin/sh -x /etc/grafana/download_dashboards.sh" ]
{{- with .Values.downloadDashboards.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
env:
{{- range $key, $value := .Values.downloadDashboards.env }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
{{- with .Values.downloadDashboards.securityContext }}
securityContext:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if .Values.downloadDashboards.envFromSecret }}
envFrom:
- secretRef:
name: {{ tpl .Values.downloadDashboards.envFromSecret . }}
{{- end }}
volumeMounts:
- name: config
mountPath: "/etc/grafana/download_dashboards.sh"
subPath: download_dashboards.sh
- name: storage
mountPath: "/var/lib/grafana"
{{- if .Values.persistence.subPath }}
subPath: {{ tpl .Values.persistence.subPath . }}
{{- end }}
{{- range .Values.extraSecretMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
{{- end }}
{{- end }}
{{- if and .Values.sidecar.datasources.enabled .Values.sidecar.datasources.initDatasources }}
- name: {{ template "grafana.name" . }}-init-sc-datasources
{{- if .Values.sidecar.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
env:
{{- range $key, $value := .Values.sidecar.datasources.env }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
{{- if .Values.sidecar.datasources.ignoreAlreadyProcessed }}
- name: IGNORE_ALREADY_PROCESSED
value: "true"
{{- end }}
- name: METHOD
value: "LIST"
- name: LABEL
value: "{{ .Values.sidecar.datasources.label }}"
{{- if .Values.sidecar.datasources.labelValue }}
- name: LABEL_VALUE
value: {{ quote .Values.sidecar.datasources.labelValue }}
{{- end }}
{{- if or .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }}
- name: LOG_LEVEL
value: {{ default .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }}
{{- end }}
- name: FOLDER
value: "/etc/grafana/provisioning/datasources"
- name: RESOURCE
value: {{ quote .Values.sidecar.datasources.resource }}
{{- if .Values.sidecar.enableUniqueFilenames }}
- name: UNIQUE_FILENAMES
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
{{- end }}
- name: NAMESPACE
value: "{{ template "project-prometheus-stack.projectNamespaceList" . }}"
{{- if .Values.sidecar.skipTlsVerify }}
- name: SKIP_TLS_VERIFY
value: "{{ .Values.sidecar.skipTlsVerify }}"
{{- end }}
{{- with .Values.sidecar.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.securityContext }}
securityContext:
{{- toYaml . | nindent 6 }}
{{- end }}
volumeMounts:
- name: sc-datasources-volume
mountPath: "/etc/grafana/provisioning/datasources"
{{- end }}
{{- if .Values.sidecar.notifiers.enabled }}
- name: {{ template "grafana.name" . }}-sc-notifiers
{{- if .Values.sidecar.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
env:
{{- range $key, $value := .Values.sidecar.notifiers.env }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
{{- if .Values.sidecar.notifiers.ignoreAlreadyProcessed }}
- name: IGNORE_ALREADY_PROCESSED
value: "true"
{{- end }}
- name: METHOD
value: LIST
- name: LABEL
value: "{{ .Values.sidecar.notifiers.label }}"
{{- if .Values.sidecar.notifiers.labelValue }}
- name: LABEL_VALUE
value: {{ quote .Values.sidecar.notifiers.labelValue }}
{{- end }}
{{- if or .Values.sidecar.logLevel .Values.sidecar.notifiers.logLevel }}
- name: LOG_LEVEL
value: {{ default .Values.sidecar.logLevel .Values.sidecar.notifiers.logLevel }}
{{- end }}
- name: FOLDER
value: "/etc/grafana/provisioning/notifiers"
- name: RESOURCE
value: {{ quote .Values.sidecar.notifiers.resource }}
{{- if .Values.sidecar.enableUniqueFilenames }}
- name: UNIQUE_FILENAMES
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
{{- end }}
- name: NAMESPACE
value: "{{ template "project-prometheus-stack.projectNamespaceList" . }}"
{{- if .Values.sidecar.skipTlsVerify }}
- name: SKIP_TLS_VERIFY
value: "{{ .Values.sidecar.skipTlsVerify }}"
{{- end }}
{{- with .Values.sidecar.livenessProbe }}
livenessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.readinessProbe }}
readinessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.securityContext }}
securityContext:
{{- toYaml . | nindent 6 }}
{{- end }}
volumeMounts:
- name: sc-notifiers-volume
mountPath: "/etc/grafana/provisioning/notifiers"
{{- end}}
{{- if .Values.extraInitContainers }}
{{ tpl (toYaml .Values.extraInitContainers) . | indent 2 }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- $root := . }}
{{- range .Values.image.pullSecrets }}
- name: {{ tpl . $root }}
{{- end }}
{{- end }}
{{- if not .Values.enableKubeBackwardCompatibility }}
enableServiceLinks: {{ .Values.enableServiceLinks }}
{{- end }}
containers:
{{- if .Values.sidecar.dashboards.enabled }}
- name: {{ template "grafana.name" . }}-sc-dashboard
{{- if .Values.sidecar.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
env:
{{- range $key, $value := .Values.sidecar.dashboards.env }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
{{- if .Values.sidecar.dashboards.ignoreAlreadyProcessed }}
- name: IGNORE_ALREADY_PROCESSED
value: "true"
{{- end }}
- name: METHOD
value: {{ .Values.sidecar.dashboards.watchMethod }}
- name: LABEL
value: "{{ .Values.sidecar.dashboards.label }}"
{{- if .Values.sidecar.dashboards.labelValue }}
- name: LABEL_VALUE
value: {{ quote .Values.sidecar.dashboards.labelValue }}
{{- end }}
{{- if or .Values.sidecar.logLevel .Values.sidecar.dashboards.logLevel }}
- name: LOG_LEVEL
value: {{ default .Values.sidecar.logLevel .Values.sidecar.dashboards.logLevel }}
{{- end }}
- name: FOLDER
value: "{{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}"
- name: RESOURCE
value: {{ quote .Values.sidecar.dashboards.resource }}
{{- if .Values.sidecar.enableUniqueFilenames }}
- name: UNIQUE_FILENAMES
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
{{- end }}
- name: NAMESPACE
value: "{{ template "project-prometheus-stack.projectNamespaceList" . }}"
{{- if .Values.sidecar.skipTlsVerify }}
- name: SKIP_TLS_VERIFY
value: "{{ .Values.sidecar.skipTlsVerify }}"
{{- end }}
{{- if .Values.sidecar.dashboards.folderAnnotation }}
- name: FOLDER_ANNOTATION
value: "{{ .Values.sidecar.dashboards.folderAnnotation }}"
{{- end }}
{{- if .Values.sidecar.dashboards.script }}
- name: SCRIPT
value: "{{ .Values.sidecar.dashboards.script }}"
{{- end }}
{{- if .Values.sidecar.dashboards.watchServerTimeout }}
{{- if ne .Values.sidecar.dashboards.watchMethod "WATCH" }}
{{- fail (printf "Cannot use .Values.sidecar.dashboards.watchServerTimeout with .Values.sidecar.dashboards.watchMethod %s" .Values.sidecar.dashboards.watchMethod) }}
{{- end }}
- name: WATCH_SERVER_TIMEOUT
value: "{{ .Values.sidecar.dashboards.watchServerTimeout }}"
{{- end }}
{{- if .Values.sidecar.dashboards.watchClientTimeout }}
{{- if ne .Values.sidecar.dashboards.watchMethod "WATCH" }}
{{- fail (printf "Cannot use .Values.sidecar.dashboards.watchClientTimeout with .Values.sidecar.dashboards.watchMethod %s" .Values.sidecar.dashboards.watchMethod) }}
{{- end }}
- name: WATCH_CLIENT_TIMEOUT
value: "{{ .Values.sidecar.dashboards.watchClientTimeout }}"
{{- end }}
{{- with .Values.sidecar.livenessProbe }}
livenessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.readinessProbe }}
readinessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.securityContext }}
securityContext:
{{- toYaml . | nindent 6 }}
{{- end }}
volumeMounts:
- name: sc-dashboard-volume
mountPath: {{ .Values.sidecar.dashboards.folder | quote }}
{{- if .Values.sidecar.dashboards.extraMounts }}
{{- toYaml .Values.sidecar.dashboards.extraMounts | trim | nindent 6}}
{{- end }}
{{- end}}
{{- if .Values.sidecar.datasources.enabled }}
- name: {{ template "grafana.name" . }}-sc-datasources
{{- if .Values.sidecar.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
env:
{{- range $key, $value := .Values.sidecar.datasources.env }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
{{- if .Values.sidecar.datasources.ignoreAlreadyProcessed }}
- name: IGNORE_ALREADY_PROCESSED
value: "true"
{{- end }}
- name: METHOD
value: {{ .Values.sidecar.datasources.watchMethod }}
- name: LABEL
value: "{{ .Values.sidecar.datasources.label }}"
{{- if .Values.sidecar.datasources.labelValue }}
- name: LABEL_VALUE
value: {{ quote .Values.sidecar.datasources.labelValue }}
{{- end }}
{{- if or .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }}
- name: LOG_LEVEL
value: {{ default .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }}
{{- end }}
- name: FOLDER
value: "/etc/grafana/provisioning/datasources"
- name: RESOURCE
value: {{ quote .Values.sidecar.datasources.resource }}
{{- if .Values.sidecar.enableUniqueFilenames }}
- name: UNIQUE_FILENAMES
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
{{- end }}
- name: NAMESPACE
value: "{{ template "project-prometheus-stack.projectNamespaceList" . }}"
{{- if .Values.sidecar.skipTlsVerify }}
- name: SKIP_TLS_VERIFY
value: "{{ .Values.sidecar.skipTlsVerify }}"
{{- end }}
{{- if .Values.sidecar.datasources.script }}
- name: SCRIPT
value: "{{ .Values.sidecar.datasources.script }}"
{{- end }}
{{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
- name: REQ_USERNAME
valueFrom:
secretKeyRef:
name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.userKey | default "admin-user" }}
{{- end }}
{{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
- name: REQ_PASSWORD
valueFrom:
secretKeyRef:
name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.passwordKey | default "admin-password" }}
{{- end }}
{{- if not .Values.sidecar.datasources.skipReload }}
- name: REQ_URL
value: {{ .Values.sidecar.datasources.reloadURL }}
- name: REQ_METHOD
value: POST
{{- end }}
{{- if .Values.sidecar.datasources.watchServerTimeout }}
{{- if ne .Values.sidecar.datasources.watchMethod "WATCH" }}
{{- fail (printf "Cannot use .Values.sidecar.datasources.watchServerTimeout with .Values.sidecar.datasources.watchMethod %s" .Values.sidecar.datasources.watchMethod) }}
{{- end }}
- name: WATCH_SERVER_TIMEOUT
value: "{{ .Values.sidecar.datasources.watchServerTimeout }}"
{{- end }}
{{- if .Values.sidecar.datasources.watchClientTimeout }}
{{- if ne .Values.sidecar.datasources.watchMethod "WATCH" }}
{{- fail (printf "Cannot use .Values.sidecar.datasources.watchClientTimeout with .Values.sidecar.datasources.watchMethod %s" .Values.sidecar.datasources.watchMethod) }}
{{- end }}
- name: WATCH_CLIENT_TIMEOUT
value: "{{ .Values.sidecar.datasources.watchClientTimeout }}"
{{- end }}
{{- with .Values.sidecar.livenessProbe }}
livenessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.readinessProbe }}
readinessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.securityContext }}
securityContext:
{{- toYaml . | nindent 6 }}
{{- end }}
volumeMounts:
- name: sc-datasources-volume
mountPath: "/etc/grafana/provisioning/datasources"
{{- end}}
{{- if .Values.sidecar.plugins.enabled }}
- name: {{ template "grafana.name" . }}-sc-plugins
{{- if .Values.sidecar.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
env:
{{- range $key, $value := .Values.sidecar.plugins.env }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
{{- if .Values.sidecar.plugins.ignoreAlreadyProcessed }}
- name: IGNORE_ALREADY_PROCESSED
value: "true"
{{- end }}
- name: METHOD
value: {{ .Values.sidecar.plugins.watchMethod }}
- name: LABEL
value: "{{ .Values.sidecar.plugins.label }}"
{{- if .Values.sidecar.plugins.labelValue }}
- name: LABEL_VALUE
value: {{ quote .Values.sidecar.plugins.labelValue }}
{{- end }}
{{- if or .Values.sidecar.logLevel .Values.sidecar.plugins.logLevel }}
- name: LOG_LEVEL
value: {{ default .Values.sidecar.logLevel .Values.sidecar.plugins.logLevel }}
{{- end }}
- name: FOLDER
value: "/etc/grafana/provisioning/plugins"
- name: RESOURCE
value: {{ quote .Values.sidecar.plugins.resource }}
{{- if .Values.sidecar.enableUniqueFilenames }}
- name: UNIQUE_FILENAMES
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
{{- end }}
- name: NAMESPACE
value: "{{ template "project-prometheus-stack.projectNamespaceList" . }}"
{{- if .Values.sidecar.plugins.script }}
- name: SCRIPT
value: "{{ .Values.sidecar.plugins.script }}"
{{- end }}
{{- if .Values.sidecar.skipTlsVerify }}
- name: SKIP_TLS_VERIFY
value: "{{ .Values.sidecar.skipTlsVerify }}"
{{- end }}
{{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
- name: REQ_USERNAME
valueFrom:
secretKeyRef:
name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.userKey | default "admin-user" }}
{{- end }}
{{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
- name: REQ_PASSWORD
valueFrom:
secretKeyRef:
name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.passwordKey | default "admin-password" }}
{{- end }}
{{- if not .Values.sidecar.plugins.skipReload }}
- name: REQ_URL
value: {{ .Values.sidecar.plugins.reloadURL }}
- name: REQ_METHOD
value: POST
{{- end }}
{{- if .Values.sidecar.plugins.watchServerTimeout }}
{{- if ne .Values.sidecar.plugins.watchMethod "WATCH" }}
{{- fail (printf "Cannot use .Values.sidecar.plugins.watchServerTimeout with .Values.sidecar.plugins.watchMethod %s" .Values.sidecar.plugins.watchMethod) }}
{{- end }}
- name: WATCH_SERVER_TIMEOUT
value: "{{ .Values.sidecar.plugins.watchServerTimeout }}"
{{- end }}
{{- if .Values.sidecar.plugins.watchClientTimeout }}
{{- if ne .Values.sidecar.plugins.watchMethod "WATCH" }}
{{- fail (printf "Cannot use .Values.sidecar.plugins.watchClientTimeout with .Values.sidecar.plugins.watchMethod %s" .Values.sidecar.plugins.watchMethod) }}
{{- end }}
- name: WATCH_CLIENT_TIMEOUT
value: "{{ .Values.sidecar.plugins.watchClientTimeout }}"
{{- end }}
{{- with .Values.sidecar.livenessProbe }}
livenessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.readinessProbe }}
readinessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.sidecar.securityContext }}
securityContext:
{{- toYaml . | nindent 6 }}
{{- end }}
volumeMounts:
- name: sc-plugins-volume
mountPath: "/etc/grafana/provisioning/plugins"
{{- end}}
- name: {{ .Chart.Name }}
{{- if .Values.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}@sha256:{{ .Values.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
{{- end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.command }}
command:
{{- range .Values.command }}
- {{ . }}
{{- end }}
{{- end}}
{{- with .Values.containerSecurityContext }}
securityContext:
{{- toYaml . | nindent 6 }}
{{- end }}
volumeMounts:
- name: config
mountPath: "/etc/grafana/grafana.ini"
subPath: grafana.ini
{{- if .Values.ldap.enabled }}
- name: ldap
mountPath: "/etc/grafana/ldap.toml"
subPath: ldap.toml
{{- end }}
{{- $root := . }}
{{- range .Values.extraConfigmapMounts }}
- name: {{ tpl .name $root }}
mountPath: {{ tpl .mountPath $root }}
subPath: {{ (tpl .subPath $root) | default "" }}
readOnly: {{ .readOnly }}
{{- end }}
- name: storage
mountPath: "/var/lib/grafana"
{{- if .Values.persistence.subPath }}
subPath: {{ tpl .Values.persistence.subPath . }}
{{- end }}
{{- if .Values.dashboards }}
{{- range $provider, $dashboards := .Values.dashboards }}
{{- range $key, $value := $dashboards }}
{{- if (or (hasKey $value "json") (hasKey $value "file")) }}
- name: dashboards-{{ $provider }}
mountPath: "/var/lib/grafana/dashboards/{{ $provider }}/{{ $key }}.json"
subPath: "{{ $key }}.json"
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{- if .Values.dashboardsConfigMaps }}
{{- range (keys .Values.dashboardsConfigMaps | sortAlpha) }}
- name: dashboards-{{ . }}
mountPath: "/var/lib/grafana/dashboards/{{ . }}"
{{- end }}
{{- end }}
{{- if .Values.datasources }}
{{- range (keys .Values.datasources | sortAlpha) }}
- name: config
mountPath: "/etc/grafana/provisioning/datasources/{{ . }}"
subPath: {{ . | quote }}
{{- end }}
{{- end }}
{{- if .Values.notifiers }}
{{- range (keys .Values.notifiers | sortAlpha) }}
- name: config
mountPath: "/etc/grafana/provisioning/notifiers/{{ . }}"
subPath: {{ . | quote }}
{{- end }}
{{- end }}
{{- if .Values.alerting }}
{{- range (keys .Values.alerting | sortAlpha) }}
- name: config
mountPath: "/etc/grafana/provisioning/alerting/{{ . }}"
subPath: {{ . | quote }}
{{- end }}
{{- end }}
{{- if .Values.dashboardProviders }}
{{- range (keys .Values.dashboardProviders | sortAlpha) }}
- name: config
mountPath: "/etc/grafana/provisioning/dashboards/{{ . }}"
subPath: {{ . | quote }}
{{- end }}
{{- end }}
{{- if .Values.sidecar.dashboards.enabled }}
- name: sc-dashboard-volume
mountPath: {{ .Values.sidecar.dashboards.folder | quote }}
{{ if .Values.sidecar.dashboards.SCProvider }}
- name: sc-dashboard-provider
mountPath: "/etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml"
subPath: provider.yaml
{{- end}}
{{- end}}
{{- if .Values.sidecar.datasources.enabled }}
- name: sc-datasources-volume
mountPath: "/etc/grafana/provisioning/datasources"
{{- end}}
{{- if .Values.sidecar.plugins.enabled }}
- name: sc-plugins-volume
mountPath: "/etc/grafana/provisioning/plugins"
{{- end}}
{{- if .Values.sidecar.notifiers.enabled }}
- name: sc-notifiers-volume
mountPath: "/etc/grafana/provisioning/notifiers"
{{- end}}
{{- range .Values.extraSecretMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
subPath: {{ .subPath | default "" }}
{{- end }}
{{- range .Values.extraVolumeMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
subPath: {{ .subPath | default "" }}
readOnly: {{ .readOnly }}
{{- end }}
{{- range .Values.extraEmptyDirMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
{{- end }}
ports:
- name: {{ .Values.podPortName }}
containerPort: {{ .Values.service.targetPort }}
protocol: TCP
env:
{{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.userKey | default "admin-user" }}
{{- end }}
{{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.passwordKey | default "admin-password" }}
{{- end }}
{{- if .Values.plugins }}
- name: GF_INSTALL_PLUGINS
valueFrom:
configMapKeyRef:
name: {{ template "grafana.fullname" . }}
key: plugins
{{- end }}
{{- if .Values.smtp.existingSecret }}
- name: GF_SMTP_USER
valueFrom:
secretKeyRef:
name: {{ .Values.smtp.existingSecret }}
key: {{ .Values.smtp.userKey | default "user" }}
- name: GF_SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.smtp.existingSecret }}
key: {{ .Values.smtp.passwordKey | default "password" }}
{{- end }}
{{- if .Values.imageRenderer.enabled }}
- name: GF_RENDERING_SERVER_URL
value: http://{{ template "grafana.fullname" . }}-image-renderer.{{ template "grafana.namespace" . }}:{{ .Values.imageRenderer.service.port }}/render
- name: GF_RENDERING_CALLBACK_URL
value: {{ .Values.imageRenderer.grafanaProtocol }}://{{ template "grafana.fullname" . }}.{{ template "grafana.namespace" . }}:{{ .Values.service.port }}/{{ .Values.imageRenderer.grafanaSubPath }}
{{- end }}
- name: GF_PATHS_DATA
value: {{ (get .Values "grafana.ini").paths.data }}
- name: GF_PATHS_LOGS
value: {{ (get .Values "grafana.ini").paths.logs }}
- name: GF_PATHS_PLUGINS
value: {{ (get .Values "grafana.ini").paths.plugins }}
- name: GF_PATHS_PROVISIONING
value: {{ (get .Values "grafana.ini").paths.provisioning }}
{{- range $key, $value := .Values.envValueFrom }}
- name: {{ $key | quote }}
valueFrom:
{{ tpl (toYaml $value) $ | indent 10 }}
{{- end }}
{{- range $key, $value := .Values.env }}
- name: "{{ tpl $key $ }}"
value: "{{ tpl (print $value) $ }}"
{{- end }}
{{- if or .Values.envFromSecret (or .Values.envRenderSecret .Values.envFromSecrets) .Values.envFromConfigMaps }}
envFrom:
{{- if .Values.envFromSecret }}
- secretRef:
name: {{ tpl .Values.envFromSecret . }}
{{- end }}
{{- if .Values.envRenderSecret }}
- secretRef:
name: {{ template "grafana.fullname" . }}-env
{{- end }}
{{- range .Values.envFromSecrets }}
- secretRef:
name: {{ tpl .name $ }}
optional: {{ .optional | default false }}
{{- end }}
{{- range .Values.envFromConfigMaps }}
- configMapRef:
name: {{ tpl .name $ }}
optional: {{ .optional | default false }}
{{- end }}
{{- end }}
{{- with .Values.livenessProbe }}
livenessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.readinessProbe }}
readinessProbe:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if .Values.lifecycleHooks }}
lifecycle: {{ tpl (.Values.lifecycleHooks | toYaml) . | nindent 6 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.extraContainers }}
{{ tpl . $ | indent 2 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 2 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 2 }}
{{- end }}
{{- $root := . }}
{{- with .Values.affinity }}
affinity:
{{ tpl (toYaml .) $root | indent 2 }}
{{- end }}
{{- with .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 2 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 2 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 2 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "grafana.fullname" . }}
{{- $root := . }}
{{- range .Values.extraConfigmapMounts }}
- name: {{ tpl .name $root }}
configMap:
name: {{ tpl .configMap $root }}
{{- if .items }}
items: {{ toYaml .items | nindent 6 }}
{{- end }}
{{- end }}
{{- if .Values.dashboards }}
{{- range (keys .Values.dashboards | sortAlpha) }}
- name: dashboards-{{ . }}
configMap:
name: {{ template "grafana.fullname" $ }}-dashboards-{{ . }}
{{- end }}
{{- end }}
{{- if .Values.dashboardsConfigMaps }}
{{ $root := . }}
{{- range $provider, $name := .Values.dashboardsConfigMaps }}
- name: dashboards-{{ $provider }}
configMap:
name: {{ tpl $name $root }}
{{- end }}
{{- end }}
{{- if .Values.ldap.enabled }}
- name: ldap
secret:
{{- if .Values.ldap.existingSecret }}
secretName: {{ .Values.ldap.existingSecret }}
{{- else }}
secretName: {{ template "grafana.fullname" . }}
{{- end }}
items:
- key: ldap-toml
path: ldap.toml
{{- end }}
{{- if and .Values.persistence.enabled (eq .Values.persistence.type "pvc") }}
- name: storage
persistentVolumeClaim:
claimName: {{ tpl (.Values.persistence.existingClaim | default (include "grafana.fullname" .)) . }}
{{- else if and .Values.persistence.enabled (eq .Values.persistence.type "statefulset") }}
# nothing: storage is provided by the StatefulSet's volumeClaimTemplates
{{- else }}
- name: storage
{{- if .Values.persistence.inMemory.enabled }}
emptyDir:
medium: Memory
{{- if .Values.persistence.inMemory.sizeLimit }}
sizeLimit: {{ .Values.persistence.inMemory.sizeLimit }}
{{- end -}}
{{- else }}
emptyDir: {}
{{- end -}}
{{- end -}}
{{- if .Values.sidecar.dashboards.enabled }}
- name: sc-dashboard-volume
{{- if .Values.sidecar.dashboards.sizeLimit }}
emptyDir:
sizeLimit: {{ .Values.sidecar.dashboards.sizeLimit }}
{{- else }}
emptyDir: {}
{{- end -}}
{{- if .Values.sidecar.dashboards.SCProvider }}
- name: sc-dashboard-provider
configMap:
name: {{ template "grafana.fullname" . }}-config-dashboards
{{- end }}
{{- end }}
{{- if .Values.sidecar.datasources.enabled }}
- name: sc-datasources-volume
{{- if .Values.sidecar.datasources.sizeLimit }}
emptyDir:
sizeLimit: {{ .Values.sidecar.datasources.sizeLimit }}
{{- else }}
emptyDir: {}
{{- end -}}
{{- end -}}
{{- if .Values.sidecar.plugins.enabled }}
- name: sc-plugins-volume
{{- if .Values.sidecar.plugins.sizeLimit }}
emptyDir:
sizeLimit: {{ .Values.sidecar.plugins.sizeLimit }}
{{- else }}
emptyDir: {}
{{- end -}}
{{- end -}}
{{- if .Values.sidecar.notifiers.enabled }}
- name: sc-notifiers-volume
{{- if .Values.sidecar.notifiers.sizeLimit }}
emptyDir:
sizeLimit: {{ .Values.sidecar.notifiers.sizeLimit }}
{{- else }}
emptyDir: {}
{{- end -}}
{{- end -}}
{{- range .Values.extraSecretMounts }}
{{- if .secretName }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
defaultMode: {{ .defaultMode }}
{{- if .items }}
items: {{ toYaml .items | nindent 6 }}
{{- end }}
{{- else if .projected }}
- name: {{ .name }}
projected: {{- toYaml .projected | nindent 6 }}
{{- else if .csi }}
- name: {{ .name }}
csi: {{- toYaml .csi | nindent 6 }}
{{- end }}
{{- end }}
{{- range .Values.extraVolumeMounts }}
- name: {{ .name }}
{{- if .existingClaim }}
persistentVolumeClaim:
claimName: {{ .existingClaim }}
{{- else if .hostPath }}
hostPath:
path: {{ .hostPath }}
{{- else if .csi }}
csi:
data:
{{ toYaml .data | nindent 6 }}
{{- else }}
emptyDir: {}
{{- end }}
{{- end }}
{{- range .Values.extraEmptyDirMounts }}
- name: {{ .name }}
emptyDir: {}
{{- end -}}
{{- if .Values.extraContainerVolumes }}
{{ tpl (toYaml .Values.extraContainerVolumes) . | indent 2 }}
{{- end }}
{{- end }}


@ -0,0 +1,25 @@
{{- if and .Values.rbac.create (not .Values.rbac.namespaced) (not .Values.rbac.useExistingRole) }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
name: {{ template "grafana.fullname" . }}-clusterrole
{{- if or .Values.sidecar.dashboards.enabled (or .Values.rbac.extraClusterRoleRules (or .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled)) }}
rules:
{{- if or .Values.sidecar.dashboards.enabled (or .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled) }}
- apiGroups: [""] # "" indicates the core API group
resources: ["configmaps", "secrets"]
verbs: ["get", "watch", "list"]
{{- end}}
{{- with .Values.rbac.extraClusterRoleRules }}
{{ toYaml . | indent 0 }}
{{- end}}
{{- else }}
rules: []
{{- end}}
{{- end}}


@ -0,0 +1,24 @@
{{- if and .Values.rbac.create (not .Values.rbac.namespaced) }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "grafana.fullname" . }}-clusterrolebinding
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
subjects:
- kind: ServiceAccount
name: {{ template "grafana.serviceAccountName" . }}
namespace: {{ template "grafana.namespace" . }}
roleRef:
kind: ClusterRole
{{- if (not .Values.rbac.useExistingRole) }}
name: {{ template "grafana.fullname" . }}-clusterrole
{{- else }}
name: {{ .Values.rbac.useExistingRole }}
{{- end }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}


@ -0,0 +1,29 @@
{{- if .Values.sidecar.dashboards.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
name: {{ template "grafana.fullname" . }}-config-dashboards
namespace: {{ template "grafana.namespace" . }}
data:
provider.yaml: |-
apiVersion: 1
providers:
- name: '{{ .Values.sidecar.dashboards.provider.name }}'
orgId: {{ .Values.sidecar.dashboards.provider.orgid }}
{{- if not .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
folder: '{{ .Values.sidecar.dashboards.provider.folder }}'
{{- end}}
type: {{ .Values.sidecar.dashboards.provider.type }}
disableDeletion: {{ .Values.sidecar.dashboards.provider.disableDelete }}
allowUiUpdates: {{ .Values.sidecar.dashboards.provider.allowUiUpdates }}
updateIntervalSeconds: {{ .Values.sidecar.dashboards.provider.updateIntervalSeconds | default 30 }}
options:
foldersFromFilesStructure: {{ .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}
{{- end}}


@ -0,0 +1,117 @@
{{- if .Values.createConfigmap }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
data:
{{- if .Values.plugins }}
plugins: {{ join "," .Values.plugins }}
{{- end }}
grafana.ini: |
{{- range $elem, $elemVal := index .Values "grafana.ini" }}
{{- if not (kindIs "map" $elemVal) }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $value := index .Values "grafana.ini" }}
{{- if kindIs "map" $value }}
[{{ $key }}]
{{- range $elem, $elemVal := $value }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.datasources }}
{{ $root := . }}
{{- range $key, $value := .Values.datasources }}
{{ $key }}: |
{{ tpl (toYaml $value | indent 4) $root }}
{{- end -}}
{{- end -}}
{{- if .Values.notifiers }}
{{- range $key, $value := .Values.notifiers }}
{{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end -}}
{{- end -}}
{{- if .Values.alerting }}
{{ $root := . }}
{{- range $key, $value := .Values.alerting }}
{{ $key }}: |
{{ tpl (toYaml $value | indent 4) $root }}
{{- end -}}
{{- end -}}
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end -}}
{{- end -}}
{{- if .Values.dashboards }}
download_dashboards.sh: |
#!/usr/bin/env sh
set -euf
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{- range $value.providers }}
mkdir -p {{ .options.path }}
{{- end }}
{{- end }}
{{- end }}
{{ $dashboardProviders := .Values.dashboardProviders }}
{{- range $provider, $dashboards := .Values.dashboards }}
{{- range $key, $value := $dashboards }}
{{- if (or (hasKey $value "gnetId") (hasKey $value "url")) }}
curl -skf \
--connect-timeout 60 \
--max-time 60 \
{{- if not $value.b64content }}
-H "Accept: application/json" \
{{- if $value.token }}
-H "Authorization: token {{ $value.token }}" \
{{- end }}
{{- if $value.bearerToken }}
-H "Authorization: Bearer {{ $value.bearerToken }}" \
{{- end }}
{{- if $value.gitlabToken }}
-H "PRIVATE-TOKEN: {{ $value.gitlabToken }}" \
{{- end }}
-H "Content-Type: application/json;charset=UTF-8" \
{{ end }}
{{- $dpPath := "" -}}
{{- range $kd := (index $dashboardProviders "dashboardproviders.yaml").providers }}
{{- if eq $kd.name $provider -}}
{{- $dpPath = $kd.options.path -}}
{{- end -}}
{{- end -}}
{{- if $value.url -}}"{{ $value.url }}"{{- else -}}"https://grafana.com/api/dashboards/{{ $value.gnetId }}/revisions/{{- if $value.revision -}}{{ $value.revision }}{{- else -}}1{{- end -}}/download"{{- end -}}{{ if $value.datasource }} | sed '/-- .* --/! s/"datasource":.*,/"datasource": "{{ $value.datasource }}",/g'{{ end }}{{- if $value.b64content -}} | base64 -d {{- end -}} \
> "{{- if $dpPath -}}{{ $dpPath }}{{- else -}}/var/lib/grafana/dashboards/{{ $provider }}{{- end -}}/{{ $key }}.json"
{{- end }}
{{- end -}}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,35 @@
{{- if .Values.dashboards }}
{{ $files := .Files }}
{{- range $provider, $dashboards := .Values.dashboards }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" $ }}-dashboards-{{ $provider }}
namespace: {{ template "grafana.namespace" $ }}
labels:
{{- include "grafana.labels" $ | nindent 4 }}
dashboard-provider: {{ $provider }}
{{- if $dashboards }}
data:
{{- $dashboardFound := false }}
{{- range $key, $value := $dashboards }}
{{- if (or (hasKey $value "json") (hasKey $value "file")) }}
{{- $dashboardFound = true }}
{{ print $key | indent 2 }}.json:
{{- if hasKey $value "json" }}
|-
{{ $value.json | indent 6 }}
{{- end }}
{{- if hasKey $value "file" }}
{{ toYaml ( $files.Get $value.file ) | indent 4}}
{{- end }}
{{- end }}
{{- end }}
{{- if not $dashboardFound }}
{}
{{- end }}
{{- end }}
---
{{- end }}
{{- end }}


@ -0,0 +1,50 @@
{{ if (and (not .Values.useStatefulSet) (or (not .Values.persistence.enabled) (eq .Values.persistence.type "pvc"))) }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- if .Values.labels }}
{{ toYaml .Values.labels | indent 4 }}
{{- end }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if and (not .Values.autoscaling.enabled) (.Values.replicas) }}
replicas: {{ .Values.replicas }}
{{- end }}
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
{{- with .Values.deploymentStrategy }}
strategy:
{{ toYaml . | trim | indent 4 }}
{{- end }}
template:
metadata:
labels:
{{- include "grafana.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{ toYaml . | indent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }}
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
{{- end }}
{{- if .Values.envRenderSecret }}
checksum/secret-env: {{ include (print $.Template.BasePath "/secret-env.yaml") . | sha256sum }}
{{- end }}
{{- with .Values.podAnnotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- include "grafana.pod" . | indent 6 }}
{{- end }}


@ -0,0 +1,22 @@
{{- if or .Values.headlessService (and .Values.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type "statefulset"))}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "grafana.fullname" . }}-headless
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
clusterIP: None
selector:
{{- include "grafana.selectorLabels" . | nindent 4 }}
type: ClusterIP
ports:
- protocol: TCP
port: 3000
targetPort: {{ .Values.service.targetPort }}
{{- end }}


@ -0,0 +1,21 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: {{ template "grafana.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "grafana.name" . }}
helm.sh/chart: {{ template "grafana.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "grafana.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{ toYaml .Values.autoscaling.metrics | indent 4 }}
{{- end }}


@ -0,0 +1,123 @@
{{ if .Values.imageRenderer.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}-image-renderer
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.imageRenderer.labels" . | nindent 4 }}
{{- if .Values.imageRenderer.labels }}
{{ toYaml .Values.imageRenderer.labels | indent 4 }}
{{- end }}
{{- with .Values.imageRenderer.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
replicas: {{ .Values.imageRenderer.replicas }}
revisionHistoryLimit: {{ .Values.imageRenderer.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
{{- with .Values.imageRenderer.deploymentStrategy }}
strategy:
{{ toYaml . | trim | indent 4 }}
{{- end }}
template:
metadata:
labels:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 8 }}
{{- with .Values.imageRenderer.podLabels }}
{{ toYaml . | indent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- with .Values.imageRenderer.podAnnotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.imageRenderer.schedulerName }}
schedulerName: "{{ .Values.imageRenderer.schedulerName }}"
{{- end }}
{{- if .Values.imageRenderer.serviceAccountName }}
serviceAccountName: "{{ .Values.imageRenderer.serviceAccountName }}"
{{- else }}
serviceAccountName: {{ template "grafana.serviceAccountName" . }}
{{- end }}
{{- if .Values.imageRenderer.securityContext }}
securityContext:
{{- toYaml .Values.imageRenderer.securityContext | nindent 8 }}
{{- end }}
{{- if .Values.imageRenderer.hostAliases }}
hostAliases:
{{- toYaml .Values.imageRenderer.hostAliases | nindent 8 }}
{{- end }}
{{- if .Values.imageRenderer.priorityClassName }}
priorityClassName: {{ .Values.imageRenderer.priorityClassName }}
{{- end }}
{{- if .Values.imageRenderer.image.pullSecrets }}
imagePullSecrets:
{{- $root := . }}
{{- range .Values.imageRenderer.image.pullSecrets }}
- name: {{ tpl . $root }}
{{- end}}
{{- end }}
containers:
- name: {{ .Chart.Name }}-image-renderer
{{- if .Values.imageRenderer.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}@sha256:{{ .Values.imageRenderer.image.sha }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.imageRenderer.image.pullPolicy }}
{{- if .Values.imageRenderer.command }}
command:
{{- range .Values.imageRenderer.command }}
- {{ . }}
{{- end }}
{{- end}}
ports:
- name: {{ .Values.imageRenderer.service.portName }}
containerPort: {{ .Values.imageRenderer.service.targetPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: {{ .Values.imageRenderer.service.portName }}
env:
- name: HTTP_PORT
value: {{ .Values.imageRenderer.service.targetPort | quote }}
{{- range $key, $value := .Values.imageRenderer.env }}
- name: {{ $key | quote }}
value: {{ $value | quote }}
{{- end }}
securityContext:
capabilities:
drop: ['all']
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
volumeMounts:
- mountPath: /tmp
name: image-renderer-tmpfs
{{- with .Values.imageRenderer.resources }}
resources:
{{ toYaml . | indent 12 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.imageRenderer.nodeSelector }}
{{ toYaml .Values.imageRenderer.nodeSelector | indent 8 }}
{{- end }}
{{- $root := . }}
{{- with .Values.imageRenderer.affinity }}
affinity:
{{ tpl (toYaml .) $root | indent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.imageRenderer.tolerations }}
{{ toYaml .Values.imageRenderer.tolerations | indent 8 }}
{{- end }}
volumes:
- name: image-renderer-tmpfs
emptyDir: {}
{{- end }}


@ -0,0 +1,73 @@
{{- if and (.Values.imageRenderer.enabled) (.Values.imageRenderer.networkPolicy.limitIngress) }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "grafana.fullname" . }}-image-renderer-ingress
namespace: {{ template "grafana.namespace" . }}
annotations:
comment: Limit image-renderer ingress traffic from grafana
spec:
podSelector:
matchLabels:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
{{- if .Values.imageRenderer.podLabels }}
{{ toYaml .Values.imageRenderer.podLabels | nindent 6 }}
{{- end }}
policyTypes:
- Ingress
ingress:
- ports:
- port: {{ .Values.imageRenderer.service.targetPort }}
protocol: TCP
from:
- namespaceSelector:
matchLabels:
name: {{ template "grafana.namespace" . }}
podSelector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 14 }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | nindent 14 }}
{{- end }}
{{ end }}
{{- if and (.Values.imageRenderer.enabled) (.Values.imageRenderer.networkPolicy.limitEgress) }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "grafana.fullname" . }}-image-renderer-egress
namespace: {{ template "grafana.namespace" . }}
annotations:
comment: Limit image-renderer egress traffic to grafana
spec:
podSelector:
matchLabels:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
{{- if .Values.imageRenderer.podLabels }}
{{ toYaml .Values.imageRenderer.podLabels | nindent 6 }}
{{- end }}
policyTypes:
- Egress
egress:
# allow dns resolution
- ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
# talk only to grafana
- ports:
- port: {{ .Values.service.port }}
protocol: TCP
to:
- podSelector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 14 }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | nindent 14 }}
{{- end }}
{{ end }}


@ -0,0 +1,33 @@
{{ if .Values.imageRenderer.enabled }}
{{ if .Values.imageRenderer.service.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "grafana.fullname" . }}-image-renderer
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.imageRenderer.labels" . | nindent 4 }}
{{- if .Values.imageRenderer.service.labels }}
{{ toYaml .Values.imageRenderer.service.labels | indent 4 }}
{{- end }}
{{- with .Values.imageRenderer.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: ClusterIP
{{- if .Values.imageRenderer.service.clusterIP }}
clusterIP: {{ .Values.imageRenderer.service.clusterIP }}
{{end}}
ports:
- name: {{ .Values.imageRenderer.service.portName }}
port: {{ .Values.imageRenderer.service.port }}
protocol: TCP
targetPort: {{ .Values.imageRenderer.service.targetPort }}
{{- if .Values.imageRenderer.appProtocol }}
appProtocol: {{ .Values.imageRenderer.appProtocol }}
{{- end }}
selector:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 4 }}
{{ end }}
{{ end }}


@ -0,0 +1,78 @@
{{- if .Values.ingress.enabled -}}
{{- $ingressApiIsStable := eq (include "grafana.ingress.isStable" .) "true" -}}
{{- $ingressSupportsIngressClassName := eq (include "grafana.ingress.supportsIngressClassName" .) "true" -}}
{{- $ingressSupportsPathType := eq (include "grafana.ingress.supportsPathType" .) "true" -}}
{{- $fullName := include "grafana.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $ingressPathType := .Values.ingress.pathType -}}
{{- $extraPaths := .Values.ingress.extraPaths -}}
apiVersion: {{ include "grafana.ingress.apiVersion" . }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- if .Values.ingress.labels }}
{{ toYaml .Values.ingress.labels | indent 4 }}
{{- end }}
{{- if .Values.ingress.annotations }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ tpl $value $ | quote }}
{{- end }}
{{- end }}
spec:
{{- if and $ingressSupportsIngressClassName .Values.ingress.ingressClassName }}
ingressClassName: {{ .Values.ingress.ingressClassName }}
{{- end -}}
{{- if .Values.ingress.tls }}
tls:
{{ tpl (toYaml .Values.ingress.tls) $ | indent 4 }}
{{- end }}
rules:
{{- if .Values.ingress.hosts }}
{{- range .Values.ingress.hosts }}
- host: {{ tpl . $}}
http:
paths:
{{- if $extraPaths }}
{{ toYaml $extraPaths | indent 10 }}
{{- end }}
- path: {{ $ingressPath }}
{{- if $ingressSupportsPathType }}
pathType: {{ $ingressPathType }}
{{- end }}
backend:
{{- if $ingressApiIsStable }}
service:
name: {{ $fullName }}
port:
number: {{ $servicePort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end }}
{{- else }}
- http:
paths:
- backend:
{{- if $ingressApiIsStable }}
service:
name: {{ $fullName }}
port:
number: {{ $servicePort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- if $ingressPath }}
path: {{ $ingressPath }}
{{- end }}
{{- if $ingressSupportsPathType }}
pathType: {{ $ingressPathType }}
{{- end }}
{{- end -}}
{{- end }}

View File

@ -0,0 +1,52 @@
{{- if .Values.networkPolicy.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.labels }}
{{ toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
policyTypes:
{{- if .Values.networkPolicy.ingress }}
- Ingress
{{- end }}
{{- if .Values.networkPolicy.egress.enabled }}
- Egress
{{- end }}
podSelector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
{{- if .Values.networkPolicy.egress.enabled }}
egress:
- ports:
{{ .Values.networkPolicy.egress.ports | toJson }}
{{- end }}
{{- if .Values.networkPolicy.ingress }}
ingress:
- ports:
- port: {{ .Values.service.targetPort }}
{{- if not .Values.networkPolicy.allowExternal }}
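      # With allowExternal disabled, ingress to Grafana is restricted to pods
      # labeled {{ template "grafana.fullname" . }}-client: "true", namespaces
      # matched by explicitNamespacesSelector, and pods carrying the chart
      # labels plus role: read.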
from:
- podSelector:
matchLabels:
{{ template "grafana.fullname" . }}-client: "true"
{{- with .Values.networkPolicy.explicitNamespacesSelector }}
- namespaceSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
- podSelector:
matchLabels:
{{- include "grafana.labels" . | nindent 14 }}
role: read
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,94 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-nginx-proxy-config
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
data:
nginx.conf: |-
worker_processes auto;
error_log /dev/stdout warn;
pid /var/cache/nginx/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
log_format main '[$time_local - $status] $remote_addr - $remote_user $request ($http_referer)';
proxy_connect_timeout 10;
proxy_read_timeout 180;
proxy_send_timeout 5;
proxy_buffering off;
proxy_cache_path /var/cache/nginx/cache levels=1:2 keys_zone=my_zone:100m inactive=1d max_size=10g;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8080;
access_log off;
gzip on;
gzip_min_length 1k;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
proxy_set_header Host $http_host;
location /api/dashboards {
proxy_pass http://localhost:3000;
}
location /api/search {
proxy_pass http://localhost:3000;
sub_filter_types application/json;
sub_filter_once off;
sub_filter '"url":"/d' '"url":"d';
}
location /api/live/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
proxy_pass http://localhost:3000;
}
location / {
proxy_cache my_zone;
proxy_cache_valid 200 302 1d;
proxy_cache_valid 301 30d;
proxy_cache_valid any 5m;
proxy_cache_bypass $http_cache_control;
add_header X-Proxy-Cache $upstream_cache_status;
add_header Cache-Control "public";
proxy_pass http://localhost:3000/;
sub_filter_once off;
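        # Rewrite Grafana's appSubUrl and relative asset/avatar URLs so the UI
        # resolves correctly when served through the Rancher API proxy path
        # (cluster-scoped when global.cattle.clusterId is set).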
{{- if not (empty .Values.global.cattle.clusterId) }}
sub_filter '"appSubUrl":""' '"appSubUrl":"/k8s/clusters/{{ .Values.global.cattle.clusterId }}/api/v1/namespaces/{{ template "grafana.namespace" . }}/services/http:{{ template "grafana.fullname" . }}:{{ .Values.service.port }}/proxy"';
{{- else }}
sub_filter '"appSubUrl":""' '"appSubUrl":"/api/v1/namespaces/{{ template "grafana.namespace" . }}/services/http:{{ template "grafana.fullname" . }}:{{ .Values.service.port }}/proxy"';
{{- end }}
sub_filter '"url":"/' '"url":"./';
sub_filter ':"/avatar/' ':"avatar/';
if ($request_filename ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$) {
expires 90d;
}
rewrite ^/k8s/clusters/.*/proxy(.*) /$1 break;
}
}
}

View File

@ -0,0 +1,22 @@
{{- if .Values.podDisruptionBudget }}
apiVersion: {{ include "grafana.podDisruptionBudget.apiVersion" . }}
kind: PodDisruptionBudget
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- if .Values.labels }}
{{ toYaml .Values.labels | indent 4 }}
{{- end }}
spec:
{{- if .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
{{- end }}

View File

@ -0,0 +1,45 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "grafana.fullname" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- if .Values.rbac.pspAnnotations }}
annotations: {{ toYaml .Values.rbac.pspAnnotations | nindent 4 }}
{{- end }}
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
# Default set from Docker, with DAC_OVERRIDE and CHOWN
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'csi'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
{{- end }}

View File

@ -0,0 +1,35 @@
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type "pvc") }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.persistence.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
{{- with .Values.persistence.finalizers }}
finalizers:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
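  {{- /*
  The `required` checks below fail rendering early when persistence.accessModes
  is unset or empty, rather than emitting an invalid PersistentVolumeClaim.
  */}}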
accessModes:
{{- $_ := required "Must provide at least one access mode for persistent volumes used by Grafana" .Values.persistence.accessModes }}
{{- $_ := required "Must provide at least one access mode for persistent volumes used by Grafana" (first .Values.persistence.accessModes) }}
{{- range .Values.persistence.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ required "Must provide size for persistent volumes used by Grafana" .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClassName }}
storageClassName: {{ .Values.persistence.storageClassName }}
{{- end -}}
{{- with .Values.persistence.selectorLabels }}
selector:
matchLabels:
{{ toYaml . | indent 6 }}
{{- end }}
{{- end -}}

View File

@ -0,0 +1,32 @@
{{- if and .Values.rbac.create (not .Values.rbac.useExistingRole) -}}
apiVersion: {{ template "grafana.rbac.apiVersion" . }}
kind: Role
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
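{{- /*
Rules are rendered when PSPs are enforced, or when namespaced RBAC is in use
and any sidecar (dashboards, datasources, plugins) or rbac.extraRoleRules
needs read access to ConfigMaps and Secrets; otherwise the Role carries an
empty rule set.
*/}}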
{{- if or .Values.global.cattle.psp.enabled (and .Values.rbac.namespaced (or .Values.sidecar.dashboards.enabled (or .Values.sidecar.datasources.enabled (or .Values.sidecar.plugins.enabled .Values.rbac.extraRoleRules)))) }}
rules:
{{- if .Values.global.cattle.psp.enabled }}
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: [{{ template "grafana.fullname" . }}]
{{- end }}
{{- if and .Values.rbac.namespaced (or .Values.sidecar.dashboards.enabled (or .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled)) }}
- apiGroups: [""] # "" indicates the core API group
resources: ["configmaps", "secrets"]
verbs: ["get", "watch", "list"]
{{- end }}
{{- with .Values.rbac.extraRoleRules }}
{{ toYaml . | indent 0 }}
  {{- end }}
{{- else }}
rules: []
{{- end }}
{{- end }}

View File

@ -0,0 +1,25 @@
{{- if .Values.rbac.create -}}
apiVersion: {{ template "grafana.rbac.apiVersion" . }}
kind: RoleBinding
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
{{- if (not .Values.rbac.useExistingRole) }}
name: {{ template "grafana.fullname" . }}
{{- else }}
name: {{ .Values.rbac.useExistingRole }}
{{- end }}
subjects:
- kind: ServiceAccount
name: {{ template "grafana.serviceAccountName" . }}
namespace: {{ template "grafana.namespace" . }}
{{- end -}}

View File

@ -0,0 +1,14 @@
{{- if .Values.envRenderSecret }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "grafana.fullname" . }}-env
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
type: Opaque
data:
{{- range $key, $val := .Values.envRenderSecret }}
{{ $key }}: {{ $val | b64enc | quote }}
{{- end -}}
{{- end }}

View File

@ -0,0 +1,26 @@
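{{- /*
This Secret is created only when Grafana must manage credentials itself:
either no existing admin secret or GF_SECURITY_ADMIN_PASSWORD* override is
supplied (and initial admin creation is not disabled), or LDAP is enabled
without an existing secret to hold ldap-toml.
*/}}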
{{- if or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret)) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
type: Opaque
data:
{{- if and (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) }}
admin-user: {{ .Values.adminUser | b64enc | quote }}
{{- if .Values.adminPassword }}
admin-password: {{ .Values.adminPassword | b64enc | quote }}
{{- else }}
admin-password: {{ template "grafana.password" . }}
{{- end }}
{{- end }}
{{- if not .Values.ldap.existingSecret }}
ldap-toml: {{ tpl .Values.ldap.config $ | b64enc | quote }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,55 @@
{{ if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- $root := . }}
{{- with .Values.service.annotations }}
annotations:
{{ tpl (toYaml . | indent 4) $root }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
  {{ end }}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
ports:
- name: {{ .Values.service.portName }}
port: {{ .Values.service.port }}
protocol: TCP
targetPort: {{ .Values.service.targetPort }}
{{- if .Values.service.appProtocol }}
appProtocol: {{ .Values.service.appProtocol }}
{{- end }}
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
      nodePort: {{ .Values.service.nodePort }}
{{ end }}
{{- if .Values.extraExposePorts }}
{{- tpl (toYaml .Values.extraExposePorts) . | nindent 4 }}
{{- end }}
selector:
{{- include "grafana.selectorLabels" . | nindent 4 }}
{{ end }}

View File

@ -0,0 +1,14 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- $root := . }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{ tpl (toYaml . | indent 4) $root }}
{{- end }}
name: {{ template "grafana.serviceAccountName" . }}
namespace: {{ template "grafana.namespace" . }}
{{- end }}

View File

@ -0,0 +1,60 @@
{{- if .Values.serviceMonitor.enabled }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "grafana.fullname" . }}
{{- if .Values.serviceMonitor.namespace }}
namespace: {{ tpl .Values.serviceMonitor.namespace . }}
{{- else }}
namespace: {{ template "grafana.namespace" . }}
{{- end }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- if .Values.serviceMonitor.labels }}
{{- toYaml .Values.serviceMonitor.labels | nindent 4 }}
{{- end }}
spec:
endpoints:
- port: {{ .Values.service.portName }}
{{- with .Values.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
honorLabels: true
path: {{ .Values.serviceMonitor.path }}
scheme: {{ .Values.serviceMonitor.scheme }}
{{- if .Values.serviceMonitor.tlsConfig }}
tlsConfig:
{{- toYaml .Values.serviceMonitor.tlsConfig | nindent 6 }}
{{- end }}
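    {{- /*
    When global.cattle.clusterId is set (i.e. running in Rancher), a cluster_id
    label is appended to every scraped series, merged with any user-supplied
    metricRelabelings.
    */}}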
{{- if .Values.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{- toYaml .Values.serviceMonitor.metricRelabelings | nindent 6 }}
{{ if .Values.global.cattle.clusterId }}
- sourceLabels: [__address__]
targetLabel: cluster_id
replacement: {{ .Values.global.cattle.clusterId }}
{{- end }}
{{ else }}
{{ if .Values.global.cattle.clusterId }}
metricRelabelings:
- sourceLabels: [__address__]
targetLabel: cluster_id
replacement: {{ .Values.global.cattle.clusterId }}
{{- end }}
{{- end }}
{{- if .Values.serviceMonitor.relabelings }}
relabelings:
{{- toYaml .Values.serviceMonitor.relabelings | nindent 4 }}
{{- end }}
jobLabel: "{{ .Release.Name }}"
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 8 }}
namespaceSelector:
matchNames:
- {{ template "grafana.namespace" . }}
{{- end }}

View File

@ -0,0 +1,56 @@
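{{- /*
Rendered in place of a Deployment when useStatefulSet is set, or when
persistence is enabled with type "statefulset" and no existing claim, so that
each replica receives its own PVC via volumeClaimTemplates.
*/}}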
{{- if (or (.Values.useStatefulSet) (and .Values.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type "statefulset"))) }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "grafana.fullname" . }}
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
serviceName: {{ template "grafana.fullname" . }}-headless
template:
metadata:
labels:
{{- include "grafana.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{ toYaml . | indent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }}
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
{{- end }}
{{- with .Values.podAnnotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- include "grafana.pod" . | nindent 6 }}
  {{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: storage
spec:
{{- $_ := required "Must provide at least one access mode for persistent volumes used by Grafana" .Values.persistence.accessModes }}
{{- $_ := required "Must provide at least one access mode for persistent volumes used by Grafana" (first .Values.persistence.accessModes) }}
accessModes: {{ .Values.persistence.accessModes }}
storageClassName: {{ .Values.persistence.storageClassName }}
resources:
requests:
storage: {{ required "Must provide size for persistent volumes used by Grafana" .Values.persistence.size }}
{{- with .Values.persistence.selectorLabels }}
selector:
matchLabels:
{{ toYaml . | indent 10 }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,17 @@
{{- if .Values.testFramework.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}-test
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
data:
run.sh: |-
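    # Bats smoke test run by `helm test`: the release passes only if Grafana's
    # /api/health endpoint answers with HTTP 200 within the wget timeout.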
@test "Test Health" {
url="http://{{ template "grafana.fullname" . }}/api/health"
code=$(wget --server-response --spider --timeout 10 --tries 1 ${url} 2>&1 | awk '/^ HTTP/{print $2}')
[ "$code" == "200" ]
}
{{- end }}

View File

@ -0,0 +1,29 @@
{{- if and .Values.testFramework.enabled .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "grafana.fullname" . }}-test
labels:
{{- include "grafana.labels" . | nindent 4 }}
spec:
allowPrivilegeEscalation: true
privileged: false
hostNetwork: false
hostIPC: false
hostPID: false
fsGroup:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
runAsUser:
rule: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- projected
- csi
- secret
{{- end }}

View File

@ -0,0 +1,14 @@
{{- if and .Values.testFramework.enabled .Values.global.cattle.psp.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "grafana.fullname" . }}-test
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: [{{ template "grafana.fullname" . }}-test]
{{- end }}

View File

@ -0,0 +1,17 @@
{{- if and .Values.testFramework.enabled .Values.global.cattle.psp.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "grafana.fullname" . }}-test
namespace: {{ template "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "grafana.fullname" . }}-test
subjects:
- kind: ServiceAccount
name: {{ template "grafana.serviceAccountNameTest" . }}
namespace: {{ template "grafana.namespace" . }}
{{- end }}

View File

@ -0,0 +1,9 @@
{{- if and .Values.testFramework.enabled .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
{{- include "grafana.labels" . | nindent 4 }}
name: {{ template "grafana.serviceAccountNameTest" . }}
namespace: {{ template "grafana.namespace" . }}
{{- end }}

View File

@ -0,0 +1,51 @@
{{- if .Values.testFramework.enabled }}
apiVersion: v1
kind: Pod
metadata:
name: {{ template "grafana.fullname" . }}-test
labels:
{{- include "grafana.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
namespace: {{ template "grafana.namespace" . }}
spec:
serviceAccountName: {{ template "grafana.serviceAccountNameTest" . }}
{{- if .Values.testFramework.securityContext }}
securityContext: {{ toYaml .Values.testFramework.securityContext | nindent 4 }}
{{- end }}
{{- $root := . }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ tpl . $root }}
{{- end}}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 4 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 4 }}
{{- end }}
{{- $root := . }}
{{- with .Values.affinity }}
affinity:
{{ tpl (toYaml .) $root | indent 4 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 4 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 4 }}
{{- end }}
containers:
- name: {{ .Release.Name }}-test
image: "{{ template "system_default_registry" . }}{{ .Values.testFramework.image}}:{{ .Values.testFramework.tag }}"
imagePullPolicy: "{{ .Values.testFramework.imagePullPolicy}}"
command: ["/opt/bats/bin/bats", "-t", "/tests/run.sh"]
volumeMounts:
- mountPath: /tests
name: tests
readOnly: true
volumes:
- name: tests
configMap:
name: {{ template "grafana.fullname" . }}-test
restartPolicy: Never
{{- end }}

File diff suppressed because it is too large

View File

@ -0,0 +1,13 @@
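# Federation scrape job: pulls namespace-scoped series for this project's
# namespaces from the cluster-level Prometheus /federate endpoint.
# A rendered sketch, assuming federate.interval is 15s and a single
# rancher-monitoring Prometheus target (both values illustrative, not chart
# defaults):
#
#   - job_name: 'federate'
#     scrape_interval: 15s
#     honor_labels: true
#     metrics_path: '/federate'
#     params:
#       'match[]':
#         - '{namespace=~"ns-a|ns-b", job=~"kube-state-metrics|kubelet|k3s-server"}'
#         - '{namespace=~"ns-a|ns-b", job=""}'
#     static_configs:
#       - targets: ['rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090']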
- job_name: 'federate'
scrape_interval: {{ .Values.federate.interval }}
honor_labels: true
metrics_path: '/federate'
params:
'match[]':
- '{namespace=~"{{ include "project-prometheus-stack.projectNamespaceList" . | replace "," "|" }}", job=~"kube-state-metrics|kubelet|k3s-server"}'
- '{namespace=~"{{ include "project-prometheus-stack.projectNamespaceList" . | replace "," "|" }}", job=""}'
static_configs:
- targets: {{ .Values.federate.targets | toYaml | nindent 6 }}

View File

@ -0,0 +1,636 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": 28,
"iteration": 1618265214337,
"links": [],
"panels": [
{
"aliasColors": {
"{{container}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 0
},
"hiddenSeries": false,
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_cfs_throttled_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\", container!=\"\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "CFS throttled ({{container}})",
"refId": "A"
},
{
"expr": "sum(rate(container_cpu_system_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container) OR sum(rate(windows_container_cpu_usage_seconds_kernelmode{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "System ({{container}})",
"refId": "B"
},
{
"expr": "sum(rate(container_cpu_usage_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container) OR sum(rate(windows_container_cpu_usage_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Total ({{container}})",
"refId": "C"
},
{
"expr": "sum(rate(container_cpu_user_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container) OR sum(rate(windows_container_cpu_usage_seconds_usermode{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "User ({{container}})",
"refId": "D"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "CPU Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": null,
"format": "cpu",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{container}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 0
},
"hiddenSeries": false,
"id": 4,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(container_memory_working_set_bytes{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\", container!=\"\"} OR windows_container_memory_usage_commit_bytes{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\", container!=\"\"}) by (container)",
"interval": "",
"legendFormat": "({{container}})",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Memory Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{container}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 16,
"y": 0
},
"hiddenSeries": false,
"id": 7,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(irate(container_network_receive_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container) OR sum(irate(windows_container_network_receive_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Receive Total ({{container}})",
"refId": "A"
},
{
"expr": "sum(irate(container_network_transmit_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container) OR sum(irate(windows_container_network_transmit_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Transmit Total ({{container}})",
"refId": "B"
},
{
"expr": "sum(irate(container_network_receive_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container) OR sum(irate(windows_container_network_receive_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Receive Dropped ({{container}})",
"refId": "C"
},
{
"expr": "sum(irate(container_network_receive_errors_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Receive Errors ({{container}})",
"refId": "D"
},
{
"expr": "sum(irate(container_network_transmit_errors_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Transmit Errors ({{container}})",
"refId": "E"
},
{
"expr": "sum(irate(container_network_transmit_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container) OR sum(irate(windows_container_network_transmit_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Transmit Dropped ({{container}})",
"refId": "F"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network Traffic",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "pps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{container}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 7
},
"hiddenSeries": false,
"id": 8,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(irate(container_network_receive_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container) OR sum(irate(windows_container_network_receive_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Receive Total ({{container}})",
"refId": "A"
},
{
"expr": "sum(irate(container_network_transmit_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container) OR sum(irate(windows_container_network_transmit_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Transmit Total ({{container}})",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{container}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 7
},
"hiddenSeries": false,
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_fs_writes_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Write ({{container}})",
"refId": "A"
},
{
"expr": "sum(rate(container_fs_reads_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) by (container)",
"interval": "",
"legendFormat": "Read ({{container}})",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Disk I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": false,
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"text": "Prometheus",
"value": "Prometheus"
},
"hide": 0,
"label": "Data Source",
"name": "datasource",
"options": [
],
"query": "prometheus",
"refresh": 1,
"regex": "",
"type": "datasource"
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "namespace",
"query": "label_values(kube_pod_info{cluster=\"$cluster\"}, namespace)",
"refresh": 2,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "pod",
"query": "label_values(kube_pod_info{cluster=\"$cluster\", namespace=\"$namespace\"}, pod)",
"refresh": 2,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "",
"title": "Rancher / Pod (Containers)",
"uid": "rancher-pod-containers-1",
"version": 8
}

View File

@ -0,0 +1,636 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": 28,
"iteration": 1618265214337,
"links": [],
"panels": [
{
"aliasColors": {
"": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 0
},
"hiddenSeries": false,
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_cfs_throttled_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\", container!=\"\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "CFS throttled",
"refId": "A"
},
{
"expr": "sum(rate(container_cpu_system_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) OR sum(rate(windows_container_cpu_usage_seconds_kernelmode{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "System",
"refId": "B"
},
{
"expr": "sum(rate(container_cpu_usage_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) OR sum(rate(windows_container_cpu_usage_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Total",
"refId": "C"
},
{
"expr": "sum(rate(container_cpu_user_seconds_total{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval])) OR sum(rate(windows_container_cpu_usage_seconds_usermode{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "User",
"refId": "D"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "CPU Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": null,
"format": "cpu",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 0
},
"hiddenSeries": false,
"id": 4,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(container_memory_working_set_bytes{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\", container!=\"\"} OR windows_container_memory_usage_commit_bytes{container!=\"POD\",namespace=~\"$namespace\",pod=~\"$pod\", container!=\"\"})",
"interval": "",
"legendFormat": "Total",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Memory Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 16,
"y": 0
},
"hiddenSeries": false,
"id": 7,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(irate(container_network_receive_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) OR sum(irate(windows_container_network_receive_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Receive Total",
"refId": "A"
},
{
"expr": "sum(irate(container_network_transmit_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) OR sum(irate(windows_container_network_transmit_packets_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Transmit Total",
"refId": "B"
},
{
"expr": "sum(irate(container_network_receive_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) OR sum(irate(windows_container_network_receive_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Receive Dropped",
"refId": "C"
},
{
"expr": "sum(irate(container_network_receive_errors_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Receive Errors",
"refId": "D"
},
{
"expr": "sum(irate(container_network_transmit_errors_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Transmit Errors",
"refId": "E"
},
{
"expr": "sum(irate(container_network_transmit_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) OR sum(irate(windows_container_network_transmit_packets_dropped_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Transmit Dropped",
"refId": "F"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network Traffic",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "pps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 7
},
"hiddenSeries": false,
"id": 8,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(irate(container_network_receive_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) OR sum(irate(windows_container_network_receive_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Receive Total",
"refId": "A"
},
{
"expr": "sum(irate(container_network_transmit_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval])) OR sum(irate(windows_container_network_transmit_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Transmit Total",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 7
},
"hiddenSeries": false,
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_fs_writes_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Write",
"refId": "A"
},
{
"expr": "sum(rate(container_fs_reads_bytes_total{namespace=~\"$namespace\",pod=~\"$pod\",container!=\"\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Read",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Disk I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": false,
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"text": "Prometheus",
"value": "Prometheus"
},
"hide": 0,
"label": "Data Source",
"name": "datasource",
"options": [
],
"query": "prometheus",
"refresh": 1,
"regex": "",
"type": "datasource"
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "namespace",
"query": "label_values({__name__=~\"container_.*|windows_container_.*\", namespace!=\"\"}, namespace)",
"refresh": 2,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "pod",
"query": "label_values({__name__=~\"container_.*|windows_container_.*\", namespace=\"$namespace\", pod!=\"\"}, pod)",
"refresh": 2,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "",
"title": "Rancher / Pod",
"uid": "rancher-pod-1",
"version": 8
}

View File

@ -0,0 +1,652 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": 28,
"iteration": 1618265214337,
"links": [],
"panels": [
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 0
},
"hiddenSeries": false,
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "(sum(rate(container_cpu_cfs_throttled_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "CFS throttled ({{pod}})",
"refId": "A"
},
{
"expr": "(sum(rate(container_cpu_system_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(rate(windows_container_cpu_usage_seconds_kernelmode{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "System ({{pod}})",
"refId": "B"
},
{
"expr": "(sum(rate(container_cpu_usage_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(rate(windows_container_cpu_usage_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Total ({{pod}})",
"refId": "C"
},
{
"expr": "(sum(rate(container_cpu_user_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(rate(windows_container_cpu_usage_seconds_usermode{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "User ({{pod}})",
"refId": "D"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "CPU Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": null,
"format": "cpu",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 0
},
"hiddenSeries": false,
"id": 4,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "(sum(container_memory_working_set_bytes{namespace=~\"$namespace\"} OR windows_container_memory_usage_commit_bytes{namespace=~\"$namespace\"}) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "({{pod}})",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Memory Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 16,
"y": 0
},
"hiddenSeries": false,
"id": 7,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "(sum(irate(container_network_receive_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_receive_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Receive Total ({{pod}})",
"refId": "A"
},
{
"expr": "(sum(irate(container_network_transmit_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_transmit_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Transmit Total ({{pod}})",
"refId": "B"
},
{
"expr": "(sum(irate(container_network_receive_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_receive_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Receive Dropped ({{pod}})",
"refId": "C"
},
{
"expr": "(sum(irate(container_network_receive_errors_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Receive Errors ({{pod}})",
"refId": "D"
},
{
"expr": "(sum(irate(container_network_transmit_errors_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Transmit Errors ({{pod}})",
"refId": "E"
},
{
"expr": "(sum(irate(container_network_transmit_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_transmit_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Transmit Dropped ({{pod}})",
"refId": "F"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network Traffic",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "pps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 7
},
"hiddenSeries": false,
"id": 8,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "(sum(irate(container_network_receive_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_receive_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Receive Total ({{pod}})",
"refId": "A"
},
{
"expr": "(sum(irate(container_network_transmit_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_transmit_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Transmit Total ({{pod}})",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 7
},
"hiddenSeries": false,
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "(sum(rate(container_fs_writes_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Write ({{pod}})",
"refId": "A"
},
{
"expr": "(sum(rate(container_fs_reads_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"}",
"interval": "",
"legendFormat": "Read ({{pod}})",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Disk I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": false,
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"text": "Prometheus",
"value": "Prometheus"
},
"hide": 0,
"label": "Data Source",
"name": "datasource",
"options": [
],
"query": "prometheus",
"refresh": 1,
"regex": "",
"type": "datasource"
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "namespace",
"query": "query_result(kube_pod_info{namespace!=\"\"} * on(pod) group_right(namespace, created_by_kind, created_by_name) count({__name__=~\"container_.*|windows_container_.*\", pod!=\"\"}) by (pod))",
"refresh": 2,
"regex": "/.*namespace=\"([^\"]*)\"/",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "kind",
"query": "query_result(kube_pod_info{namespace=\"$namespace\", created_by_kind!=\"\"} * on(pod) group_right(namespace, created_by_kind, created_by_name) count({__name__=~\"container_.*|windows_container_.*\", pod!=\"\"}) by (pod))",
"refresh": 2,
"regex": "/.*created_by_kind=\"([^\"]*)\"/",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "workload",
"query": "query_result(kube_pod_info{namespace=\"$namespace\", created_by_kind=\"$kind\", created_by_name!=\"\"} * on(pod) group_right(namespace, created_by_kind, created_by_name) count({__name__=~\"container_.*|windows_container_.*\", pod!=\"\"}) by (pod))",
"refresh": 2,
"regex": "/.*created_by_name=\"([^\"]*)\"/",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "",
"title": "Rancher / Workload (Pods)",
"uid": "rancher-workload-pods-1",
"version": 8
}

View File

@@ -0,0 +1,652 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": 28,
"iteration": 1618265214337,
"links": [],
"panels": [
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 0
},
"hiddenSeries": false,
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum((sum(rate(container_cpu_cfs_throttled_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "CFS throttled",
"refId": "A"
},
{
"expr": "sum((sum(rate(container_cpu_system_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(rate(windows_container_cpu_usage_seconds_kernelmode{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "System",
"refId": "B"
},
{
"expr": "sum((sum(rate(container_cpu_usage_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(rate(windows_container_cpu_usage_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Total",
"refId": "C"
},
{
"expr": "sum((sum(rate(container_cpu_user_seconds_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(rate(windows_container_cpu_usage_seconds_usermode{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "User",
"refId": "D"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "CPU Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": null,
"format": "cpu",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 0
},
"hiddenSeries": false,
"id": 4,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum((sum(container_memory_working_set_bytes{namespace=~\"$namespace\"} OR windows_container_memory_usage_commit_bytes{namespace=~\"$namespace\"}) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Total",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Memory Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 16,
"y": 0
},
"hiddenSeries": false,
"id": 7,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum((sum(irate(container_network_receive_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_receive_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Receive Total",
"refId": "A"
},
{
"expr": "sum((sum(irate(container_network_transmit_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_transmit_packets_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Transmit Total",
"refId": "B"
},
{
"expr": "sum((sum(irate(container_network_receive_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_receive_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Receive Dropped",
"refId": "C"
},
{
"expr": "sum((sum(irate(container_network_receive_errors_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Receive Errors",
"refId": "D"
},
{
"expr": "sum((sum(irate(container_network_transmit_errors_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Transmit Errors",
"refId": "E"
},
{
"expr": "sum((sum(irate(container_network_transmit_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_transmit_packets_dropped_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Transmit Dropped",
"refId": "F"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network Traffic",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "pps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 7
},
"hiddenSeries": false,
"id": 8,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum((sum(irate(container_network_receive_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_receive_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Receive Total",
"refId": "A"
},
{
"expr": "sum((sum(irate(container_network_transmit_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod) OR sum(irate(windows_container_network_transmit_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Transmit Total",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Network I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"{{pod}}": "#3797d5"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 8,
"x": 8,
"y": 7
},
"hiddenSeries": false,
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": false,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"percentage": false,
"pluginVersion": "7.1.5",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum((sum(rate(container_fs_writes_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Write",
"refId": "A"
},
{
"expr": "sum((sum(rate(container_fs_reads_bytes_total{namespace=~\"$namespace\"}[$__rate_interval])) by (pod)) * on(pod) kube_pod_info{namespace=~\"$namespace\", created_by_kind=\"$kind\", created_by_name=\"$workload\"})",
"interval": "",
"legendFormat": "Read",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Disk I/O",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": 1,
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": false,
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"text": "Prometheus",
"value": "Prometheus"
},
"hide": 0,
"label": "Data Source",
"name": "datasource",
"options": [
],
"query": "prometheus",
"refresh": 1,
"regex": "",
"type": "datasource"
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "namespace",
"query": "query_result(kube_pod_info{namespace!=\"\"} * on(pod) group_right(namespace, created_by_kind, created_by_name) count({__name__=~\"container_.*|windows_container_.*\", pod!=\"\"}) by (pod))",
"refresh": 2,
"regex": "/.*namespace=\"([^\"]*)\"/",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "kind",
"query": "query_result(kube_pod_info{namespace=\"$namespace\", created_by_kind!=\"\"} * on(pod) group_right(namespace, created_by_kind, created_by_name) count({__name__=~\"container_.*|windows_container_.*\", pod!=\"\"}) by (pod))",
"refresh": 2,
"regex": "/.*created_by_kind=\"([^\"]*)\"/",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "workload",
"query": "query_result(kube_pod_info{namespace=\"$namespace\", created_by_kind=\"$kind\", created_by_name!=\"\"} * on(pod) group_right(namespace, created_by_kind, created_by_name) count({__name__=~\"container_.*|windows_container_.*\", pod!=\"\"}) by (pod))",
"refresh": 2,
"regex": "/.*created_by_name=\"([^\"]*)\"/",
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "",
"title": "Rancher / Workload",
"uid": "rancher-workload-1",
"version": 8
}

View File

@@ -0,0 +1,34 @@
questions:
- variable: alertmanager.enabled
label: Enable Alertmanager
default: true
type: boolean
group: Alertmanager
- variable: grafana.enabled
label: Enable Grafana
default: true
type: boolean
group: Grafana
show_subquestion_if: true
subquestions:
- variable: grafana.adminUser
label: Grafana Admin User
type: string
default: admin
group: Grafana
- variable: grafana.adminPassword
label: Grafana Admin Password
type: string
default: prom-operator
group: Grafana
- variable: grafana.sidecar.dashboards.label
label: Default Grafana Dashboard Label
default: grafana_dashboard
description: All ConfigMaps with this label are parsed as Grafana Dashboards
type: string
group: Grafana
- variable: federate.enabled
label: Enable Federated Metrics From Cluster Prometheus
default: true
type: boolean
group: Federation

View File

@@ -0,0 +1,4 @@
{{ $.Chart.Name }} has been installed. Check its status by running:
kubectl --namespace {{ template "project-prometheus-stack.namespace" . }} get pods -l "release={{ $.Release.Name }}"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

View File

@@ -0,0 +1,233 @@
# Rancher
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- end -}}
{{- end -}}
{{/*
https://github.com/helm/helm/issues/4535#issuecomment-477778391
Usage: {{ include "call-nested" (list . "SUBCHART_NAME" "TEMPLATE") }}
e.g. {{ include "call-nested" (list . "grafana" "grafana.fullname") }}
*/}}
{{- define "call-nested" }}
{{- $dot := index . 0 }}
{{- $subchart := index . 1 | splitList "." }}
{{- $template := index . 2 }}
{{- $values := $dot.Values }}
{{- range $subchart }}
{{- $values = index $values . }}
{{- end }}
{{- include $template (dict "Chart" (dict "Name" (last $subchart)) "Values" $values "Release" $dot.Release "Capabilities" $dot.Capabilities) }}
{{- end }}
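{{/*
Illustrative sketch (not rendered): "SUBCHART_NAME" may also be a dotted path into
.Values, since it is split on "." before the lookup. For a hypothetical nested
subchart keyed under .Values.grafana.sidecar, the call would look like:
  {{ include "call-nested" (list . "grafana.sidecar" "sidecar.fullname") }}
This walks .Values.grafana.sidecar and evaluates the "sidecar.fullname" template
against those values.
*/}}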
# Windows Support
{{/*
Windows clusters add a default taint to Linux nodes; add the Linux tolerations
below to workloads so that they can be scheduled onto those Linux nodes
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
beta.kubernetes.io/os: linux
{{- else -}}
kubernetes.io/os: linux
{{- end -}}
{{- end -}}
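{{/*
Usage sketch: both helpers are meant to be combined with any user-supplied
scheduling values, as the Alertmanager resource below does:
  nodeSelector: {{ include "linux-node-selector" . | nindent 4 }}
  tolerations: {{ include "linux-node-tolerations" . | nindent 4 }}
*/}}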
# Prometheus Operator
{{/* Comma-delimited list of namespaces that need to be watched to configure Project Prometheus Stack components */}}
{{- define "project-prometheus-stack.projectNamespaceList" -}}
{{ append .Values.global.cattle.projectNamespaces .Release.Namespace | uniq | join "," }}
{{- end }}
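{{/*
Example (illustrative, with hypothetical namespaces): given
.Values.global.cattle.projectNamespaces = [ns-a, ns-b] and a release deployed to
the namespace cattle-project-p-example, this renders
"ns-a,ns-b,cattle-project-p-example".
*/}}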
{{/* vim: set filetype=mustache: */}}
{{/* Expand the name of the chart. Names derived from this are suffixed with "-alertmanager" (13 characters), so subtract 13 from the 63 characters available, leaving 50. */}}
{{- define "project-prometheus-stack.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 50 | trimSuffix "-" -}}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If the release name contains the chart name, it will be used as the full name.
The components in this chart create additional resources that expand the longest created name strings.
The longest name that gets created adds an extra 37 characters, so truncation should be 63-37=26.
*/}}
{{- define "project-prometheus-stack.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 26 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 26 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 26 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
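{{/*
Worked example (illustrative): assuming the chart is named
"project-prometheus-stack" and the release is named "my-monitoring", the release
name does not contain the chart name, so the full name becomes
printf "my-monitoring-project-prometheus-stack" | trunc 26 => "my-monitoring-project-prom".
*/}}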
{{/* Prometheus custom resource instance name */}}
{{- define "project-prometheus-stack.prometheus.crname" -}}
{{- if .Values.cleanPrometheusOperatorObjectNames }}
{{- include "project-prometheus-stack.fullname" . }}
{{- else }}
{{- print (include "project-prometheus-stack.fullname" .) "-prometheus" }}
{{- end }}
{{- end }}
{{/* Alertmanager custom resource instance name */}}
{{- define "project-prometheus-stack.alertmanager.crname" -}}
{{- if .Values.cleanPrometheusOperatorObjectNames }}
{{- include "project-prometheus-stack.fullname" . }}
{{- else }}
{{- print (include "project-prometheus-stack.fullname" .) "-alertmanager" -}}
{{- end }}
{{- end }}
{{/* Create chart name and version as used by the chart label. */}}
{{- define "project-prometheus-stack.chartref" -}}
{{- replace "+" "_" .Chart.Version | printf "%s-%s" .Chart.Name -}}
{{- end }}
{{/* Generate basic labels */}}
{{- define "project-prometheus-stack.labels" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: "{{ replace "+" "_" .Chart.Version }}"
app.kubernetes.io/part-of: {{ template "project-prometheus-stack.name" . }}
chart: {{ template "project-prometheus-stack.chartref" . }}
release: {{ $.Release.Name | quote }}
heritage: {{ $.Release.Service | quote }}
{{- if .Values.commonLabels}}
{{ toYaml .Values.commonLabels }}
{{- end }}
{{- end }}
{{/* Create the name of prometheus service account to use */}}
{{- define "project-prometheus-stack.prometheus.serviceAccountName" -}}
{{- if .Values.prometheus.serviceAccount.create -}}
{{ default (print (include "project-prometheus-stack.fullname" .) "-prometheus") .Values.prometheus.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.prometheus.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/* Create the name of alertmanager service account to use */}}
{{- define "project-prometheus-stack.alertmanager.serviceAccountName" -}}
{{- if .Values.alertmanager.serviceAccount.create -}}
{{ default (print (include "project-prometheus-stack.fullname" .) "-alertmanager") .Values.alertmanager.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.alertmanager.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "project-prometheus-stack.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/*
Use the grafana namespace override for multi-namespace deployments in combined charts
*/}}
{{- define "project-prometheus-stack-grafana.namespace" -}}
{{- if .Values.grafana.namespaceOverride -}}
{{- .Values.grafana.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/* Allow KubeVersion to be overridden. */}}
{{- define "project-prometheus-stack.kubeVersion" -}}
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersionOverride -}}
{{- end -}}
{{/* Get Ingress API Version */}}
{{- define "project-prometheus-stack.ingress.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" (include "project-prometheus-stack.kubeVersion" .)) -}}
{{- print "networking.k8s.io/v1" -}}
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "extensions/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/* Check Ingress stability */}}
{{- define "project-prometheus-stack.ingress.isStable" -}}
{{- eq (include "project-prometheus-stack.ingress.apiVersion" .) "networking.k8s.io/v1" -}}
{{- end -}}
{{/* Check Ingress supports pathType */}}
{{/* pathType was added to networking.k8s.io/v1beta1 in Kubernetes 1.18 */}}
{{- define "project-prometheus-stack.ingress.supportsPathType" -}}
{{- or (eq (include "project-prometheus-stack.ingress.isStable" .) "true") (and (eq (include "project-prometheus-stack.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" (include "project-prometheus-stack.kubeVersion" .))) -}}
{{- end -}}
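{{/*
Usage sketch: templates gate v1-only ingress fields on these helpers, as the
Alertmanager ingress template below does:
  {{- $apiIsStable := eq (include "project-prometheus-stack.ingress.isStable" .) "true" -}}
  {{- $ingressSupportsPathType := eq (include "project-prometheus-stack.ingress.supportsPathType" .) "true" -}}
*/}}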
{{/* Get Policy API Version */}}
{{- define "project-prometheus-stack.pdb.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "policy/v1") (semverCompare ">= 1.21-0" (include "project-prometheus-stack.kubeVersion" .)) -}}
{{- print "policy/v1" -}}
{{- else -}}
{{- print "policy/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/* Get value based on current Kubernetes version */}}
{{- define "project-prometheus-stack.kubeVersionDefaultValue" -}}
{{- $values := index . 0 }}
{{- $kubeVersion := index . 1 }}
{{- $old := index . 2 }}
{{- $new := index . 3 }}
{{- $default := index . 4 }}
{{- if kindIs "invalid" $default -}}
{{- if semverCompare $kubeVersion (include "project-prometheus-stack.kubeVersion" $values) -}}
{{- print $new -}}
{{- else -}}
{{- print $old -}}
{{- end -}}
{{- else -}}
{{- print $default }}
{{- end -}}
{{- end -}}
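{{/*
Usage sketch (illustrative; the override values key is hypothetical): pick
"policy/v1" on Kubernetes >= 1.21 and fall back to "policy/v1beta1" otherwise,
unless an explicit override is provided as the final argument:
  {{ include "project-prometheus-stack.kubeVersionDefaultValue" (list . ">=1.21-0" "policy/v1beta1" "policy/v1" .Values.pdbApiVersionOverride) }}
*/}}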
{{/*
To help compatibility with other charts which use global.imagePullSecrets.
Allow either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style).
global:
imagePullSecrets:
- name: pullSecret1
- name: pullSecret2
or
global:
imagePullSecrets:
- pullSecret1
- pullSecret2
*/}}
{{- define "project-prometheus-stack.imagePullSecrets" -}}
{{- range .Values.global.imagePullSecrets }}
{{- if eq (typeOf .) "map[string]interface {}" }}
- {{ toYaml . | trim }}
{{- else }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- end -}}
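{{/*
Example (illustrative): with either input style above, the helper renders the
k8s-style list items that callers place under an imagePullSecrets: key:
  - name: pullSecret1
  - name: pullSecret2
*/}}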

View File

@ -0,0 +1,165 @@
{{- if .Values.alertmanager.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
name: {{ template "project-prometheus-stack.alertmanager.crname" . }}
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
{{- if .Values.alertmanager.annotations }}
annotations:
{{ toYaml .Values.alertmanager.annotations | indent 4 }}
{{- end }}
spec:
{{- if .Values.alertmanager.alertmanagerSpec.image }}
{{- if and .Values.alertmanager.alertmanagerSpec.image.tag .Values.alertmanager.alertmanagerSpec.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.alertmanager.alertmanagerSpec.image.repository }}:{{ .Values.alertmanager.alertmanagerSpec.image.tag }}@sha256:{{ .Values.alertmanager.alertmanagerSpec.image.sha }}"
{{- else if .Values.alertmanager.alertmanagerSpec.image.sha }}
image: "{{ template "system_default_registry" . }}{{ .Values.alertmanager.alertmanagerSpec.image.repository }}@sha256:{{ .Values.alertmanager.alertmanagerSpec.image.sha }}"
{{- else if .Values.alertmanager.alertmanagerSpec.image.tag }}
image: "{{ template "system_default_registry" . }}{{ .Values.alertmanager.alertmanagerSpec.image.repository }}:{{ .Values.alertmanager.alertmanagerSpec.image.tag }}"
{{- else }}
image: "{{ template "system_default_registry" . }}{{ .Values.alertmanager.alertmanagerSpec.image.repository }}"
{{- end }}
version: {{ .Values.alertmanager.alertmanagerSpec.image.tag }}
{{- if .Values.alertmanager.alertmanagerSpec.image.sha }}
sha: {{ .Values.alertmanager.alertmanagerSpec.image.sha }}
{{- end }}
{{- end }}
replicas: {{ .Values.alertmanager.alertmanagerSpec.replicas }}
listenLocal: {{ .Values.alertmanager.alertmanagerSpec.listenLocal }}
serviceAccountName: {{ template "project-prometheus-stack.alertmanager.serviceAccountName" . }}
{{- if .Values.alertmanager.alertmanagerSpec.externalUrl }}
externalUrl: "{{ tpl .Values.alertmanager.alertmanagerSpec.externalUrl . }}"
{{- else if and .Values.alertmanager.ingress.enabled .Values.alertmanager.ingress.hosts }}
externalUrl: "http://{{ tpl (index .Values.alertmanager.ingress.hosts 0) . }}{{ .Values.alertmanager.alertmanagerSpec.routePrefix }}"
{{- else if not (or (empty .Values.global.cattle.url) (empty .Values.global.cattle.clusterId)) }}
externalUrl: "{{ .Values.global.cattle.url }}/k8s/clusters/{{ .Values.global.cattle.clusterId }}/api/v1/namespaces/{{ template "project-prometheus-stack.namespace" . }}/services/http:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}/proxy"
{{- else }}
externalUrl: http://{{ template "project-prometheus-stack.fullname" . }}-alertmanager.{{ template "project-prometheus-stack.namespace" . }}:{{ .Values.alertmanager.service.port }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 4 }}
{{- if .Values.alertmanager.alertmanagerSpec.nodeSelector }}
{{ toYaml .Values.alertmanager.alertmanagerSpec.nodeSelector | indent 4 }}
{{- end }}
paused: {{ .Values.alertmanager.alertmanagerSpec.paused }}
logFormat: {{ .Values.alertmanager.alertmanagerSpec.logFormat | quote }}
logLevel: {{ .Values.alertmanager.alertmanagerSpec.logLevel | quote }}
retention: {{ .Values.alertmanager.alertmanagerSpec.retention | quote }}
{{- if .Values.alertmanager.alertmanagerSpec.secrets }}
secrets:
{{ toYaml .Values.alertmanager.alertmanagerSpec.secrets | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.configSecret }}
configSecret: {{ .Values.alertmanager.alertmanagerSpec.configSecret }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.configMaps }}
configMaps:
{{ toYaml .Values.alertmanager.alertmanagerSpec.configMaps | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.alertmanagerConfigSelector }}
alertmanagerConfigSelector:
{{ toYaml .Values.alertmanager.alertmanagerSpec.alertmanagerConfigSelector | indent 4 }}
{{ else }}
alertmanagerConfigSelector: {}
{{- end }}
alertmanagerConfigNamespaceSelector: {{ .Values.global.cattle.projectNamespaceSelector | toYaml | nindent 4 }}
{{- if .Values.alertmanager.alertmanagerSpec.web }}
web:
{{ toYaml .Values.alertmanager.alertmanagerSpec.web | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.alertmanagerConfiguration }}
alertmanagerConfiguration:
{{ toYaml .Values.alertmanager.alertmanagerSpec.alertmanagerConfiguration | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.resources }}
resources:
{{ toYaml .Values.alertmanager.alertmanagerSpec.resources | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.routePrefix }}
routePrefix: "{{ .Values.alertmanager.alertmanagerSpec.routePrefix }}"
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.securityContext }}
securityContext:
{{ toYaml .Values.alertmanager.alertmanagerSpec.securityContext | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.storage }}
storage:
{{ tpl (toYaml .Values.alertmanager.alertmanagerSpec.storage | indent 4) . }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.podMetadata }}
podMetadata:
{{ toYaml .Values.alertmanager.alertmanagerSpec.podMetadata | indent 4 }}
{{- end }}
{{- if or .Values.alertmanager.alertmanagerSpec.podAntiAffinity .Values.alertmanager.alertmanagerSpec.affinity }}
affinity:
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.affinity }}
{{ toYaml .Values.alertmanager.alertmanagerSpec.affinity | indent 4 }}
{{- end }}
{{- if eq .Values.alertmanager.alertmanagerSpec.podAntiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: {{ .Values.alertmanager.alertmanagerSpec.podAntiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- {key: app.kubernetes.io/name, operator: In, values: [alertmanager]}
- {key: alertmanager, operator: In, values: [{{ template "project-prometheus-stack.alertmanager.crname" . }}]}
{{- else if eq .Values.alertmanager.alertmanagerSpec.podAntiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
topologyKey: {{ .Values.alertmanager.alertmanagerSpec.podAntiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- {key: app.kubernetes.io/name, operator: In, values: [alertmanager]}
- {key: alertmanager, operator: In, values: [{{ template "project-prometheus-stack.alertmanager.crname" . }}]}
{{- end }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 4 }}
{{- if .Values.alertmanager.alertmanagerSpec.tolerations }}
{{ toYaml .Values.alertmanager.alertmanagerSpec.tolerations | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.topologySpreadConstraints }}
topologySpreadConstraints:
{{ toYaml .Values.alertmanager.alertmanagerSpec.topologySpreadConstraints | indent 4 }}
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{ include "project-prometheus-stack.imagePullSecrets" . | trim | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.containers }}
containers:
{{ toYaml .Values.alertmanager.alertmanagerSpec.containers | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.initContainers }}
initContainers:
{{ toYaml .Values.alertmanager.alertmanagerSpec.initContainers | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.priorityClassName }}
priorityClassName: {{ .Values.alertmanager.alertmanagerSpec.priorityClassName }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.additionalPeers }}
additionalPeers:
{{ toYaml .Values.alertmanager.alertmanagerSpec.additionalPeers | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.volumes }}
volumes:
{{ toYaml .Values.alertmanager.alertmanagerSpec.volumes | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.volumeMounts }}
volumeMounts:
{{ toYaml .Values.alertmanager.alertmanagerSpec.volumeMounts | indent 4 }}
{{- end }}
portName: {{ .Values.alertmanager.alertmanagerSpec.portName }}
{{- if .Values.alertmanager.alertmanagerSpec.clusterAdvertiseAddress }}
clusterAdvertiseAddress: {{ .Values.alertmanager.alertmanagerSpec.clusterAdvertiseAddress }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.forceEnableClusterMode }}
forceEnableClusterMode: {{ .Values.alertmanager.alertmanagerSpec.forceEnableClusterMode }}
{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.minReadySeconds }}
minReadySeconds: {{ .Values.alertmanager.alertmanagerSpec.minReadySeconds }}
{{- end }}

View File

@@ -0,0 +1,20 @@
{{- if .Values.alertmanager.extraSecret.data -}}
{{- $secretName := printf "alertmanager-%s-extra" (include "project-prometheus-stack.fullname" . ) -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ default $secretName .Values.alertmanager.extraSecret.name }}
namespace: {{ template "project-prometheus-stack.namespace" . }}
{{- if .Values.alertmanager.extraSecret.annotations }}
annotations:
{{ toYaml .Values.alertmanager.extraSecret.annotations | indent 4 }}
{{- end }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
app.kubernetes.io/component: alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
data:
{{- range $key, $val := .Values.alertmanager.extraSecret.data }}
{{ $key }}: {{ $val | b64enc | quote }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,77 @@
{{- if and .Values.alertmanager.enabled .Values.alertmanager.ingress.enabled }}
{{- $pathType := .Values.alertmanager.ingress.pathType | default "ImplementationSpecific" }}
{{- $serviceName := printf "%s-%s" (include "project-prometheus-stack.fullname" .) "alertmanager" }}
{{- $servicePort := .Values.alertmanager.ingress.servicePort | default .Values.alertmanager.service.port -}}
{{- $routePrefix := list .Values.alertmanager.alertmanagerSpec.routePrefix }}
{{- $paths := .Values.alertmanager.ingress.paths | default $routePrefix -}}
{{- $apiIsStable := eq (include "project-prometheus-stack.ingress.isStable" .) "true" -}}
{{- $ingressSupportsPathType := eq (include "project-prometheus-stack.ingress.supportsPathType" .) "true" -}}
apiVersion: {{ include "project-prometheus-stack.ingress.apiVersion" . }}
kind: Ingress
metadata:
name: {{ $serviceName }}
namespace: {{ template "project-prometheus-stack.namespace" . }}
{{- if .Values.alertmanager.ingress.annotations }}
annotations:
{{ toYaml .Values.alertmanager.ingress.annotations | indent 4 }}
{{- end }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{- if .Values.alertmanager.ingress.labels }}
{{ toYaml .Values.alertmanager.ingress.labels | indent 4 }}
{{- end }}
{{ include "project-prometheus-stack.labels" . | indent 4 }}
spec:
{{- if $apiIsStable }}
{{- if .Values.alertmanager.ingress.ingressClassName }}
ingressClassName: {{ .Values.alertmanager.ingress.ingressClassName }}
{{- end }}
{{- end }}
rules:
{{- if .Values.alertmanager.ingress.hosts }}
{{- range $host := .Values.alertmanager.ingress.hosts }}
- host: {{ tpl $host $ }}
http:
paths:
{{- range $p := $paths }}
- path: {{ tpl $p $ }}
{{- if and $pathType $ingressSupportsPathType }}
pathType: {{ $pathType }}
{{- end }}
backend:
{{- if $apiIsStable }}
service:
name: {{ $serviceName }}
port:
number: {{ $servicePort }}
{{- else }}
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end -}}
{{- end -}}
{{- else }}
- http:
paths:
{{- range $p := $paths }}
- path: {{ tpl $p $ }}
{{- if and $pathType $ingressSupportsPathType }}
pathType: {{ $pathType }}
{{- end }}
backend:
{{- if $apiIsStable }}
service:
name: {{ $serviceName }}
port:
number: {{ $servicePort }}
{{- else }}
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end -}}
{{- end -}}
{{- if .Values.alertmanager.ingress.tls }}
tls:
{{ tpl (toYaml .Values.alertmanager.ingress.tls | indent 4) . }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,21 @@
{{- if and .Values.alertmanager.enabled .Values.alertmanager.podDisruptionBudget.enabled }}
apiVersion: {{ include "project-prometheus-stack.pdb.apiVersion" . }}
kind: PodDisruptionBudget
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-alertmanager
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
spec:
{{- if .Values.alertmanager.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.alertmanager.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.alertmanager.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.alertmanager.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
app.kubernetes.io/name: alertmanager
alertmanager: {{ template "project-prometheus-stack.alertmanager.crname" . }}
{{- end }}

View File

@@ -0,0 +1,21 @@
{{- if and .Values.alertmanager.enabled .Values.global.rbac.create .Values.global.cattle.psp.enabled }}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-alertmanager
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
rules:
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if semverCompare "> 1.15.0-0" $kubeTargetVersion }}
- apiGroups: ['policy']
{{- else }}
- apiGroups: ['extensions']
{{- end }}
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "project-prometheus-stack.fullname" . }}-alertmanager
{{- end }}

View File

@@ -0,0 +1,18 @@
{{- if and .Values.alertmanager.enabled .Values.global.rbac.create .Values.global.cattle.psp.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-alertmanager
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "project-prometheus-stack.fullname" . }}-alertmanager
subjects:
- kind: ServiceAccount
name: {{ template "project-prometheus-stack.alertmanager.serviceAccountName" . }}
namespace: {{ template "project-prometheus-stack.namespace" . }}
{{- end }}

View File

@@ -0,0 +1,45 @@
{{- if and .Values.alertmanager.enabled .Values.global.rbac.create .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-alertmanager
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{- if .Values.global.rbac.pspAnnotations }}
annotations:
{{ toYaml .Values.global.rbac.pspAnnotations | indent 4 }}
{{- end }}
{{ include "project-prometheus-stack.labels" . | indent 4 }}
spec:
privileged: false
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Permits the container to run with root privileges as well.
rule: 'RunAsAny'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Allow adding the root group.
- min: 0
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Allow adding the root group.
- min: 0
max: 65535
readOnlyRootFilesystem: false
{{- end }}

View File

@@ -0,0 +1,33 @@
{{- if .Values.alertmanager.enabled }}
{{/* This Secret is created on helm install/upgrade (via the pre-install and pre-upgrade hooks below) only when it does not already exist in the target namespace; the "helm.sh/resource-policy": keep annotation prevents Helm from deleting it afterwards. */}}
{{- $secretName := (printf "%s-alertmanager-secret" (include "project-prometheus-stack.fullname" .)) }}
{{- if (not (lookup "v1" "Secret" (include "project-prometheus-stack.namespace" .) $secretName)) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ $secretName }}
namespace: {{ template "project-prometheus-stack.namespace" . }}
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "3"
"helm.sh/resource-policy": keep
{{- if .Values.alertmanager.secret.annotations }}
{{ toYaml .Values.alertmanager.secret.annotations | indent 4 }}
{{- end }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
data:
{{- if .Values.alertmanager.tplConfig }}
{{- if eq (typeOf .Values.alertmanager.config) "string" }}
alertmanager.yaml: {{ tpl (.Values.alertmanager.config) . | b64enc | quote }}
{{- else }}
alertmanager.yaml: {{ tpl (toYaml .Values.alertmanager.config) . | b64enc | quote }}
{{- end }}
{{- else }}
alertmanager.yaml: {{ toYaml .Values.alertmanager.config | b64enc | quote }}
{{- end}}
{{- range $key, $val := .Values.alertmanager.templateFiles }}
{{ $key }}: {{ $val | b64enc | quote }}
{{- end }}
{{- end }}{{- end }}

View File

@@ -0,0 +1,53 @@
{{- if .Values.alertmanager.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-alertmanager
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
self-monitor: {{ .Values.alertmanager.serviceMonitor.selfMonitor | quote }}
{{ include "project-prometheus-stack.labels" . | indent 4 }}
{{- if .Values.alertmanager.service.labels }}
{{ toYaml .Values.alertmanager.service.labels | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.service.annotations }}
annotations:
{{ toYaml .Values.alertmanager.service.annotations | indent 4 }}
{{- end }}
spec:
{{- if .Values.alertmanager.service.clusterIP }}
clusterIP: {{ .Values.alertmanager.service.clusterIP }}
{{- end }}
{{- if .Values.alertmanager.service.externalIPs }}
externalIPs:
{{ toYaml .Values.alertmanager.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.alertmanager.service.loadBalancerIP }}
{{- end }}
{{- if .Values.alertmanager.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- range $cidr := .Values.alertmanager.service.loadBalancerSourceRanges }}
- {{ $cidr }}
{{- end }}
{{- end }}
{{- if ne .Values.alertmanager.service.type "ClusterIP" }}
externalTrafficPolicy: {{ .Values.alertmanager.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: {{ .Values.alertmanager.alertmanagerSpec.portName }}
{{- if eq .Values.alertmanager.service.type "NodePort" }}
nodePort: {{ .Values.alertmanager.service.nodePort }}
{{- end }}
port: {{ .Values.alertmanager.service.port }}
targetPort: {{ .Values.alertmanager.service.targetPort }}
protocol: TCP
{{- if .Values.alertmanager.service.additionalPorts }}
{{ toYaml .Values.alertmanager.service.additionalPorts | indent 2 }}
{{- end }}
selector:
app.kubernetes.io/name: alertmanager
alertmanager: {{ template "project-prometheus-stack.alertmanager.crname" . }}
type: "{{ .Values.alertmanager.service.type }}"
{{- end }}

View File

@@ -0,0 +1,20 @@
{{- if and .Values.alertmanager.enabled .Values.alertmanager.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "project-prometheus-stack.alertmanager.serviceAccountName" . }}
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
app.kubernetes.io/name: {{ template "project-prometheus-stack.name" . }}-alertmanager
app.kubernetes.io/component: alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
{{- if .Values.alertmanager.serviceAccount.annotations }}
annotations:
{{ toYaml .Values.alertmanager.serviceAccount.annotations | indent 4 }}
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{ include "project-prometheus-stack.imagePullSecrets" . | trim | indent 2}}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,57 @@
{{- if and .Values.alertmanager.enabled .Values.alertmanager.serviceMonitor.selfMonitor }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-alertmanager
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
{{ include "project-prometheus-stack.labels" . | indent 4 }}
spec:
selector:
matchLabels:
app: {{ template "project-prometheus-stack.name" . }}-alertmanager
release: {{ $.Release.Name | quote }}
self-monitor: "true"
namespaceSelector:
matchNames:
- {{ printf "%s" (include "project-prometheus-stack.namespace" .) | quote }}
endpoints:
- port: {{ .Values.alertmanager.alertmanagerSpec.portName }}
{{- if .Values.alertmanager.serviceMonitor.interval }}
interval: {{ .Values.alertmanager.serviceMonitor.interval }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.proxyUrl }}
proxyUrl: {{ .Values.alertmanager.serviceMonitor.proxyUrl }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.scheme }}
scheme: {{ .Values.alertmanager.serviceMonitor.scheme }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.bearerTokenFile }}
bearerTokenFile: {{ .Values.alertmanager.serviceMonitor.bearerTokenFile }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.tlsConfig }}
tlsConfig: {{ toYaml .Values.alertmanager.serviceMonitor.tlsConfig | nindent 6 }}
{{- end }}
path: "{{ trimSuffix "/" .Values.alertmanager.alertmanagerSpec.routePrefix }}/metrics"
{{- if .Values.alertmanager.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{ tpl (toYaml .Values.alertmanager.serviceMonitor.metricRelabelings | indent 6) . }}
{{ if .Values.global.cattle.clusterId }}
- sourceLabels: [__address__]
targetLabel: cluster_id
replacement: {{ .Values.global.cattle.clusterId }}
{{- end }}
{{ else }}
{{ if .Values.global.cattle.clusterId }}
metricRelabelings:
- sourceLabels: [__address__]
targetLabel: cluster_id
replacement: {{ .Values.global.cattle.clusterId }}
{{- end }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.relabelings }}
relabelings:
{{ toYaml .Values.alertmanager.serviceMonitor.relabelings | indent 6 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,141 @@
{{- if and .Values.global.rbac.create .Values.global.rbac.userRoles.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-admin
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels: {{ include "project-prometheus-stack.labels" . | nindent 4 }}
helm.cattle.io/project-helm-chart-role: {{ .Release.Name }}
{{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
helm.cattle.io/project-helm-chart-role-aggregate-from: admin
{{- end }}
rules:
- apiGroups:
- ""
resources:
- services/proxy
resourceNames:
- "http:{{ template "project-prometheus-stack.fullname" . }}-prometheus:{{ .Values.prometheus.service.port }}"
- "https:{{ template "project-prometheus-stack.fullname" . }}-prometheus:{{ .Values.prometheus.service.port }}"
- "http:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}"
- "https:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}"
- "http:{{ include "call-nested" (list . "grafana" "grafana.fullname") }}:{{ .Values.grafana.service.port }}"
- "https:{{ include "call-nested" (list . "grafana" "grafana.fullname") }}:{{ .Values.grafana.service.port }}"
verbs:
- 'get'
- apiGroups:
- ""
resources:
- endpoints
resourceNames:
- {{ template "project-prometheus-stack.fullname" . }}-prometheus
- {{ template "project-prometheus-stack.fullname" . }}-alertmanager
- {{ include "call-nested" (list . "grafana" "grafana.fullname") }}
verbs:
- list
- apiGroups:
- ""
resources:
- "secrets"
resourceNames:
- {{ printf "%s-alertmanager-secret" (include "project-prometheus-stack.fullname" .) }}
verbs:
- get
- list
- watch
- update
- patch
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-edit
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels: {{ include "project-prometheus-stack.labels" . | nindent 4 }}
helm.cattle.io/project-helm-chart-role: {{ .Release.Name }}
{{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
helm.cattle.io/project-helm-chart-role-aggregate-from: edit
{{- end }}
rules:
- apiGroups:
- ""
resources:
- services/proxy
resourceNames:
- "http:{{ template "project-prometheus-stack.fullname" . }}-prometheus:{{ .Values.prometheus.service.port }}"
- "https:{{ template "project-prometheus-stack.fullname" . }}-prometheus:{{ .Values.prometheus.service.port }}"
- "http:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}"
- "https:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}"
- "http:{{ include "call-nested" (list . "grafana" "grafana.fullname") }}:{{ .Values.grafana.service.port }}"
- "https:{{ include "call-nested" (list . "grafana" "grafana.fullname") }}:{{ .Values.grafana.service.port }}"
verbs:
- 'get'
- apiGroups:
- ""
resources:
- endpoints
resourceNames:
- {{ template "project-prometheus-stack.fullname" . }}-prometheus
- {{ template "project-prometheus-stack.fullname" . }}-alertmanager
- {{ include "call-nested" (list . "grafana" "grafana.fullname") }}
verbs:
- list
- apiGroups:
- ""
resources:
- "secrets"
resourceNames:
- {{ printf "%s-alertmanager-secret" (include "project-prometheus-stack.fullname" .) }}
verbs:
- get
- list
- watch
- update
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-view
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels: {{ include "project-prometheus-stack.labels" . | nindent 4 }}
helm.cattle.io/project-helm-chart-role: {{ .Release.Name }}
{{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }}
helm.cattle.io/project-helm-chart-role-aggregate-from: view
{{- end }}
rules:
- apiGroups:
- ""
resources:
- services/proxy
resourceNames:
- "http:{{ template "project-prometheus-stack.fullname" . }}-prometheus:{{ .Values.prometheus.service.port }}"
- "https:{{ template "project-prometheus-stack.fullname" . }}-prometheus:{{ .Values.prometheus.service.port }}"
- "http:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}"
- "https:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}"
- "http:{{ include "call-nested" (list . "grafana" "grafana.fullname") }}:{{ .Values.grafana.service.port }}"
- "https:{{ include "call-nested" (list . "grafana" "grafana.fullname") }}:{{ .Values.grafana.service.port }}"
verbs:
- 'get'
- apiGroups:
- ""
resources:
- endpoints
resourceNames:
- {{ template "project-prometheus-stack.fullname" . }}-prometheus
- {{ template "project-prometheus-stack.fullname" . }}-alertmanager
- {{ include "call-nested" (list . "grafana" "grafana.fullname") }}
verbs:
- list
- apiGroups:
- ""
resources:
- "secrets"
resourceNames:
- {{ printf "%s-alertmanager-secret" (include "project-prometheus-stack.fullname" .) }}
verbs:
- get
- list
- watch
{{- end }}

View File

@@ -0,0 +1,57 @@
{{- $rancherDashboards := dict }}
{{- range $glob := tuple "files/rancher/workloads/*" "files/rancher/pods/*" -}}
{{- range $dashboard, $_ := ($.Files.Glob $glob) }}
{{- $dashboardMap := ($.Files.Get $dashboard | fromJson) }}
{{- $_ := set $rancherDashboards (get $dashboardMap "uid") (get $dashboardMap "title") -}}
{{- end }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-dashboard-values
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels: {{ include "project-prometheus-stack.labels" . | nindent 4 }}
helm.cattle.io/dashboard-values-configmap: {{ .Release.Name }}
data:
values.json: |-
{
{{- if not .Values.prometheus.enabled }}
"prometheusURL": "",
{{- else if .Values.prometheus.prometheusSpec.externalUrl }}
"prometheusURL": "{{ tpl .Values.prometheus.prometheusSpec.externalUrl . }}",
{{- else if and .Values.prometheus.ingress.enabled .Values.prometheus.ingress.hosts }}
"prometheusURL": "http://{{ tpl (index .Values.prometheus.ingress.hosts 0) . }}{{ .Values.prometheus.prometheusSpec.routePrefix }}",
{{- else if not (or (empty .Values.global.cattle.url) (empty .Values.global.cattle.clusterId)) }}
"prometheusURL": "{{ .Values.global.cattle.url }}/k8s/clusters/{{ .Values.global.cattle.clusterId }}/api/v1/namespaces/{{ template "project-prometheus-stack.namespace" . }}/services/http:{{ template "project-prometheus-stack.fullname" . }}-prometheus:{{ .Values.prometheus.service.port }}/proxy",
{{- else }}
"prometheusURL": "http://{{ template "project-prometheus-stack.fullname" . }}-prometheus.{{ template "project-prometheus-stack.namespace" . }}:{{ .Values.prometheus.service.port }}",
{{- end }}
{{- if not .Values.grafana.enabled }}
"grafanaURL": "",
{{- else if and .Values.grafana.ingress.enabled .Values.grafana.ingress.hosts }}
"grafanaURL": "http://{{ tpl (index .Values.grafana.ingress.hosts 0) . }}{{ .Values.grafana.grafanaSpec.ingress.path }}",
{{- else if not (or (empty .Values.global.cattle.url) (empty .Values.global.cattle.clusterId)) }}
"grafanaURL": "{{ .Values.global.cattle.url }}/k8s/clusters/{{ .Values.global.cattle.clusterId }}/api/v1/namespaces/{{ template "project-prometheus-stack.namespace" . }}/services/http:{{ include "call-nested" (list . "grafana" "grafana.fullname") }}:{{ .Values.grafana.service.port }}/proxy",
{{- else }}
"grafanaURL": "http://{{ include "call-nested" (list . "grafana" "grafana.fullname") }}.{{ template "project-prometheus-stack.namespace" . }}:{{ .Values.grafana.service.port }}",
{{- end }}
{{- if not .Values.alertmanager.enabled }}
"alertmanagerURL": "",
{{- else if .Values.alertmanager.alertmanagerSpec.externalUrl }}
"alertmanagerURL": "{{ tpl .Values.alertmanager.alertmanagerSpec.externalUrl . }}",
{{- else if and .Values.alertmanager.ingress.enabled .Values.alertmanager.ingress.hosts }}
"alertmanagerURL": "http://{{ tpl (index .Values.alertmanager.ingress.hosts 0) . }}{{ .Values.alertmanager.alertmanagerSpec.routePrefix }}",
{{- else if not (or (empty .Values.global.cattle.url) (empty .Values.global.cattle.clusterId)) }}
"alertmanagerURL": "{{ .Values.global.cattle.url }}/k8s/clusters/{{ .Values.global.cattle.clusterId }}/api/v1/namespaces/{{ template "project-prometheus-stack.namespace" . }}/services/http:{{ template "project-prometheus-stack.fullname" . }}-alertmanager:{{ .Values.alertmanager.service.port }}/proxy",
{{- else }}
"alertmanagerURL": "http://{{ template "project-prometheus-stack.fullname" . }}-alertmanager.{{ template "project-prometheus-stack.namespace" . }}:{{ .Values.alertmanager.service.port }}",
{{- end }}
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
"rancherDashboards": {{- $rancherDashboards | toPrettyJson | nindent 8 }}
{{- else }}
"rancherDashboards": []
{{- end }}
}


@ -0,0 +1,17 @@
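{{- /*
Ships the bundled files/rancher/pods dashboards as a single ConfigMap (the
sibling template below does the same for files/rancher/workloads). When
grafana.sidecar.dashboards.label is set, the label rendered here is what the
Grafana dashboard sidecar matches on to pick these dashboards up automatically.
*/ -}}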
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
namespace: {{ template "project-prometheus-stack.namespace" . }}
name: rancher-default-dashboards-pods
annotations:
{{ toYaml .Values.grafana.sidecar.dashboards.annotations | indent 4 }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: {{ ternary $.Values.grafana.sidecar.dashboards.labelValue "1" (not (empty $.Values.grafana.sidecar.dashboards.labelValue)) | quote }}
{{- end }}
app: {{ template "project-prometheus-stack.name" $ }}-grafana
{{ include "project-prometheus-stack.labels" $ | indent 4 }}
data:
{{ (.Files.Glob "files/rancher/pods/*").AsConfig | indent 2 }}
{{- end }}


@ -0,0 +1,17 @@
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
namespace: {{ template "project-prometheus-stack.namespace" . }}
name: rancher-default-dashboards-workloads
annotations:
{{ toYaml .Values.grafana.sidecar.dashboards.annotations | indent 4 }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: {{ ternary $.Values.grafana.sidecar.dashboards.labelValue "1" (not (empty $.Values.grafana.sidecar.dashboards.labelValue)) | quote }}
{{- end }}
app: {{ template "project-prometheus-stack.name" $ }}-grafana
{{ include "project-prometheus-stack.labels" $ | indent 4 }}
data:
{{ (.Files.Glob "files/rancher/workloads/*").AsConfig | indent 2 }}
{{- end }}


@ -0,0 +1,24 @@
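{{- /*
Packages every JSON file under dashboards-1.14/ into a ConfigMapList, one
ConfigMap per dashboard. Names are truncated to 63 characters to respect the
Kubernetes object-name length limit.
*/ -}}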
{{- if or (and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled) .Values.grafana.forceDeployDashboards }}
{{- $files := .Files.Glob "dashboards-1.14/*.json" }}
{{- if $files }}
apiVersion: v1
kind: ConfigMapList
items:
{{- range $path, $fileContents := $files }}
{{- $dashboardName := regexReplaceAll "(^.*/)(.*)\\.json$" $path "${2}" }}
- apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-%s" (include "project-prometheus-stack.fullname" $) $dashboardName | trunc 63 | trimSuffix "-" }}
namespace: {{ template "project-prometheus-stack.namespace" . }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: {{ ternary $.Values.grafana.sidecar.dashboards.labelValue "1" (not (empty $.Values.grafana.sidecar.dashboards.labelValue)) | quote }}
{{- end }}
app: {{ template "project-prometheus-stack.name" $ }}-grafana
{{ include "project-prometheus-stack.labels" $ | indent 6 }}
data:
{{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,46 @@
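{{- /*
Provisions the default Grafana datasource through the sidecar. Unless an
explicit url is configured, the datasource points at this release's Prometheus
service, and the scrape interval falls back from the sidecar setting to the
Prometheus scrapeInterval and finally to "30s".
*/ -}}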
{{- if or (and .Values.grafana.enabled .Values.grafana.sidecar.datasources.enabled) .Values.grafana.forceDeployDatasources }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "project-prometheus-stack.fullname" . }}-grafana-datasource
namespace: {{ include "project-prometheus-stack.namespace" . }}
{{- if .Values.grafana.sidecar.datasources.annotations }}
annotations:
{{ toYaml .Values.grafana.sidecar.datasources.annotations | indent 4 }}
{{- end }}
labels:
{{ $.Values.grafana.sidecar.datasources.label }}: "1"
app: {{ template "project-prometheus-stack.name" $ }}-grafana
{{ include "project-prometheus-stack.labels" $ | indent 4 }}
data:
datasource.yaml: |-
apiVersion: 1
{{- if .Values.grafana.deleteDatasources }}
deleteDatasources:
{{ tpl (toYaml .Values.grafana.deleteDatasources | indent 6) . }}
{{- end }}
datasources:
{{- $scrapeInterval := .Values.grafana.sidecar.datasources.defaultDatasourceScrapeInterval | default .Values.prometheus.prometheusSpec.scrapeInterval | default "30s" }}
{{- if .Values.grafana.sidecar.datasources.defaultDatasourceEnabled }}
- name: Prometheus
type: prometheus
uid: {{ .Values.grafana.sidecar.datasources.uid }}
{{- if .Values.grafana.sidecar.datasources.url }}
url: {{ .Values.grafana.sidecar.datasources.url }}
{{- else }}
url: http://{{ template "project-prometheus-stack.fullname" . }}-prometheus.{{ template "project-prometheus-stack.namespace" . }}:{{ .Values.prometheus.service.port }}/{{ trimPrefix "/" .Values.prometheus.prometheusSpec.routePrefix }}
{{- end }}
access: proxy
isDefault: true
jsonData:
timeInterval: {{ $scrapeInterval }}
{{- if .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations }}
exemplarTraceIdDestinations:
- datasourceUid: {{ .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.datasourceUid }}
name: {{ .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.traceIdLabelName }}
{{- end }}
{{- end }}
{{- if .Values.grafana.additionalDataSources }}
{{ tpl (toYaml .Values.grafana.additionalDataSources | indent 4) . }}
{{- end }}
{{- end }}


@ -0,0 +1,616 @@
{{- /*
Generated from 'alertmanager-overview' from https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/grafana-dashboardDefinitions.yaml
Do not change in-place! To change this file, first read the following link:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/hack
*/ -}}
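{{- /*
Note: the semverCompare bounds below effectively restrict this dashboard to
clusters reporting Kubernetes >= 1.14; the "<9.9.9-9" upper bound acts as a
no-op guard. Within the dashboard JSON, Grafana legend placeholders are wrapped
in backtick-quoted braces so that Helm emits them literally rather than trying
to evaluate them as template expressions.
*/ -}}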
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if and (or .Values.grafana.enabled .Values.grafana.forceDeployDashboards) (semverCompare ">=1.14.0-0" $kubeTargetVersion) (semverCompare "<9.9.9-9" $kubeTargetVersion) .Values.grafana.defaultDashboardsEnabled }}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.serviceMonitor.selfMonitor }}
apiVersion: v1
kind: ConfigMap
metadata:
namespace: {{ template "project-prometheus-stack.namespace" . }}
name: {{ printf "%s-%s" (include "project-prometheus-stack.fullname" $) "alertmanager-overview" | trunc 63 | trimSuffix "-" }}
annotations:
{{ toYaml .Values.grafana.sidecar.dashboards.annotations | indent 4 }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: {{ ternary $.Values.grafana.sidecar.dashboards.labelValue "1" (not (empty $.Values.grafana.sidecar.dashboards.labelValue)) | quote }}
{{- end }}
app: {{ template "project-prometheus-stack.name" $ }}-grafana
{{ include "project-prometheus-stack.labels" $ | indent 4 }}
data:
alertmanager-overview.json: |-
{
"__inputs": [
],
"__requires": [
],
"annotations": {
"list": [
]
},
"editable": false,
"gnetId": null,
"graphTooltip": 1,
"hideControls": false,
"id": null,
"links": [
],
"refresh": "30s",
"rows": [
{
"collapse": false,
"collapsed": false,
"panels": [
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"description": "current set of alerts stored in the Alertmanager",
"fill": 1,
"fillGradient": 0,
"gridPos": {
},
"id": 2,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": false,
"sideWidth": null,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [
],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
],
"spaceLength": 10,
"span": 6,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "sum(alertmanager_alerts{namespace=~\"$namespace\",service=~\"$service\"}) by (namespace,service,instance)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}}",
"refId": "A"
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "Alerts",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "none",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "none",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
},
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"description": "rate of successful and invalid alerts received by the Alertmanager",
"fill": 1,
"fillGradient": 0,
"gridPos": {
},
"id": 3,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": false,
"sideWidth": null,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [
],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
],
"spaceLength": 10,
"span": 6,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(alertmanager_alerts_received_total{namespace=~\"$namespace\",service=~\"$service\"}[$__rate_interval])) by (namespace,service,instance)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}} Received",
"refId": "A"
},
{
"expr": "sum(rate(alertmanager_alerts_invalid_total{namespace=~\"$namespace\",service=~\"$service\"}[$__rate_interval])) by (namespace,service,instance)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}} Invalid",
"refId": "B"
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "Alerts receive rate",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "ops",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "ops",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": true,
"title": "Alerts",
"titleSize": "h6",
"type": "row"
},
{
"collapse": false,
"collapsed": false,
"panels": [
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"description": "rate of successful and invalid notifications sent by the Alertmanager",
"fill": 1,
"fillGradient": 0,
"gridPos": {
},
"id": 4,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": false,
"sideWidth": null,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [
],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": "integration",
"seriesOverrides": [
],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(alertmanager_notifications_total{namespace=~\"$namespace\",service=~\"$service\", integration=\"$integration\"}[$__rate_interval])) by (integration,namespace,service,instance)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}} Total",
"refId": "A"
},
{
"expr": "sum(rate(alertmanager_notifications_failed_total{namespace=~\"$namespace\",service=~\"$service\", integration=\"$integration\"}[$__rate_interval])) by (integration,namespace,service,instance)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}} Failed",
"refId": "B"
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "$integration: Notifications Send Rate",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "ops",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "ops",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
},
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"description": "latency of notifications sent by the Alertmanager",
"fill": 1,
"fillGradient": 0,
"gridPos": {
},
"id": 5,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": false,
"sideWidth": null,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [
],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": "integration",
"seriesOverrides": [
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.99,\n sum(rate(alertmanager_notification_latency_seconds_bucket{namespace=~\"$namespace\",service=~\"$service\", integration=\"$integration\"}[$__rate_interval])) by (le,namespace,service,instance)\n) \n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}} 99th Percentile",
"refId": "A"
},
{
"expr": "histogram_quantile(0.50,\n sum(rate(alertmanager_notification_latency_seconds_bucket{namespace=~\"$namespace\",service=~\"$service\", integration=\"$integration\"}[$__rate_interval])) by (le,namespace,service,instance)\n) \n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}} Median",
"refId": "B"
},
{
"expr": "sum(rate(alertmanager_notification_latency_seconds_sum{namespace=~\"$namespace\",service=~\"$service\", integration=\"$integration\"}[$__rate_interval])) by (namespace,service,instance)\n/\nsum(rate(alertmanager_notification_latency_seconds_count{namespace=~\"$namespace\",service=~\"$service\", integration=\"$integration\"}[$__rate_interval])) by (namespace,service,instance)\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}instance{{`}}`}} Average",
"refId": "C"
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "$integration: Notification Duration",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "s",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "s",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": true,
"title": "Notifications",
"titleSize": "h6",
"type": "row"
}
],
"schemaVersion": 14,
"style": "dark",
"tags": [
"alertmanager-mixin"
],
"templating": {
"list": [
{
"current": {
"text": "Prometheus",
"value": "Prometheus"
},
"hide": 0,
"label": "Data Source",
"name": "datasource",
"options": [
],
"query": "prometheus",
"refresh": 1,
"regex": "",
"type": "datasource"
},
{
"allValue": null,
"current": {
"text": "",
"value": ""
},
"datasource": "$datasource",
"hide": 0,
"includeAll": false,
"label": "namespace",
"multi": false,
"name": "namespace",
"options": [
],
"query": "label_values(alertmanager_alerts, namespace)",
"refresh": 2,
"regex": "",
"sort": 1,
"tagValuesQuery": "",
"tags": [
],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": {
"text": "",
"value": ""
},
"datasource": "$datasource",
"hide": 0,
"includeAll": false,
"label": "service",
"multi": false,
"name": "service",
"options": [
],
"query": "label_values(alertmanager_alerts, service)",
"refresh": 2,
"regex": "",
"sort": 1,
"tagValuesQuery": "",
"tags": [
],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": {
"text": "all",
"value": "$__all"
},
"datasource": "$datasource",
"hide": 2,
"includeAll": true,
"label": null,
"multi": false,
"name": "integration",
"options": [
],
"query": "label_values(alertmanager_notifications_total{integration=~\".*\"}, integration)",
"refresh": 2,
"regex": "",
"sort": 1,
"tagValuesQuery": "",
"tags": [
],
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "{{ .Values.grafana.defaultDashboardsTimezone }}",
"title": "Alertmanager / Overview",
"uid": "alertmanager-overview",
"version": 0
}
{{- end }}
{{- end }}


@ -0,0 +1,635 @@
{{- /*
Generated from 'grafana-overview' from https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/grafana-dashboardDefinitions.yaml
Do not change in-place! To change this file, first read the following link:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/hack
*/ -}}
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if and (or .Values.grafana.enabled .Values.grafana.forceDeployDashboards) (semverCompare ">=1.14.0-0" $kubeTargetVersion) (semverCompare "<9.9.9-9" $kubeTargetVersion) .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
namespace: {{ template "project-prometheus-stack-grafana.namespace" . }}
name: {{ printf "%s-%s" (include "project-prometheus-stack.fullname" $) "grafana-overview" | trunc 63 | trimSuffix "-" }}
annotations:
{{ toYaml .Values.grafana.sidecar.dashboards.annotations | indent 4 }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: {{ ternary $.Values.grafana.sidecar.dashboards.labelValue "1" (not (empty $.Values.grafana.sidecar.dashboards.labelValue)) | quote }}
{{- end }}
app: {{ template "project-prometheus-stack.name" $ }}-grafana
{{ include "project-prometheus-stack.labels" $ | indent 4 }}
data:
grafana-overview.json: |-
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [
],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": 3085,
"iteration": 1631554945276,
"links": [
],
"panels": [
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"mappings": [
],
"noValue": "0",
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
]
},
"gridPos": {
"h": 5,
"w": 6,
"x": 0,
"y": 0
},
"id": 6,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"text": {
},
"textMode": "auto"
},
"pluginVersion": "8.1.3",
"targets": [
{
"expr": "grafana_alerting_result_total{job=~\"$job\", instance=~\"$instance\", state=\"alerting\"}",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Firing Alerts",
"type": "stat"
},
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"mappings": [
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
]
},
"gridPos": {
"h": 5,
"w": 6,
"x": 6,
"y": 0
},
"id": 8,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"text": {
},
"textMode": "auto"
},
"pluginVersion": "8.1.3",
"targets": [
{
"expr": "sum(grafana_stat_totals_dashboard{job=~\"$job\", instance=~\"$instance\"})",
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Dashboards",
"type": "stat"
},
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {
"align": null,
"displayMode": "auto"
},
"mappings": [
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
]
},
"gridPos": {
"h": 5,
"w": 12,
"x": 12,
"y": 0
},
"id": 10,
"options": {
"showHeader": true
},
"pluginVersion": "8.1.3",
"targets": [
{
"expr": "grafana_build_info{job=~\"$job\", instance=~\"$instance\"}",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Build Info",
"transformations": [
{
"id": "labelsToFields",
"options": {
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"branch": true,
"container": true,
"goversion": true,
"namespace": true,
"pod": true,
"revision": true
},
"indexByName": {
"Time": 7,
"Value": 11,
"branch": 4,
"container": 8,
"edition": 2,
"goversion": 6,
"instance": 1,
"job": 0,
"namespace": 9,
"pod": 10,
"revision": 5,
"version": 3
},
"renameByName": {
}
}
}
],
"type": "table"
},
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"links": [
]
},
"overrides": [
]
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 5
},
"hiddenSeries": false,
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "8.1.3",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [
],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "sum by (status_code) (irate(grafana_http_request_duration_seconds_count{job=~\"$job\", instance=~\"$instance\"}[1m])) ",
"interval": "",
"legendFormat": "{{`{{`}}status_code{{`}}`}}",
"refId": "A"
}
],
"thresholds": [
],
"timeFrom": null,
"timeRegions": [
],
"timeShift": null,
"title": "RPS",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"$$hashKey": "object:157",
"format": "reqps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:158",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": false
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"links": [
]
},
"overrides": [
]
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 5
},
"hiddenSeries": false,
"id": 4,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "8.1.3",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"exemplar": true,
"expr": "histogram_quantile(0.99, sum(irate(grafana_http_request_duration_seconds_bucket{instance=~\"$instance\", job=~\"$job\"}[$__rate_interval])) by (le)) * 1",
"interval": "",
"legendFormat": "99th Percentile",
"refId": "A"
},
{
"exemplar": true,
"expr": "histogram_quantile(0.50, sum(irate(grafana_http_request_duration_seconds_bucket{instance=~\"$instance\", job=~\"$job\"}[$__rate_interval])) by (le)) * 1",
"interval": "",
"legendFormat": "50th Percentile",
"refId": "B"
},
{
"exemplar": true,
"expr": "sum(irate(grafana_http_request_duration_seconds_sum{instance=~\"$instance\", job=~\"$job\"}[$__rate_interval])) * 1 / sum(irate(grafana_http_request_duration_seconds_count{instance=~\"$instance\", job=~\"$job\"}[$__rate_interval]))",
"interval": "",
"legendFormat": "Average",
"refId": "C"
}
],
"thresholds": [
],
"timeFrom": null,
"timeRegions": [
],
"timeShift": null,
"title": "Request Latency",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"$$hashKey": "object:210",
"format": "ms",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:211",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"schemaVersion": 30,
"style": "dark",
"tags": [
],
"templating": {
"list": [
{
"current": {
"text": "Prometheus",
"value": "Prometheus"
},
"description": null,
"error": null,
"hide": 0,
"includeAll": false,
"label": "Data Source",
"multi": false,
"name": "datasource",
"options": [
],
"query": "prometheus",
"queryValue": "",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"type": "datasource"
},
{
"allValue": ".*",
"current": {
"selected": false,
"text": [
"default/grafana"
],
"value": [
"default/grafana"
]
},
"datasource": "$datasource",
"definition": "label_values(grafana_build_info, job)",
"description": null,
"error": null,
"hide": 0,
"includeAll": true,
"label": null,
"multi": true,
"name": "job",
"options": [
],
"query": {
"query": "label_values(grafana_build_info, job)",
"refId": "Billing Admin-job-Variable-Query"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": ".*",
"current": {
"selected": false,
"text": "All",
"value": "$__all"
},
"datasource": "$datasource",
"definition": "label_values(grafana_build_info, instance)",
"description": null,
"error": null,
"hide": 0,
"includeAll": true,
"label": null,
"multi": true,
"name": "instance",
"options": [
],
"query": {
"query": "label_values(grafana_build_info, instance)",
"refId": "Billing Admin-instance-Variable-Query"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "{{ .Values.grafana.defaultDashboardsTimezone }}",
"title": "Grafana Overview",
"uid": "6be0s85Mk",
"version": 2
}
{{- end }}


@ -0,0 +1,963 @@
{{- /*
Generated from 'k8s-resources-node' from https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/grafana-dashboardDefinitions.yaml
Do not change in-place! To change this file, first read the following link:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/hack
*/ -}}
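{{- /*
This dashboard relies on recording rules from the kubernetes-mixin (e.g.
node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate and
cluster:namespace:pod_cpu:active:kube_pod_container_resource_requests), so it
assumes a Prometheus that already evaluates those rules, such as the
cluster-level rancher-monitoring Prometheus that each project stack federates
from.
*/ -}}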
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if and (or .Values.grafana.enabled .Values.grafana.forceDeployDashboards) (semverCompare ">=1.14.0-0" $kubeTargetVersion) (semverCompare "<9.9.9-9" $kubeTargetVersion) .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
namespace: {{ template "project-prometheus-stack.namespace" . }}
name: {{ printf "%s-%s" (include "project-prometheus-stack.fullname" $) "k8s-resources-node" | trunc 63 | trimSuffix "-" }}
annotations:
{{ toYaml .Values.grafana.sidecar.dashboards.annotations | indent 4 }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: {{ ternary $.Values.grafana.sidecar.dashboards.labelValue "1" (not (empty $.Values.grafana.sidecar.dashboards.labelValue)) | quote }}
{{- end }}
app: {{ template "project-prometheus-stack.name" $ }}-grafana
{{ include "project-prometheus-stack.labels" $ | indent 4 }}
data:
k8s-resources-node.json: |-
{
"annotations": {
"list": [
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"hideControls": false,
"links": [
],
"refresh": "10s",
"rows": [
{
"collapse": false,
"height": "250px",
"panels": [
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fill": 10,
"id": 1,
"interval": "1m",
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 0,
"links": [
],
"nullPointMode": "null as zero",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [
],
"spaceLength": 10,
"span": 12,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{node=~\"$node\"}) by (pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}pod{{`}}`}}",
"legendLink": null,
"step": 10
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "CPU Usage",
"tooltip": {
"shared": false,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": false
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": true,
"title": "CPU Usage",
"titleSize": "h6"
},
{
"collapse": false,
"height": "250px",
"panels": [
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"id": 2,
"interval": "1m",
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [
],
"nullPointMode": "null as zero",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [
],
"spaceLength": 10,
"span": 12,
"stack": false,
"steppedLine": false,
"styles": [
{
"alias": "Time",
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"pattern": "Time",
"type": "hidden"
},
{
"alias": "CPU Usage",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #A",
"thresholds": [
],
"type": "number",
"unit": "short"
},
{
"alias": "CPU Requests",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #B",
"thresholds": [
],
"type": "number",
"unit": "short"
},
{
"alias": "CPU Requests %",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #C",
"thresholds": [
],
"type": "number",
"unit": "percentunit"
},
{
"alias": "CPU Limits",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #D",
"thresholds": [
],
"type": "number",
"unit": "short"
},
{
"alias": "CPU Limits %",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #E",
"thresholds": [
],
"type": "number",
"unit": "percentunit"
},
{
"alias": "Pod",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "pod",
"thresholds": [
],
"type": "number",
"unit": "short"
},
{
"alias": "",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"pattern": "/.*/",
"thresholds": [
],
"type": "string",
"unit": "short"
}
],
"targets": [
{
"expr": "sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "A",
"step": 10
},
{
"expr": "sum(cluster:namespace:pod_cpu:active:kube_pod_container_resource_requests{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "B",
"step": 10
},
{
"expr": "sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{node=~\"$node\"}) by (pod) / sum(cluster:namespace:pod_cpu:active:kube_pod_container_resource_requests{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "C",
"step": 10
},
{
"expr": "sum(cluster:namespace:pod_cpu:active:kube_pod_container_resource_limits{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "D",
"step": 10
},
{
"expr": "sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{node=~\"$node\"}) by (pod) / sum(cluster:namespace:pod_cpu:active:kube_pod_container_resource_limits{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "E",
"step": 10
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "CPU Quota",
"tooltip": {
"shared": false,
"sort": 2,
"value_type": "individual"
},
"transform": "table",
"type": "table",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": false
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": true,
"title": "CPU Quota",
"titleSize": "h6"
},
{
"collapse": false,
"height": "250px",
"panels": [
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fill": 10,
"id": 3,
"interval": "1m",
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 0,
"links": [
],
"nullPointMode": "null as zero",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [
],
"spaceLength": 10,
"span": 12,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "sum(node_namespace_pod_container:container_memory_working_set_bytes{node=~\"$node\", container!=\"\"}) by (pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{`}}pod{{`}}`}}",
"legendLink": null,
"step": 10
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "Memory Usage (w/o cache)",
"tooltip": {
"shared": false,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": false
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": true,
"title": "Memory Usage",
"titleSize": "h6"
},
{
"collapse": false,
"height": "250px",
"panels": [
{
"aliasColors": {
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"id": 4,
"interval": "1m",
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [
],
"nullPointMode": "null as zero",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [
],
"spaceLength": 10,
"span": 12,
"stack": false,
"steppedLine": false,
"styles": [
{
"alias": "Time",
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"pattern": "Time",
"type": "hidden"
},
{
"alias": "Memory Usage",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #A",
"thresholds": [
],
"type": "number",
"unit": "bytes"
},
{
"alias": "Memory Requests",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #B",
"thresholds": [
],
"type": "number",
"unit": "bytes"
},
{
"alias": "Memory Requests %",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #C",
"thresholds": [
],
"type": "number",
"unit": "percentunit"
},
{
"alias": "Memory Limits",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #D",
"thresholds": [
],
"type": "number",
"unit": "bytes"
},
{
"alias": "Memory Limits %",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #E",
"thresholds": [
],
"type": "number",
"unit": "percentunit"
},
{
"alias": "Memory Usage (RSS)",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #F",
"thresholds": [
],
"type": "number",
"unit": "bytes"
},
{
"alias": "Memory Usage (Cache)",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #G",
"thresholds": [
],
"type": "number",
"unit": "bytes"
},
{
"alias": "Memory Usage (Swap)",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "Value #H",
"thresholds": [
],
"type": "number",
"unit": "bytes"
},
{
"alias": "Pod",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"linkTargetBlank": false,
"linkTooltip": "Drill down",
"linkUrl": "",
"pattern": "pod",
"thresholds": [
],
"type": "number",
"unit": "short"
},
{
"alias": "",
"colorMode": null,
"colors": [
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"pattern": "/.*/",
"thresholds": [
],
"type": "string",
"unit": "short"
}
],
"targets": [
{
"expr": "sum(node_namespace_pod_container:container_memory_working_set_bytes{node=~\"$node\",container!=\"\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "A",
"step": 10
},
{
"expr": "sum(cluster:namespace:pod_memory:active:kube_pod_container_resource_requests{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "B",
"step": 10
},
{
"expr": "sum(node_namespace_pod_container:container_memory_working_set_bytes{node=~\"$node\",container!=\"\"}) by (pod) / sum(cluster:namespace:pod_memory:active:kube_pod_container_resource_requests{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "C",
"step": 10
},
{
"expr": "sum(cluster:namespace:pod_memory:active:kube_pod_container_resource_limits{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "D",
"step": 10
},
{
"expr": "sum(node_namespace_pod_container:container_memory_working_set_bytes{node=~\"$node\",container!=\"\"}) by (pod) / sum(cluster:namespace:pod_memory:active:kube_pod_container_resource_limits{node=~\"$node\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "E",
"step": 10
},
{
"expr": "sum(node_namespace_pod_container:container_memory_rss{node=~\"$node\",container!=\"\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "F",
"step": 10
},
{
"expr": "sum(node_namespace_pod_container:container_memory_cache{node=~\"$node\",container!=\"\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "G",
"step": 10
},
{
"expr": "sum(node_namespace_pod_container:container_memory_swap{node=~\"$node\",container!=\"\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "H",
"step": 10
}
],
"thresholds": [
],
"timeFrom": null,
"timeShift": null,
"title": "Memory Quota",
"tooltip": {
"shared": false,
"sort": 2,
"value_type": "individual"
},
"transform": "table",
"type": "table",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [
]
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": false
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": true,
"title": "Memory Quota",
"titleSize": "h6"
}
],
"schemaVersion": 14,
"style": "dark",
"tags": [
"kubernetes-mixin"
],
"templating": {
"list": [
{
"current": {
"text": "Prometheus",
"value": "Prometheus"
},
"hide": 0,
"label": "Data Source",
"name": "datasource",
"options": [
],
"query": "prometheus",
"refresh": 1,
"regex": "",
"type": "datasource"
},
{
"allValue": null,
"current": {
"text": "",
"value": ""
},
"datasource": "$datasource",
"hide": 0,
"includeAll": false,
"label": null,
"multi": true,
"name": "node",
"options": [
],
"query": "label_values(kube_pod_info, node)",
"refresh": 2,
"regex": "",
"sort": 1,
"tagValuesQuery": "",
"tags": [
],
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "{{ .Values.grafana.defaultDashboardsTimezone }}",
"title": "Kubernetes / Compute Resources / Node (Pods)",
"uid": "200ac8fdbfbb74b39aff88118e4d1c2c",
"version": 0
}
{{- end }}

Some files were not shown because too many files have changed in this diff.