commit e544044682

Binary file not shown.
Binary file not shown.

@@ -0,0 +1,27 @@
annotations:
  artifacthub.io/license: Apache-2.0
  artifacthub.io/links: |
    - name: Documentation
      url: https://scod.hpedev.io/csi_driver
  artifacthub.io/prerelease: "false"
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: HPE CSI Driver for Kubernetes
  catalog.cattle.io/release-name: hpe-csi-driver
apiVersion: v1
appVersion: 2.1.1
description: A Helm chart for installing the HPE CSI Driver for Kubernetes
home: https://hpe.com/storage/containers
icon: https://raw.githubusercontent.com/hpe-storage/co-deployments/master/docs/assets/hpedev.png
keywords:
  - HPE
  - Storage
  - CSI
kubeVersion: 1.21 - 1.23
maintainers:
  - email: datamattsson@hpe.com
    name: datamattsson
name: hpe-csi-driver
sources:
  - https://github.com/hpe-storage/co-deployments
  - https://github.com/hpe-storage/csi-driver
version: 2.1.1

@@ -0,0 +1,154 @@
# HPE CSI Driver for Kubernetes Helm chart

The [HPE CSI Driver for Kubernetes](https://scod.hpedev.io/csi_driver/index.html) leverages Hewlett Packard Enterprise storage platforms to provide scalable and persistent storage for stateful applications.

## Prerequisites

- Upstream Kubernetes version >= 1.18
- Most Kubernetes distributions are supported
- Recent Ubuntu, SLES, CentOS or RHEL compute nodes connected to their respective official package repositories
- Helm 3 (version >= 3.2.0 required)

Depending on which [Container Storage Provider](https://scod.hpedev.io/container_storage_provider/index.html) (CSP) is being used, other prerequisites and requirements may apply, such as storage platform OS and features.

- [HPE Alletra 6000 and Nimble Storage](https://scod.hpedev.io/container_storage_provider/hpe_nimble_storage/index.html)
- [HPE Alletra 9000, Primera and 3PAR](https://scod.hpedev.io/container_storage_provider/hpe_3par_primera/index.html)

## Configuration and installation

The following table lists the configurable parameters of the chart and their default values.

| Parameter                 | Description                                                             | Default          |
|---------------------------|-------------------------------------------------------------------------|------------------|
| disable.nimble            | Disable HPE Nimble Storage CSP `Service`.                               | false            |
| disable.primera           | Disable HPE Primera (and 3PAR) CSP `Service`.                           | false            |
| disable.alletra6000       | Disable HPE Alletra 6000 CSP `Service`.                                 | false            |
| disable.alletra9000       | Disable HPE Alletra 9000 CSP `Service`.                                 | false            |
| disableNodeConformance    | Disable automatic installation of iSCSI/multipath packages.             | false            |
| disableNodeGetVolumeStats | Disable the NodeGetVolumeStats call to the CSI driver.                  | false            |
| imagePullPolicy           | Image pull policy (`Always`, `IfNotPresent`, `Never`).                  | IfNotPresent     |
| iscsi.chapUser            | Username for iSCSI CHAP authentication.                                 | ""               |
| iscsi.chapPassword        | Password for iSCSI CHAP authentication.                                 | ""               |
| logLevel                  | Log level. Can be one of `info`, `debug`, `trace`, `warn` and `error`.  | info             |
| registry                  | Registry to pull HPE CSI Driver container images from.                  | quay.io          |
| kubeletRootDir            | The kubelet root directory path.                                        | /var/lib/kubelet |

It's recommended to download [a sample values.yaml file](https://github.com/hpe-storage/co-deployments/blob/master/helm/values/csi-driver) from the corresponding release of the chart and edit it to fit the environment the chart is being deployed to.

These are the bare minimum required parameters for a successful deployment to an iSCSI environment when CHAP authentication is required.

```
iscsi:
  chapUser: "<username>"
  chapPassword: "<password>"
```

Tweak any additional parameters to suit the environment or as prescribed by HPE.
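
As an illustration of how the parameters above map into a `values.yaml` file (dotted parameters such as `disable.nimble` become nested keys), here is a hypothetical example; the values themselves are placeholders, not recommendations:

```
# Example values.yaml (hypothetical values)
disableNodeConformance: false
logLevel: info
registry: quay.io
disable:
  nimble: false
  alletra6000: false
  primera: true
  alletra9000: true
iscsi:
  chapUser: "<username>"
  chapPassword: "<password>"
```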

### Installing the chart

To install the chart with the release name `my-hpe-csi-driver`:

Add the HPE Helm repo:

```
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
helm repo update
```

Install the latest chart:

```
kubectl create ns hpe-storage
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage -f myvalues.yaml
```

**Note**: `myvalues.yaml` is optional if no parameters are overridden from the defaults. Also pay attention to what the latest version of the chart is. If it's labeled with `prerelease` and a "beta" tag, add `--version X.Y.Z` to install a "stable" chart.
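
To list the chart versions available in the repo, including any prerelease versions, a standard Helm query can be used:

```
helm search repo hpe-storage/hpe-csi-driver --versions
```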

### Upgrading the chart

Due to a [Helm limitation](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations), CRDs are not upgraded between different chart versions, and upgrading the chart in place is therefore not supported.
Our recommendation is to uninstall the existing chart and install the chart with the desired version. CRDs are preserved between uninstall and install.
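
A rough sketch of that flow, reusing the release name and namespace from this README and `X.Y.Z` as a stand-in for the desired chart version:

```
helm uninstall my-hpe-csi-driver -n hpe-storage
# apply any CRD updates required by the target version (see below), then:
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage --version X.Y.Z -f myvalues.yaml
```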

#### Upgrading 2.0.0 to 2.1.0

Before version 2.0.0 is uninstalled, the following CRDs need to be updated.

**Important:** If there are HPE Alletra 9000, Primera or 3PAR Remote Copy Groups configured on the cluster, follow the [next steps](#update-rcg-info) before uninstallation.

##### Update RCG Info

This step is only necessary if there are HPE Alletra 9000, Primera or 3PAR Remote Copy Groups configured on the cluster. If there are none, proceed to the [next step](#update-crds).

Switch the kubectl context to the Namespace where the HPE CSI Driver is installed. The most common is "hpe-storage".

```
kubectl config set-context --current --namespace=hpe-storage
```

Create the Job using the command below, which migrates the "rcg-info" record to the new key "RCGCreatedByCSP".

```
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/rcg-info/v1.0.0/convert-rcg-info.yaml
```

Verify that the Job has completed using the command below.

```
kubectl wait --for=condition=complete --timeout=600s job/primera3par-rcg-info
```

Continue to [update the CRDs](#update-crds) followed by [uninstalling the chart](#uninstalling-the-chart).

##### Update CRDs

Before reinstalling the driver, apply the new CRDs.

```
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/charts/hpe-csi-driver/crds/hpevolumeinfos_v2_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/charts/hpe-csi-driver/crds/hpevolumegroupinfos_v2_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/charts/hpe-csi-driver/crds/snapshotgroupinfos_v2_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/charts/hpe-csi-driver/crds/hpereplicated_deviceinfo_v2_crd.yaml
```

### Uninstalling the chart

To uninstall the `my-hpe-csi-driver` release:

```
helm uninstall my-hpe-csi-driver -n hpe-storage
```

**Note**: Due to a limitation in Helm, CRDs are not deleted as part of the chart uninstall.
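
If you need to see which HPE CSI Driver CRDs remain on the cluster after an uninstall, a simple way to list them is:

```
kubectl get crd -o name | grep storage.hpe.com
```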

### Alternative install method

In some cases it's more practical to provide the local configuration via the `helm` CLI directly. Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. These take precedence over entries in [values.yaml](https://github.com/hpe-storage/co-deployments/blob/master/helm/values/csi-driver). For example:

```
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage \
  --set iscsi.chapUser=admin \
  --set iscsi.chapPassword=xxxxxxxx
```

## Using persistent storage with Kubernetes

Enable dynamic provisioning of persistent storage by creating a `StorageClass` API object that references a `Secret` which maps to a supported HPE primary storage backend. Refer to the [HPE CSI Driver for Kubernetes](https://scod.hpedev.io/csi_driver/deployment.html#add_a_hpe_storage_backend) documentation on the [HPE Storage Container Orchestration Documentation](https://scod.hpedev.io/) portal. Also, it's helpful to be familiar with [persistent storage concepts](https://kubernetes.io/docs/concepts/storage/volumes/) in Kubernetes prior to deploying stateful workloads.
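
For orientation only, a backend `Secret` and `StorageClass` pair typically looks something like the sketch below. The names, credentials, backend address and the exact parameter set are hypothetical placeholders and differ per storage platform; follow the SCOD deployment guide linked above for the authoritative layout.

```
---
apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend              # hypothetical name
  namespace: hpe-storage
stringData:
  serviceName: nimble-csp-svc    # CSP service for the chosen backend (placeholder)
  servicePort: "8080"
  backend: 192.168.1.10          # array management address (placeholder)
  username: admin                # placeholder credentials
  password: admin-password
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard             # hypothetical name
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
allowVolumeExpansion: true
```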

## Support

The HPE CSI Driver for Kubernetes Helm chart is fully supported by HPE.

Formal support statements for each HPE supported CSP are [available on SCOD](https://scod.hpedev.io/legal/support). Use this facility for formal support of your HPE storage products, including the Helm chart.

## Community

Please file any issues, questions or feature requests you may have [here](https://github.com/hpe-storage/co-deployments/issues) (do not use this facility for support inquiries of your HPE storage product, see [SCOD](https://scod.hpedev.io/legal/support) for support). You may also join our Slack community to chat with HPE folks close to this project. We hang out in `#NimbleStorage`, `#3par-primera`, and `#Kubernetes`. Sign up at [slack.hpedev.io](https://slack.hpedev.io/) and log in at [hpedev.slack.com](https://hpedev.slack.com/).

## Contributing

We value all feedback and contributions. If you find any issues or want to contribute, please feel free to open an issue or file a PR. More details can be found in [CONTRIBUTING.md](https://github.com/hpe-storage/co-deployments/blob/master/CONTRIBUTING.md).

## License

This is open source software licensed under the Apache License 2.0. Please see [LICENSE](https://github.com/hpe-storage/co-deployments/blob/master/LICENSE) for details.

@@ -0,0 +1,3 @@
# HPE CSI Driver for Kubernetes

The [HPE CSI Driver for Kubernetes](https://github.com/hpe-storage/csi-driver) leverages HPE storage platforms to provide scalable and persistent storage for stateful applications.

@@ -0,0 +1,70 @@
---
#############################################
############ HPE Node Info CRD ############
#############################################
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  name: hpenodeinfos.storage.hpe.com
spec:
  group: storage.hpe.com
  names:
    kind: HPENodeInfo
    plural: hpenodeinfos
  scope: Cluster
  versions:
    - name: v1
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
      schema:
        openAPIV3Schema:
          properties:
            apiVersion:
              description: "APIVersion defines the versioned schema of this representation of an object."
              type: string
            kind:
              description: "Kind is a string value representing the REST resource this object represents"
              type: string
            spec:
              description: "spec defines the desired characteristics of a HPE nodeinfo requested by a user."
              properties:
                chapPassword:
                  description: "The CHAP Password"
                  type: string
                chapUser:
                  description: "The CHAP User Name"
                  type: string
                iqns:
                  description: "List of IQNs configured on the node."
                  items:
                    type: string
                  type: array
                networks:
                  description: "List of networks configured on the node."
                  items:
                    type: string
                  type: array
                uuid:
                  description: "The UUID of the node."
                  type: string
                wwpns:
                  description: "List of WWPNs configured on the node."
                  items:
                    type: string
                  type: array
              required:
                - uuid
                - networks
              type: object
          required:
            - spec
          type: object
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
@ -0,0 +1,115 @@
|
|||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: hpereplicationdeviceinfos.storage.hpe.com
|
||||
spec:
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: HPEReplicationDeviceInfo
|
||||
plural: hpereplicationdeviceinfos
|
||||
shortNames:
|
||||
- hperdi
|
||||
|
||||
scope: Cluster
|
||||
versions:
|
||||
- name: v1
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: false
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
#x-kubernetes-preserve-unknown-fields: true
|
||||
properties:
|
||||
hpeReplicationDeviceInfos:
|
||||
description: List of HPE Replicated Device Information
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
targets:
|
||||
description: List of Target Array Details
|
||||
type: object
|
||||
items:
|
||||
description: Target Array Details
|
||||
type: object
|
||||
properties:
|
||||
targetName:
|
||||
description: Target Name of the array
|
||||
type: string
|
||||
targetCpg:
|
||||
description: Target CPG of the array
|
||||
type: string
|
||||
targetSnapCpg:
|
||||
description: Target Snap CPG of the array
|
||||
type: string
|
||||
targetSecret:
|
||||
description: Secret of the replicated array
|
||||
type: string
|
||||
targetMode:
|
||||
description: Replication Mode
|
||||
type: string
|
||||
targetSecretNamespace:
|
||||
description: Namespace of secret
|
||||
type: string
|
||||
required:
|
||||
- targetName
|
||||
- targetCpg
|
||||
- targetSecret
|
||||
- targetSecretNamespace
|
||||
- name: v2
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: true
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
x-kubernetes-preserve-unknown-fields: true
|
||||
properties:
|
||||
hpeReplicationDeviceInfos:
|
||||
description: List of HPE Replicated Device Information
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
targets:
|
||||
description: List of Target Array Details
|
||||
type: object
|
||||
items:
|
||||
description: Target Array Details
|
||||
type: object
|
||||
properties:
|
||||
targetName:
|
||||
description: Target Name of the array
|
||||
type: string
|
||||
targetCpg:
|
||||
description: Target CPG of the array
|
||||
type: string
|
||||
targetSnapCpg:
|
||||
description: Target Snap CPG of the array
|
||||
type: string
|
||||
targetSecret:
|
||||
description: Secret of the replicated array
|
||||
type: string
|
||||
targetMode:
|
||||
description: Replication Mode
|
||||
type: string
|
||||
targetSecretNamespace:
|
||||
description: Namespace of secret
|
||||
type: string
|
||||
required:
|
||||
- targetName
|
||||
- targetCpg
|
||||
- targetSecret
|
||||
- targetSecretNamespace
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
|
||||
|
|
@ -0,0 +1,124 @@
|
|||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: hpevolumegroupinfos.storage.hpe.com
|
||||
spec:
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: HPEVolumeGroupInfo
|
||||
plural: hpevolumegroupinfos
|
||||
shortNames:
|
||||
- hpevgi
|
||||
scope: Cluster
|
||||
versions:
|
||||
- name: v1
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: false
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
#x-kubernetes-preserve-unknown-fields: true
|
||||
properties:
|
||||
hpeVolumeGroupInfos:
|
||||
description: List of HPE volume groups configured for 3PAR/Primera arrays.
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
uuid:
|
||||
description: The UUID of the node.
|
||||
type: string
|
||||
|
||||
record:
|
||||
description: Metadata for the volume group
|
||||
type: object
|
||||
|
||||
snapshotGroups:
|
||||
description: Snapshot groups that are linked to this volume group
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
id:
|
||||
description: ID of the snapshot group
|
||||
type: string
|
||||
|
||||
name:
|
||||
description: Name of the snapshot group
|
||||
type: string
|
||||
type: object
|
||||
volumes:
|
||||
description: Volumes that are members in this volume group
|
||||
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
volumeId:
|
||||
description: ID of the member volume
|
||||
type: string
|
||||
|
||||
volumeName:
|
||||
description: Name of the member volume
|
||||
type: string
|
||||
type: object
|
||||
type: object
|
||||
- name: v2
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: true
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
x-kubernetes-preserve-unknown-fields: true
|
||||
properties:
|
||||
hpeVolumeGroupInfos:
|
||||
description: List of HPE volume groups configured for 3PAR/Primera arrays.
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
uuid:
|
||||
description: The UUID of the node.
|
||||
type: string
|
||||
|
||||
record:
|
||||
description: Metadata for the volume group
|
||||
type: object
|
||||
|
||||
snapshotGroups:
|
||||
description: Snapshot groups that are linked to this volume group
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
id:
|
||||
description: ID of the snapshot group
|
||||
type: string
|
||||
|
||||
name:
|
||||
description: Name of the snapshot group
|
||||
type: string
|
||||
type: object
|
||||
volumes:
|
||||
description: Volumes that are members in this volume group
|
||||
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
volumeId:
|
||||
description: ID of the member volume
|
||||
type: string
|
||||
|
||||
volumeName:
|
||||
description: Name of the member volume
|
||||
type: string
|
||||
type: object
|
||||
type: object
|
||||
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
|
|
@ -0,0 +1,68 @@
|
|||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: hpevolumeinfos.storage.hpe.com
|
||||
spec:
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: HPEVolumeInfo
|
||||
plural: hpevolumeinfos
|
||||
scope: Cluster
|
||||
# list of versions supported by this CustomResourceDefinition
|
||||
versions:
|
||||
- name: v1
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: false
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
#x-kubernetes-preserve-unknown-fields: true
|
||||
properties:
|
||||
hpeVolumes:
|
||||
description: List of HPE volumes configured for 3PAR/Primera arrays.
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
uuid:
|
||||
description: The UUID of the node.
|
||||
type: string
|
||||
|
||||
record:
|
||||
description: Metadata for the volume
|
||||
type: object
|
||||
- name: v2
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: true
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
x-kubernetes-preserve-unknown-fields: true
|
||||
|
||||
properties:
|
||||
hpeVolumes:
|
||||
description: List of HPE volumes configured for 3PAR/Primera arrays.
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
uuid:
|
||||
description: The UUID of the node.
|
||||
type: string
|
||||
|
||||
record:
|
||||
description: Metadata for the volume
|
||||
type: object
|
||||
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
|
|
@ -0,0 +1,112 @@
|
|||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: hpesnapshotgroupinfos.storage.hpe.com
|
||||
spec:
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: HPESnapshotGroupInfo
|
||||
plural: hpesnapshotgroupinfos
|
||||
shortNames:
|
||||
- hpesgi
|
||||
scope: Cluster
|
||||
versions:
|
||||
- name: v1
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: false
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
#x-kubernetes-preserve-unknown-fields: true
|
||||
properties:
|
||||
hpeSnapshotGroupInfos:
|
||||
description: List of HPE snapshot groups created for 3PAR/Primera arrays.
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
uuid:
|
||||
description: The UUID of the node.
|
||||
type: string
|
||||
|
||||
record:
|
||||
description: Metadata for the volume group
|
||||
type: object
|
||||
|
||||
snapshotVolumes:
|
||||
description: Snapshot volumes that are part of this snapshot group
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
srcVolumeId:
|
||||
description: ID of the volume that is the source of this snapshot volume
|
||||
type: string
|
||||
|
||||
srcVolumeName:
|
||||
description: Name of the volume that is the source of this snapshot volume
|
||||
type: string
|
||||
|
||||
snapshotId:
|
||||
description: Snapshot volume Id
|
||||
type: string
|
||||
|
||||
snapshotName:
|
||||
description: Snapshot volume name
|
||||
type: string
|
||||
- name: v2
|
||||
# Each version can be enabled/disabled by Served flag.
|
||||
served: true
|
||||
# One and only one version must be marked as the storage version.
|
||||
storage: true
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
type: object
|
||||
x-kubernetes-preserve-unknown-fields: true
|
||||
properties:
|
||||
hpeSnapshotGroupInfos:
|
||||
description: List of HPE snapshot groups created for 3PAR/Primera arrays.
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
uuid:
|
||||
description: The UUID of the node.
|
||||
type: string
|
||||
|
||||
record:
|
||||
description: Metadata for the volume group
|
||||
type: object
|
||||
|
||||
snapshotVolumes:
|
||||
description: Snapshot volumes that are part of this snapshot group
|
||||
type: object
|
||||
items:
|
||||
type: object
|
||||
properties:
|
||||
srcVolumeId:
|
||||
description: ID of the volume that is the source of this snapshot volume
|
||||
type: string
|
||||
|
||||
srcVolumeName:
|
||||
description: Name of the volume that is the source of this snapshot volume
|
||||
type: string
|
||||
|
||||
snapshotId:
|
||||
description: Snapshot volume Id
|
||||
type: string
|
||||
|
||||
snapshotName:
|
||||
description: Snapshot volume name
|
||||
type: string
|
||||
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
|
|
@ -0,0 +1,60 @@
|
|||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: snapshotgroupclasses.storage.hpe.com
|
||||
spec:
|
||||
conversion:
|
||||
strategy: None
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: SnapshotGroupClass
|
||||
listKind: SnapshotGroupClassList
|
||||
plural: snapshotgroupclasses
|
||||
singular: snapshotgroupclass
|
||||
scope: Cluster
|
||||
versions:
|
||||
- name: v1
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
description: SnapshotGroupClass specifies parameters that a underlying
|
||||
storage system uses when creating a volumegroup snapshot. A specific SnapshotGroupClass
|
||||
is used by specifying its name in a VolumeGroupSnapshot object. SnapshotGroupClasses
|
||||
are non-namespaced
|
||||
properties:
|
||||
apiVersion:
|
||||
description: APIVersion defines the versioned schema of this representation
|
||||
of an object.
|
||||
type: string
|
||||
deletionPolicy:
|
||||
description: deletionPolicy determines whether a SnapshotGroupContent
|
||||
created through the SnapshotGroupClass should be deleted when its
|
||||
bound SnapshotGroup is deleted. Supported values are "Retain" and
|
||||
"Delete". "Retain" means that the SnapshotGroupContent and its physical
|
||||
snapshotGroup on underlying storage system are kept. "Delete" means that
|
||||
the SnapshotGroupContent and its physical snapshotGroup on underlying
|
||||
storage system are deleted. Required.
|
||||
enum:
|
||||
- Delete
|
||||
- Retain
|
||||
type: string
|
||||
snapshotter:
|
||||
description: snapshotter is the name of the storage driver that handles this
|
||||
SnapshotGroupClass. Required.
|
||||
type: string
|
||||
kind:
|
||||
description: Kind is a string value representing the REST resource
|
||||
this object represents.
|
||||
type: string
|
||||
parameters:
|
||||
additionalProperties:
|
||||
type: string
|
||||
description: parameters is a key-value map with storage driver specific
|
||||
parameters for creating snapshotGroups. These values are opaque to Kubernetes.
|
||||
type: object
|
||||
required:
|
||||
- deletionPolicy
|
||||
- snapshotter
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
|
@ -0,0 +1,104 @@
|
|||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: snapshotgroupcontents.storage.hpe.com
|
||||
spec:
|
||||
conversion:
|
||||
strategy: None
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: SnapshotGroupContent
|
||||
listKind: SnapshotGroupContentList
|
||||
plural: snapshotgroupcontents
|
||||
singular: snapshotgroupcontent
|
||||
scope: Cluster
|
||||
versions:
|
||||
- name: v1
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
description: SnapshotGroupContent represents the actual "on-disk" snapshotGroup
|
||||
object in the underlying storage system
|
||||
properties:
|
||||
apiVersion:
|
||||
description: 'APIVersion defines the versioned schema of this representation
|
||||
of an object. Servers should convert recognized schemas to the latest
|
||||
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
|
||||
type: string
|
||||
kind:
|
||||
description: 'Kind is a string value representing the REST resource
|
||||
this object represents. Servers may infer this from the endpoint the
|
||||
client submits requests to. Cannot be updated. In CamelCase. More
|
||||
info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
|
||||
type: string
|
||||
spec:
|
||||
description: spec defines properties of a SnapshotGroupContent created
|
||||
by the underlying storage system. Required.
|
||||
properties:
|
||||
deletionPolicy:
|
||||
description: deletionPolicy determines whether this SnapshotGroupContent
|
||||
and its physical snapshotgroup on the underlying storage system should
|
||||
be deleted when its bound SnapshotGroup is deleted. Supported
|
||||
values are "Retain" and "Delete". "Retain" means that the SnapshotGroupContent
|
||||
and its physical snapshotGroup on underlying storage system are kept.
|
||||
"Delete" means that the SnapshotGroupContent and its physical
|
||||
snapshotGroup on underlying storage system are deleted.
|
||||
Required.
|
||||
enum:
|
||||
- Delete
|
||||
- Retain
|
||||
type: string
|
||||
source:
|
||||
description: source specifies from where a snapshotGroup will be created.Required.
|
||||
properties:
|
||||
snapshotGroupHandle:
|
||||
description: snapshotGroupHandle specifies the snapshotGroup Id
|
||||
of a pre-existing snapshotGroup on the underlying storage system.
|
||||
This field is immutable.
|
||||
type: string
|
||||
type: object
|
||||
snapshotGroupClassName:
|
||||
description: name of the SnapshotGroupClass to which this snapshotGroup belongs.
|
||||
type: string
|
||||
snapshotGroupRef:
|
||||
description: snapshotGroupRef specifies the SnapshotGroup object
|
||||
to which this SnapshotGroupContent object is bound. SnapshotGroup.Spec.SnapshotGroupContentName
|
||||
field must reference to this SnapshotGroupContent's name for
|
||||
the bidirectional binding to be valid.
|
||||
Required.
|
||||
properties:
|
||||
apiVersion:
|
||||
description: API version of the referent.
|
||||
type: string
|
||||
kind:
|
||||
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
|
||||
type: string
|
||||
namespace:
|
||||
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
|
||||
type: string
|
||||
resourceVersion:
|
||||
description: 'Specific resourceVersion to which this reference
|
||||
is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency'
|
||||
type: string
|
||||
uid:
|
||||
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
|
||||
type: string
|
||||
type: object
|
||||
volumeSnapshotContentNames:
|
||||
description: list of volumeSnapshotContentNames associated with this snapshotGroups
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
required:
|
||||
- deletionPolicy
|
||||
- source
|
||||
- snapshotGroupClassName
|
||||
type: object
|
||||
required:
|
||||
- spec
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
|
@ -0,0 +1,83 @@
|
|||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: snapshotgroups.storage.hpe.com
|
||||
spec:
|
||||
conversion:
|
||||
strategy: None
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: SnapshotGroup
|
||||
listKind: SnapshotGroupList
|
||||
plural: snapshotgroups
|
||||
singular: snapshotgroup
|
||||
scope: Namespaced
|
||||
versions:
|
||||
- name: v1
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
description: SnapshotGroup is a user's request for creating a snapshotgroup
|
||||
properties:
|
||||
apiVersion:
|
||||
description: APIVersion defines the versioned schema of this representation
|
||||
of an object.
|
||||
type: string
|
||||
kind:
|
||||
description: 'Kind is a string value representing the REST resource
|
||||
this object represents'
|
||||
type: string
|
||||
spec:
|
||||
description: spec defines the desired characteristics of a snapshotGroup
|
||||
requested by a user.
|
||||
Required.
|
||||
properties:
|
||||
source:
|
||||
description: source specifies where a snapshotGroup will be created.
|
||||
This field is immutable after creation. Required.
|
||||
properties:
|
||||
kind:
|
||||
description: kind of the source (VolumeGroup) is the only supported one.
|
||||
type: string
|
||||
apiGroup:
|
||||
description: apiGroup of the source. Current supported is storage.hpe.com
|
||||
type: string
|
||||
name:
|
||||
description: name specifies the volumeGroupName of the VolumeGroup object in the same namespace as the SnapshotGroup object where the snapshotGroup should be dynamically taken from. This field is immutable.
|
||||
type: string
|
||||
type: object
|
||||
volumeSnapshotClassName:
|
||||
description: name of the volumeSnapshotClass to create pre-provisioned snapshots
|
||||
type: string
|
||||
snapshotGroupClassName:
|
||||
description: snapshotGroupClassName is the name of the SnapshotGroupClass requested by the SnapshotGroup.
|
||||
type: string
|
||||
snapshotGroupContentName:
|
||||
description: snapshotGroupContentName is the name of the snapshotGroupContent the snapshotGroup is bound.
|
||||
type: string
|
||||
required:
|
||||
- source
|
||||
- volumeSnapshotClassName
|
||||
- snapshotGroupClassName
|
||||
type: object
|
||||
status:
|
||||
description: status represents the current information of a snapshotGroup.
|
||||
properties:
|
||||
creationTime:
|
||||
description: creationTime is the timestamp when the point-in-time
|
||||
snapshotGroup is taken by the underlying storage system.
|
||||
format: date-time
|
||||
type: string
|
||||
phase:
|
||||
description: the state of the snapshotgroup
|
||||
enum:
|
||||
- Pending
|
||||
- Ready
|
||||
- Failed
|
||||
type: string
|
||||
type: object
|
||||
required:
|
||||
- spec
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
|
@ -0,0 +1,60 @@
|
|||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: volumegroupclasses.storage.hpe.com
|
||||
spec:
|
||||
conversion:
|
||||
strategy: None
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: VolumeGroupClass
|
||||
listKind: VolumeGroupClassList
|
||||
plural: volumegroupclasses
|
||||
singular: volumegroupclass
|
||||
scope: Cluster
|
||||
versions:
|
||||
- name: v1
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
description: VolumeGroupClass specifies parameters that a underlying
|
||||
storage system uses when creating a volumegroup. A specific VolumeGroupClass
|
||||
is used by specifying its name in a VolumeGroup object. VolumeGroupClasses
|
||||
are non-namespaced
|
||||
properties:
|
||||
apiVersion:
|
||||
description: APIVersion defines the versioned schema of this representation
|
||||
of an object.
|
||||
type: string
|
||||
deletionPolicy:
|
||||
description: deletionPolicy determines whether a VolumeGroupContent
|
||||
created through the VolumeGroupClass should be deleted when its
|
||||
bound VolumeGroup is deleted. Supported values are "Retain" and
|
||||
"Delete". "Retain" means that the VolumeGroupContent and its physical
|
||||
volumeGroup on underlying storage system are kept. "Delete" means that
|
||||
the VolumeGroupContent and its physical volumeGroup on underlying
|
||||
storage system are deleted. Required.
|
||||
enum:
|
||||
- Delete
|
||||
- Retain
|
||||
type: string
|
||||
provisioner:
|
||||
description: provisioner is the name of the storage driver that handles this
|
||||
VolumeGroupClass. Required.
|
||||
type: string
|
||||
kind:
|
||||
description: Kind is a string value representing the REST resource
|
||||
this object represents.
|
||||
type: string
|
||||
parameters:
|
||||
additionalProperties:
|
||||
type: string
|
||||
description: parameters is a key-value map with storage driver specific
|
||||
parameters for creating volumeGroups. These values are opaque to Kubernetes.
|
||||
type: object
|
||||
required:
|
||||
- deletionPolicy
|
||||
- provisioner
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
|
@ -0,0 +1,96 @@
|
|||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: volumegroupcontents.storage.hpe.com
|
||||
spec:
|
||||
conversion:
|
||||
strategy: None
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: VolumeGroupContent
|
||||
listKind: VolumeGroupContentList
|
||||
plural: volumegroupcontents
|
||||
singular: volumegroupcontent
|
||||
scope: Cluster
|
||||
versions:
|
||||
- name: v1
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
description: VolumeGroupContent represents the actual "on-disk" volumeGroup
|
||||
object in the underlying storage system
|
||||
properties:
|
||||
apiVersion:
|
||||
description: APIVersion defines the versioned schema of this representation
|
||||
of an object.
|
||||
type: string
|
||||
kind:
|
||||
description: Kind is a string value representing the REST resource
|
||||
this object represents.
|
||||
type: string
|
||||
spec:
|
||||
description: spec defines properties of a VolumeGroupContent created
|
||||
by the underlying storage system. Required.
|
||||
properties:
|
||||
deletionPolicy:
|
||||
description: deletionPolicy determines whether this VolumeGroupContent
|
||||
and its physical volumegroup on the underlying storage system should
|
||||
be deleted when its bound VolumeGroup is deleted. Supported
|
||||
values are "Retain" and "Delete". "Retain" means that the VolumeGroupContent
|
||||
and its physical volumeGroup on underlying storage system are kept.
|
||||
"Delete" means that the VolumeGroupContent and its physical
|
||||
volumeGroup on underlying storage system are deleted.
|
||||
Required.
|
||||
enum:
|
||||
- Delete
|
||||
- Retain
|
||||
type: string
|
||||
source:
|
||||
description: source specifies from where a volumeGroup will be created.Required.
|
||||
properties:
|
||||
volumeGroupHandle:
|
||||
description: volumeGroupHandle specifies the volumeGroup Id
|
||||
of a pre-existing volumeGroup on the underlying storage system.
|
||||
This field is immutable.
|
||||
type: string
|
||||
type: object
|
||||
volumeGroupClassName:
|
||||
description: name of the VolumeGroupClass to which this volumeGroup belongs.
|
||||
type: string
|
||||
volumeGroupRef:
|
||||
description: volumeGroupRef specifies the VolumeGroup object
|
||||
to which this VolumeGroupContent object is bound. VolumeGroup.Spec.VolumeGroupContentName
|
||||
field must reference to this VolumeGroupContent's name for
|
||||
the bidirectional binding to be valid.
|
||||
Required.
|
||||
properties:
|
||||
apiVersion:
|
||||
description: API version of the referent.
|
||||
type: string
|
||||
kind:
|
||||
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
|
||||
type: string
|
||||
name:
|
||||
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
|
||||
type: string
|
||||
namespace:
|
||||
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
|
||||
type: string
|
||||
resourceVersion:
|
||||
description: 'Specific resourceVersion to which this reference
|
||||
is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency'
|
||||
type: string
|
||||
uid:
|
||||
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
|
||||
type: string
|
||||
type: object
|
||||
required:
|
||||
- deletionPolicy
|
||||
- source
|
||||
- volumeGroupClassName
|
||||
type: object
|
||||
required:
|
||||
- spec
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
|
@ -0,0 +1,69 @@
|
|||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: volumegroups.storage.hpe.com
|
||||
spec:
|
||||
conversion:
|
||||
strategy: None
|
||||
group: storage.hpe.com
|
||||
names:
|
||||
kind: VolumeGroup
|
||||
listKind: VolumeGroupList
|
||||
plural: volumegroups
|
||||
singular: volumegroup
|
||||
scope: Namespaced
|
||||
versions:
|
||||
- name: v1
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
description: VolumeGroup is a user's request for creating a volumegroup
|
||||
properties:
|
||||
apiVersion:
|
||||
description: APIVersion defines the versioned schema of this representation
|
||||
of an object.
|
||||
type: string
|
||||
kind:
|
||||
description: 'Kind is a string value representing the REST resource
|
||||
this object represents'
|
||||
type: string
|
||||
spec:
|
||||
description: spec defines the desired characteristics of a volumeGroup
|
||||
requested by a user.
|
||||
Required.
|
||||
properties:
|
||||
volumeGroupClassName:
|
||||
description: name of the volumeGroupClassName to create volumeGroups
|
||||
type: string
|
||||
persistentVolumeClaimNames:
|
||||
description: persistentVolumeClaimNames are the name of the PVC associated with this volumeGroup.
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
volumeGroupContentName:
|
||||
description: volumeGroupContentName is the name of the volumeGroupContent to which the volumeGroup is bound.
|
||||
type: string
|
||||
required:
|
||||
- volumeGroupClassName
|
||||
type: object
|
||||
status:
|
||||
description: status represents the current information of a volumeGroup.
|
||||
properties:
|
||||
creationTime:
|
||||
description: creationTime is the timestamp when the point-in-time
|
||||
volumeGroup is taken by the underlying storage system.
|
||||
format: date-time
|
||||
type: string
|
||||
phase:
|
||||
description: the state of the volumegroup
|
||||
enum:
|
||||
- Pending
|
||||
- Ready
|
||||
- Failed
|
||||
type: string
|
||||
type: object
|
||||
required:
|
||||
- spec
|
||||
type: object
|
||||
served: true
|
||||
storage: true
|
|

@@ -0,0 +1,128 @@
[
  {
    "category": "iscsi",
    "severity": "warning",
    "description": "Manual startup of iSCSI nodes on boot. Can be set in /etc/iscsi/iscsid.conf",
    "parameter": "startup",
    "recommendation": "manual"
  },
  {
    "category": "iscsi",
    "severity": "warning",
    "description": "Replacement_timeout of 10 seconds is recommended for faster failover of I/O by multipath on path failures. Can be set in /etc/iscsi/iscsid.conf",
    "parameter": "replacement_timeout",
    "recommendation": "10"
  },
  {
    "category": "iscsi",
    "severity": "warning",
    "description": "Minimum login timeout of 15 seconds is recommended with iSCSI. Can be set in /etc/iscsi/iscsid.conf",
    "parameter": "login_timeout",
    "recommendation": "15"
  },
  {
    "category": "iscsi",
    "severity": "warning",
    "description": "Minimum timeout of 10 seconds is recommended with noop requests. Can be set in /etc/iscsi/iscsid.conf",
    "parameter": "noop_out_timeout",
    "recommendation": "10"
  },
  {
    "category": "iscsi",
    "severity": "info",
    "description": "Minimum cmds_max of 512 is recommended for each session if handling multiple LUN's. Can be set in /etc/iscsi/iscsid.conf",
    "parameter": "cmds_max",
    "recommendation": "512"
  },
  {
    "category": "iscsi",
    "severity": "warning",
    "description": "Minimum queue_depth of 256 is recommended for each iSCSI session/path. Can be set in /etc/iscsi/iscsid.conf",
    "parameter": "queue_depth",
    "recommendation": "256"
  },
  {
    "category": "iscsi",
    "severity": "info",
    "description": "Minimum number of sessions per iSCSI login is recommended to be 1 by default. If additional sessions are needed this can be set in /etc/iscsi/iscsid.conf. If NCM is running, please change min_session_per_array in /etc/ncm.conf and restart nlt service instead",
    "parameter": "nr_sessions",
    "recommendation": "1"
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "product attribute recommended to be set to Server in /etc/multipath.conf",
    "parameter": "product",
    "recommendation": "\"Server\""
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "alua prioritizer is recommended. Can be set in /etc/multipath.conf",
    "parameter": "prio",
    "recommendation": "alua"
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "scsi_dh_alua device handler is recommended. Can be set in /etc/multipath.conf",
    "parameter": "hardware_handler",
    "recommendation": "\"1 alua\""
  },
  {
    "category": "multipath",
    "severity": "warning",
    "description": "immediate failback setting is recommended. Can be set in /etc/multipath.conf",
    "parameter": "failback",
    "recommendation": "immediate"
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "immediately fail i/o on transient path failures to retry on other paths, value=1. Can be set in /etc/multipath.conf",
    "parameter": "fast_io_fail_tmo",
    "recommendation": "5"
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "queueing is recommended for 150 seconds, with no_path_retry value of 30. Can be set in /etc/multipath.conf",
    "parameter": "no_path_retry",
    "recommendation": "30"
  },
  {
    "category": "multipath",
    "severity": "warning",
    "description": "service-time path selector is recommended. Can be set in /etc/multipath.conf",
    "parameter": "path_selector",
    "recommendation": "\"service-time 0\""
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "vendor attribute recommended to be set to Nimble in /etc/multipath.conf",
    "parameter": "vendor",
    "recommendation": "\"Nimble\""
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "group paths according to ALUA path priority of active/standby. Recommended to be set to group_by_prio in /etc/multipath.conf",
    "parameter": "path_grouping_policy",
    "recommendation": "group_by_prio"
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "tur path checker is recommended. Can be set in /etc/multipath.conf",
    "parameter": "path_checker",
    "recommendation": "tur"
  },
  {
    "category": "multipath",
    "severity": "critical",
    "description": "infinite value is recommended for timeout in cases of device loss for FC. Can be set in /etc/multipath.conf",
    "parameter": "dev_loss_tmo",
    "recommendation": "infinity"
  }
]

@@ -0,0 +1,87 @@
labels:
  io.rancher.certified: partner
questions:
  - variable: imagePullPolicy
    label: "ImagePullPolicy"
    default: "IfNotPresent"
    type: enum
    options:
      - "IfNotPresent"
      - "Always"
      - "Never"
    description: "ImagePullPolicy for all CSI driver images"
    group: "HPE CSI Driver settings"
  - variable: disableNodeConformance
    label: "Disable automatic installation of iSCSI/Multipath Packages"
    type: boolean
    default: false
    description: "Disable automatic installation of iSCSI/Multipath Packages"
    group: "HPE CSI Driver settings"
  - variable: iscsi.chapUser
    label: "iSCSI CHAP Username"
    type: string
    required: false
    description: "Specify username for iSCSI CHAP authentication"
    group: "HPE iSCSI settings"
  - variable: iscsi.chapPassword
    label: "iSCSI CHAP Password"
    type: password
    min_length: 12
    max_length: 16
    required: false
    description: "Specify password for iSCSI CHAP authentication"
    group: "HPE iSCSI settings"
  - variable: registry
    label: "Registry"
    type: string
    default: "quay.io"
    description: "Specify registry prefix (hostname[:port]) for CSI driver images"
    group: "HPE CSI Driver settings"
  - variable: disable.nimble
    label: "Disable Nimble"
    type: boolean
    default: false
    description: "Disable HPE Nimble Storage CSP Service"
    group: "Disable Container Storage Providers"
  - variable: disable.primera
    label: "Disable Primera"
    type: boolean
    default: false
    description: "Disable HPE Primera (and 3PAR) CSP Service"
    group: "Disable Container Storage Providers"
  - variable: disable.alletra6000
    label: "Disable Alletra 6000"
    type: boolean
    default: false
    description: "Disable HPE Alletra 6000 CSP Service"
    group: "Disable Container Storage Providers"
  - variable: disable.alletra9000
    label: "Disable Alletra 9000"
    type: boolean
    default: false
    description: "Disable HPE Alletra 9000 CSP Service"
    group: "Disable Container Storage Providers"
  - variable: disableNodeGetVolumeStats
    label: "Disable NodeGetVolumeStats"
    type: boolean
    default: false
    description: "Disable NodeGetVolumeStats call to CSI driver"
    group: "HPE CSI Driver settings"
  - variable: kubeletRootDir
    label: "Set kubeletRootDir"
    type: string
    default: "/var/lib/kubelet"
    description: "The kubelet root directory path"
    group: "HPE CSI Driver settings"
  - variable: logLevel
    label: "Set log level"
    default: "info"
    type: enum
    options:
      - "info"
      - "debug"
      - "trace"
      - "warn"
      - "error"
    description: "Sets the CSI driver and sidecar log level"
    group: "HPE CSI Driver settings"

@@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "hpe-csi-storage.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "hpe-csi-storage.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "hpe-csi-storage.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

@@ -0,0 +1,24 @@
---
################# CSI Driver ###########
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "18") }}
apiVersion: storage.k8s.io/v1
{{- else if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "14") }}
apiVersion: storage.k8s.io/v1beta1
{{- end }}

{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "14") }}
kind: CSIDriver
metadata:
  name: csi.hpe.com
spec:
  podInfoOnMount: true
  {{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "16") }}
  volumeLifecycleModes:
    - Persistent
    - Ephemeral
  {{- end }}
{{- end }}
@ -0,0 +1,240 @@
|
|||
---
|
||||
|
||||
#############################################
|
||||
############ Controller driver ############
|
||||
#############################################
|
||||
|
||||
kind: Deployment
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
name: hpe-csi-controller
|
||||
namespace: {{ .Release.Namespace }}
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: hpe-csi-controller
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hpe-csi-controller
|
||||
role: hpe-csi
|
||||
spec:
|
||||
serviceAccountName: hpe-csi-controller-sa
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "17") }}
|
||||
priorityClassName: system-cluster-critical
|
||||
{{- end }}
|
||||
hostNetwork: true
|
||||
dnsPolicy: ClusterFirstWithHostNet
|
||||
dnsConfig:
|
||||
options:
|
||||
- name: ndots
|
||||
value: "1"
|
||||
containers:
|
||||
- name: csi-provisioner
|
||||
{{- if and (.Values.registry) (eq .Values.registry "quay.io") }}
|
||||
image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
|
||||
{{- else if .Values.registry }}
|
||||
image: {{ .Values.registry }}/sig-storage/csi-provisioner:v3.1.0
|
||||
{{- else }}
|
||||
image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
|
||||
{{- end }}
|
||||
args:
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
- "--v=5"
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "13") }}
|
||||
- "--timeout=30s"
|
||||
- "--worker-threads=16"
|
||||
{{- end }}
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /var/lib/csi/sockets/pluginproxy/csi.sock
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy
|
||||
- name: csi-attacher
|
||||
{{- if and (.Values.registry) (eq .Values.registry "quay.io") }}
|
||||
image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
|
||||
{{- else if .Values.registry }}
|
||||
image: {{ .Values.registry }}/sig-storage/csi-attacher:v3.4.0
|
||||
{{- else }}
|
||||
image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
|
||||
{{- end }}
|
||||
args:
|
||||
- "--v=5"
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
{{- if and ( or (eq .Values.disable.primera false) (eq .Values.disable.alletra9000 false) ) ( or (eq .Values.disable.nimble true) (eq .Values.disable.alletra6000 true) ) }}
|
||||
- "--timeout=180s"
|
||||
{{- end }}
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /var/lib/csi/sockets/pluginproxy/csi.sock
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy
|
||||
- name: csi-snapshotter
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "20") }}
|
||||
{{- if and (.Values.registry) (eq .Values.registry "quay.io") }}
|
||||
image: k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
|
||||
{{- else if .Values.registry }}
|
||||
image: {{ .Values.registry }}/sig-storage/csi-snapshotter:v5.0.1
|
||||
{{- else }}
|
||||
image: k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
|
||||
{{- end }}
|
||||
{{- else if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "17") }}
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/k8scsi/csi-snapshotter:v3.0.3
|
||||
{{- else }}
|
||||
image: quay.io/k8scsi/csi-snapshotter:v3.0.3
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
args:
|
||||
- "--v=5"
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /var/lib/csi/sockets/pluginproxy/csi.sock
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy/
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "15") }}
|
||||
- name: csi-resizer
|
||||
{{- if and (.Values.registry) (eq .Values.registry "quay.io") }}
|
||||
image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
|
||||
{{- else if .Values.registry }}
|
||||
image: {{ .Values.registry }}/sig-storage/csi-resizer:v1.4.0
|
||||
{{- else }}
|
||||
image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
|
||||
{{- end }}
|
||||
args:
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
- "--v=5"
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /var/lib/csi/sockets/pluginproxy/csi.sock
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy
|
||||
{{- end }}
|
||||
- name: hpe-csi-driver
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/csi-driver:v2.1.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/csi-driver:v2.1.1
|
||||
{{- end }}
|
||||
args:
|
||||
- "--endpoint=$(CSI_ENDPOINT)"
|
||||
- "--flavor=kubernetes"
|
||||
- "--pod-monitor"
|
||||
- "--pod-monitor-interval=30"
|
||||
env:
|
||||
- name: CSI_ENDPOINT
|
||||
value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
|
||||
- name: LOG_LEVEL
|
||||
value: {{ .Values.logLevel }}
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy
|
||||
- name: log-dir
|
||||
mountPath: /var/log
|
||||
- name: k8s
|
||||
mountPath: /etc/kubernetes
|
||||
- name: hpeconfig
|
||||
mountPath: /etc/hpe-storage
|
||||
- name: root-dir
|
||||
mountPath: /host
|
||||
- name: csi-volume-mutator
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/volume-mutator:v1.3.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/volume-mutator:v1.3.1
|
||||
{{- end }}
|
||||
args:
|
||||
- "--v=5"
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /var/lib/csi/sockets/pluginproxy/csi-extensions.sock
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy/
|
||||
- name: csi-volume-group-snapshotter
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/volume-group-snapshotter:v1.0.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/volume-group-snapshotter:v1.0.1
|
||||
{{- end }}
|
||||
args:
|
||||
- "--v=5"
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /var/lib/csi/sockets/pluginproxy/csi-extensions.sock
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy/
|
||||
- name: csi-volume-group-provisioner
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/volume-group-provisioner:v1.0.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/volume-group-provisioner:v1.0.1
|
||||
{{- end }}
|
||||
args:
|
||||
- "--v=5"
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /var/lib/csi/sockets/pluginproxy/csi-extensions.sock
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy/
|
||||
- name: csi-extensions
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/csi-extensions:v1.2.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/csi-extensions:v1.2.1
|
||||
{{- end }}
|
||||
args:
|
||||
- "--v=5"
|
||||
- "--endpoint=$(CSI_ENDPOINT)"
|
||||
env:
|
||||
- name: CSI_ENDPOINT
|
||||
value: unix:///var/lib/csi/sockets/pluginproxy/csi-extensions.sock
|
||||
- name: LOG_LEVEL
|
||||
value: {{ .Values.logLevel }}
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
volumeMounts:
|
||||
- name: socket-dir
|
||||
mountPath: /var/lib/csi/sockets/pluginproxy/
|
||||
volumes:
|
||||
- name: socket-dir
|
||||
emptyDir: {}
|
||||
- name: log-dir
|
||||
hostPath:
|
||||
path: /var/log
|
||||
- name: k8s
|
||||
hostPath:
|
||||
path: /etc/kubernetes
|
||||
- name: hpeconfig
|
||||
hostPath:
|
||||
path: /etc/hpe-storage
|
||||
- name: root-dir
|
||||
hostPath:
|
||||
path: /
|
||||
tolerations:
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/not-ready
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/unreachable
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
|
@ -0,0 +1,201 @@
|
|||
---
|
||||
|
||||
#######################################
|
||||
############ Node driver ############
|
||||
#######################################
|
||||
|
||||
kind: DaemonSet
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
name: hpe-csi-node
|
||||
namespace: {{ .Release.Namespace }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: hpe-csi-node
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hpe-csi-node
|
||||
role: hpe-csi
|
||||
spec:
|
||||
serviceAccountName: hpe-csi-node-sa
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "17") }}
|
||||
priorityClassName: system-node-critical
|
||||
{{- end }}
|
||||
hostNetwork: true
|
||||
dnsPolicy: ClusterFirstWithHostNet
|
||||
dnsConfig:
|
||||
options:
|
||||
- name: ndots
|
||||
value: "1"
|
||||
containers:
|
||||
- name: csi-node-driver-registrar
|
||||
{{- if and (.Values.registry) (eq .Values.registry "quay.io") }}
|
||||
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
|
||||
{{- else if .Values.registry }}
|
||||
image: {{ .Values.registry }}/sig-storage/csi-node-driver-registrar:v2.5.0
|
||||
{{- else }}
|
||||
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
|
||||
{{- end}}
|
||||
args:
|
||||
- "--csi-address=$(ADDRESS)"
|
||||
- "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
|
||||
- "--v=5"
|
||||
env:
|
||||
- name: ADDRESS
|
||||
value: /csi/csi.sock
|
||||
- name: DRIVER_REG_SOCK_PATH
|
||||
{{- if .Values.kubeletRootDir }}
|
||||
value: {{ .Values.kubeletRootDir }}/plugins/csi.hpe.com/csi.sock
|
||||
{{- else }}
|
||||
value: /var/lib/kubelet/plugins/csi.hpe.com/csi.sock
|
||||
{{- end }}
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( eq ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "12") }}
|
||||
- name: KUBE_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: spec.nodeName
|
||||
{{- end }}
|
||||
imagePullPolicy: "Always"
|
||||
volumeMounts:
|
||||
- name: plugin-dir
|
||||
mountPath: /csi
|
||||
- name: registration-dir
|
||||
mountPath: /registration
|
||||
- name: hpe-csi-driver
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/csi-driver:v2.1.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/csi-driver:v2.1.1
|
||||
{{- end}}
|
||||
args:
|
||||
- "--endpoint=$(CSI_ENDPOINT)"
|
||||
- "--node-service"
|
||||
- "--flavor=kubernetes"
|
||||
env:
|
||||
- name: CSI_ENDPOINT
|
||||
value: unix:///csi/csi.sock
|
||||
- name: LOG_LEVEL
|
||||
value: {{ .Values.logLevel }}
|
||||
- name: NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
{{ if and .Values.iscsi.chapUser .Values.iscsi.chapPassword }}
|
||||
- name: CHAP_USER
|
||||
value: {{ .Values.iscsi.chapUser }}
|
||||
- name: CHAP_PASSWORD
|
||||
value: {{ .Values.iscsi.chapPassword }}
|
||||
{{- end }}
|
||||
{{ if .Values.disableNodeConformance -}}
|
||||
- name: DISABLE_NODE_CONFORMANCE
|
||||
value: "true"
|
||||
{{- end }}
|
||||
{{- if .Values.kubeletRootDir }}
|
||||
- name: KUBELET_ROOT_DIR
|
||||
value: {{ .Values.kubeletRootDir }}
|
||||
{{- end }}
|
||||
{{ if .Values.disableNodeGetVolumeStats -}}
|
||||
- name: DISABLE_NODE_GET_VOLUMESTATS
|
||||
value: "true"
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
securityContext:
|
||||
privileged: true
|
||||
capabilities:
|
||||
add: ["SYS_ADMIN"]
|
||||
allowPrivilegeEscalation: true
|
||||
volumeMounts:
|
||||
- name: plugin-dir
|
||||
mountPath: /csi
|
||||
- name: pods-mount-dir
|
||||
{{- if .Values.kubeletRootDir }}
|
||||
mountPath: {{ .Values.kubeletRootDir }}
|
||||
{{- else }}
|
||||
mountPath: /var/lib/kubelet
|
||||
{{- end }}
|
||||
# needed so that any mounts set up inside this container are
|
||||
# propagated back to the host machine.
|
||||
mountPropagation: "Bidirectional"
|
||||
- name: root-dir
|
||||
mountPath: /host
|
||||
mountPropagation: "Bidirectional"
|
||||
- name: device-dir
|
||||
mountPath: /dev
|
||||
- name: log-dir
|
||||
mountPath: /var/log
|
||||
- name: etc-hpe-storage-dir
|
||||
mountPath: /etc/hpe-storage
|
||||
- name: etc-kubernetes
|
||||
mountPath: /etc/kubernetes
|
||||
- name: sys
|
||||
mountPath: /sys
|
||||
- name: runsystemd
|
||||
mountPath: /run/systemd
|
||||
- name: etcsystemd
|
||||
mountPath: /etc/systemd/system
|
||||
- name: linux-config-file
|
||||
mountPath: /opt/hpe-storage/nimbletune/config.json
|
||||
subPath: config.json
|
||||
volumes:
|
||||
- name: registration-dir
|
||||
hostPath:
|
||||
{{ if .Values.kubeletRootDir }}
|
||||
path: {{ .Values.kubeletRootDir }}/plugins_registry
|
||||
{{- else }}
|
||||
path: /var/lib/kubelet/plugins_registry
|
||||
{{- end }}
|
||||
type: Directory
|
||||
- name: plugin-dir
|
||||
hostPath:
|
||||
{{ if .Values.kubeletRootDir }}
|
||||
path: {{ .Values.kubeletRootDir }}/plugins/csi.hpe.com
|
||||
{{- else }}
|
||||
path: /var/lib/kubelet/plugins/csi.hpe.com
|
||||
{{- end }}
|
||||
type: DirectoryOrCreate
|
||||
- name: pods-mount-dir
|
||||
hostPath:
|
||||
{{ if .Values.kubeletRootDir }}
|
||||
path: {{ .Values.kubeletRootDir }}
|
||||
{{- else }}
|
||||
path: /var/lib/kubelet
|
||||
{{- end }}
|
||||
- name: root-dir
|
||||
hostPath:
|
||||
path: /
|
||||
- name: device-dir
|
||||
hostPath:
|
||||
path: /dev
|
||||
- name: log-dir
|
||||
hostPath:
|
||||
path: /var/log
|
||||
- name: etc-hpe-storage-dir
|
||||
hostPath:
|
||||
path: /etc/hpe-storage
|
||||
- name: etc-kubernetes
|
||||
hostPath:
|
||||
path: /etc/kubernetes
|
||||
- name: runsystemd
|
||||
hostPath:
|
||||
path: /run/systemd
|
||||
- name: etcsystemd
|
||||
hostPath:
|
||||
path: /etc/systemd/system
|
||||
- name: sys
|
||||
hostPath:
|
||||
path: /sys
|
||||
- name: linux-config-file
|
||||
configMap:
|
||||
name: hpe-linux-config
|
||||
tolerations:
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/not-ready
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/unreachable
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
|
@ -0,0 +1,565 @@
|
|||
---
|
||||
|
||||
kind: ServiceAccount
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-provisioner-role
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["namespaces"]
|
||||
verbs: ["get", "list", "create"]
|
||||
- apiGroups: [""]
|
||||
resources: ["nodes"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["serviceaccounts"]
|
||||
verbs: ["get", "list", "create"]
|
||||
- apiGroups: [""]
|
||||
resources: ["configmaps"]
|
||||
verbs: ["get", "create"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list", "watch", "create", "delete", "update"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["services"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete"]
|
||||
- apiGroups: ["apps"]
|
||||
resources: ["deployments"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["storageclasses"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["events"]
|
||||
verbs: ["list", "watch", "create", "update", "patch"]
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "17") }}
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshots"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshotcontents"]
|
||||
verbs: ["get", "list"]
|
||||
{{- end }}
|
||||
- apiGroups: [""]
|
||||
resources: ["pods"]
|
||||
verbs: ["get", "list", "delete"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["volumeattachments"]
|
||||
verbs: ["get", "list", "watch", "update", "patch", "delete"]
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-provisioner-binding
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: hpe-csi-provisioner-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-attacher-role
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list", "watch", "update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["nodes"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["volumeattachments"]
|
||||
verbs: ["get", "list", "watch", "update", "patch"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["volumeattachments/status"]
|
||||
verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get", "watch", "list"]
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( eq ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "12") }}
|
||||
resources: ["csinodeinfos"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
{{- else if and (eq .Capabilities.KubeVersion.Major "1") ( eq ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "13") }}
|
||||
- apiGroups: ["csi.storage.k8s.io"]
|
||||
resources: ["csinodeinfos"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
{{ else }}
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["csinodes"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
{{- end }}
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-attacher-binding
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: hpe-csi-attacher-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "17") }}
|
||||
---
|
||||
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-snapshotter-role
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list", "watch", "create", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims"]
|
||||
verbs: ["get", "list", "watch", "update"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["storageclasses"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["events"]
|
||||
verbs: ["list", "watch", "create", "update", "patch"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshots"]
|
||||
verbs: ["create", "update", "delete", "get", "list", "watch"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshots/status"]
|
||||
verbs: ["update"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshotcontents"]
|
||||
verbs: ["create", "update", "delete", "get", "list", "watch", "patch"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshotcontents/status"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshotclasses"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: ["apiextensions.k8s.io"]
|
||||
resources: ["customresourcedefinitions"]
|
||||
verbs: ["get", "list", "watch", "create", "delete", "update"]
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-snapshotter-binding
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: hpe-csi-snapshotter-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
{{- end }}
|
||||
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "15") }}
|
||||
---
|
||||
# Resizer must be able to work with PVCs, PVs, SCs.
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: external-resizer-role
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list", "watch", "update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims/status"]
|
||||
verbs: ["update", "patch"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["storageclasses"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["events"]
|
||||
verbs: ["list", "watch", "create", "update", "patch"]
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: csi-resizer-role
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: external-resizer-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
---
|
||||
|
||||
# Resizer must be able to work with end point in current namespace
|
||||
# if (and only if) leadership election is enabled
|
||||
kind: Role
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
namespace: {{ .Release.Namespace }}
|
||||
name: external-resizer-cfg
|
||||
rules:
|
||||
- apiGroups: ["coordination.k8s.io"]
|
||||
resources: ["leases"]
|
||||
verbs: ["get", "watch", "list", "delete", "update", "create"]
|
||||
|
||||
---
|
||||
|
||||
kind: RoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: csi-resizer-role-cfg
|
||||
namespace: {{ .Release.Namespace }}
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
roleRef:
|
||||
kind: Role
|
||||
name: external-resizer-cfg
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
|
||||
---
|
||||
# cluster role to support volumegroup
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-volumegroup-role
|
||||
rules:
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroups"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroupcontents"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroupclasses"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroups/status"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroupcontents/status"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["namespaces"]
|
||||
verbs: ["get", "list", "create"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list", "watch", "create", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims/status"]
|
||||
verbs: ["update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["events"]
|
||||
verbs: ["list", "watch", "create", "update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get"]
|
||||
- apiGroups: ["apiextensions.k8s.io"]
|
||||
resources: ["customresourcedefinitions"]
|
||||
verbs: ["create", "list", "watch", "delete", "get", "update"]
|
||||
- apiGroups: ["coordination.k8s.io"]
|
||||
resources: ["leases"]
|
||||
verbs: ["get", "watch", "list", "delete", "update", "create"]
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-volumegroup-binding
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: hpe-csi-volumegroup-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
---
|
||||
# cluster role to support snapshotgroup
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-snapshotgroup-role
|
||||
rules:
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["snapshotgroups"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["snapshotgroupcontents"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["snapshotgroupclasses"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["snapshotgroups/status"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["snapshotgroupcontents/status"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["namespaces"]
|
||||
verbs: ["get", "list", "create"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list", "watch", "create", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims/status"]
|
||||
verbs: ["update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["events"]
|
||||
verbs: ["list", "watch", "create", "update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get"]
|
||||
- apiGroups: ["apiextensions.k8s.io"]
|
||||
resources: ["customresourcedefinitions"]
|
||||
verbs: ["create", "list", "watch", "delete", "get", "update"]
|
||||
- apiGroups: ["coordination.k8s.io"]
|
||||
resources: ["leases"]
|
||||
verbs: ["get", "watch", "list", "delete", "update", "create"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroups"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroupcontents"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["volumegroupclasses"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshotcontents"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshotcontents/status"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshots"]
|
||||
verbs: ["create", "get", "list", "watch", "update", "delete"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshots/status"]
|
||||
verbs: ["update"]
|
||||
- apiGroups: ["snapshot.storage.k8s.io"]
|
||||
resources: ["volumesnapshotclasses"]
|
||||
verbs: ["get", "list"]
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-snapshotgroup-binding
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: hpe-csi-snapshotgroup-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
---
|
||||
# mutator must be able to work with PVCs, PVs, SCs.
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: csi-mutator-role
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list", "watch", "update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["storageclasses"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims/status"]
|
||||
verbs: ["update", "patch"]
|
||||
- apiGroups: [""]
|
||||
resources: ["events"]
|
||||
verbs: ["list", "watch", "create", "update", "patch"]
|
||||
|
||||
---
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: csi-mutator-binding
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
# replace with non-default namespace name
|
||||
namespace: {{ .Release.Namespace }}
|
||||
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: csi-mutator-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
---
|
||||
# mutator must be able to work with end point in current namespace
|
||||
# if (and only if) leadership election is enabled
|
||||
kind: Role
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
namespace: {{ .Release.Namespace }}
|
||||
name: csi-mutator-cfg
|
||||
rules:
|
||||
- apiGroups: ["coordination.k8s.io"]
|
||||
resources: ["leases"]
|
||||
verbs: ["get", "watch", "list", "delete", "update", "create"]
|
||||
|
||||
---
|
||||
kind: RoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: csi-mutator-role-cfg
|
||||
namespace: {{ .Release.Namespace }}
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
|
||||
roleRef:
|
||||
kind: Role
|
||||
name: csi-mutator-cfg
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
{{- end }}
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-driver-role
|
||||
rules:
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["hpenodeinfos"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["hpevolumeinfos"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["hpereplicationdeviceinfos"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["hpevolumegroupinfos"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: ["storage.hpe.com"]
|
||||
resources: ["hpesnapshotgroupinfos"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["pods"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["services"]
|
||||
verbs: ["get"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["nodes"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumeclaims"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: [""]
|
||||
resources: ["namespaces"]
|
||||
verbs: ["get", "list"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["storageclasses"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: hpe-csi-node-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: hpe-csi-driver-binding
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-controller-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csi-node-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
- kind: ServiceAccount
|
||||
name: hpe-csp-sa
|
||||
namespace: {{ .Release.Namespace }}
|
||||
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: hpe-csi-driver-role
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
|
||||
---
|
||||
|
||||
kind: ServiceAccount
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: hpe-csp-sa
|
||||
namespace: {{ .Release.Namespace }}
|
|
@ -0,0 +1,13 @@
|
|||
---
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: hpe-linux-config
|
||||
namespace: {{ .Release.Namespace }}
|
||||
data:
|
||||
{{ if and .Values.iscsi.chapUser .Values.iscsi.chapPassword }}
|
||||
CHAP_USER: {{ .Values.iscsi.chapUser | quote }}
|
||||
CHAP_PASSWORD: {{ .Values.iscsi.chapPassword | quote }}
|
||||
{{- end }}
|
||||
config.json: |-
|
||||
{{ (.Files.Get "files/config.json") | indent 4 }}
|
|
@ -0,0 +1,87 @@
|
|||
{{- if not .Values.disable.alletra6000 }}
|
||||
|
||||
---
|
||||
### Alletra 6000 CSP Service ###
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: alletra6000-csp-svc
|
||||
namespace: {{ .Release.Namespace }}
|
||||
labels:
|
||||
app: alletra6000-csp-svc
|
||||
spec:
|
||||
ports:
|
||||
- port: 8080
|
||||
protocol: TCP
|
||||
selector:
|
||||
app: nimble-csp
|
||||
{{- end }}
|
||||
|
||||
{{- if not .Values.disable.nimble }}
|
||||
---
|
||||
### Nimble CSP Service ###
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: nimble-csp-svc
|
||||
namespace: {{ .Release.Namespace }}
|
||||
labels:
|
||||
app: nimble-csp-svc
|
||||
spec:
|
||||
ports:
|
||||
- port: 8080
|
||||
protocol: TCP
|
||||
selector:
|
||||
app: nimble-csp
|
||||
{{- end }}
|
||||
|
||||
|
||||
{{- if or (not .Values.disable.alletra6000) (not .Values.disable.nimble) }}
|
||||
---
|
||||
### CSP deployment ###
|
||||
kind: Deployment
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
name: nimble-csp
|
||||
namespace: {{ .Release.Namespace }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nimble-csp
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nimble-csp
|
||||
spec:
|
||||
serviceAccountName: hpe-csp-sa
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "17") }}
|
||||
priorityClassName: system-cluster-critical
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: nimble-csp
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/alletra-6000-and-nimble-csp:v2.1.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.1.1
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
volumeMounts:
|
||||
- name: log-dir
|
||||
mountPath: /var/log
|
||||
volumes:
|
||||
- name: log-dir
|
||||
hostPath:
|
||||
path: /var/log
|
||||
tolerations:
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/not-ready
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/unreachable
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
||||
{{- end }}
|
|
@ -0,0 +1,94 @@
|
|||
{{- if not .Values.disable.alletra9000 }}
|
||||
---
|
||||
### Alletra 9000 CSP Service ###
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: alletra9000-csp-svc
|
||||
namespace: {{ .Release.Namespace }}
|
||||
labels:
|
||||
app: alletra9000-csp-svc
|
||||
spec:
|
||||
ports:
|
||||
- port: 8080
|
||||
protocol: TCP
|
||||
selector:
|
||||
app: primera3par-csp
|
||||
|
||||
{{- end }}
|
||||
|
||||
{{- if not .Values.disable.primera }}
|
||||
---
|
||||
### Primera3par CSP Service ###
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: primera3par-csp-svc
|
||||
namespace: {{ .Release.Namespace }}
|
||||
labels:
|
||||
app: primera3par-csp-svc
|
||||
spec:
|
||||
ports:
|
||||
- port: 8080
|
||||
protocol: TCP
|
||||
selector:
|
||||
app: primera3par-csp
|
||||
{{- end }}
|
||||
|
||||
{{- if or (not .Values.disable.alletra9000) (not .Values.disable.primera) }}
|
||||
|
||||
---
|
||||
### CSP deployment ###
|
||||
kind: Deployment
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
name: primera3par-csp
|
||||
labels:
|
||||
app: primera3par-csp
|
||||
namespace: {{ .Release.Namespace }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: primera3par-csp
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: primera3par-csp
|
||||
spec:
|
||||
serviceAccountName: hpe-csp-sa
|
||||
{{- if and (eq .Capabilities.KubeVersion.Major "1") ( ge ( trimSuffix "+" .Capabilities.KubeVersion.Minor ) "17") }}
|
||||
priorityClassName: system-cluster-critical
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: primera3par-csp
|
||||
{{- if .Values.registry }}
|
||||
image: {{ .Values.registry }}/hpestorage/alletra-9000-primera-and-3par-csp:v2.1.1
|
||||
{{- else }}
|
||||
image: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.1.1
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
|
||||
env:
|
||||
- name: CRD_CLIENT_CONFIG_QPS
|
||||
value: "35"
|
||||
- name: CRD_CLIENT_CONFIG_BURST
|
||||
value: "20"
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
volumeMounts:
|
||||
- name: log-dir
|
||||
mountPath: /var/log
|
||||
volumes:
|
||||
- name: log-dir
|
||||
hostPath:
|
||||
path: /var/log
|
||||
tolerations:
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/not-ready
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
||||
- effect: NoExecute
|
||||
key: node.kubernetes.io/unreachable
|
||||
operator: Exists
|
||||
tolerationSeconds: 30
|
||||
{{- end }}
|
|
@ -0,0 +1,159 @@
|
|||
{
|
||||
"$schema": "http://json-schema.org/draft-07/schema",
|
||||
"$id": "http://example.com/example.json",
|
||||
"title": "HPE CSI Driver for Kubernetes Helm Chart JSON Schema",
|
||||
"type": "object",
|
||||
"default":
|
||||
{
|
||||
"disable": {
|
||||
"nimble": false,
|
||||
"primera": false,
|
||||
"alletra6000": false,
|
||||
"alletra9000": false
|
||||
},
|
||||
"disableNodeConformance": false,
|
||||
"imagePullPolicy": "IfNotPresent",
|
||||
"iscsi": {
|
||||
"chapUser": "",
|
||||
"chapPassword": ""
|
||||
},
|
||||
"logLevel": "info",
|
||||
"registry": "quay.io",
|
||||
"kubeletRootDir": "/var/lib/kubelet/",
|
||||
"disableNodeGetVolumeStats": false
|
||||
},
|
||||
"required": [
|
||||
"disable",
|
||||
"disableNodeConformance",
|
||||
"imagePullPolicy",
|
||||
"iscsi",
|
||||
"logLevel",
|
||||
"registry",
|
||||
"kubeletRootDir",
|
||||
"disableNodeGetVolumeStats"
|
||||
],
|
||||
"properties": {
|
||||
"disable": {
|
||||
"$id": "#/properties/disable",
|
||||
"title": "CSP Deployment and Service backend exclusion",
|
||||
"description": "All backend Deployments and Services are installed by default.",
|
||||
"type": "object",
|
||||
"default":
|
||||
{
|
||||
"nimble": false,
|
||||
"primera": false,
|
||||
"alletra6000": false,
|
||||
"alletra9000": false
|
||||
},
|
||||
"required": [
|
||||
"nimble",
|
||||
"primera",
|
||||
"alletra6000",
|
||||
"alletra9000"
|
||||
],
|
||||
"properties": {
|
||||
"nimble": {
|
||||
"$id": "#/properties/disable/properties/nimble",
|
||||
"title": "HPE Nimble Storage",
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
},
|
||||
"primera": {
|
||||
"$id": "#/properties/disable/properties/primera",
|
||||
"title": "HPE Primera",
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
},
|
||||
"alletra6000": {
|
||||
"$id": "#/properties/disable/properties/alletra6000",
|
||||
"title": "HPE Alletra 6000",
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
},
|
||||
"alletra9000": {
|
||||
"$id": "#/properties/disable/properties/alletra9000",
|
||||
"title": "HPE Alletra 9000",
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
}
|
||||
},
|
||||
"additionalProperties": false
|
||||
},
|
||||
"disableNodeConformance": {
|
||||
"$id": "#/properties/disableNodeConformance",
|
||||
"title": "Disable node conformance",
|
||||
"description": "Disabling node conformance forces the cluster administrator to install required packages and ensure the correct node services are started to use external block storage.",
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
},
|
||||
"imagePullPolicy": {
|
||||
"$id": "#/properties/imagePullPolicy",
|
||||
"title": "CSI driver image pull policy",
|
||||
"type": "string",
|
||||
"default": "IfNotPresent",
|
||||
"enum": [ "Always", "IfNotPresent", "Never" ]
|
||||
},
|
||||
"iscsi": {
|
||||
"$id": "#/properties/iscsi",
|
||||
"title": "iSCSI CHAP credentials",
|
||||
"type": "object",
|
||||
"default":
|
||||
{
|
||||
"chapUser": "",
|
||||
"chapPassword": ""
|
||||
},
|
||||
"required": [
|
||||
"chapUser",
|
||||
"chapPassword"
|
||||
],
|
||||
"properties": {
|
||||
"chapUser": {
|
||||
"$id": "#/properties/iscsi/properties/chapUser",
|
||||
"title": "CHAP username",
|
||||
"type": "string",
|
||||
"default": ""
|
||||
},
|
||||
"chapPassword": {
|
||||
"$id": "#/properties/iscsi/properties/chapPassword",
|
||||
"title": "CHAP password",
|
||||
"description": "Between 12 and 16 characters",
|
||||
"type": "string",
|
||||
"default": "",
|
||||
"pattern": "^$|^[a-zA-Z0-9+_)(*^%$#@!]{12,16}$"
|
||||
}
|
||||
},
|
||||
"additionalProperties": false
|
||||
},
|
||||
"logLevel": {
|
||||
"$id": "#/properties/logLevel",
|
||||
"title": "Set the log level of the HPE CSI Driver images",
|
||||
"type": "string",
|
||||
"default": "info",
|
||||
"enum": [ "info", "debug", "trace", "warn", "error" ]
|
||||
},
|
||||
"registry": {
|
||||
"$id": "#/properties/registry",
|
||||
"title": "Pull images from a different registry than default",
|
||||
"description": "SIG Storage images needs to be mirrored from k8s.gcr.io to this registry if this parameter is changed.",
|
||||
"type": "string",
|
||||
"default": "quay.io"
|
||||
},
|
||||
"kubeletRootDir": {
|
||||
"$id": "#/properties/kubeletRootDir",
|
||||
"title": "Kubelet root directory",
|
||||
"description": "Only change this if the kubelet root dir has been altered by the Kubernetes platform installer.",
|
||||
"type": "string",
|
||||
"default": "/var/lib/kubelet",
|
||||
"pattern": "^/"
|
||||
},
|
||||
"disableNodeGetVolumeStats": {
|
||||
"$id": "#/properties/disableNodeGetVolumeStats",
|
||||
"title": "Disable the CSI nodeGetVolumeStats call",
|
||||
"description": "In very large environments, disabling this feature may alleviate pressure on the CSP.",
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
},
|
||||
"global": {}
|
||||
},
|
||||
"additionalProperties": false
|
||||
}
|
|
@ -0,0 +1,34 @@
|
|||
# Default values for hpe-csi-driver Helm chart
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
# Control CSP Service and Deployments for HPE storage products
|
||||
disable:
|
||||
nimble: false
|
||||
primera: false
|
||||
alletra6000: false
|
||||
alletra9000: false
|
||||
|
||||
# For controlling automatic iscsi/multipath package installation
|
||||
disableNodeConformance: false
|
||||
|
||||
# imagePullPolicy applied for all hpe-csi-driver images
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
|
||||
# Cluster wide values for CHAP authentication
|
||||
iscsi:
|
||||
chapUser: ""
|
||||
chapPassword: ""
|
||||
|
||||
# Log level for all hpe-csi-driver components
|
||||
logLevel: "info"
|
||||
|
||||
# Registry prefix for hpe-csi-driver images
|
||||
registry: "quay.io"
|
||||
|
||||
# Kubelet root directory path
|
||||
kubeletRootDir: "/var/lib/kubelet/"
|
||||
|
||||
# NodeGetVolumeStats is called by default; set to true to disable the call
|
||||
disableNodeGetVolumeStats: false
|
||||
|
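The defaults above are usually left in place and only a handful of keys overridden at install time. As a minimal sketch of such an override (the release name `my-hpe-csi-driver`, the namespace `hpe-storage`, and the `hpe-storage/hpe-csi-driver` chart reference are assumptions for illustration, not values fixed by this chart):

```console
$ helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage \
    --set disable.primera=true \
    --set logLevel=debug
```

The same overrides can equally be kept in a values file and passed with `-f`, which is preferable when more than a couple of parameters change.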
|
@ -0,0 +1,15 @@
|
|||
annotations:
|
||||
catalog.cattle.io/certified: partner
|
||||
catalog.cattle.io/display-name: K10
|
||||
catalog.cattle.io/release-name: k10
|
||||
apiVersion: v2
|
||||
appVersion: 4.5.14
|
||||
description: Kasten’s K10 Data Management Platform
|
||||
home: https://kasten.io/
|
||||
icon: https://docs.kasten.io/_static/kasten-logo-vertical.png
|
||||
kubeVersion: '>= 1.17.0-0'
|
||||
maintainers:
|
||||
- email: support@kasten.io
|
||||
name: kastenIO
|
||||
name: k10
|
||||
version: 4.5.1400
|
|
@ -0,0 +1,227 @@
|
|||
# Kasten's K10 Helm chart
|
||||
|
||||
[Kasten's K10](https://docs.kasten.io/) is a data lifecycle management system for all your persistence-enabled container-based applications.
|
||||
|
||||
## TL;DR
|
||||
|
||||
```console
|
||||
$ helm install k10 kasten/k10 --namespace=kasten-io --create-namespace
|
||||
```
|
||||
|
||||
## Introduction
|
||||
|
||||
This chart bootstraps Kasten's K10 platform on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
|
||||
|
||||
## Prerequisites
|
||||
- Kubernetes 1.17+ (see the chart's `kubeVersion` constraint)
|
||||
|
||||
## Installing the Chart
|
||||
|
||||
To install the chart on a [GKE](https://cloud.google.com/container-engine/) cluster
|
||||
|
||||
```console
|
||||
$ helm install k10 kasten/k10 --namespace=kasten-io --create-namespace
|
||||
```
|
||||
|
||||
To install the chart on an [AWS](https://aws.amazon.com/) [kops](https://github.com/kubernetes/kops)-created cluster
|
||||
|
||||
```console
|
||||
$ helm install k10 kasten/k10 --namespace=kasten-io --create-namespace --set secrets.awsAccessKeyId="${AWS_ACCESS_KEY_ID}" \
|
||||
--set secrets.awsSecretAccessKey="${AWS_SECRET_ACCESS_KEY}"
|
||||
```
|
||||
|
||||
> **Tip**: List all releases using `helm list`
|
||||
|
||||
## Uninstalling the Chart
|
||||
|
||||
To uninstall/delete the `k10` application:
|
||||
|
||||
```console
|
||||
$ helm uninstall k10 --namespace=kasten-io
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
The following table lists the configurable parameters of the K10
|
||||
chart and their default values.
|
||||
|
||||
Parameter | Description | Default
|
||||
--- | --- | ---
|
||||
`eula.accept` | Whether to accept the EULA before installation | `false`
|
||||
`eula.company` | Company name. Required field if EULA is accepted | `None`
|
||||
`eula.email` | Contact email. Required field if EULA is accepted | `None`
|
||||
`license` | License string obtained from Kasten | `None`
|
||||
`rbac.create` | Whether to enable RBAC with a specific cluster role and binding for K10 | `true`
|
||||
`scc.create` | Whether to create a SecurityContextConstraints for K10 ServiceAccounts | `false`
|
||||
`services.dashboardbff.hostNetwork` | Whether the dashboardbff pods may use the node network | `false`
|
||||
`services.executor.hostNetwork` | Whether the executor pods may use the node network | `false`
|
||||
`services.aggregatedapis.hostNetwork` | Whether the aggregatedapis pods may use the node network | `false`
|
||||
`serviceAccount.create`| Specifies whether a ServiceAccount should be created | `true`
|
||||
`serviceAccount.name` | The name of the ServiceAccount to use. If not set, a name is derived using the release and chart names. | `None`
|
||||
`ingress.create` | Specifies whether the K10 dashboard should be exposed via ingress | `false`
|
||||
`ingress.class` | Cluster ingress controller class: `nginx`, `GCE` | `None`
|
||||
`ingress.host` | FQDN (e.g., `k10.example.com`) for name-based virtual host | `None`
|
||||
`ingress.urlPath` | URL path for K10 Dashboard (e.g., `/k10`) | `Release.Name`
|
||||
`ingress.annotations` | Additional Ingress object annotations | `{}`
|
||||
`ingress.tls.enabled` | Configures a TLS use for `ingress.host` | `false`
|
||||
`ingress.tls.secretName` | Specifies a name of TLS secret | `None`
|
||||
`ingress.pathType` | Specifies the path type for the ingress resource | `ImplementationSpecific`
|
||||
`global.persistence.enabled` | Use PVS to persist data | `true`
|
||||
`global.persistence.size` | Default global size of volumes for K10 persistent services | `20Gi`
|
||||
`global.persistence.catalog.size` | Size of a volume for catalog service | `global.persistence.size`
|
||||
`global.persistence.jobs.size` | Size of a volume for jobs service | `global.persistence.size`
|
||||
`global.persistence.logging.size` | Size of a volume for logging service | `global.persistence.size`
|
||||
`global.persistence.metering.size` | Size of a volume for metering service | `global.persistence.size`
|
||||
`global.persistence.storageClass` | Specified StorageClassName will be used for PVCs | `None`
|
||||
`global.airgapped.repository` | Specify the helm repository for offline (airgapped) installation | `''`
|
||||
`global.imagePullSecret` | Provide secret which contains docker config for private repository. Use `k10-ecr` when secrets.dockerConfigPath is used. | `''`
|
||||
`secrets.awsAccessKeyId` | AWS access key ID (required for AWS deployment) | `None`
|
||||
`secrets.awsSecretAccessKey` | AWS access key secret | `None`
|
||||
`secrets.awsIamRole` | ARN of the AWS IAM role assumed by K10 to perform any AWS operation. | `None`
|
||||
`secrets.googleApiKey` | Non-default base64 encoded GCP Service Account key file | `None`
|
||||
`secrets.azureTenantId` | Azure tenant ID (required for Azure deployment) | `None`
|
||||
`secrets.azureClientId` | Azure Service App ID | `None`
|
||||
`secrets.azureClientSecret` | Azure Service APP secret | `None`
|
||||
`secrets.azureResourceGroup` | Resource Group name that was created for the Kubernetes cluster | `None`
|
||||
`secrets.azureSubscriptionID` | Subscription ID in your Azure tenant | `None`
|
||||
`secrets.azureResourceMgrEndpoint` | Resource management endpoint for the Azure Stack instance | `None`
|
||||
`secrets.azureADEndpoint` | Azure Active Directory login endpoint | `None`
|
||||
`secrets.azureADResourceID` | Azure Active Directory resource ID to obtain AD tokens | `None`
|
||||
`secrets.azureCloudEnvID` | Azure Cloud Environment ID | `None`
|
||||
`secrets.vsphereEndpoint` | vSphere endpoint for login | `None`
|
||||
`secrets.vsphereUsername` | vSphere username for login | `None`
|
||||
`secrets.vspherePassword` | vSphere password for login | `None`
|
||||
`secrets.dockerConfigPath` | Use --set-file secrets.dockerConfigPath=path_to_docker_config.yaml to specify docker config for image pull | `None`
|
||||
`cacertconfigmap.name` | Name of the ConfigMap that contains a certificate for a trusted root certificate authority | `None`
|
||||
`clusterName` | Cluster name for better logs visibility | `None`
|
||||
`metering.awsRegion` | Sets AWS_REGION for metering service | `None`
|
||||
`metering.mode` | Control license reporting (set to `airgap` for private-network installs) | `None`
|
||||
`metering.reportCollectionPeriod` | Sets metric report collection period (in seconds) | `1800`
|
||||
`metering.reportPushPeriod` | Sets metric report push period (in seconds) | `3600`
|
||||
`metering.promoID` | Sets K10 promotion ID from marketing campaigns | `None`
|
||||
`metering.awsMarketplace` | Sets AWS cloud metering license mode | `false`
|
||||
`metering.awsManagedLicense` | Sets AWS managed license mode | `false`
|
||||
`metering.redhatMarketplacePayg` | Sets Red Hat cloud metering license mode | `false`
|
||||
`metering.licenseConfigSecretName` | Sets AWS managed license config secret | `None`
|
||||
`externalGateway.create` | Configures an external gateway for K10 API services | `false`
|
||||
`externalGateway.annotations` | Standard annotations for the services | `None`
|
||||
`externalGateway.fqdn.name` | Domain name for the K10 API services | `None`
|
||||
`externalGateway.fqdn.type` | Supported gateway type: `route53-mapper` or `external-dns` | `None`
|
||||
`externalGateway.awsSSLCertARN` | ARN for the AWS ACM SSL certificate used in the K10 API server | `None`
|
||||
`auth.basicAuth.enabled` | Configures basic authentication for the K10 dashboard | `false`
|
||||
`auth.basicAuth.htpasswd` | A username and password pair separated by a colon character | `None`
|
||||
`auth.basicAuth.secretName` | Name of an existing Secret that contains a file generated with htpasswd | `None`
|
||||
`auth.k10AdminGroups` | A list of groups whose members are granted admin level access to K10's dashboard | `None`
|
||||
`auth.k10AdminUsers` | A list of users who are granted admin level access to K10's dashboard | `None`
|
||||
`auth.tokenAuth.enabled` | Configures token based authentication for the K10 dashboard | `false`
|
||||
`auth.oidcAuth.enabled` | Configures Open ID Connect based authentication for the K10 dashboard | `false`
|
||||
`auth.oidcAuth.providerURL` | URL for the OIDC Provider | `None`
|
||||
`auth.oidcAuth.redirectURL` | URL to the K10 gateway service | `None`
|
||||
`auth.oidcAuth.scopes` | Space separated OIDC scopes required for userinfo. Example: "profile email" | `None`
|
||||
`auth.oidcAuth.prompt` | The type of prompt to be used during authentication (none, consent, login or select_account) | `select_account`
|
||||
`auth.oidcAuth.clientID` | Client ID given by the OIDC provider for K10 | `None`
|
||||
`auth.oidcAuth.clientSecret` | Client secret given by the OIDC provider for K10 | `None`
|
||||
`auth.oidcAuth.usernameClaim` | The claim to be used as the username | `sub`
|
||||
`auth.oidcAuth.usernamePrefix` | Prefix that has to be used with the username obtained from the username claim | `None`
|
||||
`auth.oidcAuth.groupClaim` | Name of a custom OpenID Connect claim for specifying user groups | `None`
|
||||
`auth.oidcAuth.groupPrefix` | All groups will be prefixed with this value to prevent conflicts | `None`
|
||||
`auth.openshift.enabled` | Enables access to the K10 dashboard by authenticating with the OpenShift OAuth server | `false`
|
||||
`auth.openshift.serviceAccount` | Name of the service account that represents an OAuth client | `None`
|
||||
`auth.openshift.clientSecret` | The token corresponding to the service account | `None`
|
||||
`auth.openshift.dashboardURL` | The URL used for accessing K10's dashboard | `None`
|
||||
`auth.openshift.openshiftURL` | The URL for accessing OpenShift's API server | `None`
|
||||
`auth.openshift.insecureCA` | To turn off SSL verification of connections to OpenShift | `false`
|
||||
`auth.openshift.useServiceAccountCA` | Set this to true to use the CA certificate corresponding to the Service Account ``auth.openshift.serviceAccount`` usually found at ``/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`` | `false`
|
||||
`auth.ldap.enabled` | Configures Active Directory/LDAP based authentication for the K10 dashboard | `false`
|
||||
`auth.ldap.restartPod` | To force a restart of the authentication service pod (useful when updating authentication config) | `false`
|
||||
`auth.ldap.dashboardURL` | The URL used for accessing K10's dashboard | `None`
|
||||
`auth.ldap.host` | Host and optional port of the AD/LDAP server in the form `host:port` | `None`
|
||||
`auth.ldap.insecureNoSSL` | Required if the AD/LDAP host is not using TLS | `false`
|
||||
`auth.ldap.insecureSkipVerifySSL` | To turn off SSL verification of connections to the AD/LDAP host | `false`
|
||||
`auth.ldap.startTLS` | When set to true, ldap:// is used to connect to the server followed by creation of a TLS session. When set to false, ldaps:// is used. | `false`
|
||||
`auth.ldap.bindDN` | The Distinguished Name(username) used for connecting to the AD/LDAP host | `None`
|
||||
`auth.ldap.bindPW` | The password corresponding to the `bindDN` for connecting to the AD/LDAP host | `None`
|
||||
`auth.ldap.bindPWSecretName` | The name of the secret that contains the password corresponding to the `bindDN` for connecting to the AD/LDAP host | `None`
|
||||
`auth.ldap.userSearch.baseDN` | The base Distinguished Name to start the AD/LDAP search from | `None`
|
||||
`auth.ldap.userSearch.filter` | Optional filter to apply when searching the directory | `None`
|
||||
`auth.ldap.userSearch.username` | Attribute used for comparing user entries when searching the directory | `None`
|
||||
`auth.ldap.userSearch.idAttr` | AD/LDAP attribute in a user's entry that should map to the user ID field in a token | `None`
|
||||
`auth.ldap.userSearch.emailAttr` | AD/LDAP attribute in a user's entry that should map to the email field in a token | `None`
|
||||
`auth.ldap.userSearch.nameAttr` | AD/LDAP attribute in a user's entry that should map to the name field in a token | `None`
|
||||
`auth.ldap.userSearch.preferredUsernameAttr` | AD/LDAP attribute in a user's entry that should map to the preferred_username field in a token | `None`
`auth.ldap.groupSearch.baseDN` | The base Distinguished Name to start the AD/LDAP group search from | `None`
`auth.ldap.groupSearch.filter` | Optional filter to apply when searching the directory for groups | `None`
`auth.ldap.groupSearch.nameAttr` | The AD/LDAP attribute that represents a group's name in the directory | `None`
`auth.ldap.groupSearch.userMatchers` | List of field pairs that are used to match a user to a group. | `None`
`auth.ldap.groupSearch.userMatchers.userAttr` | Attribute in the user's entry that must match with the `groupAttr` while searching for groups | `None`
`auth.ldap.groupSearch.userMatchers.groupAttr` | Attribute in the group's entry that must match with the `userAttr` while searching for groups | `None`
`auth.groupAllowList` | A list of groups whose members are allowed access to K10's dashboard | `None`
`services.securityContext` | Custom [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for K10 service containers | `{"runAsUser" : 1000, "fsGroup": 1000}`
`services.securityContext.runAsUser` | User ID K10 service containers run as | `1000`
`services.securityContext.runAsGroup` | Group ID K10 service containers run as | `1000`
`services.securityContext.fsGroup` | FSGroup that owns K10 service container volumes | `1000`
`injectKanisterSidecar.enabled` | Enable Kanister sidecar injection for workload pods | `false`
`injectKanisterSidecar.namespaceSelector.matchLabels` | Set of labels to select namespaces in which sidecar injection is enabled for workloads | `{}`
`injectKanisterSidecar.objectSelector.matchLabels` | Set of labels to filter workload objects in which the sidecar is injected | `{}`
`injectKanisterSidecar.webhookServer.port` | Port number on which the mutating webhook server accepts requests | `8080`
`gateway.insecureDisableSSLVerify` | Specifies whether to disable SSL verification for gateway pods | `false`
`gateway.exposeAdminPort` | Specifies whether to expose Admin port for gateway service | `true`
`genericVolumeSnapshot.resources.[requests\|limits].[cpu\|memory]` | Resource requests and limits for Generic Volume Snapshot restore pods | `{}`
`prometheus.server.enabled` | If false, K10's Prometheus server will not be created, reducing the dashboard's functionality. | `true`
`prometheus.server.persistentVolume.enabled` | If true, K10 Prometheus server will create a Persistent Volume Claim | `true`
`prometheus.server.persistentVolume.size` | K10 Prometheus server data Persistent Volume size | `30Gi`
`prometheus.server.persistentVolume.storageClass` | StorageClassName used to create Prometheus PVC. Setting this option overwrites global StorageClass value | `""`
`prometheus.server.retention` | (optional) K10 Prometheus data retention | `"30d"`
`prometheus.server.baseURL` | (optional) K10 Prometheus external url path at which the server can be accessed | `/k10/prometheus/`
`prometheus.server.prefixURL` | (optional) K10 Prometheus prefix slug at which the server can be accessed | `/k10/prometheus/`
`grafana.enabled` | (optional) If false, Grafana will not be available | `true`
`grafana.prometheusPrefixURL` | (optional) URL for Prometheus datasource in Grafana (must match `prometheus.server.prefixURL`) | `/k10/prometheus/`
`resources.<podName>.<containerName>.[requests\|limits].[cpu\|memory]` | Overwrite default K10 [container resource requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) | varies by container
`route.enabled` | Specifies whether the K10 dashboard should be exposed via route | `false`
`route.host` | FQDN (e.g., `.k10.example.com`) for name-based virtual host | `""`
`route.path` | URL path for K10 Dashboard (e.g., `/k10`) | `/`
`route.annotations` | Additional Route object annotations | `{}`
`route.labels` | Additional Route object labels | `{}`
`route.tls.enabled` | Configures TLS for `route.host` | `false`
`route.tls.insecureEdgeTerminationPolicy` | Specifies behavior for insecure scheme traffic | `Redirect`
`route.tls.termination` | Specifies the TLS termination of the route | `edge`
`apigateway.serviceResolver` | Specifies the resolver used for service discovery in the API gateway (`dns` or `endpoint`) | `dns`
`limiter.genericVolumeSnapshots` | Limit of concurrent generic volume snapshot create operations | `10`
`limiter.genericVolumeCopies` | Limit of concurrent generic volume snapshot copy operations | `10`
`limiter.genericVolumeRestores` | Limit of concurrent generic volume snapshot restore operations | `10`
`limiter.csiSnapshots` | Limit of concurrent CSI snapshot create operations | `10`
`limiter.providerSnapshots` | Limit of concurrent cloud provider create operations | `10`
`cluster.domainName` | Specifies the domain name of the cluster | `cluster.local`
`kanister.backupTimeout` | Specifies timeout to set on Kanister backup operations | `45`
`kanister.restoreTimeout` | Specifies timeout to set on Kanister restore operations | `600`
`kanister.deleteTimeout` | Specifies timeout to set on Kanister delete operations | `45`
`kanister.hookTimeout` | Specifies timeout to set on Kanister pre-hook and post-hook operations | `20`
`kanister.checkRepoTimeout` | Specifies timeout to set on Kanister checkRepo operations | `20`
`kanister.statsTimeout` | Specifies timeout to set on Kanister stats operations | `20`
`kanister.efsPostRestoreTimeout` | Specifies timeout to set on Kanister efsPostRestore operations | `45`
`awsConfig.assumeRoleDuration` | Duration of a session token generated by AWS for an IAM role. The minimum value is 15 minutes and the maximum value is the maximum duration setting for that IAM role. For documentation about how to view and edit the maximum session duration for an IAM role, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session. The value accepts a number along with a single character ``m`` (for minutes) or ``h`` (for hours). Examples: 60m or 2h | `''`
`awsConfig.efsBackupVaultName` | Specifies the AWS EFS backup vault name | `k10vault`
`vmWare.taskTimeoutMin` | Specifies the timeout for VMWare operations | `60`
`encryption.primaryKey.awsCmkKeyId` | Specifies the AWS CMK key ID for encrypting K10 Primary Key | `None`

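For example, Kanister sidecar injection can be enabled and scoped to labelled namespaces with a values snippet like the following (a minimal sketch built from the `injectKanisterSidecar.*` parameters above; the namespace label key and value are placeholders you would choose yourself):

```yaml
injectKanisterSidecar:
  enabled: true
  namespaceSelector:
    matchLabels:
      # hypothetical label that the target namespaces are expected to carry
      inject-kanister-sidecar: "true"
```
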
## Helm tips and tricks

There is a way of setting values via a yaml file instead of using `--set`.
You can copy/paste values into a file (e.g., my_values.yaml):

```yaml
secrets:
  awsAccessKeyId: ${AWS_ACCESS_KEY_ID}
  awsSecretAccessKey: ${AWS_SECRET_ACCESS_KEY}
```

and then run:

```bash
envsubst < my_values.yaml > my_values_out.yaml && helm install k10 helm/k10 -f my_values_out.yaml
```

To use non-default GCP ServiceAccount (SA) credentials, the credentials JSON file needs to be encoded into a base64 string.

```bash
sa_key=$(base64 -w0 sa-key.json)
helm install k10 kasten/k10 --namespace=kasten-io --set secrets.googleApiKey=$sa_key
```
|
@ -0,0 +1,5 @@
|
|||
The K10 data management platform, purpose-built for Kubernetes, provides enterprise operations teams an easy-to-use, scalable, and secure system for backup/restore, disaster recovery, and mobility of Kubernetes applications.

K10’s application-centric approach and deep integrations with relational and NoSQL databases, Kubernetes distributions, and all clouds provide teams the freedom of infrastructure choice without sacrificing operational simplicity. Policy-driven and extensible, K10 provides a native Kubernetes API and includes features such as full-spectrum consistency, database integrations, automatic application discovery, multi-cloud mobility, and a powerful web-based user interface.

For more information, refer to the docs at [https://docs.kasten.io/](https://docs.kasten.io/).
|
@ -0,0 +1,23 @@
|
|||
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.vscode
.project
.idea/
*.tmproj
OWNERS
|
@ -0,0 +1,22 @@
|
|||
apiVersion: v2
appVersion: 8.1.0
description: The leading tool for querying and visualizing time series and metrics.
home: https://grafana.net
icon: https://raw.githubusercontent.com/grafana/grafana/master/public/img/logo_transparent_400x.png
kubeVersion: ^1.8.0-0
maintainers:
- email: zanhsieh@gmail.com
  name: zanhsieh
- email: rluckie@cisco.com
  name: rtluckie
- email: maor.friedman@redhat.com
  name: maorfr
- email: miroslav.hadzhiev@gmail.com
  name: Xtigyro
- email: mail@torstenwalter.de
  name: torstenwalter
name: grafana
sources:
- https://github.com/grafana/grafana
type: application
version: 6.15.0
|
@ -0,0 +1,528 @@
|
|||
# Grafana Helm Chart

* Installs the web dashboarding system [Grafana](http://grafana.org/)

## Get Repo Info

```console
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```

_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._

## Installing the Chart

To install the chart with the release name `my-release`:

```console
helm install my-release grafana/grafana
```

## Uninstalling the Chart

To uninstall/delete the my-release deployment:

```console
helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Upgrading an existing Release to a new major version

A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an incompatible breaking change needing manual actions.

### To 4.0.0 (And 3.12.1)

This version requires Helm >= 2.12.0.

### To 5.0.0

You have to add `--force` to your helm upgrade command as the labels of the chart have changed.
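
For example (a minimal sketch using the release name from the install example above):

```console
helm upgrade my-release grafana/grafana --force
```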

### To 6.0.0

This version requires Helm >= 3.1.0.

## Configuration

| Parameter | Description | Default |
|
||||
|-------------------------------------------|-----------------------------------------------|---------------------------------------------------------|
|
||||
| `replicas` | Number of nodes | `1` |
|
||||
| `podDisruptionBudget.minAvailable` | Pod disruption minimum available | `nil` |
|
||||
| `podDisruptionBudget.maxUnavailable` | Pod disruption maximum unavailable | `nil` |
|
||||
| `deploymentStrategy` | Deployment strategy | `{ "type": "RollingUpdate" }` |
|
||||
| `livenessProbe` | Liveness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 } "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }` |
|
||||
| `readinessProbe` | Readiness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 } }`|
|
||||
| `securityContext` | Deployment securityContext | `{"runAsUser": 472, "runAsGroup": 472, "fsGroup": 472}` |
|
||||
| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
|
||||
| `image.repository` | Image repository | `grafana/grafana` |
|
||||
| `image.tag` | Image tag (`Must be >= 5.0.0`) | `8.0.3` |
|
||||
| `image.sha` | Image sha (optional) | `80c6d6ac633ba5ab3f722976fb1d9a138f87ca6a9934fcd26a5fc28cbde7dbfa` |
|
||||
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | Image pull secrets | `{}` |
|
||||
| `service.enabled` | Enable grafana service | `true` |
|
||||
| `service.type` | Kubernetes service type | `ClusterIP` |
|
||||
| `service.port` | Kubernetes port where service is exposed | `80` |
|
||||
| `service.portName` | Name of the port on the service | `service` |
|
||||
| `service.targetPort` | Internal service is port | `3000` |
|
||||
| `service.nodePort` | Kubernetes service nodePort | `nil` |
|
||||
| `service.annotations` | Service annotations | `{}` |
|
||||
| `service.labels` | Custom labels | `{}` |
|
||||
| `service.clusterIP` | internal cluster service IP | `nil` |
|
||||
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `nil` |
|
||||
| `service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to lb (if supported) | `[]` |
|
||||
| `service.externalIPs` | service external IP addresses | `[]` |
|
||||
| `extraExposePorts` | Additional service ports for sidecar containers| `[]` |
|
||||
| `hostAliases` | adds rules to the pod's /etc/hosts | `[]` |
|
||||
| `ingress.enabled` | Enables Ingress | `false` |
|
||||
| `ingress.annotations` | Ingress annotations (values are templated) | `{}` |
|
||||
| `ingress.labels` | Custom labels | `{}` |
|
||||
| `ingress.path` | Ingress accepted path | `/` |
|
||||
| `ingress.pathType` | Ingress type of path | `Prefix` |
|
||||
| `ingress.hosts` | Ingress accepted hostnames | `["chart-example.local"]` |
|
||||
| `ingress.extraPaths` | Ingress extra paths to prepend to every host configuration. Useful when configuring [custom actions with AWS ALB Ingress Controller](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/#actions). Requires `ingress.hosts` to have one or more host entries. | `[]` |
|
||||
| `ingress.tls` | Ingress TLS configuration | `[]` |
|
||||
| `resources` | CPU/Memory resource requests/limits | `{}` |
|
||||
| `nodeSelector` | Node labels for pod assignment | `{}` |
|
||||
| `tolerations` | Toleration labels for pod assignment | `[]` |
|
||||
| `affinity` | Affinity settings for pod assignment | `{}` |
|
||||
| `extraInitContainers` | Init containers to add to the grafana pod | `{}` |
|
||||
| `extraContainers` | Sidecar containers to add to the grafana pod | `{}` |
|
||||
| `extraContainerVolumes` | Volumes that can be mounted in sidecar containers | `[]` |
|
||||
| `extraLabels` | Custom labels for all manifests | `{}` |
|
||||
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
|
||||
| `global.persistence.enabled` | Use persistent volume to store data | `false` |
|
||||
| `persistence.type` | Type of persistence (`pvc` or `statefulset`) | `pvc` |
|
||||
| `global.persistence.size` | Size of persistent volume claim | `20Gi` |
|
||||
| `persistence.existingClaim` | Use an existing PVC to persist data | `nil` |
|
||||
| `global.persistence.storageClass` | Type of persistent volume claim | `nil` |
|
||||
| `global.persistence.accessMode` | Persistence access modes | `[ReadWriteOnce]` |
|
||||
| `persistence.annotations` | PersistentVolumeClaim annotations | `{}` |
|
||||
| `persistence.finalizers` | PersistentVolumeClaim finalizers | `[ "kubernetes.io/pvc-protection" ]` |
|
||||
| `persistence.subPath` | Mount a sub dir of the persistent volume | `nil` |
|
||||
| `persistence.inMemory.enabled` | If persistence is not enabled, whether to mount the local storage in-memory to improve performance | `false` |
|
||||
| `persistence.inMemory.sizeLimit` | SizeLimit for the in-memory local storage | `nil` |
|
||||
| `initChownData.enabled` | If false, don't reset data ownership at startup | true |
|
||||
| `initChownData.image.repository` | init-chown-data container image repository | `busybox` |
|
||||
| `initChownData.image.tag` | init-chown-data container image tag | `1.31.1` |
|
||||
| `initChownData.image.sha` | init-chown-data container image sha (optional)| `""` |
|
||||
| `initChownData.image.pullPolicy` | init-chown-data container image pull policy | `IfNotPresent` |
|
||||
| `initChownData.resources` | init-chown-data pod resource requests & limits | `{}` |
|
||||
| `schedulerName` | Alternate scheduler name | `nil` |
|
||||
| `env` | Extra environment variables passed to pods | `{}` |
|
||||
| `envValueFrom` | Environment variables from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. | `{}` |
|
||||
| `envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
|
||||
| `envRenderSecret` | Sensible environment variables passed to pods and stored as secret | `{}` |
|
||||
| `enableServiceLinks` | Inject Kubernetes services as environment variables. | `true` |
|
||||
| `extraSecretMounts` | Additional grafana server secret mounts | `[]` |
|
||||
| `extraVolumeMounts` | Additional grafana server volume mounts | `[]` |
|
||||
| `extraConfigmapMounts` | Additional grafana server configMap volume mounts | `[]` |
|
||||
| `extraEmptyDirMounts` | Additional grafana server emptyDir volume mounts | `[]` |
|
||||
| `plugins` | Plugins to be loaded along with Grafana | `[]` |
|
||||
| `datasources` | Configure grafana datasources (passed through tpl) | `{}` |
|
||||
| `notifiers` | Configure grafana notifiers | `{}` |
|
||||
| `dashboardProviders` | Configure grafana dashboard providers | `{}` |
|
||||
| `dashboards` | Dashboards to import | `{}` |
|
||||
| `dashboardsConfigMaps` | ConfigMaps reference that contains dashboards | `{}` |
|
||||
| `grafana.ini` | Grafana's primary configuration | `{}` |
|
||||
| `ldap.enabled` | Enable LDAP authentication | `false` |
|
||||
| `ldap.existingSecret` | The name of an existing secret containing the `ldap.toml` file, this must have the key `ldap-toml`. | `""` |
|
||||
| `ldap.config` | Grafana's LDAP configuration | `""` |
|
||||
| `annotations` | Deployment annotations | `{}` |
|
||||
| `labels` | Deployment labels | `{}` |
|
||||
| `podAnnotations` | Pod annotations | `{}` |
|
||||
| `podLabels` | Pod labels | `{}` |
|
||||
| `podPortName` | Name of the grafana port on the pod | `grafana` |
|
||||
| `sidecar.image.repository` | Sidecar image repository | `quay.io/kiwigrid/k8s-sidecar` |
|
||||
| `sidecar.image.tag` | Sidecar image tag | `1.12.2` |
|
||||
| `sidecar.image.sha` | Sidecar image sha (optional) | `""` |
|
||||
| `sidecar.imagePullPolicy` | Sidecar image pull policy | `IfNotPresent` |
|
||||
| `sidecar.resources` | Sidecar resources | `{}` |
|
||||
| `sidecar.enableUniqueFilenames` | Sets the kiwigrid/k8s-sidecar UNIQUE_FILENAMES environment variable | `false` |
|
||||
| `sidecar.dashboards.enabled` | Enables the cluster wide search for dashboards and adds/updates/deletes them in grafana | `false` |
|
||||
| `sidecar.dashboards.SCProvider` | Enables creation of sidecar provider | `true` |
|
||||
| `sidecar.dashboards.provider.name` | Unique name of the grafana provider | `sidecarProvider` |
|
||||
| `sidecar.dashboards.provider.orgid` | Id of the organisation, to which the dashboards should be added | `1` |
|
||||
| `sidecar.dashboards.provider.folder` | Logical folder in which grafana groups dashboards | `""` |
|
||||
| `sidecar.dashboards.provider.disableDelete` | Activate to avoid the deletion of imported dashboards | `false` |
|
||||
| `sidecar.dashboards.provider.allowUiUpdates` | Allow updating provisioned dashboards from the UI | `false` |
|
||||
| `sidecar.dashboards.provider.type` | Provider type | `file` |
|
||||
| `sidecar.dashboards.provider.foldersFromFilesStructure` | Allow Grafana to replicate dashboard structure from filesystem. | `false` |
|
||||
| `sidecar.dashboards.watchMethod` | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
|
||||
| `sidecar.skipTlsVerify` | Set to true to skip tls verification for kube api calls | `nil` |
|
||||
| `sidecar.dashboards.label` | Label that config maps with dashboards should have to be added | `grafana_dashboard` |
|
||||
| `sidecar.dashboards.labelValue` | Label value that config maps with dashboards should have to be added | `nil` |
|
||||
| `sidecar.dashboards.folder` | Folder in the pod that should hold the collected dashboards (unless `sidecar.dashboards.defaultFolderName` is set). This path will be mounted. | `/tmp/dashboards` |
|
||||
| `sidecar.dashboards.folderAnnotation` | The annotation the sidecar will look for in configmaps to override the destination folder for files | `nil` |
|
||||
| `sidecar.dashboards.defaultFolderName` | The default folder name, it will create a subfolder under the `sidecar.dashboards.folder` and put dashboards in there instead | `nil` |
|
||||
| `sidecar.dashboards.searchNamespace` | If specified, the sidecar will search for dashboard config-maps inside this namespace. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces | `nil` |
|
||||
| `sidecar.dashboards.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
|
||||
| `sidecar.datasources.enabled` | Enables the cluster wide search for datasources and adds/updates/deletes them in grafana |`false` |
|
||||
| `sidecar.datasources.label` | Label that config maps with datasources should have to be added | `grafana_datasource` |
|
||||
| `sidecar.datasources.labelValue` | Label value that config maps with datasources should have to be added | `nil` |
|
||||
| `sidecar.datasources.searchNamespace` | If specified, the sidecar will search for datasources config-maps inside this namespace. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces | `nil` |
|
||||
| `sidecar.datasources.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
|
||||
| `sidecar.notifiers.enabled` | Enables the cluster wide search for notifiers and adds/updates/deletes them in grafana | `false` |
|
||||
| `sidecar.notifiers.label` | Label that config maps with notifiers should have to be added | `grafana_notifier` |
|
||||
| `sidecar.notifiers.searchNamespace` | If specified, the sidecar will search for notifiers config-maps (or secrets) inside this namespace. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces | `nil` |
|
||||
| `sidecar.notifiers.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
|
||||
| `smtp.existingSecret` | The name of an existing secret containing the SMTP credentials. | `""` |
|
||||
| `smtp.userKey` | The key in the existing SMTP secret containing the username. | `"user"` |
|
||||
| `smtp.passwordKey` | The key in the existing SMTP secret containing the password. | `"password"` |
|
||||
| `admin.existingSecret` | The name of an existing secret containing the admin credentials. | `""` |
|
||||
| `admin.userKey` | The key in the existing admin secret containing the username. | `"admin-user"` |
|
||||
| `admin.passwordKey` | The key in the existing admin secret containing the password. | `"admin-password"` |
|
||||
| `serviceAccount.autoMount` | Automount the service account token in the pod| `true` |
|
||||
| `serviceAccount.annotations` | ServiceAccount annotations | |
|
||||
| `serviceAccount.create` | Create service account | `true` |
|
||||
| `serviceAccount.name` | Service account name to use, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `` |
|
||||
| `serviceAccount.nameTest` | Service account name to use for test, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `nil` |
|
||||
| `rbac.create` | Create and use RBAC resources | `true` |
|
||||
| `rbac.namespaced` | Creates Role and Rolebinding instead of the default ClusterRole and ClusteRoleBindings for the grafana instance | `false` |
|
||||
| `rbac.useExistingRole` | Set to a rolename to use existing role - skipping role creating - but still doing serviceaccount and rolebinding to the rolename set here. | `nil` |
|
||||
| `rbac.pspEnabled` | Create PodSecurityPolicy (with `rbac.create`, grant roles permissions as well) | `true` |
|
||||
| `rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires `rbac.pspEnabled`) | `true` |
|
||||
| `rbac.extraRoleRules` | Additional rules to add to the Role | [] |
|
||||
| `rbac.extraClusterRoleRules` | Additional rules to add to the ClusterRole | [] |
|
||||
| `command` | Define command to be executed by grafana container at startup | `nil` |
|
||||
| `testFramework.enabled` | Whether to create test-related resources | `true` |
|
||||
| `testFramework.image` | `test-framework` image repository. | `bats/bats` |
|
||||
| `testFramework.tag` | `test-framework` image tag. | `v1.1.0` |
|
||||
| `testFramework.imagePullPolicy` | `test-framework` image pull policy. | `IfNotPresent` |
|
||||
| `testFramework.securityContext` | `test-framework` securityContext | `{}` |
|
||||
| `downloadDashboards.env` | Environment variables to be passed to the `download-dashboards` container | `{}` |
|
||||
| `downloadDashboards.envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
|
||||
| `downloadDashboards.resources` | Resources of `download-dashboards` container | `{}` |
|
||||
| `downloadDashboardsImage.repository` | Curl docker image repo | `curlimages/curl` |
|
||||
| `downloadDashboardsImage.tag` | Curl docker image tag | `7.73.0` |
|
||||
| `downloadDashboardsImage.sha` | Curl docker image sha (optional) | `""` |
|
||||
| `downloadDashboardsImage.pullPolicy` | Curl docker image pull policy | `IfNotPresent` |
|
||||
| `namespaceOverride` | Override the deployment namespace | `""` (`Release.Namespace`) |
|
||||
| `serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` |
|
||||
| `serviceMonitor.namespace` | Namespace this servicemonitor is installed in | |
|
||||
| `serviceMonitor.interval` | How frequently Prometheus should scrape | `1m` |
|
||||
| `serviceMonitor.path` | Path to scrape | `/metrics` |
|
||||
| `serviceMonitor.scheme` | Scheme to use for metrics scraping | `http` |
|
||||
| `serviceMonitor.tlsConfig` | TLS configuration block for the endpoint | `{}` |
|
||||
| `serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` |
|
||||
| `serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `30s` |
|
||||
| `serviceMonitor.relabelings` | MetricRelabelConfigs to apply to samples before ingestion. | `[]` |
|
||||
| `revisionHistoryLimit` | Number of old ReplicaSets to retain | `10` |
|
||||
| `imageRenderer.enabled` | Enable the image-renderer deployment & service | `false` |
|
||||
| `imageRenderer.image.repository` | image-renderer Image repository | `grafana/grafana-image-renderer` |
|
||||
| `imageRenderer.image.tag` | image-renderer Image tag | `latest` |
|
||||
| `imageRenderer.image.sha` | image-renderer Image sha (optional) | `""` |
|
||||
| `imageRenderer.image.pullPolicy` | image-renderer ImagePullPolicy | `Always` |
|
||||
| `imageRenderer.env` | extra env-vars for image-renderer | `{}` |
|
||||
| `imageRenderer.serviceAccountName` | image-renderer deployment serviceAccountName | `""` |
|
||||
| `imageRenderer.securityContext` | image-renderer deployment securityContext | `{}` |
|
||||
| `imageRenderer.hostAliases` | image-renderer deployment Host Aliases | `[]` |
|
||||
| `imageRenderer.priorityClassName` | image-renderer deployment priority class | `''` |
|
||||
| `imageRenderer.service.enabled` | Enable the image-renderer service | `true` |
|
||||
| `imageRenderer.service.portName` | image-renderer service port name | `'http'` |
|
||||
| `imageRenderer.service.port` | image-renderer service port used by both service and deployment | `8081` |
|
||||
| `imageRenderer.grafanaSubPath` | Grafana sub path to use for image renderer callback url | `''` |
|
||||
| `imageRenderer.podPortName` | name of the image-renderer port on the pod | `http` |
|
||||
| `imageRenderer.revisionHistoryLimit` | number of image-renderer replica sets to keep | `10` |
|
||||
| `imageRenderer.networkPolicy.limitIngress` | Enable a NetworkPolicy to limit inbound traffic from only the created grafana pods | `true` |
|
||||
| `imageRenderer.networkPolicy.limitEgress` | Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods | `false` |
|
||||
| `imageRenderer.resources` | Set resource limits for image-renderer pdos | `{}` |

### Example ingress with path

With grafana 6.3 and above:

```yaml
grafana.ini:
  server:
    domain: monitoring.example.com
    root_url: "%(protocol)s://%(domain)s/grafana"
    serve_from_sub_path: true
ingress:
  enabled: true
  hosts:
    - "monitoring.example.com"
  path: "/grafana"
```

### Example of extraVolumeMounts

A volume can be of type persistentVolumeClaim or hostPath, but not both at the same time.
If neither an existingClaim nor a hostPath argument is given, the type is emptyDir.

```yaml
- extraVolumeMounts:
  - name: plugins
    mountPath: /var/lib/grafana/plugins
    subPath: configs/grafana/plugins
    existingClaim: existing-grafana-claim
    readOnly: false
  - name: dashboards
    mountPath: /var/lib/grafana/dashboards
    hostPath: /usr/shared/grafana/dashboards
    readOnly: false
```

## Import dashboards

There are a few methods to import dashboards into Grafana. Below are some examples and explanations of how to use each method:

```yaml
dashboards:
  default:
    some-dashboard:
      json: |
        {
          "annotations":

          ...
          # Complete json file here
          ...

          "title": "Some Dashboard",
          "uid": "abcd1234",
          "version": 1
        }
    custom-dashboard:
      # This is a path to a file inside the dashboards directory inside the chart directory
      file: dashboards/custom-dashboard.json
    prometheus-stats:
      # Ref: https://grafana.com/dashboards/2
      gnetId: 2
      revision: 2
      datasource: Prometheus
    local-dashboard:
      url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
```

## BASE64 dashboards

Dashboards can be stored on a server that does not return JSON directly but instead returns a Base64 encoded file (e.g. Gerrit).
A parameter has been added to the `url` use case: if you specify a `b64content` value equal to `true` after the `url` entry, Base64 decoding is applied before the file is saved to disk.
If this entry is not set, or is equal to `false`, no decoding is applied to the file before saving it to disk.

### Gerrit use case

The Gerrit API for downloading files has the following schema: <https://yourgerritserver/a/{project-name}/branches/{branch-id}/files/{file-id}/content>, where {project-name} and
{file-id} usually have '/' in their values, so they MUST be replaced by %2F. For example, if project-name is user/repo, branch-id is master, and file-id is dir1/dir2/dashboard,
the url value is <https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content>.

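For example, a dashboard entry combining the Gerrit URL above with the `b64content` flag might look like this (a sketch; the dashboard key name is illustrative):

```yaml
dashboards:
  default:
    gerrit-dashboard:
      url: https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
      b64content: true
```
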
## Sidecar for dashboards

If the parameter `sidecar.dashboards.enabled` is set, a sidecar container is deployed in the grafana
pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with
a label as defined in `sidecar.dashboards.label`. The files defined in those configmaps are written
to a folder and accessed by grafana. Changes to the configmaps are monitored and the imported
dashboards are deleted/updated.

A recommendation is to use one configmap per dashboard, as a reduction of multiple dashboards inside
one configmap is currently not properly mirrored in grafana.

Example dashboard config:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  labels:
    grafana_dashboard: "1"
data:
  k8s-dashboard.json: |-
    [...]
```

## Sidecar for datasources

If the parameter `sidecar.datasources.enabled` is set, an init container is deployed in the grafana
pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
filters out the ones with a label as defined in `sidecar.datasources.label`. The files defined in
those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
the data sources in grafana can be imported.

Secrets are recommended over configmaps for this use case because datasources usually contain private
data like usernames and passwords. Secrets are the more appropriate cluster resource to manage those.

Example values to add a datasource adapted from [Grafana](http://docs.grafana.org/administration/provisioning/#example-datasource-config-file):

```yaml
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      # <string, required> name of the datasource. Required
    - name: Graphite
      # <string, required> datasource type. Required
      type: graphite
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      # <string> url
      url: http://localhost:8080
      # <string> database password, if used
      password:
      # <string> database user, if used
      user:
      # <string> database name, if used
      database:
      # <bool> enable/disable basic auth
      basicAuth:
      # <string> basic auth username
      basicAuthUser:
      # <string> basic auth password
      basicAuthPassword:
      # <bool> enable/disable with credentials headers
      withCredentials:
      # <bool> mark as default datasource. Max one per org
      isDefault:
      # <map> fields that will be converted to json and stored in json_data
      jsonData:
        graphiteVersion: "1.1"
        tlsAuth: true
        tlsAuthWithCACert: true
      # <string> json object of data that will be encrypted.
      secureJsonData:
        tlsCACert: "..."
        tlsClientCert: "..."
        tlsClientKey: "..."
      version: 1
      # <bool> allow users to edit datasources from the UI.
      editable: false
```

## Sidecar for notifiers

If the parameter `sidecar.notifiers.enabled` is set, an init container is deployed in the grafana
pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
filters out the ones with a label as defined in `sidecar.notifiers.label`. The files defined in
those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
the notification channels in grafana can be imported. The secrets must be created before
`helm install` so that the notifiers init container can list the secrets.

Secrets are recommended over configmaps for this use case because alert notification channels usually contain
private data like SMTP usernames and passwords. Secrets are the more appropriate cluster resource to manage those.

Example notifiers config adapted from [Grafana](https://grafana.com/docs/grafana/latest/administration/provisioning/#alert-notification-channels):

```yaml
notifiers:
  - name: notification-channel-1
    type: slack
    uid: notifier1
    # either
    org_id: 2
    # or
    org_name: Main Org.
    is_default: true
    send_reminder: true
    frequency: 1h
    disable_resolve_message: false
    # See `Supported Settings` section for settings supported for each
    # alert notification type.
    settings:
      recipient: 'XXX'
      token: 'xoxb'
      uploadImage: true
      url: https://slack.com

delete_notifiers:
  - name: notification-channel-1
    uid: notifier1
    org_id: 2
  - name: notification-channel-2
    # default org_id: 1
```

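For completeness, a sketch of how such a labelled secret might look, by analogy with the dashboard ConfigMap example above (the secret name, key, and label value are illustrative assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sample-grafana-notifier
  labels:
    # must carry the label configured in sidecar.notifiers.label (default: grafana_notifier)
    grafana_notifier: "1"
stringData:
  notifier.yaml: |-
    notifiers:
      - name: notification-channel-1
        type: slack
        uid: notifier1
        settings:
          url: https://slack.com
```
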
## How to serve Grafana with a path prefix (/grafana)

In order to serve Grafana with a prefix (e.g., <http://example.com/grafana>), add the following to your values.yaml.

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"

  path: /grafana/?(.*)
  hosts:
    - k8s.example.dev

grafana.ini:
  server:
    root_url: http://localhost:3000/grafana # this host can be localhost
```

## How to securely reference secrets in grafana.ini

This example shows how Grafana can use [file providers](https://grafana.com/docs/grafana/latest/administration/configuration/#file-provider) for secret values, together with the `extraSecretMounts` configuration flag (Additional grafana server secret mounts) to mount the secrets.

In grafana.ini:

```yaml
grafana.ini:
  [auth.generic_oauth]
  enabled = true
  client_id = $__file{/etc/secrets/auth_generic_oauth/client_id}
  client_secret = $__file{/etc/secrets/auth_generic_oauth/client_secret}
```

Existing secret, or created along with helm:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: auth-generic-oauth-secret
type: Opaque
stringData:
  client_id: <value>
  client_secret: <value>
```

Include in the `extraSecretMounts` configuration flag:

```yaml
- extraSecretMounts:
  - name: auth-generic-oauth-secret-mount
    secretName: auth-generic-oauth-secret
    defaultMode: 0440
    mountPath: /etc/secrets/auth_generic_oauth
    readOnly: true
```

### extraSecretMounts using a Container Storage Interface (CSI) provider

This example uses a CSI driver, e.g. retrieving secrets using the [Azure Key Vault Provider](https://github.com/Azure/secrets-store-csi-driver-provider-azure):

```yaml
- extraSecretMounts:
  - name: secrets-store-inline
    mountPath: /run/secrets
    readOnly: true
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "my-provider"
      nodePublishSecretRef:
        name: akv-creds
```

## Image Renderer Plug-In

This chart supports enabling [remote image rendering](https://github.com/grafana/grafana-image-renderer/blob/master/docs/remote_rendering_using_docker.md):

```yaml
imageRenderer:
  enabled: true
```

### Image Renderer NetworkPolicy

By default the image-renderer pods will have a network policy which only allows ingress traffic from the created grafana instance.
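
A minimal values sketch using the `imageRenderer.networkPolicy.*` parameters listed in the configuration table above; enabling `limitEgress` is optional and shown here only for illustration:

```yaml
imageRenderer:
  enabled: true
  networkPolicy:
    # restrict inbound traffic to the grafana pods created by this chart (default)
    limitIngress: true
    # additionally restrict outbound traffic to those grafana pods
    limitEgress: true
```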
|
@ -0,0 +1,54 @@
|
|||
1. Get your '{{ .Values.adminUser }}' user password by running:
|
||||
|
||||
kubectl get secret --namespace {{ template "grafana.namespace" . }} {{ template "grafana.fullname" . }} -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
|
||||
|
||||
2. The Grafana server can be accessed via port {{ .Values.service.port }} on the following DNS name from within your cluster:
|
||||
|
||||
{{ template "grafana.fullname" . }}.{{ template "grafana.namespace" . }}.svc.cluster.local
|
||||
{{ if .Values.ingress.enabled }}
|
||||
If you bind grafana to 80, please update values in values.yaml and reinstall:
|
||||
```
|
||||
securityContext:
|
||||
runAsUser: 0
|
||||
runAsGroup: 0
|
||||
fsGroup: 0
|
||||
|
||||
command:
|
||||
- "setcap"
|
||||
- "'cap_net_bind_service=+ep'"
|
||||
- "/usr/sbin/grafana-server &&"
|
||||
- "sh"
|
||||
- "/run.sh"
|
||||
```
|
||||
Details refer to https://grafana.com/docs/installation/configuration/#http-port.
|
||||
Or grafana would always crash.
|
||||
|
||||
From outside the cluster, the server URL(s) are:
|
||||
{{- range .Values.ingress.hosts }}
|
||||
http://{{ . }}
|
||||
{{- end }}
|
||||
{{ else }}
|
||||
Get the Grafana URL to visit by running these commands in the same shell:
|
||||
{{ if contains "NodePort" .Values.service.type -}}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ template "grafana.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "grafana.fullname" . }})
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ template "grafana.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
{{ else if contains "LoadBalancer" .Values.service.type -}}
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
You can watch the status of by running 'kubectl get svc --namespace {{ template "grafana.namespace" . }} -w {{ template "grafana.fullname" . }}'
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ template "grafana.namespace" . }} {{ template "grafana.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
http://$SERVICE_IP:{{ .Values.service.port -}}
|
||||
{{ else if contains "ClusterIP" .Values.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ template "grafana.namespace" . }} -l "app={{ template "grafana.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
kubectl --namespace {{ template "grafana.namespace" . }} port-forward $POD_NAME 3000
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
3. Login with the password from step 1 and the username: {{ .Values.adminUser }}
|
||||
|
||||
{{- if not .Values.global.persistence.enabled }}
|
||||
#################################################################################
|
||||
###### WARNING: Persistence is disabled!!! You will lose your data when #####
|
||||
###### the Grafana pod is terminated. #####
|
||||
#################################################################################
|
||||
{{- end }}
|
|
@ -0,0 +1,3 @@
|
|||
{{/* Autogenerated, do NOT modify */}}
|
||||
{{- define "k10.grafanaImageTag" -}}8.1.8{{- end -}}
|
||||
{{- define "k10.grafanaInitContainerImageTag" -}}8.5-240.1648458092{{- end -}}
|
|
@ -0,0 +1,235 @@
|
|||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "grafana.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "grafana.fullname" -}}
|
||||
{{- if .Values.fullnameOverride -}}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "grafana.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account
|
||||
*/}}
|
||||
{{- define "grafana.serviceAccountName" -}}
|
||||
{{- if .Values.serviceAccount.create -}}
|
||||
{{ default (include "grafana.fullname" .) .Values.serviceAccount.name }}
|
||||
{{- else -}}
|
||||
{{ default "default" .Values.serviceAccount.name }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "grafana.serviceAccountNameTest" -}}
|
||||
{{- if .Values.serviceAccount.create -}}
|
||||
{{ default (print (include "grafana.fullname" .) "-test") .Values.serviceAccount.nameTest }}
|
||||
{{- else -}}
|
||||
{{ default "default" .Values.serviceAccount.nameTest }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
|
||||
*/}}
|
||||
{{- define "grafana.namespace" -}}
|
||||
{{- if .Values.namespaceOverride -}}
|
||||
{{- .Values.namespaceOverride -}}
|
||||
{{- else -}}
|
||||
{{- .Release.Namespace -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Common labels
|
||||
*/}}
|
||||
{{- define "grafana.labels" -}}
|
||||
helm.sh/chart: {{ include "grafana.chart" . }}
|
||||
{{ include "grafana.selectorLabels" . }}
|
||||
{{- if or .Chart.AppVersion .Values.image.tag }}
|
||||
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
|
||||
{{- end }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- if .Values.extraLabels }}
|
||||
{{ toYaml .Values.extraLabels }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Selector labels
|
||||
*/}}
|
||||
{{- define "grafana.selectorLabels" -}}
|
||||
app: {{ include "grafana.name" . }}
|
||||
release: {{ .Release.Name }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Common labels
|
||||
*/}}
|
||||
{{- define "grafana.imageRenderer.labels" -}}
|
||||
helm.sh/chart: {{ include "grafana.chart" . }}
|
||||
{{ include "grafana.imageRenderer.selectorLabels" . }}
|
||||
{{- if or .Chart.AppVersion .Values.image.tag }}
|
||||
app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | quote }}
|
||||
{{- end }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Selector labels ImageRenderer
|
||||
*/}}
|
||||
{{- define "grafana.imageRenderer.selectorLabels" -}}
|
||||
app: {{ include "grafana.name" . }}-image-renderer
|
||||
release: {{ .Release.Name }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Looks if there's an existing secret and reuse its password. If not it generates
|
||||
new password and use it.
|
||||
*/}}
|
||||
{{- define "grafana.password" -}}
|
||||
{{- $secret := (lookup "v1" "Secret" (include "grafana.namespace" .) (include "grafana.fullname" .) ) -}}
|
||||
{{- if $secret -}}
|
||||
{{- index $secret "data" "admin-password" -}}
|
||||
{{- else -}}
|
||||
{{- (randAlphaNum 40) | b64enc | quote -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for rbac.
|
||||
*/}}
|
||||
{{- define "grafana.rbac.apiVersion" -}}
|
||||
{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }}
|
||||
{{- print "rbac.authorization.k8s.io/v1" -}}
|
||||
{{- else -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for ingress.
|
||||
*/}}
|
||||
{{- define "grafana.ingress.apiVersion" -}}
|
||||
{{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" .Capabilities.KubeVersion.Version) -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return if ingress is stable.
|
||||
*/}}
|
||||
{{- define "grafana.ingress.isStable" -}}
|
||||
{{- eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return if ingress supports ingressClassName.
|
||||
*/}}
|
||||
{{- define "grafana.ingress.supportsIngressClassName" -}}
|
||||
{{- or (eq (include "grafana.ingress.isStable" .) "true") (and (eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return if ingress supports pathType.
|
||||
*/}}
|
||||
{{- define "grafana.ingress.supportsPathType" -}}
|
||||
{{- or (eq (include "grafana.ingress.isStable" .) "true") (and (eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Figure out the grafana image tag
|
||||
based on the value of global.upstreamCertifiedImages
|
||||
*/}}
|
||||
{{- define "get.grafanaImageTag"}}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "k10-%s" (include "k10.grafanaImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s" (include "k10.grafanaImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- define "get.grafanaImageRepo" }}
|
||||
{{- if .Values.global.upstreamCertifiedImages }}
|
||||
{{- printf "%s/%s/grafana" .Values.k10image.registry .Values.k10image.repository }}
|
||||
{{- else }}
|
||||
{{- print .Values.image.repository }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Figure out the config based on
|
||||
the value of airgapped.repository
|
||||
*/}}
|
||||
{{- define "get.grafanaServerimage" }}
|
||||
{{- if not .Values.global.rhMarketPlace }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "%s/grafana:%s" .Values.global.airgapped.repository (include "get.grafanaImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s:%s" (include "get.grafanaImageRepo" .) (include "get.grafanaImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
{{- printf "%s" .Values.global.images.grafana }}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Figure out the grafana init container busy box image tag
|
||||
based on the value of global.airgapped.repository
|
||||
*/}}
|
||||
{{- define "get.grafanaInitContainerImageTag"}}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "k10-%s" (include "k10.grafanaInitContainerImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s" (include "k10.grafanaInitContainerImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- define "get.grafanaInitContainerImageRepo" }}
|
||||
{{- if .Values.global.upstreamCertifiedImages }}
|
||||
{{- printf "%s/%s/ubi-minimal" .Values.k10image.registry .Values.k10image.repository }}
|
||||
{{- else }}
|
||||
{{- print .Values.ubi.image.repository }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Figure out the config based on
|
||||
the value of airgapped.repository
|
||||
*/}}
|
||||
{{- define "get.grafanaInitContainerImage" }}
|
||||
{{- if not .Values.global.rhMarketPlace }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "%s/ubi-minimal:%s" .Values.global.airgapped.repository (include "get.grafanaInitContainerImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s:%s" (include "get.grafanaInitContainerImageRepo" .) (include "get.grafanaInitContainerImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
{{- printf "%s:%s" (include "get.grafanaInitContainerImageRepo" .) (include "get.grafanaInitContainerImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,509 @@
|
|||
|
||||
{{- define "grafana.pod" -}}
|
||||
{{- if .Values.schedulerName }}
|
||||
schedulerName: "{{ .Values.schedulerName }}"
|
||||
{{- end }}
|
||||
serviceAccountName: {{ template "grafana.serviceAccountName" . }}
|
||||
automountServiceAccountToken: {{ .Values.serviceAccount.autoMount }}
|
||||
{{- if .Values.securityContext }}
|
||||
securityContext:
|
||||
{{ toYaml .Values.securityContext | indent 2 }}
|
||||
{{- end }}
|
||||
{{- if .Values.hostAliases }}
|
||||
hostAliases:
|
||||
{{ toYaml .Values.hostAliases | indent 2 }}
|
||||
{{- end }}
|
||||
{{- if .Values.priorityClassName }}
|
||||
priorityClassName: {{ .Values.priorityClassName }}
|
||||
{{- end }}
|
||||
{{- if ( or .Values.global.persistence.enabled .Values.dashboards .Values.sidecar.datasources.enabled .Values.sidecar.notifiers.enabled .Values.extraInitContainers) }}
|
||||
initContainers:
|
||||
{{- end }}
|
||||
{{- if ( and .Values.global.persistence.enabled .Values.initChownData.enabled ) }}
|
||||
- name: init-chown-data
|
||||
image: "{{ include "get.grafanaInitContainerImage" . }}"
|
||||
imagePullPolicy: {{ .Values.ubi.image.pullPolicy }}
|
||||
securityContext:
|
||||
runAsNonRoot: false
|
||||
runAsUser: 0
|
||||
command: ["chown", "-R", "{{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.runAsGroup }}", "/var/lib/grafana"]
|
||||
resources:
|
||||
{{ toYaml .Values.initChownData.resources | indent 6 }}
|
||||
volumeMounts:
|
||||
- name: storage
|
||||
mountPath: "/var/lib/grafana"
|
||||
{{- if .Values.persistence.subPath }}
|
||||
subPath: {{ .Values.persistence.subPath }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.dashboards }}
|
||||
- name: download-dashboards
|
||||
{{- if .Values.downloadDashboardsImage.sha }}
|
||||
image: "{{ .Values.downloadDashboardsImage.repository }}:{{ .Values.downloadDashboardsImage.tag }}@sha256:{{ .Values.downloadDashboardsImage.sha }}"
|
||||
{{- else }}
|
||||
image: "{{ include "get.grafanaInitContainerImage" . }}"
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.downloadDashboardsImage.pullPolicy }}
|
||||
command: ["/bin/sh"]
|
||||
args: [ "-c", "mkdir -p /var/lib/grafana/dashboards/default && /bin/sh -x /etc/grafana/download_dashboards.sh" ]
|
||||
resources:
|
||||
{{ toYaml .Values.downloadDashboards.resources | indent 6 }}
|
||||
env:
|
||||
{{- range $key, $value := .Values.downloadDashboards.env }}
|
||||
- name: "{{ $key }}"
|
||||
value: "{{ $value }}"
|
||||
{{- end }}
|
||||
{{- if .Values.downloadDashboards.envFromSecret }}
|
||||
envFrom:
|
||||
- secretRef:
|
||||
name: {{ tpl .Values.downloadDashboards.envFromSecret . }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: "/etc/grafana/download_dashboards.sh"
|
||||
subPath: download_dashboards.sh
|
||||
- name: storage
|
||||
mountPath: "/var/lib/grafana"
|
||||
{{- if .Values.persistence.subPath }}
|
||||
subPath: {{ .Values.persistence.subPath }}
|
||||
{{- end }}
|
||||
{{- range .Values.extraSecretMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
readOnly: {{ .readOnly }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.datasources.enabled }}
|
||||
- name: {{ template "grafana.name" . }}-sc-datasources
|
||||
{{- if .Values.sidecar.image.sha }}
|
||||
image: "{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
|
||||
{{- else }}
|
||||
image: "{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
|
||||
env:
|
||||
- name: METHOD
|
||||
value: LIST
|
||||
- name: LABEL
|
||||
value: "{{ .Values.sidecar.datasources.label }}"
|
||||
{{- if .Values.sidecar.datasources.labelValue }}
|
||||
- name: LABEL_VALUE
|
||||
value: {{ quote .Values.sidecar.datasources.labelValue }}
|
||||
{{- end }}
|
||||
- name: FOLDER
|
||||
value: "/etc/grafana/provisioning/datasources"
|
||||
- name: RESOURCE
|
||||
value: {{ quote .Values.sidecar.datasources.resource }}
|
||||
{{- if .Values.sidecar.enableUniqueFilenames }}
|
||||
- name: UNIQUE_FILENAMES
|
||||
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.datasources.searchNamespace }}
|
||||
- name: NAMESPACE
|
||||
value: "{{ .Values.sidecar.datasources.searchNamespace }}"
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.skipTlsVerify }}
|
||||
- name: SKIP_TLS_VERIFY
|
||||
value: "{{ .Values.sidecar.skipTlsVerify }}"
|
||||
{{- end }}
|
||||
resources:
|
||||
{{ toYaml .Values.sidecar.resources | indent 6 }}
|
||||
volumeMounts:
|
||||
- name: sc-datasources-volume
|
||||
mountPath: "/etc/grafana/provisioning/datasources"
|
||||
{{- end}}
|
||||
{{- if .Values.sidecar.notifiers.enabled }}
|
||||
- name: {{ template "grafana.name" . }}-sc-notifiers
|
||||
{{- if .Values.sidecar.image.sha }}
|
||||
image: "{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
|
||||
{{- else }}
|
||||
image: "{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
|
||||
env:
|
||||
- name: METHOD
|
||||
value: LIST
|
||||
- name: LABEL
|
||||
value: "{{ .Values.sidecar.notifiers.label }}"
|
||||
- name: FOLDER
|
||||
value: "/etc/grafana/provisioning/notifiers"
|
||||
- name: RESOURCE
|
||||
value: {{ quote .Values.sidecar.notifiers.resource }}
|
||||
{{- if .Values.sidecar.enableUniqueFilenames }}
|
||||
- name: UNIQUE_FILENAMES
|
||||
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.notifiers.searchNamespace }}
|
||||
- name: NAMESPACE
|
||||
value: "{{ .Values.sidecar.notifiers.searchNamespace }}"
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.skipTlsVerify }}
|
||||
- name: SKIP_TLS_VERIFY
|
||||
value: "{{ .Values.sidecar.skipTlsVerify }}"
|
||||
{{- end }}
|
||||
resources:
|
||||
{{ toYaml .Values.sidecar.resources | indent 6 }}
|
||||
volumeMounts:
|
||||
- name: sc-notifiers-volume
|
||||
mountPath: "/etc/grafana/provisioning/notifiers"
|
||||
{{- end}}
|
||||
{{- if .Values.extraInitContainers }}
|
||||
{{ toYaml .Values.extraInitContainers | indent 2 }}
|
||||
{{- end }}
|
||||
{{- if (or .Values.global.imagePullSecret .Values.image.pullSecrets) }}
|
||||
imagePullSecrets:
|
||||
{{- if .Values.global.imagePullSecret }}
|
||||
- name: {{ .Values.global.imagePullSecret }}
|
||||
{{- end }}
|
||||
{{- if .Values.image.pullSecrets }}
|
||||
{{- range .Values.image.pullSecrets }}
|
||||
- name: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
enableServiceLinks: {{ .Values.enableServiceLinks }}
|
||||
containers:
|
||||
{{- if .Values.sidecar.dashboards.enabled }}
|
||||
- name: {{ template "grafana.name" . }}-sc-dashboard
|
||||
{{- if .Values.sidecar.image.sha }}
|
||||
image: "{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}"
|
||||
{{- else }}
|
||||
image: "{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}"
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
|
||||
env:
|
||||
- name: METHOD
|
||||
value: {{ .Values.sidecar.dashboards.watchMethod }}
|
||||
- name: LABEL
|
||||
value: "{{ .Values.sidecar.dashboards.label }}"
|
||||
{{- if .Values.sidecar.dashboards.labelValue }}
|
||||
- name: LABEL_VALUE
|
||||
value: {{ quote .Values.sidecar.dashboards.labelValue }}
|
||||
{{- end }}
|
||||
- name: FOLDER
|
||||
value: "{{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}"
|
||||
- name: RESOURCE
|
||||
value: {{ quote .Values.sidecar.dashboards.resource }}
|
||||
{{- if .Values.sidecar.enableUniqueFilenames }}
|
||||
- name: UNIQUE_FILENAMES
|
||||
value: "{{ .Values.sidecar.enableUniqueFilenames }}"
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.dashboards.searchNamespace }}
|
||||
- name: NAMESPACE
|
||||
value: "{{ .Values.sidecar.dashboards.searchNamespace }}"
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.skipTlsVerify }}
|
||||
- name: SKIP_TLS_VERIFY
|
||||
value: "{{ .Values.sidecar.skipTlsVerify }}"
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.dashboards.folderAnnotation }}
|
||||
- name: FOLDER_ANNOTATION
|
||||
value: "{{ .Values.sidecar.dashboards.folderAnnotation }}"
|
||||
{{- end }}
|
||||
resources:
|
||||
{{ toYaml .Values.sidecar.resources | indent 6 }}
|
||||
volumeMounts:
|
||||
- name: sc-dashboard-volume
|
||||
mountPath: {{ .Values.sidecar.dashboards.folder | quote }}
|
||||
{{- end}}
|
||||
- name: {{ .Chart.Name }}
|
||||
{{- if .Values.image.sha }}
|
||||
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}@sha256:{{ .Values.image.sha }}"
|
||||
{{- else }}
|
||||
image: "{{ include "get.grafanaServerimage" . }}"
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
{{- if .Values.command }}
|
||||
command:
|
||||
{{- range .Values.command }}
|
||||
- {{ . }}
|
||||
{{- end }}
|
||||
{{- end}}
|
||||
{{- if .Values.containerSecurityContext }}
|
||||
securityContext:
|
||||
{{- toYaml .Values.containerSecurityContext | nindent 6 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: "/etc/grafana/grafana.ini"
|
||||
subPath: grafana.ini
|
||||
{{- if .Values.ldap.enabled }}
|
||||
- name: ldap
|
||||
mountPath: "/etc/grafana/ldap.toml"
|
||||
subPath: ldap.toml
|
||||
{{- end }}
|
||||
{{- range .Values.extraConfigmapMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
subPath: {{ .subPath | default "" }}
|
||||
readOnly: {{ .readOnly }}
|
||||
{{- end }}
|
||||
- name: storage
|
||||
mountPath: "/var/lib/grafana"
|
||||
{{- if .Values.persistence.subPath }}
|
||||
subPath: {{ .Values.persistence.subPath }}
|
||||
{{- end }}
|
||||
{{- if .Values.dashboards }}
|
||||
{{- range $provider, $dashboards := .Values.dashboards }}
|
||||
{{- range $key, $value := $dashboards }}
|
||||
{{- if (or (hasKey $value "json") (hasKey $value "file")) }}
|
||||
- name: dashboards-{{ $provider }}
|
||||
mountPath: "/var/lib/grafana/dashboards/{{ $provider }}/{{ $key }}.json"
|
||||
subPath: "{{ $key }}.json"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- if .Values.dashboardsConfigMaps }}
|
||||
{{- range (keys .Values.dashboardsConfigMaps | sortAlpha) }}
|
||||
- name: dashboards-{{ . }}
|
||||
mountPath: "/var/lib/grafana/dashboards/{{ . }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{/* Mounting default datasources in pod as yaml */}}
|
||||
- name: config
|
||||
mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
|
||||
subPath: datasources.yaml
|
||||
{{- if .Values.notifiers }}
|
||||
- name: config
|
||||
mountPath: "/etc/grafana/provisioning/notifiers/notifiers.yaml"
|
||||
subPath: notifiers.yaml
|
||||
{{- end }}
|
||||
{{- if .Values.dashboardProviders }}
|
||||
- name: config
|
||||
mountPath: "/etc/grafana/provisioning/dashboards/dashboardproviders.yaml"
|
||||
subPath: dashboardproviders.yaml
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.dashboards.enabled }}
|
||||
- name: sc-dashboard-volume
|
||||
mountPath: {{ .Values.sidecar.dashboards.folder | quote }}
|
||||
{{ if .Values.sidecar.dashboards.SCProvider }}
|
||||
- name: sc-dashboard-provider
|
||||
mountPath: "/etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml"
|
||||
subPath: provider.yaml
|
||||
{{- end}}
|
||||
{{- end}}
|
||||
{{- if .Values.sidecar.datasources.enabled }}
|
||||
- name: sc-datasources-volume
|
||||
mountPath: "/etc/grafana/provisioning/datasources"
|
||||
{{- end}}
|
||||
{{- if .Values.sidecar.notifiers.enabled }}
|
||||
- name: sc-notifiers-volume
|
||||
mountPath: "/etc/grafana/provisioning/notifiers"
|
||||
{{- end}}
|
||||
{{- range .Values.extraSecretMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
readOnly: {{ .readOnly }}
|
||||
subPath: {{ .subPath | default "" }}
|
||||
{{- end }}
|
||||
{{- range .Values.extraVolumeMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
subPath: {{ .subPath | default "" }}
|
||||
readOnly: {{ .readOnly }}
|
||||
{{- end }}
|
||||
{{- range .Values.extraEmptyDirMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- name: {{ .Values.service.portName }}
|
||||
containerPort: {{ .Values.service.port }}
|
||||
protocol: TCP
|
||||
- name: {{ .Values.podPortName }}
|
||||
containerPort: 3000
|
||||
protocol: TCP
|
||||
env:
|
||||
{{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
|
||||
- name: GF_SECURITY_ADMIN_USER
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ .Values.admin.existingSecret | default (include "grafana.fullname" .) }}
|
||||
key: {{ .Values.admin.userKey | default "admin-user" }}
|
||||
{{- end }}
|
||||
{{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
|
||||
- name: GF_SECURITY_ADMIN_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ .Values.admin.existingSecret | default (include "grafana.fullname" .) }}
|
||||
key: {{ .Values.admin.passwordKey | default "admin-password" }}
|
||||
{{- end }}
|
||||
{{- if .Values.plugins }}
|
||||
- name: GF_INSTALL_PLUGINS
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
key: plugins
|
||||
{{- end }}
|
||||
{{- if .Values.smtp.existingSecret }}
|
||||
- name: GF_SMTP_USER
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ .Values.smtp.existingSecret }}
|
||||
key: {{ .Values.smtp.userKey | default "user" }}
|
||||
- name: GF_SMTP_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ .Values.smtp.existingSecret }}
|
||||
key: {{ .Values.smtp.passwordKey | default "password" }}
|
||||
{{- end }}
|
||||
{{ if .Values.imageRenderer.enabled }}
|
||||
- name: GF_RENDERING_SERVER_URL
|
||||
value: http://{{ template "grafana.fullname" . }}-image-renderer.{{ template "grafana.namespace" . }}:{{ .Values.imageRenderer.service.port }}/render
|
||||
- name: GF_RENDERING_CALLBACK_URL
|
||||
value: http://{{ template "grafana.fullname" . }}.{{ template "grafana.namespace" . }}:{{ .Values.service.port }}/{{ .Values.imageRenderer.grafanaSubPath }}
|
||||
{{ end }}
|
||||
- name: GF_PATHS_DATA
|
||||
value: {{ (get .Values "grafana.ini").paths.data }}
|
||||
- name: GF_PATHS_LOGS
|
||||
value: {{ (get .Values "grafana.ini").paths.logs }}
|
||||
- name: GF_PATHS_PLUGINS
|
||||
value: {{ (get .Values "grafana.ini").paths.plugins }}
|
||||
- name: GF_PATHS_PROVISIONING
|
||||
value: {{ (get .Values "grafana.ini").paths.provisioning }}
|
||||
{{- range $key, $value := .Values.envValueFrom }}
|
||||
- name: {{ $key | quote }}
|
||||
valueFrom:
|
||||
{{ toYaml $value | indent 10 }}
|
||||
{{- end }}
|
||||
{{- range $key, $value := .Values.env }}
|
||||
- name: "{{ tpl $key $ }}"
|
||||
value: "{{ tpl (print $value) $ }}"
|
||||
{{- end }}
|
||||
{{- if .Values.envFromSecret }}
|
||||
envFrom:
|
||||
- secretRef:
|
||||
name: {{ tpl .Values.envFromSecret . }}
|
||||
{{- end }}
|
||||
{{- if .Values.envRenderSecret }}
|
||||
envFrom:
|
||||
- secretRef:
|
||||
name: {{ template "grafana.fullname" . }}-env
|
||||
{{- end }}
|
||||
livenessProbe:
|
||||
{{ toYaml .Values.livenessProbe | indent 6 }}
|
||||
readinessProbe:
|
||||
{{ toYaml .Values.readinessProbe | indent 6 }}
|
||||
resources:
|
||||
{{ toYaml .Values.resources | indent 6 }}
|
||||
{{- with .Values.extraContainers }}
|
||||
{{ tpl . $ | indent 2 }}
|
||||
{{- end }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml . | indent 2 }}
|
||||
{{- end }}
|
||||
{{- with .Values.affinity }}
|
||||
affinity:
|
||||
{{ toYaml . | indent 2 }}
|
||||
{{- end }}
|
||||
{{- with .Values.tolerations }}
|
||||
tolerations:
|
||||
{{ toYaml . | indent 2 }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: config
|
||||
configMap:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
{{- range .Values.extraConfigmapMounts }}
|
||||
- name: {{ .name }}
|
||||
configMap:
|
||||
name: {{ .configMap }}
|
||||
{{- end }}
|
||||
{{- if .Values.dashboards }}
|
||||
{{- range (keys .Values.dashboards | sortAlpha) }}
|
||||
- name: dashboards-{{ . }}
|
||||
configMap:
|
||||
name: {{ template "grafana.fullname" $ }}-dashboards-{{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.dashboardsConfigMaps }}
|
||||
{{ $root := . }}
|
||||
{{- range $provider, $name := .Values.dashboardsConfigMaps }}
|
||||
- name: dashboards-{{ $provider }}
|
||||
configMap:
|
||||
name: {{ tpl $name $root }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.ldap.enabled }}
|
||||
- name: ldap
|
||||
secret:
|
||||
{{- if .Values.ldap.existingSecret }}
|
||||
secretName: {{ .Values.ldap.existingSecret }}
|
||||
{{- else }}
|
||||
secretName: {{ template "grafana.fullname" . }}
|
||||
{{- end }}
|
||||
items:
|
||||
- key: ldap-toml
|
||||
path: ldap.toml
|
||||
{{- end }}
|
||||
{{- if and .Values.global.persistence.enabled (eq .Values.persistence.type "pvc") }}
|
||||
- name: storage
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ .Values.persistence.existingClaim | default (include "grafana.fullname" .) }}
|
||||
{{- else if and .Values.global.persistence.enabled (eq .Values.persistence.type "statefulset") }}
|
||||
# nothing
|
||||
{{- else }}
|
||||
- name: storage
|
||||
{{- if .Values.persistence.inMemory.enabled }}
|
||||
emptyDir:
|
||||
medium: Memory
|
||||
{{- if .Values.persistence.inMemory.sizeLimit }}
|
||||
sizeLimit: {{ .Values.persistence.inMemory.sizeLimit }}
|
||||
{{- end -}}
|
||||
{{- else }}
|
||||
emptyDir: {}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- if .Values.sidecar.dashboards.enabled }}
|
||||
- name: sc-dashboard-volume
|
||||
emptyDir: {}
|
||||
{{- if .Values.sidecar.dashboards.SCProvider }}
|
||||
- name: sc-dashboard-provider
|
||||
configMap:
|
||||
name: {{ template "grafana.fullname" . }}-config-dashboards
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.sidecar.datasources.enabled }}
|
||||
- name: sc-datasources-volume
|
||||
emptyDir: {}
|
||||
{{- end -}}
|
||||
{{- if .Values.sidecar.notifiers.enabled }}
|
||||
- name: sc-notifiers-volume
|
||||
emptyDir: {}
|
||||
{{- end -}}
|
||||
{{- range .Values.extraSecretMounts }}
|
||||
{{- if .secretName }}
|
||||
- name: {{ .name }}
|
||||
secret:
|
||||
secretName: {{ .secretName }}
|
||||
defaultMode: {{ .defaultMode }}
|
||||
{{- else if .projected }}
|
||||
- name: {{ .name }}
|
||||
projected: {{- toYaml .projected | nindent 6 }}
|
||||
{{- else if .csi }}
|
||||
- name: {{ .name }}
|
||||
csi: {{- toYaml .csi | nindent 6 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- range .Values.extraVolumeMounts }}
|
||||
- name: {{ .name }}
|
||||
{{- if .existingClaim }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ .existingClaim }}
|
||||
{{- else if .hostPath }}
|
||||
hostPath:
|
||||
path: {{ .hostPath }}
|
||||
{{- else }}
|
||||
emptyDir: {}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- range .Values.extraEmptyDirMounts }}
|
||||
- name: {{ .name }}
|
||||
emptyDir: {}
|
||||
{{- end -}}
|
||||
{{- if .Values.extraContainerVolumes }}
|
||||
{{ toYaml .Values.extraContainerVolumes | indent 2 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,27 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and .Values.rbac.create (not .Values.rbac.namespaced) (not .Values.rbac.useExistingRole) }}
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
name: {{ template "grafana.fullname" . }}-clusterrole
|
||||
{{- if or .Values.sidecar.dashboards.enabled (or .Values.sidecar.datasources.enabled .Values.rbac.extraClusterRoleRules) }}
|
||||
rules:
|
||||
{{- if or .Values.sidecar.dashboards.enabled .Values.sidecar.datasources.enabled }}
|
||||
- apiGroups: [""] # "" indicates the core API group
|
||||
resources: ["configmaps", "secrets"]
|
||||
verbs: ["get", "watch", "list"]
|
||||
{{- end}}
|
||||
{{- with .Values.rbac.extraClusterRoleRules }}
|
||||
{{ toYaml . | indent 0 }}
|
||||
{{- end}}
|
||||
{{- else }}
|
||||
rules: []
|
||||
{{- end}}
|
||||
{{- end}}
|
||||
{{- end}}
|
|
@ -0,0 +1,26 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and .Values.rbac.create (not .Values.rbac.namespaced) }}
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}-clusterrolebinding
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: {{ template "grafana.serviceAccountName" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
{{- if (not .Values.rbac.useExistingRole) }}
|
||||
name: {{ template "grafana.fullname" . }}-clusterrole
|
||||
{{- else }}
|
||||
name: {{ .Values.rbac.useExistingRole }}
|
||||
{{- end }}
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
{{- end -}}
|
||||
{{- end -}}
|
|
@ -0,0 +1,31 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.sidecar.dashboards.enabled }}
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
name: {{ template "grafana.fullname" . }}-config-dashboards
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
data:
|
||||
provider.yaml: |-
|
||||
apiVersion: 1
|
||||
providers:
|
||||
- name: '{{ .Values.sidecar.dashboards.provider.name }}'
|
||||
orgId: {{ .Values.sidecar.dashboards.provider.orgid }}
|
||||
{{- if not .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
|
||||
folder: '{{ .Values.sidecar.dashboards.provider.folder }}'
|
||||
{{- end}}
|
||||
type: {{ .Values.sidecar.dashboards.provider.type }}
|
||||
disableDeletion: {{ .Values.sidecar.dashboards.provider.disableDelete }}
|
||||
allowUiUpdates: {{ .Values.sidecar.dashboards.provider.allowUiUpdates }}
|
||||
updateIntervalSeconds: {{ .Values.sidecar.dashboards.provider.updateIntervalSeconds | default 30 }}
|
||||
options:
|
||||
foldersFromFilesStructure: {{ .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
|
||||
path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}
|
||||
{{- end}}
|
||||
{{- end}}
|
|
@ -0,0 +1,99 @@
|
|||
{{- if .Values.enabled }}
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
data:
|
||||
# Adding default prometheus datasource for grafana
|
||||
datasources.yaml: |
|
||||
apiVersion: 1
|
||||
datasources:
|
||||
- access: proxy
|
||||
editable: false
|
||||
isDefault: true
|
||||
name: Prometheus
|
||||
type: prometheus
|
||||
url: http://{{ .Values.prometheusName | trimSuffix "/" }}-exp/{{ .Values.prometheusPrefixURL | trimPrefix "/"}}
|
||||
jsonData:
|
||||
timeInterval: '1m'
|
||||
{{- if .Values.plugins }}
|
||||
plugins: {{ join "," .Values.plugins }}
|
||||
{{- end }}
|
||||
grafana.ini: |
|
||||
{{- range $key, $value := index .Values "grafana.ini" }}
|
||||
[{{ $key }}]
|
||||
{{- range $elem, $elemVal := $value }}
|
||||
{{- if kindIs "invalid" $elemVal }}
|
||||
{{ $elem }} =
|
||||
{{- else if kindIs "string" $elemVal }}
|
||||
{{ $elem }} = {{ tpl $elemVal $ }}
|
||||
{{- else }}
|
||||
{{ $elem }} = {{ $elemVal }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
[server]
|
||||
root_url=/{{ include "k10.ingressPath" . | trimSuffix "/"}}/grafana
|
||||
serve_from_sub_path=true
|
||||
|
||||
{{- if .Values.datasources }}
|
||||
{{ $root := . }}
|
||||
{{- range $key, $value := .Values.datasources }}
|
||||
{{ $key }}: |
|
||||
{{ tpl (toYaml $value | indent 4) $root }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if .Values.notifiers }}
|
||||
{{- range $key, $value := .Values.notifiers }}
|
||||
{{ $key }}: |
|
||||
{{ toYaml $value | indent 4 }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if .Values.dashboardProviders }}
|
||||
{{- range $key, $value := .Values.dashboardProviders }}
|
||||
{{ $key }}: |
|
||||
{{ toYaml $value | indent 4 }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if .Values.dashboards }}
|
||||
download_dashboards.sh: |
|
||||
#!/usr/bin/env sh
|
||||
set -euf
|
||||
{{- if .Values.dashboardProviders }}
|
||||
{{- range $key, $value := .Values.dashboardProviders }}
|
||||
{{- range $value.providers }}
|
||||
mkdir -p {{ .options.path }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- range $provider, $dashboards := .Values.dashboards }}
|
||||
{{- range $key, $value := $dashboards }}
|
||||
{{- if (or (hasKey $value "gnetId") (hasKey $value "url")) }}
|
||||
curl -skf \
|
||||
--connect-timeout 60 \
|
||||
--max-time 60 \
|
||||
{{- if not $value.b64content }}
|
||||
-H "Accept: application/json" \
|
||||
{{- if $value.token }}
|
||||
-H "Authorization: token {{ $value.token }}" \
|
||||
{{- end }}
|
||||
-H "Content-Type: application/json;charset=UTF-8" \
|
||||
{{ end }}
|
||||
{{- if $value.url -}}"{{ $value.url }}"{{- else -}}"https://grafana.com/api/dashboards/{{ $value.gnetId }}/revisions/{{- if $value.revision -}}{{ $value.revision }}{{- else -}}1{{- end -}}/download"{{- end -}}{{ if $value.datasource }} | sed '/-- .* --/! s/"datasource":.*,/"datasource": "{{ $value.datasource }}",/g'{{ end }}{{- if $value.b64content -}} | base64 -d {{- end -}} \
|
||||
> "/var/lib/grafana/dashboards/{{ $provider }}/{{ $key }}.json"
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,37 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.dashboards }}
|
||||
{{ $files := .Files }}
|
||||
{{- range $provider, $dashboards := .Values.dashboards }}
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" $ }}-dashboards-{{ $provider }}
|
||||
namespace: {{ template "grafana.namespace" $ }}
|
||||
labels:
|
||||
{{- include "grafana.labels" $ | nindent 4 }}
|
||||
dashboard-provider: {{ $provider }}
|
||||
{{- if $dashboards }}
|
||||
data:
|
||||
{{- $dashboardFound := false }}
|
||||
{{- range $key, $value := $dashboards }}
|
||||
{{- if (or (hasKey $value "json") (hasKey $value "file")) }}
|
||||
{{- $dashboardFound = true }}
|
||||
{{ print $key | indent 2 }}.json:
|
||||
{{- if hasKey $value "json" }}
|
||||
|-
|
||||
{{ $value.json | indent 6 }}
|
||||
{{- end }}
|
||||
{{- if hasKey $value "file" }}
|
||||
{{ toYaml ( $files.Get $value.file ) | indent 4}}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if not $dashboardFound }}
|
||||
{}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
---
|
||||
{{- end }}
|
||||
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,52 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{ if (or (not .Values.global.persistence.enabled) (eq .Values.persistence.type "pvc")) }}
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- if .Values.labels }}
|
||||
{{ toYaml .Values.labels | indent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if not .Values.autoscaling.enabled }}
|
||||
replicas: {{ .Values.replicas }}
|
||||
{{- end }}
|
||||
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 6 }}
|
||||
{{- with .Values.deploymentStrategy }}
|
||||
strategy:
|
||||
{{ toYaml . | trim | indent 4 }}
|
||||
{{- end }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 8 }}
|
||||
{{- with .Values.podLabels }}
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
annotations:
|
||||
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
|
||||
checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
|
||||
checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }}
|
||||
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
|
||||
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
|
||||
{{- end }}
|
||||
{{- if .Values.envRenderSecret }}
|
||||
checksum/secret-env: {{ include (print $.Template.BasePath "/secret-env.yaml") . | sha256sum }}
|
||||
{{- end }}
|
||||
{{- with .Values.podAnnotations }}
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- include "grafana.pod" . | nindent 6 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,20 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and .Values.global.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type "statefulset")}}
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}-headless
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
clusterIP: None
|
||||
selector:
|
||||
{{- include "grafana.selectorLabels" . | nindent 4 }}
|
||||
type: ClusterIP
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,22 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.autoscaling.enabled }}
|
||||
apiVersion: autoscaling/v2beta1
|
||||
kind: HorizontalPodAutoscaler
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
labels:
|
||||
app: {{ template "grafana.name" . }}
|
||||
helm.sh/chart: {{ template "grafana.chart" . }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
spec:
|
||||
scaleTargetRef:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
minReplicas: {{ .Values.autoscaling.minReplicas }}
|
||||
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
|
||||
metrics:
|
||||
{{ toYaml .Values.autoscaling.metrics | indent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,117 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{ if .Values.imageRenderer.enabled }}
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}-image-renderer
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.imageRenderer.labels" . | nindent 4 }}
|
||||
{{- if .Values.imageRenderer.labels }}
|
||||
{{ toYaml .Values.imageRenderer.labels | indent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.imageRenderer.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
replicas: {{ .Values.imageRenderer.replicas }}
|
||||
revisionHistoryLimit: {{ .Values.imageRenderer.revisionHistoryLimit }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
|
||||
{{- with .Values.imageRenderer.deploymentStrategy }}
|
||||
strategy:
|
||||
{{ toYaml . | trim | indent 4 }}
|
||||
{{- end }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 8 }}
|
||||
{{- with .Values.imageRenderer.podLabels }}
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
annotations:
|
||||
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
|
||||
{{- with .Values.imageRenderer.podAnnotations }}
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
|
||||
{{- if .Values.imageRenderer.schedulerName }}
|
||||
schedulerName: "{{ .Values.imageRenderer.schedulerName }}"
|
||||
{{- end }}
|
||||
{{- if .Values.imageRenderer.serviceAccountName }}
|
||||
serviceAccountName: "{{ .Values.imageRenderer.serviceAccountName }}"
|
||||
{{- end }}
|
||||
{{- if .Values.imageRenderer.securityContext }}
|
||||
securityContext:
|
||||
{{- toYaml .Values.imageRenderer.securityContext | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.imageRenderer.hostAliases }}
|
||||
hostAliases:
|
||||
{{- toYaml .Values.imageRenderer.hostAliases | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.imageRenderer.priorityClassName }}
|
||||
priorityClassName: {{ .Values.imageRenderer.priorityClassName }}
|
||||
{{- end }}
|
||||
{{- if .Values.imageRenderer.image.pullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- range .Values.imageRenderer.image.pullSecrets }}
|
||||
- name: {{ . }}
|
||||
{{- end}}
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: {{ .Chart.Name }}-image-renderer
|
||||
{{- if .Values.imageRenderer.image.sha }}
|
||||
image: "{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}@sha256:{{ .Values.imageRenderer.image.sha }}"
|
||||
{{- else }}
|
||||
image: "{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}"
|
||||
{{- end }}
|
||||
imagePullPolicy: {{ .Values.imageRenderer.image.pullPolicy }}
|
||||
{{- if .Values.imageRenderer.command }}
|
||||
command:
|
||||
{{- range .Values.imageRenderer.command }}
|
||||
- {{ . }}
|
||||
{{- end }}
|
||||
{{- end}}
|
||||
ports:
|
||||
- name: {{ .Values.imageRenderer.service.portName }}
|
||||
containerPort: {{ .Values.imageRenderer.service.port }}
|
||||
protocol: TCP
|
||||
env:
|
||||
- name: HTTP_PORT
|
||||
value: {{ .Values.imageRenderer.service.port | quote }}
|
||||
{{- range $key, $value := .Values.imageRenderer.env }}
|
||||
- name: {{ $key | quote }}
|
||||
value: {{ $value | quote }}
|
||||
{{- end }}
|
||||
securityContext:
|
||||
capabilities:
|
||||
drop: ['all']
|
||||
allowPrivilegeEscalation: false
|
||||
readOnlyRootFilesystem: true
|
||||
volumeMounts:
|
||||
- mountPath: /tmp
|
||||
name: image-renderer-tmpfs
|
||||
{{- with .Values.imageRenderer.resources }}
|
||||
resources:
|
||||
{{ toYaml . | indent 12 }}
|
||||
{{- end }}
|
||||
{{- with .Values.imageRenderer.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.imageRenderer.affinity }}
|
||||
affinity:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.imageRenderer.tolerations }}
|
||||
tolerations:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: image-renderer-tmpfs
|
||||
emptyDir: {}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,78 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and (.Values.imageRenderer.enabled) (.Values.imageRenderer.networkPolicy.limitIngress) }}
|
||||
---
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}-image-renderer-ingress
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
annotations:
|
||||
comment: Limit image-renderer ingress traffic from grafana
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
|
||||
{{- if .Values.imageRenderer.podLabels }}
|
||||
{{ toYaml .Values.imageRenderer.podLabels | nindent 6 }}
|
||||
{{- end }}
|
||||
|
||||
policyTypes:
|
||||
- Ingress
|
||||
ingress:
|
||||
- ports:
|
||||
- port: {{ .Values.imageRenderer.service.port }}
|
||||
protocol: TCP
|
||||
from:
|
||||
- namespaceSelector:
|
||||
matchLabels:
|
||||
name: {{ template "grafana.namespace" . }}
|
||||
podSelector:
|
||||
matchLabels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 14 }}
|
||||
{{- if .Values.podLabels }}
|
||||
{{ toYaml .Values.podLabels | nindent 14 }}
|
||||
{{- end }}
|
||||
{{ end }}
|
||||
|
||||
{{- if and (.Values.imageRenderer.enabled) (.Values.imageRenderer.networkPolicy.limitEgress) }}
|
||||
---
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}-image-renderer-egress
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
annotations:
|
||||
comment: Limit image-renderer egress traffic to grafana
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
|
||||
{{- if .Values.imageRenderer.podLabels }}
|
||||
{{ toYaml .Values.imageRenderer.podLabels | nindent 6 }}
|
||||
{{- end }}
|
||||
|
||||
policyTypes:
|
||||
- Egress
|
||||
egress:
|
||||
# allow dns resolution
|
||||
- ports:
|
||||
- port: 53
|
||||
protocol: UDP
|
||||
- port: 53
|
||||
protocol: TCP
|
||||
# talk only to grafana
|
||||
- ports:
|
||||
- port: {{ .Values.service.port }}
|
||||
protocol: TCP
|
||||
to:
|
||||
- namespaceSelector:
|
||||
matchLabels:
|
||||
name: {{ template "grafana.namespace" . }}
|
||||
podSelector:
|
||||
matchLabels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 14 }}
|
||||
{{- if .Values.podLabels }}
|
||||
{{ toYaml .Values.podLabels | nindent 14 }}
|
||||
{{- end }}
|
||||
{{ end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,32 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{ if .Values.imageRenderer.enabled }}
|
||||
{{ if .Values.imageRenderer.service.enabled }}
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}-image-renderer
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.imageRenderer.labels" . | nindent 4 }}
|
||||
{{- if .Values.imageRenderer.service.labels }}
|
||||
{{ toYaml .Values.imageRenderer.service.labels | indent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.imageRenderer.service.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
type: ClusterIP
|
||||
{{- if .Values.imageRenderer.service.clusterIP }}
|
||||
clusterIP: {{ .Values.imageRenderer.service.clusterIP }}
|
||||
{{end}}
|
||||
ports:
|
||||
- name: {{ .Values.imageRenderer.service.portName }}
|
||||
port: {{ .Values.imageRenderer.service.port }}
|
||||
protocol: TCP
|
||||
targetPort: {{ .Values.imageRenderer.service.targetPort }}
|
||||
selector:
|
||||
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 4 }}
|
||||
{{ end }}
|
||||
{{ end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,80 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.ingress.enabled -}}
|
||||
{{- $ingressApiIsStable := eq (include "grafana.ingress.isStable" .) "true" -}}
|
||||
{{- $ingressSupportsIngressClassName := eq (include "grafana.ingress.supportsIngressClassName" .) "true" -}}
|
||||
{{- $ingressSupportsPathType := eq (include "grafana.ingress.supportsPathType" .) "true" -}}
|
||||
{{- $fullName := include "grafana.fullname" . -}}
|
||||
{{- $servicePort := .Values.service.port -}}
|
||||
{{- $ingressPath := .Values.ingress.path -}}
|
||||
{{- $ingressPathType := .Values.ingress.pathType -}}
|
||||
{{- $extraPaths := .Values.ingress.extraPaths -}}
|
||||
apiVersion: {{ include "grafana.ingress.apiVersion" . }}
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: {{ $fullName }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- if .Values.ingress.labels }}
|
||||
{{ toYaml .Values.ingress.labels | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.ingress.annotations }}
|
||||
annotations:
|
||||
{{- range $key, $value := .Values.ingress.annotations }}
|
||||
{{ $key }}: {{ tpl $value $ | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if and $ingressSupportsIngressClassName .Values.ingress.ingressClassName }}
|
||||
ingressClassName: {{ .Values.ingress.ingressClassName }}
|
||||
{{- end -}}
|
||||
{{- if .Values.ingress.tls }}
|
||||
tls:
|
||||
{{ tpl (toYaml .Values.ingress.tls) $ | indent 4 }}
|
||||
{{- end }}
|
||||
rules:
|
||||
{{- if .Values.ingress.hosts }}
|
||||
{{- range .Values.ingress.hosts }}
|
||||
- host: {{ tpl . $}}
|
||||
http:
|
||||
paths:
|
||||
{{- if $extraPaths }}
|
||||
{{ toYaml $extraPaths | indent 10 }}
|
||||
{{- end }}
|
||||
- path: {{ $ingressPath }}
|
||||
{{- if $ingressSupportsPathType }}
|
||||
pathType: {{ $ingressPathType }}
|
||||
{{- end }}
|
||||
backend:
|
||||
{{- if $ingressApiIsStable }}
|
||||
service:
|
||||
name: {{ $fullName }}
|
||||
port:
|
||||
number: {{ $servicePort }}
|
||||
{{- else }}
|
||||
serviceName: {{ $fullName }}
|
||||
servicePort: {{ $servicePort }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
- http:
|
||||
paths:
|
||||
- backend:
|
||||
{{- if $ingressApiIsStable }}
|
||||
service:
|
||||
name: {{ $fullName }}
|
||||
port:
|
||||
number: {{ $servicePort }}
|
||||
{{- else }}
|
||||
serviceName: {{ $fullName }}
|
||||
servicePort: {{ $servicePort }}
|
||||
{{- end }}
|
||||
{{- if $ingressPath }}
|
||||
path: {{ $ingressPath }}
|
||||
{{- end }}
|
||||
{{- if $ingressSupportsPathType }}
|
||||
pathType: {{ $ingressPathType }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,18 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{ if .Values.service.enabled}}
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
name: {{ template "grafana.name" . }}-network-policy
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
release: {{ .Release.Name }}
|
||||
app: {{ template "grafana.name" . }}
|
||||
ingress:
|
||||
- { }
|
||||
egress:
|
||||
- { }
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,24 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.podDisruptionBudget }}
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodDisruptionBudget
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- if .Values.labels }}
|
||||
{{ toYaml .Values.labels | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.podDisruptionBudget.minAvailable }}
|
||||
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
|
||||
{{- end }}
|
||||
{{- if .Values.podDisruptionBudget.maxUnavailable }}
|
||||
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
|
||||
{{- end }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 6 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,51 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.rbac.pspEnabled }}
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodSecurityPolicy
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
annotations:
|
||||
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
|
||||
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
|
||||
{{- if .Values.rbac.pspUseAppArmor }}
|
||||
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
|
||||
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
|
||||
{{- end }}
|
||||
spec:
|
||||
privileged: false
|
||||
allowPrivilegeEscalation: false
|
||||
requiredDropCapabilities:
|
||||
# Default set from Docker, with DAC_OVERRIDE and CHOWN
|
||||
- ALL
|
||||
volumes:
|
||||
- 'configMap'
|
||||
- 'emptyDir'
|
||||
- 'projected'
|
||||
- 'csi'
|
||||
- 'secret'
|
||||
- 'downwardAPI'
|
||||
- 'persistentVolumeClaim'
|
||||
hostNetwork: false
|
||||
hostIPC: false
|
||||
hostPID: false
|
||||
runAsUser:
|
||||
rule: 'MustRunAsNonRoot'
|
||||
seLinux:
|
||||
rule: 'RunAsAny'
|
||||
supplementalGroups:
|
||||
rule: 'MustRunAs'
|
||||
ranges:
|
||||
# Forbid adding the root group.
|
||||
- min: 1
|
||||
max: 65535
|
||||
fsGroup:
|
||||
rule: 'MustRunAs'
|
||||
ranges:
|
||||
# Forbid adding the root group.
|
||||
- min: 1
|
||||
max: 65535
|
||||
readOnlyRootFilesystem: false
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,33 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and .Values.global.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type "pvc")}}
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.persistence.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.persistence.finalizers }}
|
||||
finalizers:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.global.persistence.accessMode }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ default .Values.global.persistence.size .Values.global.persistence.grafana.size | quote }}
|
||||
{{- if .Values.global.persistence.storageClass }}
|
||||
storageClassName: {{ .Values.global.persistence.storageClass }}
|
||||
{{- end -}}
|
||||
{{- with .Values.persistence.selectorLabels }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{ toYaml . | indent 6 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,34 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and .Values.rbac.create (not .Values.rbac.useExistingRole) -}}
|
||||
apiVersion: {{ template "grafana.rbac.apiVersion" . }}
|
||||
kind: Role
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if or .Values.rbac.pspEnabled (and .Values.rbac.namespaced (or .Values.sidecar.dashboards.enabled (or .Values.sidecar.datasources.enabled .Values.rbac.extraRoleRules))) }}
|
||||
rules:
|
||||
{{- if .Values.rbac.pspEnabled }}
|
||||
- apiGroups: ['extensions']
|
||||
resources: ['podsecuritypolicies']
|
||||
verbs: ['use']
|
||||
resourceNames: [{{ template "grafana.fullname" . }}]
|
||||
{{- end }}
|
||||
{{- if and .Values.rbac.namespaced (or .Values.sidecar.dashboards.enabled .Values.sidecar.datasources.enabled) }}
|
||||
- apiGroups: [""] # "" indicates the core API group
|
||||
resources: ["configmaps", "secrets"]
|
||||
verbs: ["get", "watch", "list"]
|
||||
{{- end }}
|
||||
{{- with .Values.rbac.extraRoleRules }}
|
||||
{{ toYaml . | indent 0 }}
|
||||
{{- end}}
|
||||
{{- else }}
|
||||
rules: []
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,27 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.rbac.create -}}
|
||||
apiVersion: {{ template "grafana.rbac.apiVersion" . }}
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
{{- if (not .Values.rbac.useExistingRole) }}
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
{{- else }}
|
||||
name: {{ .Values.rbac.useExistingRole }}
|
||||
{{- end }}
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: {{ template "grafana.serviceAccountName" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
{{- end -}}
|
||||
{{- end}}
|
|
@ -0,0 +1,16 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.envRenderSecret }}
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}-env
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
type: Opaque
|
||||
data:
|
||||
{{- range $key, $val := .Values.envRenderSecret }}
|
||||
{{ $key }}: {{ $val | b64enc | quote }}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,28 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
type: Opaque
|
||||
data:
|
||||
{{- if and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) }}
|
||||
admin-user: {{ .Values.adminUser | b64enc | quote }}
|
||||
{{- if .Values.adminPassword }}
|
||||
admin-password: {{ .Values.adminPassword | b64enc | quote }}
|
||||
{{- else }}
|
||||
admin-password: {{ template "grafana.password" . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if not .Values.ldap.existingSecret }}
|
||||
ldap-toml: {{ tpl .Values.ldap.config $ | b64enc | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,59 @@
|
|||
{{- if .Values.enabled }}
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- if .Values.service.labels }}
|
||||
{{ toYaml .Values.service.labels | indent 4 }}
|
||||
{{- end }}
|
||||
annotations:
|
||||
getambassador.io/config: |
|
||||
---
|
||||
apiVersion: getambassador.io/v3alpha1
|
||||
kind: Mapping
|
||||
name: grafana-server-mapping
|
||||
prefix: /{{- include "k10.ingressPath" . | trimSuffix "/" }}/grafana/
|
||||
rewrite: /
|
||||
service: {{ template "grafana.fullname" .}}:{{ .Values.service.port }}
|
||||
timeout_ms: 15000
|
||||
hostname: "*"
|
||||
|
||||
spec:
|
||||
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
|
||||
type: ClusterIP
|
||||
{{- if .Values.service.clusterIP }}
|
||||
clusterIP: {{ .Values.service.clusterIP }}
|
||||
{{end}}
|
||||
{{- else if eq .Values.service.type "LoadBalancer" }}
|
||||
type: {{ .Values.service.type }}
|
||||
{{- if .Values.service.loadBalancerIP }}
|
||||
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.loadBalancerSourceRanges }}
|
||||
loadBalancerSourceRanges:
|
||||
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
|
||||
{{- end -}}
|
||||
{{- else }}
|
||||
type: {{ .Values.service.type }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.externalIPs }}
|
||||
externalIPs:
|
||||
{{ toYaml .Values.service.externalIPs | indent 4 }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- name: {{ .Values.service.portName }}
|
||||
port: {{ .Values.service.port }}
|
||||
protocol: TCP
|
||||
targetPort: {{ .Values.service.targetPort }}
|
||||
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
|
||||
nodePort: {{.Values.service.nodePort}}
|
||||
{{ end }}
|
||||
{{- if .Values.extraExposePorts }}
|
||||
{{- tpl (toYaml .Values.extraExposePorts) . | indent 4 }}
|
||||
{{- end }}
|
||||
selector:
|
||||
{{- include "grafana.selectorLabels" . | nindent 4 }}
|
||||
{{- end }}
|
|
@ -0,0 +1,15 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.serviceAccount.create }}
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.serviceAccount.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
name: {{ template "grafana.serviceAccountName" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
{{- end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,42 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if .Values.serviceMonitor.enabled }}
|
||||
---
|
||||
apiVersion: monitoring.coreos.com/v1
|
||||
kind: ServiceMonitor
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
{{- if .Values.serviceMonitor.namespace }}
|
||||
namespace: {{ .Values.serviceMonitor.namespace }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- if .Values.serviceMonitor.labels }}
|
||||
{{- toYaml .Values.serviceMonitor.labels | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
endpoints:
|
||||
- interval: {{ .Values.serviceMonitor.interval }}
|
||||
{{- if .Values.serviceMonitor.scrapeTimeout }}
|
||||
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
|
||||
{{- end }}
|
||||
honorLabels: true
|
||||
port: {{ .Values.service.portName }}
|
||||
path: {{ .Values.serviceMonitor.path }}
|
||||
scheme: {{ .Values.serviceMonitor.scheme }}
|
||||
{{- if .Values.serviceMonitor.tlsConfig }}
|
||||
tlsConfig:
|
||||
{{- toYaml .Values.serviceMonitor.tlsConfig | nindent 6 }}
|
||||
{{- end }}
|
||||
{{- if .Values.serviceMonitor.relabelings }}
|
||||
relabelings:
|
||||
{{- toYaml .Values.serviceMonitor.relabelings | nindent 4 }}
|
||||
{{- end }}
|
||||
jobLabel: "{{ .Release.Name }}"
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 8 }}
|
||||
namespaceSelector:
|
||||
matchNames:
|
||||
- {{ .Release.Namespace }}
|
||||
{{- end }}
|
||||
{{- end}}
|
|
@ -0,0 +1,55 @@
|
|||
{{- if .Values.enabled }}
|
||||
{{- if and .Values.global.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type "statefulset")}}
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: {{ template "grafana.fullname" . }}
|
||||
namespace: {{ template "grafana.namespace" . }}
|
||||
labels:
|
||||
{{- include "grafana.labels" . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
replicas: {{ .Values.replicas }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 6 }}
|
||||
serviceName: {{ template "grafana.fullname" . }}-headless
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "grafana.selectorLabels" . | nindent 8 }}
|
||||
{{- with .Values.podLabels }}
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
annotations:
|
||||
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
|
||||
checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
|
||||
checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }}
|
||||
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
|
||||
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
|
||||
{{- end }}
|
||||
{{- with .Values.podAnnotations }}
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- include "grafana.pod" . | nindent 6 }}
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: storage
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.global.persistence.accessMode }}
|
||||
storageClassName: {{ .Values.global.persistence.storageClass }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.global.persistence.size }}
|
||||
{{- with .Values.persistence.selectorLabels }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{ toYaml . | indent 10 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end}}
|
File diff suppressed because it is too large
@ -0,0 +1,30 @@
apiVersion: v2
appVersion: 2.26.0
dependencies:
- condition: kubeStateMetrics.enabled
  name: kube-state-metrics
  repository: https://prometheus-community.github.io/helm-charts
  version: 3.4.*
description: Prometheus is a monitoring system and time series database.
home: https://prometheus.io/
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
maintainers:
- email: gianrubio@gmail.com
  name: gianrubio
- email: zanhsieh@gmail.com
  name: zanhsieh
- email: miroslav.hadzhiev@gmail.com
  name: Xtigyro
- email: monotek23@gmail.com
  name: monotek
- email: naseem@transit.app
  name: naseemkullah
name: prometheus
sources:
- https://github.com/prometheus/alertmanager
- https://github.com/prometheus/prometheus
- https://github.com/prometheus/pushgateway
- https://github.com/prometheus/node_exporter
- https://github.com/kubernetes/kube-state-metrics
type: application
version: 14.6.0
@ -0,0 +1,224 @@
# Prometheus

[Prometheus](https://prometheus.io/), a [Cloud Native Computing Foundation](https://cncf.io/) project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

This chart bootstraps a [Prometheus](https://prometheus.io/) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

## Prerequisites

- Kubernetes 1.16+
- Helm 3+

## Get Repo Info

```console
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics
helm repo update
```

_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._

## Install Chart

```console
# Helm
$ helm install [RELEASE_NAME] prometheus-community/prometheus
```

_See [configuration](#configuration) below._

_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._

## Dependencies

By default this chart installs additional, dependent charts:

- [stable/kube-state-metrics](https://github.com/helm/charts/tree/master/stable/kube-state-metrics)

To disable the dependency during installation, set `kubeStateMetrics.enabled` to `false`.
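
For example, a minimal values override for this is sketched below (the key mirrors the `kubeStateMetrics.enabled` condition declared in this chart's `Chart.yaml`):

```yaml
# values override: skip installing the bundled kube-state-metrics sub-chart
kubeStateMetrics:
  enabled: false
```

The same toggle can be passed on the command line, e.g. `helm install [RELEASE_NAME] prometheus-community/prometheus --set kubeStateMetrics.enabled=false`.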

_See [helm dependency](https://helm.sh/docs/helm/helm_dependency/) for command documentation._

## Uninstall Chart

```console
# Helm
$ helm uninstall [RELEASE_NAME]
```

This removes all the Kubernetes components associated with the chart and deletes the release.

_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._

## Upgrading Chart

```console
# Helm
$ helm upgrade [RELEASE_NAME] [CHART] --install
```

_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._

### To 9.0

Version 9.0 adds a new option to enable or disable the Prometheus Server. This supports the use case of running a Prometheus server in one k8s cluster and scraping exporters in another cluster while using the same chart for each deployment. To install the server, `server.enabled` must be set to `true`.
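
As a sketch, the cluster that should only run exporters (assuming the `server.enabled` value referenced above) would set:

```yaml
# values override for a scrape-target-only cluster: no Prometheus server here
server:
  enabled: false
```

The cluster that hosts the server keeps `server.enabled: true` and points its scrape configuration at the exporters in the other cluster.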
### To 5.0
|
||||
|
||||
As of version 5.0, this chart uses Prometheus 2.x. This version of prometheus introduces a new data format and is not compatible with prometheus 1.x. It is recommended to install this as a new release, as updating existing releases will not work. See the [prometheus docs](https://prometheus.io/docs/prometheus/latest/migration/#storage) for instructions on retaining your old data.
|
||||
|
||||
Prometheus version 2.x has made changes to alertmanager, storage and recording rules. Check out the migration guide [here](https://prometheus.io/docs/prometheus/2.0/migration/).
|
||||
|
||||
Users of this chart will need to update their alerting rules to the new format before they can upgrade.
|
||||
|
||||
### Example Migration
|
||||
|
||||
Assuming you have an existing release of the prometheus chart, named `prometheus-old`. In order to update to prometheus 2.x while keeping your old data do the following:
|
||||
|
||||
1. Update the `prometheus-old` release. Disable scraping on every component besides the prometheus server, similar to the configuration below:
|
||||
|
||||
```yaml
|
||||
alertmanager:
|
||||
enabled: false
|
||||
alertmanagerFiles:
|
||||
alertmanager.yml: ""
|
||||
kubeStateMetrics:
|
||||
enabled: false
|
||||
nodeExporter:
|
||||
enabled: false
|
||||
pushgateway:
|
||||
enabled: false
|
||||
server:
|
||||
extraArgs:
|
||||
storage.local.retention: 720h
|
||||
serverFiles:
|
||||
alerts: ""
|
||||
prometheus.yml: ""
|
||||
rules: ""
|
||||
```
|
||||
|
||||
1. Deploy a new release of the chart with version 5.0+ using prometheus 2.x. In the values.yaml set the scrape config as usual, and also add the `prometheus-old` instance as a remote-read target.
|
||||
|
||||
```yaml
|
||||
prometheus.yml:
|
||||
...
|
||||
remote_read:
|
||||
- url: http://prometheus-old/api/v1/read
|
||||
...
|
||||
```
|
||||
|
||||
Old data will be available when you query the new prometheus instance.
|
||||
|
||||
## Configuration
|
||||
|
||||
See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run these configuration commands:
|
||||
|
||||
```console
|
||||
# Helm 2
|
||||
$ helm inspect values prometheus-community/prometheus
|
||||
|
||||
# Helm 3
|
||||
$ helm show values prometheus-community/prometheus
|
||||
```
|
||||
|
||||
You may similarly use the above configuration commands on each chart [dependency](#dependencies) to see it's configurations.
|
||||
|
||||
### Scraping Pod Metrics via Annotations
|
||||
|
||||
This chart uses a default configuration that causes prometheus to scrape a variety of kubernetes resource types, provided they have the correct annotations. In this section we describe how to configure pods to be scraped; for information on how other resource types can be scraped you can do a `helm template` to get the kubernetes resource definitions, and then reference the prometheus configuration in the ConfigMap against the prometheus documentation for [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) and [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config).
|
||||
|
||||
In order to get prometheus to scrape pods, you must add annotations to the the pods as below:
|
||||
|
||||
```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
```
|
||||
|
||||
You should adjust `prometheus.io/path` based on the URL that your pod serves metrics from. `prometheus.io/port` should be set to the port that your pod serves metrics from. Note that the values for `prometheus.io/scrape` and `prometheus.io/port` must be enclosed in double quotes.
|
||||
|
||||
### Sharing Alerts Between Services
|
||||
|
||||
Note that when [installing](#install-chart) or [upgrading](#upgrading-chart) you may use multiple values override files. This is particularly useful when you have alerts belonging to multiple services in the cluster. For example,
|
||||
|
||||
```yaml
# values.yaml
# ...

# service1-alert.yaml
serverFiles:
  alerts:
    service1:
      - alert: anAlert
      # ...

# service2-alert.yaml
serverFiles:
  alerts:
    service2:
      - alert: anAlert
      # ...
```
|
||||
|
||||
```console
helm install [RELEASE_NAME] prometheus-community/prometheus -f values.yaml -f service1-alert.yaml -f service2-alert.yaml
```
|
||||
|
||||
### RBAC Configuration
|
||||
|
||||
Role and RoleBinding resources will be created automatically for the `server` service.
|
||||
|
||||
To manually set up RBAC, set the parameter `rbac.create=false` and specify the service account to be used for each service by setting `serviceAccounts.{{ component }}.create` to `false` and `serviceAccounts.{{ component }}.name` to the name of a pre-existing service account.
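
A values override for that manual setup might look like the following sketch; the service account names are placeholders for accounts you have created yourself:

```yaml
rbac:
  create: false
serviceAccounts:
  server:
    create: false
    name: my-prometheus-server        # pre-existing ServiceAccount (placeholder)
  alertmanager:
    create: false
    name: my-prometheus-alertmanager  # pre-existing ServiceAccount (placeholder)
```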
|
||||
|
||||
> **Tip**: You can refer to the default `*-clusterrole.yaml` and `*-clusterrolebinding.yaml` files in [templates](templates/) to customize your own.
|
||||
|
||||
### ConfigMap Files
|
||||
|
||||
AlertManager is configured through [alertmanager.yml](https://prometheus.io/docs/alerting/configuration/). This file (and any others listed in `alertmanagerFiles`) will be mounted into the `alertmanager` pod.
|
||||
|
||||
Prometheus is configured through [prometheus.yml](https://prometheus.io/docs/operating/configuration/). This file (and any others listed in `serverFiles`) will be mounted into the `server` pod.
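
Both maps are plain key/value entries in the chart's values, so additional files can be added alongside the defaults. The snippet below is a minimal sketch; the receiver name and scrape job are illustrative only:

```yaml
alertmanagerFiles:
  alertmanager.yml:
    route:
      receiver: default-receiver      # illustrative receiver name
    receivers:
      - name: default-receiver
serverFiles:
  prometheus.yml:
    scrape_configs:
      - job_name: prometheus
        static_configs:
          - targets:
              - localhost:9090
```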
|
||||
|
||||
### Ingress TLS
|
||||
|
||||
If your cluster allows automatic creation/retrieval of TLS certificates (e.g. [cert-manager](https://github.com/jetstack/cert-manager)), please refer to the documentation for that mechanism.
|
||||
|
||||
To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:
|
||||
|
||||
```console
kubectl create secret tls prometheus-server-tls --cert=path/to/tls.cert --key=path/to/tls.key
```
|
||||
|
||||
Include the secret's name, along with the desired hostnames, in the alertmanager/server Ingress TLS section of your custom `values.yaml` file:
|
||||
|
||||
```yaml
server:
  ingress:
    ## If true, Prometheus server Ingress will be created
    ##
    enabled: true

    ## Prometheus server Ingress hostnames
    ## Must be provided if Ingress is enabled
    ##
    hosts:
      - prometheus.domain.com

    ## Prometheus server Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
      - secretName: prometheus-server-tls
        hosts:
          - prometheus.domain.com
```
|
||||
|
||||
### NetworkPolicy
|
||||
|
||||
Enabling Network Policy for Prometheus will secure connections to Alert Manager and Kube State Metrics by only accepting connections from Prometheus Server. All inbound connections to Prometheus Server are still allowed.
|
||||
|
||||
To enable network policy for Prometheus, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set `networkPolicy.enabled` to true.
|
||||
|
||||
If NetworkPolicy is enabled for Prometheus' scrape targets, you may also need to manually create a NetworkPolicy that allows them to be scraped.
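
As a sketch, a policy such as the following in a scrape target's namespace would admit traffic from the Prometheus server pods; the labels, port, and name are assumptions and should be matched to your release and exporters:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape   # illustrative name
spec:
  podSelector: {}                 # every pod in this namespace
  ingress:
    - from:
        - namespaceSelector: {}   # any namespace; narrow this down as needed
          podSelector:
            matchLabels:
              app: prometheus
              component: server
      ports:
        - port: 8080              # the port your pods expose metrics on
```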
|
|
@ -0,0 +1,112 @@
|
|||
{{- if .Values.server.enabled -}}
|
||||
The Prometheus server can be accessed via port {{ .Values.server.service.servicePort }} on the following DNS name from within your cluster:
|
||||
{{ template "prometheus.server.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
|
||||
|
||||
{{ if .Values.server.ingress.enabled -}}
|
||||
From outside the cluster, the server URL(s) are:
|
||||
{{- range .Values.server.ingress.hosts }}
|
||||
http://{{ . }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
Get the Prometheus server URL by running these commands in the same shell:
|
||||
{{- if contains "NodePort" .Values.server.service.type }}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus.server.fullname" . }})
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
{{- else if contains "LoadBalancer" .Values.server.service.type }}
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
You can watch the status of it by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "prometheus.server.fullname" . }}'
|
||||
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "prometheus.server.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
echo http://$SERVICE_IP:{{ .Values.server.service.servicePort }}
|
||||
{{- else if contains "ClusterIP" .Values.server.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "prometheus.name" . }},component={{ .Values.server.name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9090
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.server.persistentVolume.enabled }}
|
||||
{{- else }}
|
||||
#################################################################################
|
||||
###### WARNING: Persistence is disabled!!! You will lose your data when #####
|
||||
###### the Server pod is terminated. #####
|
||||
#################################################################################
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{ if .Values.alertmanager.enabled }}
|
||||
The Prometheus alertmanager can be accessed via port {{ .Values.alertmanager.service.servicePort }} on the following DNS name from within your cluster:
|
||||
{{ template "prometheus.alertmanager.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
|
||||
|
||||
{{ if .Values.alertmanager.ingress.enabled -}}
|
||||
From outside the cluster, the alertmanager URL(s) are:
|
||||
{{- range .Values.alertmanager.ingress.hosts }}
|
||||
http://{{ . }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
Get the Alertmanager URL by running these commands in the same shell:
|
||||
{{- if contains "NodePort" .Values.alertmanager.service.type }}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus.alertmanager.fullname" . }})
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
{{- else if contains "LoadBalancer" .Values.alertmanager.service.type }}
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
You can watch the status of it by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "prometheus.alertmanager.fullname" . }}'
|
||||
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "prometheus.alertmanager.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
echo http://$SERVICE_IP:{{ .Values.alertmanager.service.servicePort }}
|
||||
{{- else if contains "ClusterIP" .Values.alertmanager.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "prometheus.name" . }},component={{ .Values.alertmanager.name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9093
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.alertmanager.persistentVolume.enabled }}
|
||||
{{- else }}
|
||||
#################################################################################
|
||||
###### WARNING: Persistence is disabled!!! You will lose your data when #####
|
||||
###### the AlertManager pod is terminated. #####
|
||||
#################################################################################
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.nodeExporter.podSecurityPolicy.enabled }}
|
||||
{{- else }}
|
||||
#################################################################################
|
||||
###### WARNING: Pod Security Policy has been moved to a global property. #####
|
||||
###### use .Values.podSecurityPolicy.enabled with pod-based #####
|
||||
###### annotations #####
|
||||
###### (e.g. .Values.nodeExporter.podSecurityPolicy.annotations) #####
|
||||
#################################################################################
|
||||
{{- end }}
|
||||
|
||||
{{ if .Values.pushgateway.enabled }}
|
||||
The Prometheus PushGateway can be accessed via port {{ .Values.pushgateway.service.servicePort }} on the following DNS name from within your cluster:
|
||||
{{ template "prometheus.pushgateway.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
|
||||
|
||||
{{ if .Values.pushgateway.ingress.enabled -}}
|
||||
From outside the cluster, the pushgateway URL(s) are:
|
||||
{{- range .Values.pushgateway.ingress.hosts }}
|
||||
http://{{ . }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
Get the PushGateway URL by running these commands in the same shell:
|
||||
{{- if contains "NodePort" .Values.pushgateway.service.type }}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus.pushgateway.fullname" . }})
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
{{- else if contains "LoadBalancer" .Values.pushgateway.service.type }}
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
You can watch the status of it by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "prometheus.pushgateway.fullname" . }}'
|
||||
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "prometheus.pushgateway.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
echo http://$SERVICE_IP:{{ .Values.pushgateway.service.servicePort }}
|
||||
{{- else if contains "ClusterIP" .Values.pushgateway.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "prometheus.name" . }},component={{ .Values.pushgateway.name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9091
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
For more information on running Prometheus, visit:
|
||||
https://prometheus.io/
|
|
@ -0,0 +1,3 @@
|
|||
{{/* Autogenerated, do NOT modify */}}
|
||||
{{- define "k10.prometheusImageTag" -}}v2.26.0{{- end -}}
|
||||
{{- define "k10.prometheusConfigMapReloaderImageTag" -}}v0.5.0{{- end -}}
|
|
@ -0,0 +1,400 @@
|
|||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "prometheus.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "prometheus.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create unified labels for prometheus components
|
||||
*/}}
|
||||
{{- define "prometheus.common.matchLabels" -}}
|
||||
app: {{ template "prometheus.name" . }}
|
||||
release: {{ .Release.Name }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.common.metaLabels" -}}
|
||||
chart: {{ template "prometheus.chart" . }}
|
||||
heritage: {{ .Release.Service }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.alertmanager.labels" -}}
|
||||
{{ include "prometheus.alertmanager.matchLabels" . }}
|
||||
{{ include "prometheus.common.metaLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.alertmanager.matchLabels" -}}
|
||||
component: {{ .Values.alertmanager.name | quote }}
|
||||
{{ include "prometheus.common.matchLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.nodeExporter.labels" -}}
|
||||
{{ include "prometheus.nodeExporter.matchLabels" . }}
|
||||
{{ include "prometheus.common.metaLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.nodeExporter.matchLabels" -}}
|
||||
component: {{ .Values.nodeExporter.name | quote }}
|
||||
{{ include "prometheus.common.matchLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.pushgateway.labels" -}}
|
||||
{{ include "prometheus.pushgateway.matchLabels" . }}
|
||||
{{ include "prometheus.common.metaLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.pushgateway.matchLabels" -}}
|
||||
component: {{ .Values.pushgateway.name | quote }}
|
||||
{{ include "prometheus.common.matchLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.server.labels" -}}
|
||||
{{ include "prometheus.server.matchLabels" . }}
|
||||
{{ include "prometheus.common.metaLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "prometheus.server.matchLabels" -}}
|
||||
component: {{ .Values.server.name | quote }}
|
||||
{{ include "prometheus.common.matchLabels" . }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
*/}}
|
||||
{{- define "prometheus.fullname" -}}
|
||||
{{- if .Values.fullnameOverride -}}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Figure out the config based on
|
||||
the value of airgapped.repository
|
||||
*/}}
|
||||
{{- define "get.cmreloadimage" }}
|
||||
{{- if not .Values.global.rhMarketPlace }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "%s/configmap-reload:%s" .Values.global.airgapped.repository (include "get.cmReloadImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s:%s" (include "get.cmReloadImageRepo" .) (include "get.cmReloadImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
{{- printf "%s" (get .Values.global.images "configmap-reload") }}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Figure out the config based on
|
||||
the value of airgapped.repository
|
||||
*/}}
|
||||
{{- define "get.serverimage" }}
|
||||
{{- if not .Values.global.rhMarketPlace }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "%s/prometheus:%s" .Values.global.airgapped.repository (include "get.promImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s:%s" (include "get.promImageRepo" .) (include "get.promImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
{{- printf "%s" (get .Values.global.images "prometheus") }}
|
||||
{{- end -}}
|
||||
{{- end }}
|
||||
|
||||
|
||||
{{/*
|
||||
Figure out the configmap-reload image tag
|
||||
based on the value of global.upstreamCertifiedImages
|
||||
*/}}
|
||||
{{- define "get.cmReloadImageTag"}}
|
||||
{{- if .Values.global.upstreamCertifiedImages }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "k10-%s-rh-ubi" (include "k10.prometheusConfigMapReloaderImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s-rh-ubi" (include "k10.prometheusConfigMapReloaderImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "k10-%s" (include "k10.prometheusConfigMapReloaderImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s" (include "k10.prometheusConfigMapReloaderImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Figure out the prometheus image tag
|
||||
based on the value of global.upstreamCertifiedImages
|
||||
*/}}
|
||||
{{- define "get.promImageTag"}}
|
||||
{{- if .Values.global.upstreamCertifiedImages }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "k10-%s-rh-ubi" (include "k10.prometheusImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s-rh-ubi" (include "k10.prometheusImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
{{- if .Values.global.airgapped.repository }}
|
||||
{{- printf "k10-%s" (include "k10.prometheusImageTag" .) }}
|
||||
{{- else }}
|
||||
{{- printf "%s" (include "k10.prometheusImageTag" .) }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Figure out the configmap-reload image repo
|
||||
based on the value of global.upstreamCertifiedImages
|
||||
*/}}
|
||||
{{- define "get.cmReloadImageRepo" }}
|
||||
{{- if .Values.global.upstreamCertifiedImages }}
|
||||
{{- printf "%s/%s/configmap-reload" .Values.k10image.registry .Values.k10image.repository }}
|
||||
{{- else }}
|
||||
{{- print .Values.configmapReload.prometheus.image.repository }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Figure out the prom image repo
|
||||
based on the value of global.upstreamCertifiedImages
|
||||
*/}}
|
||||
{{- define "get.promImageRepo" }}
|
||||
{{- if .Values.global.upstreamCertifiedImages }}
|
||||
{{- printf "%s/%s/prometheus" .Values.k10image.registry .Values.k10image.repository }}
|
||||
{{- else }}
|
||||
{{- print .Values.server.image.repository }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create a fully qualified alertmanager name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
*/}}
|
||||
|
||||
{{- define "prometheus.alertmanager.fullname" -}}
|
||||
{{- if .Values.alertmanager.fullnameOverride -}}
|
||||
{{- .Values.alertmanager.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- printf "%s-%s" .Release.Name .Values.alertmanager.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s-%s" .Release.Name $name .Values.alertmanager.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a fully qualified node-exporter name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
*/}}
|
||||
{{- define "prometheus.nodeExporter.fullname" -}}
|
||||
{{- if .Values.nodeExporter.fullnameOverride -}}
|
||||
{{- .Values.nodeExporter.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- printf "%s-%s" .Release.Name .Values.nodeExporter.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s-%s" .Release.Name $name .Values.nodeExporter.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a fully qualified Prometheus server name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
*/}}
|
||||
{{- define "prometheus.server.fullname" -}}
|
||||
{{- if .Values.server.fullnameOverride -}}
|
||||
{{- .Values.server.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- printf "%s-%s" .Release.Name .Values.server.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s-%s" .Release.Name $name .Values.server.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a fully qualified Prometheus server clusterrole name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
*/}}
|
||||
{{- define "prometheus.server.clusterrolefullname" -}}
|
||||
{{- if .Values.server.clusterRoleNameOverride -}}
|
||||
{{- .Values.server.clusterRoleNameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- if .Values.server.fullnameOverride -}}
|
||||
{{- printf "%s-%s" .Release.Name .Values.server.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- printf "%s-%s" .Release.Name .Values.server.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s-%s" .Release.Name $name .Values.server.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a fully qualified pushgateway name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
*/}}
|
||||
{{- define "prometheus.pushgateway.fullname" -}}
|
||||
{{- if .Values.pushgateway.fullnameOverride -}}
|
||||
{{- .Values.pushgateway.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- printf "%s-%s" .Release.Name .Values.pushgateway.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s-%s" .Release.Name $name .Values.pushgateway.name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Get KubeVersion removing pre-release information.
|
||||
*/}}
|
||||
{{- define "prometheus.kubeVersion" -}}
|
||||
{{- default .Capabilities.KubeVersion.Version (regexFind "v[0-9]+\\.[0-9]+\\.[0-9]+" .Capabilities.KubeVersion.Version) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return the appropriate apiVersion for deployment.
|
||||
*/}}
|
||||
{{- define "prometheus.deployment.apiVersion" -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{/*
|
||||
Return the appropriate apiVersion for daemonset.
|
||||
*/}}
|
||||
{{- define "prometheus.daemonset.apiVersion" -}}
|
||||
{{- print "apps/v1" -}}
|
||||
{{- end -}}
|
||||
{{/*
|
||||
Return the appropriate apiVersion for networkpolicy.
|
||||
*/}}
|
||||
{{- define "prometheus.networkPolicy.apiVersion" -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
{{/*
|
||||
Return the appropriate apiVersion for podsecuritypolicy.
|
||||
*/}}
|
||||
{{- define "prometheus.podSecurityPolicy.apiVersion" -}}
|
||||
{{- print "policy/v1beta1" -}}
|
||||
{{- end -}}
|
||||
{{/*
|
||||
Return the appropriate apiVersion for rbac.
|
||||
*/}}
|
||||
{{- define "rbac.apiVersion" -}}
|
||||
{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }}
|
||||
{{- print "rbac.authorization.k8s.io/v1" -}}
|
||||
{{- else -}}
|
||||
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{/*
|
||||
Return the appropriate apiVersion for ingress.
|
||||
*/}}
|
||||
{{- define "ingress.apiVersion" -}}
|
||||
{{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19.x" (include "prometheus.kubeVersion" .)) -}}
|
||||
{{- print "networking.k8s.io/v1" -}}
|
||||
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
|
||||
{{- print "networking.k8s.io/v1beta1" -}}
|
||||
{{- else -}}
|
||||
{{- print "extensions/v1beta1" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return if ingress is stable.
|
||||
*/}}
|
||||
{{- define "ingress.isStable" -}}
|
||||
{{- eq (include "ingress.apiVersion" .) "networking.k8s.io/v1" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Return if ingress supports ingressClassName.
|
||||
*/}}
|
||||
{{- define "ingress.supportsIngressClassName" -}}
|
||||
{{- or (eq (include "ingress.isStable" .) "true") (and (eq (include "ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18.x" (include "prometheus.kubeVersion" .))) -}}
|
||||
{{- end -}}
|
||||
{{/*
|
||||
Return if ingress supports pathType.
|
||||
*/}}
|
||||
{{- define "ingress.supportsPathType" -}}
|
||||
{{- or (eq (include "ingress.isStable" .) "true") (and (eq (include "ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18.x" (include "prometheus.kubeVersion" .))) -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use for the alertmanager component
|
||||
*/}}
|
||||
{{- define "prometheus.serviceAccountName.alertmanager" -}}
|
||||
{{- if .Values.serviceAccounts.alertmanager.create -}}
|
||||
{{ default (include "prometheus.alertmanager.fullname" .) .Values.serviceAccounts.alertmanager.name }}
|
||||
{{- else -}}
|
||||
{{ default "default" .Values.serviceAccounts.alertmanager.name }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use for the nodeExporter component
|
||||
*/}}
|
||||
{{- define "prometheus.serviceAccountName.nodeExporter" -}}
|
||||
{{- if .Values.serviceAccounts.nodeExporter.create -}}
|
||||
{{ default (include "prometheus.nodeExporter.fullname" .) .Values.serviceAccounts.nodeExporter.name }}
|
||||
{{- else -}}
|
||||
{{ default "default" .Values.serviceAccounts.nodeExporter.name }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use for the pushgateway component
|
||||
*/}}
|
||||
{{- define "prometheus.serviceAccountName.pushgateway" -}}
|
||||
{{- if .Values.serviceAccounts.pushgateway.create -}}
|
||||
{{ default (include "prometheus.pushgateway.fullname" .) .Values.serviceAccounts.pushgateway.name }}
|
||||
{{- else -}}
|
||||
{{ default "default" .Values.serviceAccounts.pushgateway.name }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use for the server component
|
||||
*/}}
|
||||
{{- define "prometheus.serviceAccountName.server" -}}
|
||||
{{- if .Values.serviceAccounts.server.create -}}
|
||||
{{ default (include "prometheus.server.fullname" .) .Values.serviceAccounts.server.name }}
|
||||
{{- else -}}
|
||||
{{ default "default" .Values.serviceAccounts.server.name }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Define the prometheus.namespace template if set with forceNamespace or .Release.Namespace is set
|
||||
*/}}
|
||||
{{- define "prometheus.namespace" -}}
|
||||
{{- if .Values.forceNamespace -}}
|
||||
{{ printf "namespace: %s" .Values.forceNamespace }}
|
||||
{{- else -}}
|
||||
{{ printf "namespace: %s" .Release.Namespace }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
|
@ -0,0 +1,21 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.rbac.create .Values.alertmanager.useClusterRole (not .Values.alertmanager.useExistingRole) -}}
|
||||
apiVersion: {{ template "rbac.apiVersion" . }}
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
rules:
|
||||
{{- if .Values.podSecurityPolicy.enabled }}
|
||||
- apiGroups:
|
||||
- extensions
|
||||
resources:
|
||||
- podsecuritypolicies
|
||||
verbs:
|
||||
- use
|
||||
resourceNames:
|
||||
- {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{- else }}
|
||||
[]
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,20 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.rbac.create .Values.alertmanager.useClusterRole -}}
|
||||
apiVersion: {{ template "rbac.apiVersion" . }}
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: {{ template "prometheus.serviceAccountName.alertmanager" . }}
|
||||
{{ include "prometheus.namespace" . | indent 4 }}
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
{{- if (not .Values.alertmanager.useExistingRole) }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{- else }}
|
||||
name: {{ .Values.alertmanager.useExistingRole }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,19 @@
|
|||
{{- if and .Values.alertmanager.enabled (and (empty .Values.alertmanager.configMapOverrideName) (empty .Values.alertmanager.configFromSecret)) -}}
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
data:
|
||||
{{- $root := . -}}
|
||||
{{- range $key, $value := .Values.alertmanagerFiles }}
|
||||
{{- if $key | regexMatch ".*\\.ya?ml$" }}
|
||||
{{ $key }}: |
|
||||
{{ toYaml $value | default "{}" | indent 4 }}
|
||||
{{- else }}
|
||||
{{ $key }}: {{ toYaml $value | indent 4 }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
|
@ -0,0 +1,161 @@
|
|||
{{- if and .Values.alertmanager.enabled (not .Values.alertmanager.statefulSet.enabled) -}}
|
||||
apiVersion: {{ template "prometheus.deployment.apiVersion" . }}
|
||||
kind: Deployment
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.deploymentAnnotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.deploymentAnnotations | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "prometheus.alertmanager.matchLabels" . | nindent 6 }}
|
||||
replicas: {{ .Values.alertmanager.replicaCount }}
|
||||
{{- if .Values.alertmanager.strategy }}
|
||||
strategy:
|
||||
{{ toYaml .Values.alertmanager.strategy | trim | indent 4 }}
|
||||
{{ if eq .Values.alertmanager.strategy.type "Recreate" }}rollingUpdate: null{{ end }}
|
||||
{{- end }}
|
||||
template:
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.podAnnotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.podAnnotations | nindent 8 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 8 }}
|
||||
{{- if .Values.alertmanager.podLabels}}
|
||||
{{ toYaml .Values.alertmanager.podLabels | nindent 8 }}
|
||||
{{- end}}
|
||||
spec:
|
||||
{{- if .Values.alertmanager.schedulerName }}
|
||||
schedulerName: "{{ .Values.alertmanager.schedulerName }}"
|
||||
{{- end }}
|
||||
serviceAccountName: {{ template "prometheus.serviceAccountName.alertmanager" . }}
|
||||
{{- if .Values.alertmanager.extraInitContainers }}
|
||||
initContainers:
|
||||
{{ toYaml .Values.alertmanager.extraInitContainers | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.priorityClassName }}
|
||||
priorityClassName: "{{ .Values.alertmanager.priorityClassName }}"
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: {{ template "prometheus.name" . }}-{{ .Values.alertmanager.name }}
|
||||
image: "{{ .Values.alertmanager.image.repository }}:{{ .Values.alertmanager.image.tag }}"
|
||||
imagePullPolicy: "{{ .Values.alertmanager.image.pullPolicy }}"
|
||||
env:
|
||||
{{- range $key, $value := .Values.alertmanager.extraEnv }}
|
||||
- name: {{ $key }}
|
||||
value: {{ $value }}
|
||||
{{- end }}
|
||||
- name: POD_IP
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: status.podIP
|
||||
args:
|
||||
- --config.file=/etc/config/{{ .Values.alertmanager.configFileName }}
|
||||
- --storage.path={{ .Values.alertmanager.persistentVolume.mountPath }}
|
||||
- --cluster.advertise-address=[$(POD_IP)]:6783
|
||||
{{- range $key, $value := .Values.alertmanager.extraArgs }}
|
||||
- --{{ $key }}={{ $value }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.baseURL }}
|
||||
- --web.external-url={{ .Values.alertmanager.baseURL }}
|
||||
{{- end }}
|
||||
|
||||
ports:
|
||||
- containerPort: 9093
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: {{ .Values.alertmanager.prefixURL }}/-/ready
|
||||
port: 9093
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 30
|
||||
resources:
|
||||
{{ toYaml .Values.alertmanager.resources | indent 12 }}
|
||||
volumeMounts:
|
||||
- name: config-volume
|
||||
mountPath: /etc/config
|
||||
- name: storage-volume
|
||||
mountPath: "{{ .Values.alertmanager.persistentVolume.mountPath }}"
|
||||
subPath: "{{ .Values.alertmanager.persistentVolume.subPath }}"
|
||||
{{- range .Values.alertmanager.extraSecretMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
subPath: {{ .subPath }}
|
||||
readOnly: {{ .readOnly }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.configmapReload.alertmanager.enabled }}
|
||||
- name: {{ template "prometheus.name" . }}-{{ .Values.alertmanager.name }}-{{ .Values.configmapReload.alertmanager.name }}
|
||||
image: "{{ include "get.cmreloadimage" .}}"
|
||||
imagePullPolicy: "{{ .Values.configmapReload.alertmanager.image.pullPolicy }}"
|
||||
args:
|
||||
- --volume-dir=/etc/config
|
||||
- --webhook-url=http://127.0.0.1:9093{{ .Values.alertmanager.prefixURL }}/-/reload
|
||||
resources:
|
||||
{{ toYaml .Values.configmapReload.alertmanager.resources | indent 12 }}
|
||||
volumeMounts:
|
||||
- name: config-volume
|
||||
mountPath: /etc/config
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if .Values.imagePullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{ toYaml .Values.imagePullSecrets | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml .Values.alertmanager.nodeSelector | indent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.alertmanager.dnsConfig }}
|
||||
dnsConfig:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.securityContext }}
|
||||
securityContext:
|
||||
{{ toYaml .Values.alertmanager.securityContext | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.tolerations }}
|
||||
tolerations:
|
||||
{{ toYaml .Values.alertmanager.tolerations | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.affinity }}
|
||||
affinity:
|
||||
{{ toYaml .Values.alertmanager.affinity | indent 8 }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: config-volume
|
||||
{{- if empty .Values.alertmanager.configFromSecret }}
|
||||
configMap:
|
||||
name: {{ if .Values.alertmanager.configMapOverrideName }}{{ .Release.Name }}-{{ .Values.alertmanager.configMapOverrideName }}{{- else }}{{ template "prometheus.alertmanager.fullname" . }}{{- end }}
|
||||
{{- else }}
|
||||
secret:
|
||||
secretName: {{ .Values.alertmanager.configFromSecret }}
|
||||
{{- end }}
|
||||
{{- range .Values.alertmanager.extraSecretMounts }}
|
||||
- name: {{ .name }}
|
||||
secret:
|
||||
secretName: {{ .secretName }}
|
||||
{{- with .optional }}
|
||||
optional: {{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
- name: storage-volume
|
||||
{{- if .Values.alertmanager.persistentVolume.enabled }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.alertmanager.persistentVolume.existingClaim }}{{ .Values.alertmanager.persistentVolume.existingClaim }}{{- else }}{{ template "prometheus.alertmanager.fullname" . }}{{- end }}
|
||||
{{- else }}
|
||||
emptyDir:
|
||||
{{- if .Values.alertmanager.emptyDir.sizeLimit }}
|
||||
sizeLimit: {{ .Values.alertmanager.emptyDir.sizeLimit }}
|
||||
{{- else }}
|
||||
{}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end }}
|
|
@ -0,0 +1,31 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.alertmanager.statefulSet.enabled -}}
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.statefulSet.headless.annotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.statefulSet.headless.annotations | indent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
{{- if .Values.alertmanager.statefulSet.headless.labels }}
|
||||
{{ toYaml .Values.alertmanager.statefulSet.headless.labels | indent 4 }}
|
||||
{{- end }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}-headless
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- name: http
|
||||
port: {{ .Values.alertmanager.statefulSet.headless.servicePort }}
|
||||
protocol: TCP
|
||||
targetPort: 9093
|
||||
{{- if .Values.alertmanager.statefulSet.headless.enableMeshPeer }}
|
||||
- name: meshpeer
|
||||
port: 6783
|
||||
protocol: TCP
|
||||
targetPort: 6783
|
||||
{{- end }}
|
||||
selector:
|
||||
{{- include "prometheus.alertmanager.matchLabels" . | nindent 4 }}
|
||||
{{- end }}
|
|
@ -0,0 +1,57 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.alertmanager.ingress.enabled -}}
|
||||
{{- $ingressApiIsStable := eq (include "ingress.isStable" .) "true" -}}
|
||||
{{- $ingressSupportsIngressClassName := eq (include "ingress.supportsIngressClassName" .) "true" -}}
|
||||
{{- $ingressSupportsPathType := eq (include "ingress.supportsPathType" .) "true" -}}
|
||||
{{- $releaseName := .Release.Name -}}
|
||||
{{- $serviceName := include "prometheus.alertmanager.fullname" . }}
|
||||
{{- $servicePort := .Values.alertmanager.service.servicePort -}}
|
||||
{{- $ingressPath := .Values.alertmanager.ingress.path -}}
|
||||
{{- $ingressPathType := .Values.alertmanager.ingress.pathType -}}
|
||||
{{- $extraPaths := .Values.alertmanager.ingress.extraPaths -}}
|
||||
apiVersion: {{ template "ingress.apiVersion" . }}
|
||||
kind: Ingress
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.ingress.annotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.ingress.annotations | indent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
{{- range $key, $value := .Values.alertmanager.ingress.extraLabels }}
|
||||
{{ $key }}: {{ $value }}
|
||||
{{- end }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
spec:
|
||||
{{- if and $ingressSupportsIngressClassName .Values.alertmanager.ingress.ingressClassName }}
|
||||
ingressClassName: {{ .Values.alertmanager.ingress.ingressClassName }}
|
||||
{{- end }}
|
||||
rules:
|
||||
{{- range .Values.alertmanager.ingress.hosts }}
|
||||
{{- $url := splitList "/" . }}
|
||||
- host: {{ first $url }}
|
||||
http:
|
||||
paths:
|
||||
{{ if $extraPaths }}
|
||||
{{ toYaml $extraPaths | indent 10 }}
|
||||
{{- end }}
|
||||
- path: {{ $ingressPath }}
|
||||
{{- if $ingressSupportsPathType }}
|
||||
pathType: {{ $ingressPathType }}
|
||||
{{- end }}
|
||||
backend:
|
||||
{{- if $ingressApiIsStable }}
|
||||
service:
|
||||
name: {{ $serviceName }}
|
||||
port:
|
||||
number: {{ $servicePort }}
|
||||
{{- else }}
|
||||
serviceName: {{ $serviceName }}
|
||||
servicePort: {{ $servicePort }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- if .Values.alertmanager.ingress.tls }}
|
||||
tls:
|
||||
{{ toYaml .Values.alertmanager.ingress.tls | indent 4 }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
|
@ -0,0 +1,20 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.networkPolicy.enabled -}}
|
||||
apiVersion: {{ template "prometheus.networkPolicy.apiVersion" . }}
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
{{- include "prometheus.alertmanager.matchLabels" . | nindent 6 }}
|
||||
ingress:
|
||||
- from:
|
||||
- podSelector:
|
||||
matchLabels:
|
||||
{{- include "prometheus.server.matchLabels" . | nindent 12 }}
|
||||
- ports:
|
||||
- port: 9093
|
||||
{{- end -}}
|
|
@ -0,0 +1,14 @@
|
|||
{{- if .Values.alertmanager.podDisruptionBudget.enabled }}
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodDisruptionBudget
|
||||
metadata:
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
spec:
|
||||
maxUnavailable: {{ .Values.alertmanager.podDisruptionBudget.maxUnavailable }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 6 }}
|
||||
{{- end }}
|
|
@ -0,0 +1,46 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.rbac.create .Values.podSecurityPolicy.enabled }}
|
||||
apiVersion: {{ template "prometheus.podSecurityPolicy.apiVersion" . }}
|
||||
kind: PodSecurityPolicy
|
||||
metadata:
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
annotations:
|
||||
{{- if .Values.alertmanager.podSecurityPolicy.annotations }}
|
||||
{{ toYaml .Values.alertmanager.podSecurityPolicy.annotations | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
privileged: false
|
||||
allowPrivilegeEscalation: false
|
||||
requiredDropCapabilities:
|
||||
- ALL
|
||||
volumes:
|
||||
- 'configMap'
|
||||
- 'persistentVolumeClaim'
|
||||
- 'emptyDir'
|
||||
- 'secret'
|
||||
allowedHostPaths:
|
||||
- pathPrefix: /etc
|
||||
readOnly: true
|
||||
- pathPrefix: {{ .Values.alertmanager.persistentVolume.mountPath }}
|
||||
hostNetwork: false
|
||||
hostPID: false
|
||||
hostIPC: false
|
||||
runAsUser:
|
||||
rule: 'RunAsAny'
|
||||
seLinux:
|
||||
rule: 'RunAsAny'
|
||||
supplementalGroups:
|
||||
rule: 'MustRunAs'
|
||||
ranges:
|
||||
# Forbid adding the root group.
|
||||
- min: 1
|
||||
max: 65535
|
||||
fsGroup:
|
||||
rule: 'MustRunAs'
|
||||
ranges:
|
||||
# Forbid adding the root group.
|
||||
- min: 1
|
||||
max: 65535
|
||||
readOnlyRootFilesystem: true
|
||||
{{- end }}
|
|
@ -0,0 +1,39 @@
|
|||
{{- if not .Values.alertmanager.statefulSet.enabled -}}
|
||||
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
|
||||
{{- if not .Values.alertmanager.persistentVolume.existingClaim -}}
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.persistentVolume.annotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.persistentVolume.annotations | indent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
spec:
|
||||
accessModes:
|
||||
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
|
||||
{{- if .Values.alertmanager.persistentVolume.storageClass }}
|
||||
{{- if (eq "-" .Values.alertmanager.persistentVolume.storageClass) }}
|
||||
storageClassName: ""
|
||||
{{- else }}
|
||||
storageClassName: "{{ .Values.alertmanager.persistentVolume.storageClass }}"
|
||||
{{- end }}
|
||||
{{- else if .Values.global.persistence.storageClass }}
|
||||
{{- if (eq "-" .Values.global.persistence.storageClass) }}
|
||||
storageClassName: ""
|
||||
{{- else }}
|
||||
storageClassName: "{{ .Values.global.persistence.storageClass }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.persistentVolume.volumeBindingMode }}
|
||||
volumeBindingModeName: "{{ .Values.alertmanager.persistentVolume.volumeBindingMode }}"
|
||||
{{- end }}
|
||||
resources:
|
||||
requests:
|
||||
storage: "{{ .Values.alertmanager.persistentVolume.size }}"
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
|
@ -0,0 +1,24 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.rbac.create (eq .Values.alertmanager.useClusterRole false) (not .Values.alertmanager.useExistingRole) -}}
|
||||
{{- range $.Values.alertmanager.namespaces }}
|
||||
apiVersion: {{ template "rbac.apiVersion" . }}
|
||||
kind: Role
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" $ | nindent 4 }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" $ }}
|
||||
namespace: {{ . }}
|
||||
rules:
|
||||
{{- if $.Values.podSecurityPolicy.enabled }}
|
||||
- apiGroups:
|
||||
- extensions
|
||||
resources:
|
||||
- podsecuritypolicies
|
||||
verbs:
|
||||
- use
|
||||
resourceNames:
|
||||
- {{ template "prometheus.alertmanager.fullname" $ }}
|
||||
{{- else }}
|
||||
[]
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,23 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.rbac.create (eq .Values.alertmanager.useClusterRole false) -}}
|
||||
{{ range $.Values.alertmanager.namespaces }}
|
||||
apiVersion: {{ template "rbac.apiVersion" . }}
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" $ | nindent 4 }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" $ }}
|
||||
namespace: {{ . }}
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: {{ template "prometheus.serviceAccountName.alertmanager" $ }}
|
||||
{{ include "prometheus.namespace" $ | indent 4 }}
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
{{- if (not $.Values.alertmanager.useExistingRole) }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" $ }}
|
||||
{{- else }}
|
||||
name: {{ $.Values.alertmanager.useExistingRole }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{ end }}
|
|
@ -0,0 +1,53 @@
|
|||
{{- if .Values.alertmanager.enabled -}}
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.service.annotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.service.annotations | indent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
{{- if .Values.alertmanager.service.labels }}
|
||||
{{ toYaml .Values.alertmanager.service.labels | indent 4 }}
|
||||
{{- end }}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
spec:
|
||||
{{- if .Values.alertmanager.service.clusterIP }}
|
||||
clusterIP: {{ .Values.alertmanager.service.clusterIP }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.service.externalIPs }}
|
||||
externalIPs:
|
||||
{{ toYaml .Values.alertmanager.service.externalIPs | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.service.loadBalancerIP }}
|
||||
loadBalancerIP: {{ .Values.alertmanager.service.loadBalancerIP }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.service.loadBalancerSourceRanges }}
|
||||
loadBalancerSourceRanges:
|
||||
{{- range $cidr := .Values.alertmanager.service.loadBalancerSourceRanges }}
|
||||
- {{ $cidr }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- name: http
|
||||
port: {{ .Values.alertmanager.service.servicePort }}
|
||||
protocol: TCP
|
||||
targetPort: 9093
|
||||
{{- if .Values.alertmanager.service.nodePort }}
|
||||
nodePort: {{ .Values.alertmanager.service.nodePort }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.service.enableMeshPeer }}
|
||||
- name: meshpeer
|
||||
port: 6783
|
||||
protocol: TCP
|
||||
targetPort: 6783
|
||||
{{- end }}
|
||||
selector:
|
||||
{{- include "prometheus.alertmanager.matchLabels" . | nindent 4 }}
|
||||
{{- if .Values.alertmanager.service.sessionAffinity }}
|
||||
sessionAffinity: {{ .Values.alertmanager.service.sessionAffinity }}
|
||||
{{- end }}
|
||||
type: "{{ .Values.alertmanager.service.type }}"
|
||||
{{- end }}
|
|
@ -0,0 +1,11 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.serviceAccounts.alertmanager.create -}}
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
name: {{ template "prometheus.serviceAccountName.alertmanager" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
annotations:
|
||||
{{ toYaml .Values.serviceAccounts.alertmanager.annotations | indent 4 }}
|
||||
{{- end -}}
|
|
@ -0,0 +1,187 @@
|
|||
{{- if and .Values.alertmanager.enabled .Values.alertmanager.statefulSet.enabled -}}
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.statefulSet.annotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.statefulSet.annotations | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 4 }}
|
||||
{{- if .Values.alertmanager.statefulSet.labels}}
|
||||
{{ toYaml .Values.alertmanager.statefulSet.labels | nindent 4 }}
|
||||
{{- end}}
|
||||
name: {{ template "prometheus.alertmanager.fullname" . }}
|
||||
{{ include "prometheus.namespace" . | indent 2 }}
|
||||
spec:
|
||||
serviceName: {{ template "prometheus.alertmanager.fullname" . }}-headless
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "prometheus.alertmanager.matchLabels" . | nindent 6 }}
|
||||
replicas: {{ .Values.alertmanager.replicaCount }}
|
||||
podManagementPolicy: {{ .Values.alertmanager.statefulSet.podManagementPolicy }}
|
||||
template:
|
||||
metadata:
|
||||
{{- if .Values.alertmanager.podAnnotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.alertmanager.podAnnotations | nindent 8 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus.alertmanager.labels" . | nindent 8 }}
|
||||
{{- if .Values.alertmanager.podLabels}}
|
||||
{{ toYaml .Values.alertmanager.podLabels | nindent 8 }}
|
||||
{{- end}}
|
||||
spec:
|
||||
{{- if .Values.alertmanager.affinity }}
|
||||
affinity:
|
||||
{{ toYaml .Values.alertmanager.affinity | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.schedulerName }}
|
||||
schedulerName: "{{ .Values.alertmanager.schedulerName }}"
|
||||
{{- end }}
|
||||
serviceAccountName: {{ template "prometheus.serviceAccountName.alertmanager" . }}
|
||||
{{- if .Values.alertmanager.priorityClassName }}
|
||||
priorityClassName: "{{ .Values.alertmanager.priorityClassName }}"
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: {{ template "prometheus.name" . }}-{{ .Values.alertmanager.name }}
|
||||
image: "{{ .Values.alertmanager.image.repository }}:{{ .Values.alertmanager.image.tag }}"
|
||||
imagePullPolicy: "{{ .Values.alertmanager.image.pullPolicy }}"
|
||||
env:
|
||||
{{- range $key, $value := .Values.alertmanager.extraEnv }}
|
||||
- name: {{ $key }}
|
||||
value: {{ $value }}
|
||||
{{- end }}
|
||||
- name: POD_IP
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: status.podIP
|
||||
args:
|
||||
- --config.file=/etc/config/alertmanager.yml
|
||||
- --storage.path={{ .Values.alertmanager.persistentVolume.mountPath }}
|
||||
{{- if .Values.alertmanager.statefulSet.headless.enableMeshPeer }}
|
||||
- --cluster.advertise-address=[$(POD_IP)]:6783
|
||||
- --cluster.listen-address=0.0.0.0:6783
|
||||
{{- range $n := until (.Values.alertmanager.replicaCount | int) }}
|
||||
- --cluster.peer={{ template "prometheus.alertmanager.fullname" $ }}-{{ $n }}.{{ template "prometheus.alertmanager.fullname" $ }}-headless:6783
|
||||
{{- end }}
|
||||
{{- else }}
|
||||
- --cluster.listen-address=
|
||||
{{- end }}
|
||||
{{- range $key, $value := .Values.alertmanager.extraArgs }}
|
||||
- --{{ $key }}={{ $value }}
|
||||
{{- end }}
|
||||
{{- if .Values.alertmanager.baseURL }}
|
||||
- --web.external-url={{ .Values.alertmanager.baseURL }}
|
||||
{{- end }}
|
||||
|
||||
ports:
|
||||
- containerPort: 9093
|
||||
{{- if .Values.alertmanager.statefulSet.headless.enableMeshPeer }}
|
||||
- containerPort: 6783
|
||||
{{- end }}
|
||||
          readinessProbe:
            httpGet:
              path: {{ .Values.alertmanager.prefixURL }}/#/status
              port: 9093
            initialDelaySeconds: 30
            timeoutSeconds: 30
          resources:
{{ toYaml .Values.alertmanager.resources | indent 12 }}
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: storage-volume
              mountPath: "{{ .Values.alertmanager.persistentVolume.mountPath }}"
              subPath: "{{ .Values.alertmanager.persistentVolume.subPath }}"
            {{- range .Values.alertmanager.extraSecretMounts }}
            - name: {{ .name }}
              mountPath: {{ .mountPath }}
              subPath: {{ .subPath }}
              readOnly: {{ .readOnly }}
            {{- end }}
        {{- if .Values.configmapReload.alertmanager.enabled }}
        - name: {{ template "prometheus.name" . }}-{{ .Values.alertmanager.name }}-{{ .Values.configmapReload.alertmanager.name }}
          image: "{{ include "get.cmreloadimage" .}}"
          imagePullPolicy: "{{ .Values.configmapReload.alertmanager.image.pullPolicy }}"
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://localhost:9093{{ .Values.alertmanager.prefixURL }}/-/reload
          resources:
{{ toYaml .Values.configmapReload.alertmanager.resources | indent 12 }}
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
              readOnly: true
        {{- end }}
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
      {{- end }}
      {{- if .Values.alertmanager.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.alertmanager.nodeSelector | indent 8 }}
      {{- end }}
      {{- if .Values.alertmanager.securityContext }}
      securityContext:
{{ toYaml .Values.alertmanager.securityContext | indent 8 }}
      {{- end }}
      {{- if .Values.alertmanager.tolerations }}
      tolerations:
{{ toYaml .Values.alertmanager.tolerations | indent 8 }}
      {{- end }}
      volumes:
        - name: config-volume
          {{- if empty .Values.alertmanager.configFromSecret }}
          configMap:
            name: {{ if .Values.alertmanager.configMapOverrideName }}{{ .Release.Name }}-{{ .Values.alertmanager.configMapOverrideName }}{{- else }}{{ template "prometheus.alertmanager.fullname" . }}{{- end }}
          {{- else }}
          secret:
            secretName: {{ .Values.alertmanager.configFromSecret }}
          {{- end }}
        {{- range .Values.alertmanager.extraSecretMounts }}
        - name: {{ .name }}
          secret:
            secretName: {{ .secretName }}
            {{- with .optional }}
            optional: {{ . }}
            {{- end }}
        {{- end }}
  {{- if .Values.alertmanager.persistentVolume.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: storage-volume
        {{- if .Values.alertmanager.persistentVolume.annotations }}
        annotations:
{{ toYaml .Values.alertmanager.persistentVolume.annotations | indent 10 }}
        {{- end }}
      spec:
        accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 10 }}
        resources:
          requests:
            storage: "{{ .Values.alertmanager.persistentVolume.size }}"
        {{- if .Values.alertmanager.persistentVolume.storageClass }}
        {{- if (eq "-" .Values.alertmanager.persistentVolume.storageClass) }}
        storageClassName: ""
        {{- else }}
        storageClassName: "{{ .Values.alertmanager.persistentVolume.storageClass }}"
        {{- end }}
        {{- else if .Values.global.persistence.storageClass }}
        {{- if (eq "-" .Values.global.persistence.storageClass) }}
        storageClassName: ""
        {{- else }}
        storageClassName: "{{ .Values.global.persistence.storageClass }}"
        {{- end }}
        {{- end }}
  {{- else }}
        - name: storage-volume
          emptyDir:
          {{- if .Values.alertmanager.emptyDir.sizeLimit }}
            sizeLimit: {{ .Values.alertmanager.emptyDir.sizeLimit }}
          {{- else }}
            {}
          {{- end -}}
  {{- end }}
{{- end }}
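Illustrative only, not part of this commit: a values.yaml sketch for the storage-volume handling in the Alertmanager StatefulSet above. The key names mirror the .Values references in the template; the size, class and paths are placeholder assumptions.

alertmanager:
  persistentVolume:
    enabled: true          # renders the volumeClaimTemplates branch instead of emptyDir
    size: 2Gi              # placeholder request for the generated PVC
    storageClass: "-"      # "-" renders storageClassName: "" (no dynamic provisioning)
    mountPath: /data       # placeholder; also used by the storage-volume volumeMount
  emptyDir:
    sizeLimit: 500Mi       # only used when persistentVolume.enabled is false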
@ -0,0 +1,146 @@
{{- if .Values.nodeExporter.enabled -}}
apiVersion: {{ template "prometheus.daemonset.apiVersion" . }}
kind: DaemonSet
metadata:
{{- if .Values.nodeExporter.deploymentAnnotations }}
  annotations:
{{ toYaml .Values.nodeExporter.deploymentAnnotations | indent 4 }}
{{- end }}
  labels:
    {{- include "prometheus.nodeExporter.labels" . | nindent 4 }}
  name: {{ template "prometheus.nodeExporter.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
spec:
  selector:
    matchLabels:
      {{- include "prometheus.nodeExporter.matchLabels" . | nindent 6 }}
  {{- if .Values.nodeExporter.updateStrategy }}
  updateStrategy:
{{ toYaml .Values.nodeExporter.updateStrategy | indent 4 }}
  {{- end }}
  template:
    metadata:
    {{- if .Values.nodeExporter.podAnnotations }}
      annotations:
{{ toYaml .Values.nodeExporter.podAnnotations | indent 8 }}
    {{- end }}
      labels:
        {{- include "prometheus.nodeExporter.labels" . | nindent 8 }}
        {{- if .Values.nodeExporter.pod.labels }}
{{ toYaml .Values.nodeExporter.pod.labels | indent 8 }}
        {{- end }}
    spec:
      serviceAccountName: {{ template "prometheus.serviceAccountName.nodeExporter" . }}
      {{- if .Values.nodeExporter.extraInitContainers }}
      initContainers:
{{ toYaml .Values.nodeExporter.extraInitContainers | indent 8 }}
      {{- end }}
      {{- if .Values.nodeExporter.priorityClassName }}
      priorityClassName: "{{ .Values.nodeExporter.priorityClassName }}"
      {{- end }}
      containers:
        - name: {{ template "prometheus.name" . }}-{{ .Values.nodeExporter.name }}
          image: "{{ .Values.nodeExporter.image.repository }}:{{ .Values.nodeExporter.image.tag }}"
          imagePullPolicy: "{{ .Values.nodeExporter.image.pullPolicy }}"
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          {{- if .Values.nodeExporter.hostRootfs }}
            - --path.rootfs=/host/root
          {{- end }}
          {{- if .Values.nodeExporter.hostNetwork }}
            - --web.listen-address=:{{ .Values.nodeExporter.service.hostPort }}
          {{- end }}
          {{- range $key, $value := .Values.nodeExporter.extraArgs }}
          {{- if $value }}
            - --{{ $key }}={{ $value }}
          {{- else }}
            - --{{ $key }}
          {{- end }}
          {{- end }}
          ports:
            - name: metrics
              {{- if .Values.nodeExporter.hostNetwork }}
              containerPort: {{ .Values.nodeExporter.service.hostPort }}
              {{- else }}
              containerPort: 9100
              {{- end }}
              hostPort: {{ .Values.nodeExporter.service.hostPort }}
          resources:
{{ toYaml .Values.nodeExporter.resources | indent 12 }}
          volumeMounts:
            - name: proc
              mountPath: /host/proc
              readOnly: true
            - name: sys
              mountPath: /host/sys
              readOnly: true
            {{- if .Values.nodeExporter.hostRootfs }}
            - name: root
              mountPath: /host/root
              mountPropagation: HostToContainer
              readOnly: true
            {{- end }}
            {{- range .Values.nodeExporter.extraHostPathMounts }}
            - name: {{ .name }}
              mountPath: {{ .mountPath }}
              readOnly: {{ .readOnly }}
            {{- if .mountPropagation }}
              mountPropagation: {{ .mountPropagation }}
            {{- end }}
            {{- end }}
            {{- range .Values.nodeExporter.extraConfigmapMounts }}
            - name: {{ .name }}
              mountPath: {{ .mountPath }}
              readOnly: {{ .readOnly }}
            {{- end }}
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
      {{- end }}
      {{- if .Values.nodeExporter.hostNetwork }}
      hostNetwork: true
      {{- end }}
      {{- if .Values.nodeExporter.hostPID }}
      hostPID: true
      {{- end }}
      {{- if .Values.nodeExporter.tolerations }}
      tolerations:
{{ toYaml .Values.nodeExporter.tolerations | indent 8 }}
      {{- end }}
      {{- if .Values.nodeExporter.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeExporter.nodeSelector | indent 8 }}
      {{- end }}
      {{- with .Values.nodeExporter.dnsConfig }}
      dnsConfig:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- if .Values.nodeExporter.securityContext }}
      securityContext:
{{ toYaml .Values.nodeExporter.securityContext | indent 8 }}
      {{- end }}
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
        {{- if .Values.nodeExporter.hostRootfs }}
        - name: root
          hostPath:
            path: /
        {{- end }}
        {{- range .Values.nodeExporter.extraHostPathMounts }}
        - name: {{ .name }}
          hostPath:
            path: {{ .hostPath }}
        {{- end }}
        {{- range .Values.nodeExporter.extraConfigmapMounts }}
        - name: {{ .name }}
          configMap:
            name: {{ .configMap }}
        {{- end }}

{{- end -}}
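Illustrative only, not part of this commit: a values.yaml sketch for the node-exporter DaemonSet above, assuming the .Values keys referenced in the template; ports and extra arguments are placeholders.

nodeExporter:
  enabled: true
  hostNetwork: true        # switches the container port to service.hostPort and adds --web.listen-address
  hostPID: true
  hostRootfs: true         # mounts / read-only at /host/root and adds --path.rootfs
  service:
    hostPort: 9100
    servicePort: 9100
  extraArgs:
    collector.textfile.directory: /var/metrics   # truthy value renders --key=value
    web.disable-exporter-metrics: ""             # empty value renders a bare --key flag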
@ -0,0 +1,55 @@
{{- if and .Values.nodeExporter.enabled .Values.rbac.create .Values.podSecurityPolicy.enabled }}
apiVersion: {{ template "prometheus.podSecurityPolicy.apiVersion" . }}
kind: PodSecurityPolicy
metadata:
  name: {{ template "prometheus.nodeExporter.fullname" . }}
  labels:
    {{- include "prometheus.nodeExporter.labels" . | nindent 4 }}
  annotations:
{{- if .Values.nodeExporter.podSecurityPolicy.annotations }}
{{ toYaml .Values.nodeExporter.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'hostPath'
    - 'secret'
  allowedHostPaths:
    - pathPrefix: /proc
      readOnly: true
    - pathPrefix: /sys
      readOnly: true
    - pathPrefix: /
      readOnly: true
    {{- range .Values.nodeExporter.extraHostPathMounts }}
    - pathPrefix: {{ .hostPath }}
      readOnly: {{ .readOnly }}
    {{- end }}
  hostNetwork: {{ .Values.nodeExporter.hostNetwork }}
  hostPID: {{ .Values.nodeExporter.hostPID }}
  hostIPC: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
  hostPorts:
    - min: 1
      max: 65535
{{- end }}
@ -0,0 +1,17 @@
{{- if and .Values.nodeExporter.enabled .Values.rbac.create }}
{{- if or (default .Values.nodeExporter.podSecurityPolicy.enabled false) (.Values.podSecurityPolicy.enabled) }}
apiVersion: {{ template "rbac.apiVersion" . }}
kind: Role
metadata:
  name: {{ template "prometheus.nodeExporter.fullname" . }}
  labels:
    {{- include "prometheus.nodeExporter.labels" . | nindent 4 }}
{{ include "prometheus.namespace" . | indent 2 }}
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames:
      - {{ template "prometheus.nodeExporter.fullname" . }}
{{- end }}
{{- end }}
@ -0,0 +1,19 @@
{{- if and .Values.nodeExporter.enabled .Values.rbac.create }}
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: {{ template "rbac.apiVersion" . }}
kind: RoleBinding
metadata:
  name: {{ template "prometheus.nodeExporter.fullname" . }}
  labels:
    {{- include "prometheus.nodeExporter.labels" . | nindent 4 }}
{{ include "prometheus.namespace" . | indent 2 }}
roleRef:
  kind: Role
  name: {{ template "prometheus.nodeExporter.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: {{ template "prometheus.serviceAccountName.nodeExporter" . }}
{{ include "prometheus.namespace" . | indent 2 }}
{{- end }}
{{- end }}
@ -0,0 +1,11 @@
{{- if and .Values.nodeExporter.enabled .Values.serviceAccounts.nodeExporter.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    {{- include "prometheus.nodeExporter.labels" . | nindent 4 }}
  name: {{ template "prometheus.serviceAccountName.nodeExporter" . }}
{{ include "prometheus.namespace" . | indent 2 }}
  annotations:
{{ toYaml .Values.serviceAccounts.nodeExporter.annotations | indent 4 }}
{{- end -}}
@ -0,0 +1,47 @@
{{- if .Values.nodeExporter.enabled -}}
apiVersion: v1
kind: Service
metadata:
{{- if .Values.nodeExporter.service.annotations }}
  annotations:
{{ toYaml .Values.nodeExporter.service.annotations | indent 4 }}
{{- end }}
  labels:
    {{- include "prometheus.nodeExporter.labels" . | nindent 4 }}
{{- if .Values.nodeExporter.service.labels }}
{{ toYaml .Values.nodeExporter.service.labels | indent 4 }}
{{- end }}
  name: {{ template "prometheus.nodeExporter.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
spec:
{{- if .Values.nodeExporter.service.clusterIP }}
  clusterIP: {{ .Values.nodeExporter.service.clusterIP }}
{{- end }}
{{- if .Values.nodeExporter.service.externalIPs }}
  externalIPs:
{{ toYaml .Values.nodeExporter.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.nodeExporter.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.nodeExporter.service.loadBalancerIP }}
{{- end }}
{{- if .Values.nodeExporter.service.loadBalancerSourceRanges }}
  loadBalancerSourceRanges:
  {{- range $cidr := .Values.nodeExporter.service.loadBalancerSourceRanges }}
    - {{ $cidr }}
  {{- end }}
{{- end }}
  ports:
    - name: metrics
      {{- if .Values.nodeExporter.hostNetwork }}
      port: {{ .Values.nodeExporter.service.hostPort }}
      protocol: TCP
      targetPort: {{ .Values.nodeExporter.service.hostPort }}
      {{- else }}
      port: {{ .Values.nodeExporter.service.servicePort }}
      protocol: TCP
      targetPort: 9100
      {{- end }}
  selector:
    {{- include "prometheus.nodeExporter.matchLabels" . | nindent 4 }}
  type: "{{ .Values.nodeExporter.service.type }}"
{{- end -}}
@ -0,0 +1,21 @@
{{- if and .Values.pushgateway.enabled .Values.rbac.create -}}
apiVersion: {{ template "rbac.apiVersion" . }}
kind: ClusterRole
metadata:
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
  name: {{ template "prometheus.pushgateway.fullname" . }}
rules:
{{- if .Values.podSecurityPolicy.enabled }}
  - apiGroups:
      - extensions
    resources:
      - podsecuritypolicies
    verbs:
      - use
    resourceNames:
      - {{ template "prometheus.pushgateway.fullname" . }}
{{- else }}
  []
{{- end }}
{{- end }}
@ -0,0 +1,16 @@
{{- if and .Values.pushgateway.enabled .Values.rbac.create -}}
apiVersion: {{ template "rbac.apiVersion" . }}
kind: ClusterRoleBinding
metadata:
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
  name: {{ template "prometheus.pushgateway.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "prometheus.serviceAccountName.pushgateway" . }}
{{ include "prometheus.namespace" . | indent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ template "prometheus.pushgateway.fullname" . }}
{{- end }}
@ -0,0 +1,119 @@
{{- if .Values.pushgateway.enabled -}}
apiVersion: {{ template "prometheus.deployment.apiVersion" . }}
kind: Deployment
metadata:
{{- if .Values.pushgateway.deploymentAnnotations }}
  annotations:
    {{ toYaml .Values.pushgateway.deploymentAnnotations | nindent 4 }}
{{- end }}
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
  name: {{ template "prometheus.pushgateway.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
spec:
  selector:
    {{- if .Values.schedulerName }}
    schedulerName: "{{ .Values.schedulerName }}"
    {{- end }}
    matchLabels:
      {{- include "prometheus.pushgateway.matchLabels" . | nindent 6 }}
  replicas: {{ .Values.pushgateway.replicaCount }}
  {{- if .Values.pushgateway.strategy }}
  strategy:
{{ toYaml .Values.pushgateway.strategy | trim | indent 4 }}
    {{ if eq .Values.pushgateway.strategy.type "Recreate" }}rollingUpdate: null{{ end }}
  {{- end }}
  template:
    metadata:
    {{- if .Values.pushgateway.podAnnotations }}
      annotations:
        {{ toYaml .Values.pushgateway.podAnnotations | nindent 8 }}
    {{- end }}
      labels:
        {{- include "prometheus.pushgateway.labels" . | nindent 8 }}
        {{- if .Values.pushgateway.podLabels }}
        {{ toYaml .Values.pushgateway.podLabels | nindent 8 }}
        {{- end }}
    spec:
      serviceAccountName: {{ template "prometheus.serviceAccountName.pushgateway" . }}
      {{- if .Values.pushgateway.extraInitContainers }}
      initContainers:
{{ toYaml .Values.pushgateway.extraInitContainers | indent 8 }}
      {{- end }}
      {{- if .Values.pushgateway.priorityClassName }}
      priorityClassName: "{{ .Values.pushgateway.priorityClassName }}"
      {{- end }}
      containers:
        - name: {{ template "prometheus.name" . }}-{{ .Values.pushgateway.name }}
          image: "{{ .Values.pushgateway.image.repository }}:{{ .Values.pushgateway.image.tag }}"
          imagePullPolicy: "{{ .Values.pushgateway.image.pullPolicy }}"
          args:
          {{- range $key, $value := .Values.pushgateway.extraArgs }}
          {{- $stringvalue := toString $value }}
          {{- if eq $stringvalue "true" }}
            - --{{ $key }}
          {{- else }}
            - --{{ $key }}={{ $value }}
          {{- end }}
          {{- end }}
          ports:
            - containerPort: 9091
          livenessProbe:
            httpGet:
              {{- if (index .Values "pushgateway" "extraArgs" "web.route-prefix") }}
              path: /{{ index .Values "pushgateway" "extraArgs" "web.route-prefix" }}/-/healthy
              {{- else }}
              path: /-/healthy
              {{- end }}
              port: 9091
            initialDelaySeconds: 10
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              {{- if (index .Values "pushgateway" "extraArgs" "web.route-prefix") }}
              path: /{{ index .Values "pushgateway" "extraArgs" "web.route-prefix" }}/-/ready
              {{- else }}
              path: /-/ready
              {{- end }}
              port: 9091
            initialDelaySeconds: 10
            timeoutSeconds: 10
          resources:
{{ toYaml .Values.pushgateway.resources | indent 12 }}
          {{- if .Values.pushgateway.persistentVolume.enabled }}
          volumeMounts:
            - name: storage-volume
              mountPath: "{{ .Values.pushgateway.persistentVolume.mountPath }}"
              subPath: "{{ .Values.pushgateway.persistentVolume.subPath }}"
          {{- end }}
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
      {{- end }}
      {{- if .Values.pushgateway.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.pushgateway.nodeSelector | indent 8 }}
      {{- end }}
      {{- with .Values.pushgateway.dnsConfig }}
      dnsConfig:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- if .Values.pushgateway.securityContext }}
      securityContext:
{{ toYaml .Values.pushgateway.securityContext | indent 8 }}
      {{- end }}
      {{- if .Values.pushgateway.tolerations }}
      tolerations:
{{ toYaml .Values.pushgateway.tolerations | indent 8 }}
      {{- end }}
      {{- if .Values.pushgateway.affinity }}
      affinity:
{{ toYaml .Values.pushgateway.affinity | indent 8 }}
      {{- end }}
      {{- if .Values.pushgateway.persistentVolume.enabled }}
      volumes:
        - name: storage-volume
          persistentVolumeClaim:
            claimName: {{ if .Values.pushgateway.persistentVolume.existingClaim }}{{ .Values.pushgateway.persistentVolume.existingClaim }}{{- else }}{{ template "prometheus.pushgateway.fullname" . }}{{- end }}
      {{- end -}}
{{- end }}
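Illustrative only, not part of this commit: a values.yaml sketch for the Pushgateway Deployment above. The probes read extraArgs["web.route-prefix"], so setting a prefix also moves the health and readiness paths; the concrete values below are placeholders.

pushgateway:
  enabled: true
  replicaCount: 1
  extraArgs:
    web.route-prefix: push                     # probes become /push/-/healthy and /push/-/ready
    persistence.file: /data/pushgateway.data   # rendered as --persistence.file=... (value is not "true")
  persistentVolume:
    enabled: true                              # adds the storage-volume mount and PVC-backed volume
    mountPath: /data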
@ -0,0 +1,54 @@
{{- if and .Values.pushgateway.enabled .Values.pushgateway.ingress.enabled -}}
{{- $ingressApiIsStable := eq (include "ingress.isStable" .) "true" -}}
{{- $ingressSupportsIngressClassName := eq (include "ingress.supportsIngressClassName" .) "true" -}}
{{- $ingressSupportsPathType := eq (include "ingress.supportsPathType" .) "true" -}}
{{- $releaseName := .Release.Name -}}
{{- $serviceName := include "prometheus.pushgateway.fullname" . }}
{{- $servicePort := .Values.pushgateway.service.servicePort -}}
{{- $ingressPath := .Values.pushgateway.ingress.path -}}
{{- $ingressPathType := .Values.pushgateway.ingress.pathType -}}
{{- $extraPaths := .Values.pushgateway.ingress.extraPaths -}}
apiVersion: {{ template "ingress.apiVersion" . }}
kind: Ingress
metadata:
{{- if .Values.pushgateway.ingress.annotations }}
  annotations:
{{ toYaml .Values.pushgateway.ingress.annotations | indent 4}}
{{- end }}
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
  name: {{ template "prometheus.pushgateway.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
spec:
  {{- if and $ingressSupportsIngressClassName .Values.pushgateway.ingress.ingressClassName }}
  ingressClassName: {{ .Values.pushgateway.ingress.ingressClassName }}
  {{- end }}
  rules:
  {{- range .Values.pushgateway.ingress.hosts }}
    {{- $url := splitList "/" . }}
    - host: {{ first $url }}
      http:
        paths:
{{ if $extraPaths }}
{{ toYaml $extraPaths | indent 10 }}
{{- end }}
          - path: {{ $ingressPath }}
            {{- if $ingressSupportsPathType }}
            pathType: {{ $ingressPathType }}
            {{- end }}
            backend:
              {{- if $ingressApiIsStable }}
              service:
                name: {{ $serviceName }}
                port:
                  number: {{ $servicePort }}
              {{- else }}
              serviceName: {{ $serviceName }}
              servicePort: {{ $servicePort }}
              {{- end }}
  {{- end -}}
{{- if .Values.pushgateway.ingress.tls }}
  tls:
{{ toYaml .Values.pushgateway.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
@ -0,0 +1,20 @@
{{- if and .Values.pushgateway.enabled .Values.networkPolicy.enabled -}}
apiVersion: {{ template "prometheus.networkPolicy.apiVersion" . }}
kind: NetworkPolicy
metadata:
  name: {{ template "prometheus.pushgateway.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
spec:
  podSelector:
    matchLabels:
      {{- include "prometheus.pushgateway.matchLabels" . | nindent 6 }}
  ingress:
    - from:
      - podSelector:
          matchLabels:
            {{- include "prometheus.server.matchLabels" . | nindent 12 }}
    - ports:
      - port: 9091
{{- end -}}
@ -0,0 +1,14 @@
{{- if .Values.pushgateway.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ template "prometheus.pushgateway.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
spec:
  maxUnavailable: {{ .Values.pushgateway.podDisruptionBudget.maxUnavailable }}
  selector:
    matchLabels:
      {{- include "prometheus.pushgateway.labels" . | nindent 6 }}
{{- end }}
@ -0,0 +1,42 @@
{{- if and .Values.pushgateway.enabled .Values.rbac.create .Values.podSecurityPolicy.enabled }}
apiVersion: {{ template "prometheus.podSecurityPolicy.apiVersion" . }}
kind: PodSecurityPolicy
metadata:
  name: {{ template "prometheus.pushgateway.fullname" . }}
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
  annotations:
{{- if .Values.pushgateway.podSecurityPolicy.annotations }}
{{ toYaml .Values.pushgateway.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'persistentVolumeClaim'
    - 'secret'
  allowedHostPaths:
    - pathPrefix: {{ .Values.pushgateway.persistentVolume.mountPath }}
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: true
{{- end }}
@ -0,0 +1,37 @@
{{- if .Values.pushgateway.persistentVolume.enabled -}}
{{- if not .Values.pushgateway.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
{{- if .Values.pushgateway.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.pushgateway.persistentVolume.annotations | indent 4 }}
{{- end }}
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
  name: {{ template "prometheus.pushgateway.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
spec:
  accessModes:
{{ toYaml .Values.pushgateway.persistentVolume.accessModes | indent 4 }}
{{- if .Values.pushgateway.persistentVolume.storageClass }}
{{- if (eq "-" .Values.pushgateway.persistentVolume.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.pushgateway.persistentVolume.storageClass }}"
{{- end }}
{{- else if .Values.global.persistence.storageClass }}
{{- if (eq "-" .Values.global.persistence.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.global.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- if .Values.pushgateway.persistentVolume.volumeBindingMode }}
  volumeBindingModeName: "{{ .Values.pushgateway.persistentVolume.volumeBindingMode }}"
{{- end }}
  resources:
    requests:
      storage: "{{ .Values.pushgateway.persistentVolume.size }}"
{{- end -}}
{{- end -}}
@ -0,0 +1,41 @@
{{- if .Values.pushgateway.enabled -}}
apiVersion: v1
kind: Service
metadata:
{{- if .Values.pushgateway.service.annotations }}
  annotations:
{{ toYaml .Values.pushgateway.service.annotations | indent 4}}
{{- end }}
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
{{- if .Values.pushgateway.service.labels }}
{{ toYaml .Values.pushgateway.service.labels | indent 4}}
{{- end }}
  name: {{ template "prometheus.pushgateway.fullname" . }}
{{ include "prometheus.namespace" . | indent 2 }}
spec:
{{- if .Values.pushgateway.service.clusterIP }}
  clusterIP: {{ .Values.pushgateway.service.clusterIP }}
{{- end }}
{{- if .Values.pushgateway.service.externalIPs }}
  externalIPs:
{{ toYaml .Values.pushgateway.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.pushgateway.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.pushgateway.service.loadBalancerIP }}
{{- end }}
{{- if .Values.pushgateway.service.loadBalancerSourceRanges }}
  loadBalancerSourceRanges:
  {{- range $cidr := .Values.pushgateway.service.loadBalancerSourceRanges }}
    - {{ $cidr }}
  {{- end }}
{{- end }}
  ports:
    - name: http
      port: {{ .Values.pushgateway.service.servicePort }}
      protocol: TCP
      targetPort: 9091
  selector:
    {{- include "prometheus.pushgateway.matchLabels" . | nindent 4 }}
  type: "{{ .Values.pushgateway.service.type }}"
{{- end }}
@ -0,0 +1,11 @@
{{- if and .Values.pushgateway.enabled .Values.serviceAccounts.pushgateway.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
  name: {{ template "prometheus.serviceAccountName.pushgateway" . }}
{{ include "prometheus.namespace" . | indent 2 }}
  annotations:
{{ toYaml .Values.serviceAccounts.pushgateway.annotations | indent 4 }}
{{- end -}}
Some files were not shown because too many files have changed in this diff.