Charts CI
```
Updated:
  bitnami/mysql:
    - 9.13.0
  bitnami/spark:
    - 8.0.2
  datadog/datadog:
    - 3.40.3
  dynatrace/dynatrace-operator:
    - 0.14.1
  external-secrets/external-secrets:
    - 0.9.7
  jenkins/jenkins:
    - 4.8.2
  redpanda/redpanda:
    - 5.6.27
  traefik/traefik:
    - 25.0.0
  yugabyte/yugabyte:
    - 2.18.3+1
  yugabyte/yugaware:
    - 2.18.3+1
```
pull/916/head
parent 06a127956c
commit f9948d5f30
Binary files not shown.
@@ -36,4 +36,4 @@ maintainers:
name: mysql
sources:
- https://github.com/bitnami/charts/tree/main/bitnami/mysql
version: 9.12.5
version: 9.13.0
@@ -11,16 +11,18 @@ Trademarks: This software listing is packaged by Bitnami. The respective tradema
## TL;DR

```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/mysql
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mysql
```

> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

## Introduction

This chart bootstraps a [MySQL](https://github.com/bitnami/containers/tree/main/bitnami/mysql) replication cluster deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

Bitnami charts can be used with [Kubeapps](https://kubeapps.dev/) for deployment and management of Helm Charts in clusters.

Looking to use MySQL in production? Try [VMware Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.
Looking to use MySQL in production? Try [VMware Tanzu Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.

## Prerequisites

@@ -33,9 +35,11 @@ Looking to use MySQL in production? Try [VMware Application Catalog](https://bit
To install the chart with the release name `my-release`:

```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/mysql
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mysql
```

> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

These commands deploy MySQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`
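A quick way to confirm the release works is to read back the generated root password and open a client session. This is a hedged sketch rather than part of the chart docs: the secret name `my-release-mysql` assumes the chart's default fullname for a release called `my-release`, and the `mysql-root-password` key is the one documented for `auth.existingSecret` in this chart.

```console
# Read the auto-generated root password (assumes the default secret name)
MYSQL_ROOT_PASSWORD=$(kubectl get secret my-release-mysql \
  -o jsonpath="{.data.mysql-root-password}" | base64 -d)

# Start a throwaway client pod and connect to the primary service
kubectl run my-release-mysql-client --rm -it --restart=Never \
  --image=docker.io/bitnami/mysql:8.0.34-debian-11-r75 \
  -- mysql -h my-release-mysql -uroot -p"$MYSQL_ROOT_PASSWORD"
```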
@ -79,30 +83,30 @@ The command removes all the Kubernetes components associated with the chart and
|
|||
|
||||
### MySQL common parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------- |
|
||||
| `image.registry` | MySQL image registry | `docker.io` |
|
||||
| `image.repository` | MySQL image repository | `bitnami/mysql` |
|
||||
| `image.tag` | MySQL image tag (immutable tags are recommended) | `8.0.34-debian-11-r75` |
|
||||
| `image.digest` | MySQL image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `image.pullPolicy` | MySQL image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `image.debug` | Specify if debug logs should be enabled | `false` |
|
||||
| `architecture` | MySQL architecture (`standalone` or `replication`) | `standalone` |
|
||||
| `auth.rootPassword` | Password for the `root` user. Ignored if existing secret is provided | `""` |
|
||||
| `auth.createDatabase` | Whether to create the .Values.auth.database or not | `true` |
|
||||
| `auth.database` | Name for a custom database to create | `my_database` |
|
||||
| `auth.username` | Name for a custom user to create | `""` |
|
||||
| `auth.password` | Password for the new user. Ignored if existing secret is provided | `""` |
|
||||
| `auth.replicationUser` | MySQL replication user | `replicator` |
|
||||
| `auth.replicationPassword` | MySQL replication user password. Ignored if existing secret is provided | `""` |
|
||||
| `auth.existingSecret` | Use existing secret for password details. The secret has to contain the keys `mysql-root-password`, `mysql-replication-password` and `mysql-password` | `""` |
|
||||
| `auth.usePasswordFiles` | Mount credentials as files instead of using an environment variable | `false` |
|
||||
| `auth.customPasswordFiles` | Use custom password files when `auth.usePasswordFiles` is set to `true`. Define path for keys `root` and `user`, also define `replicator` if `architecture` is set to `replication` | `{}` |
|
||||
| `initdbScripts` | Dictionary of initdb scripts | `{}` |
|
||||
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`) | `""` |
|
||||
| `startdbScripts` | Dictionary of startdb scripts | `{}` |
|
||||
| `startdbScriptsConfigMap` | ConfigMap with the startdb scripts (Note: Overrides `startdbScripts`) | `""` |
|
||||
| Name | Description | Value |
|
||||
| ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- |
|
||||
| `image.registry` | MySQL image registry | `REGISTRY_NAME` |
|
||||
| `image.repository` | MySQL image repository | `REPOSITORY_NAME/mysql` |
|
||||
| `image.digest` | MySQL image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `image.pullPolicy` | MySQL image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `image.debug` | Specify if debug logs should be enabled | `false` |
|
||||
| `architecture` | MySQL architecture (`standalone` or `replication`) | `standalone` |
|
||||
| `auth.rootPassword` | Password for the `root` user. Ignored if existing secret is provided | `""` |
|
||||
| `auth.createDatabase` | Whether to create the .Values.auth.database or not | `true` |
|
||||
| `auth.database` | Name for a custom database to create | `my_database` |
|
||||
| `auth.username` | Name for a custom user to create | `""` |
|
||||
| `auth.password` | Password for the new user. Ignored if existing secret is provided | `""` |
|
||||
| `auth.replicationUser` | MySQL replication user | `replicator` |
|
||||
| `auth.replicationPassword` | MySQL replication user password. Ignored if existing secret is provided | `""` |
|
||||
| `auth.existingSecret` | Use existing secret for password details. The secret has to contain the keys `mysql-root-password`, `mysql-replication-password` and `mysql-password` | `""` |
|
||||
| `auth.usePasswordFiles` | Mount credentials as files instead of using an environment variable | `false` |
|
||||
| `auth.customPasswordFiles` | Use custom password files when `auth.usePasswordFiles` is set to `true`. Define path for keys `root` and `user`, also define `replicator` if `architecture` is set to `replication` | `{}` |
|
||||
| `auth.defaultAuthenticationPlugin` | Sets the default authentication plugin, by default it will use `mysql_native_password` | `""` |
|
||||
| `initdbScripts` | Dictionary of initdb scripts | `{}` |
|
||||
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`) | `""` |
|
||||
| `startdbScripts` | Dictionary of startdb scripts | `{}` |
|
||||
| `startdbScriptsConfigMap` | ConfigMap with the startdb scripts (Note: Overrides `startdbScripts`) | `""` |
|
||||
|
||||
### MySQL Primary parameters
|
||||
|
||||
|
@ -304,66 +308,64 @@ The command removes all the Kubernetes components associated with the chart and
|
|||
|
||||
### Volume Permissions parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
|
||||
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup` | `false` |
|
||||
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
|
||||
| `volumePermissions.image.repository` | Init container volume-permissions image repository | `bitnami/os-shell` |
|
||||
| `volumePermissions.image.tag` | Init container volume-permissions image tag (immutable tags are recommended) | `11-debian-11-r90` |
|
||||
| `volumePermissions.image.digest` | Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
|
||||
| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `volumePermissions.resources` | Init container volume-permissions resources | `{}` |
|
||||
| Name | Description | Value |
|
||||
| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
|
||||
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup` | `false` |
|
||||
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `REGISTRY_NAME` |
|
||||
| `volumePermissions.image.repository` | Init container volume-permissions image repository | `REPOSITORY_NAME/os-shell` |
|
||||
| `volumePermissions.image.digest` | Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
|
||||
| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `volumePermissions.resources` | Init container volume-permissions resources | `{}` |
|
||||
|
||||
### Metrics parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------- |
|
||||
| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
|
||||
| `metrics.image.registry` | Exporter image registry | `docker.io` |
|
||||
| `metrics.image.repository` | Exporter image repository | `bitnami/mysqld-exporter` |
|
||||
| `metrics.image.tag` | Exporter image tag (immutable tags are recommended) | `0.15.0-debian-11-r70` |
|
||||
| `metrics.image.digest` | Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `metrics.image.pullPolicy` | Exporter image pull policy | `IfNotPresent` |
|
||||
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `metrics.containerSecurityContext.enabled` | MySQL metrics container securityContext | `true` |
|
||||
| `metrics.containerSecurityContext.runAsUser` | User ID for the MySQL metrics container | `1001` |
|
||||
| `metrics.containerSecurityContext.runAsNonRoot` | Set MySQL metrics container's Security Context runAsNonRoot | `true` |
|
||||
| `metrics.service.type` | Kubernetes service type for MySQL Prometheus Exporter | `ClusterIP` |
|
||||
| `metrics.service.clusterIP` | Kubernetes service clusterIP for MySQL Prometheus Exporter | `""` |
|
||||
| `metrics.service.port` | MySQL Prometheus Exporter service port | `9104` |
|
||||
| `metrics.service.annotations` | Prometheus exporter service annotations | `{}` |
|
||||
| `metrics.extraArgs.primary` | Extra args to be passed to mysqld_exporter on Primary pods | `[]` |
|
||||
| `metrics.extraArgs.secondary` | Extra args to be passed to mysqld_exporter on Secondary pods | `[]` |
|
||||
| `metrics.resources.limits` | The resources limits for MySQL prometheus exporter containers | `{}` |
|
||||
| `metrics.resources.requests` | The requested resources for MySQL prometheus exporter containers | `{}` |
|
||||
| `metrics.livenessProbe.enabled` | Enable livenessProbe | `true` |
|
||||
| `metrics.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `120` |
|
||||
| `metrics.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
|
||||
| `metrics.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `1` |
|
||||
| `metrics.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `3` |
|
||||
| `metrics.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
|
||||
| `metrics.readinessProbe.enabled` | Enable readinessProbe | `true` |
|
||||
| `metrics.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` |
|
||||
| `metrics.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
|
||||
| `metrics.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
|
||||
| `metrics.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `3` |
|
||||
| `metrics.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
|
||||
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor Resource for scraping metrics using PrometheusOperator | `false` |
|
||||
| `metrics.serviceMonitor.namespace` | Specify the namespace in which the serviceMonitor resource will be created | `""` |
|
||||
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in prometheus. | `""` |
|
||||
| `metrics.serviceMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` |
|
||||
| `metrics.serviceMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `""` |
|
||||
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
|
||||
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
|
||||
| `metrics.serviceMonitor.selector` | ServiceMonitor selector labels | `{}` |
|
||||
| `metrics.serviceMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
|
||||
| `metrics.serviceMonitor.labels` | Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with | `{}` |
|
||||
| `metrics.serviceMonitor.annotations` | ServiceMonitor annotations | `{}` |
|
||||
| `metrics.prometheusRule.enabled` | Creates a Prometheus Operator prometheusRule (also requires `metrics.enabled` to be `true` and `metrics.prometheusRule.rules`) | `false` |
|
||||
| `metrics.prometheusRule.namespace` | Namespace for the prometheusRule Resource (defaults to the Release Namespace) | `""` |
|
||||
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRule will be discovered by Prometheus | `{}` |
|
||||
| `metrics.prometheusRule.rules` | Prometheus Rule definitions | `[]` |
|
||||
| Name | Description | Value |
|
||||
| ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | --------------------------------- |
|
||||
| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
|
||||
| `metrics.image.registry` | Exporter image registry | `REGISTRY_NAME` |
|
||||
| `metrics.image.repository` | Exporter image repository | `REPOSITORY_NAME/mysqld-exporter` |
|
||||
| `metrics.image.digest` | Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `metrics.image.pullPolicy` | Exporter image pull policy | `IfNotPresent` |
|
||||
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `metrics.containerSecurityContext.enabled` | MySQL metrics container securityContext | `true` |
|
||||
| `metrics.containerSecurityContext.runAsUser` | User ID for the MySQL metrics container | `1001` |
|
||||
| `metrics.containerSecurityContext.runAsNonRoot` | Set MySQL metrics container's Security Context runAsNonRoot | `true` |
|
||||
| `metrics.service.type` | Kubernetes service type for MySQL Prometheus Exporter | `ClusterIP` |
|
||||
| `metrics.service.clusterIP` | Kubernetes service clusterIP for MySQL Prometheus Exporter | `""` |
|
||||
| `metrics.service.port` | MySQL Prometheus Exporter service port | `9104` |
|
||||
| `metrics.service.annotations` | Prometheus exporter service annotations | `{}` |
|
||||
| `metrics.extraArgs.primary` | Extra args to be passed to mysqld_exporter on Primary pods | `[]` |
|
||||
| `metrics.extraArgs.secondary` | Extra args to be passed to mysqld_exporter on Secondary pods | `[]` |
|
||||
| `metrics.resources.limits` | The resources limits for MySQL prometheus exporter containers | `{}` |
|
||||
| `metrics.resources.requests` | The requested resources for MySQL prometheus exporter containers | `{}` |
|
||||
| `metrics.livenessProbe.enabled` | Enable livenessProbe | `true` |
|
||||
| `metrics.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `120` |
|
||||
| `metrics.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
|
||||
| `metrics.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `1` |
|
||||
| `metrics.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `3` |
|
||||
| `metrics.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
|
||||
| `metrics.readinessProbe.enabled` | Enable readinessProbe | `true` |
|
||||
| `metrics.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` |
|
||||
| `metrics.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
|
||||
| `metrics.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
|
||||
| `metrics.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `3` |
|
||||
| `metrics.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
|
||||
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor Resource for scraping metrics using PrometheusOperator | `false` |
|
||||
| `metrics.serviceMonitor.namespace` | Specify the namespace in which the serviceMonitor resource will be created | `""` |
|
||||
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in prometheus. | `""` |
|
||||
| `metrics.serviceMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` |
|
||||
| `metrics.serviceMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `""` |
|
||||
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
|
||||
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
|
||||
| `metrics.serviceMonitor.selector` | ServiceMonitor selector labels | `{}` |
|
||||
| `metrics.serviceMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
|
||||
| `metrics.serviceMonitor.labels` | Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with | `{}` |
|
||||
| `metrics.serviceMonitor.annotations` | ServiceMonitor annotations | `{}` |
|
||||
| `metrics.prometheusRule.enabled` | Creates a Prometheus Operator prometheusRule (also requires `metrics.enabled` to be `true` and `metrics.prometheusRule.rules`) | `false` |
|
||||
| `metrics.prometheusRule.namespace` | Namespace for the prometheusRule Resource (defaults to the Release Namespace) | `""` |
|
||||
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRule will be discovered by Prometheus | `{}` |
|
||||
| `metrics.prometheusRule.rules` | Prometheus Rule definitions | `[]` |
|
||||
|
||||
The above parameters map to the env variables defined in [bitnami/mysql](https://github.com/bitnami/containers/tree/main/bitnami/mysql). For more information please refer to the [bitnami/mysql](https://github.com/bitnami/containers/tree/main/bitnami/mysql) image documentation.
|
||||
|
||||
|
@@ -372,9 +374,11 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```console
helm install my-release \
  --set auth.rootPassword=secretpassword,auth.database=app_database \
  oci://registry-1.docker.io/bitnamicharts/mysql
  oci://REGISTRY_NAME/REPOSITORY_NAME/mysql
```

> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

The above command sets the MySQL `root` account password to `secretpassword`. Additionally it creates a database named `app_database`.

> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.
@@ -382,9 +386,10 @@ The above command sets the MySQL `root` account password to `secretpassword`. Ad
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```console
helm install my-release -f values.yaml oci://registry-1.docker.io/bitnamicharts/mysql
helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/mysql
```

> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
> **Tip**: You can use the default [values.yaml](values.yaml)

## Configuration and installation details
@@ -473,9 +478,11 @@ Find more information about how to deal with common errors related to Bitnami's
It's necessary to set the `auth.rootPassword` parameter when upgrading for readiness/liveness probes to work properly. When you install this chart for the first time, some notes will be displayed providing the credentials you must use under the 'Administrator credentials' section. Please note down the password and run the command below to upgrade your chart:

```console
helm upgrade my-release oci://registry-1.docker.io/bitnamicharts/mysql --set auth.rootPassword=[ROOT_PASSWORD]
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mysql --set auth.rootPassword=[ROOT_PASSWORD]
```

> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

| Note: you need to substitute the placeholder _[ROOT_PASSWORD]_ with the value obtained in the installation notes.

### To 9.0.0
@@ -516,9 +523,11 @@ Consequences:
- Reuse the PVC used to hold the master data on your previous release. To do so, use the `primary.persistence.existingClaim` parameter. The following example assumes that the release name is `mysql`:

```console
helm install mysql oci://registry-1.docker.io/bitnamicharts/mysql --set auth.rootPassword=[ROOT_PASSWORD] --set primary.persistence.existingClaim=[EXISTING_PVC]
helm install mysql oci://REGISTRY_NAME/REPOSITORY_NAME/mysql --set auth.rootPassword=[ROOT_PASSWORD] --set primary.persistence.existingClaim=[EXISTING_PVC]
```

> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

| Note: you need to substitute the placeholder _[EXISTING_PVC]_ with the name of the PVC used on your previous release, and _[ROOT_PASSWORD]_ with the root password used in your previous release.

### To 7.0.0
@@ -74,9 +74,9 @@ diagnosticMode:
## Bitnami MySQL image
## ref: https://hub.docker.com/r/bitnami/mysql/tags/
## @param image.registry MySQL image registry
## @param image.repository MySQL image repository
## @param image.tag MySQL image tag (immutable tags are recommended)
## @param image.registry [default: REGISTRY_NAME] MySQL image registry
## @param image.repository [default: REPOSITORY_NAME/mysql] MySQL image repository
## @skip image.tag MySQL image tag (immutable tags are recommended)
## @param image.digest MySQL image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
## @param image.pullPolicy MySQL image pull policy
## @param image.pullSecrets Specify docker-registry secret names as an array
@@ -150,6 +150,10 @@ auth:
## replicator: /vault/secrets/mysql-replicator
##
customPasswordFiles: {}
## @param auth.defaultAuthenticationPlugin Sets the default authentication plugin, by default it will use `mysql_native_password`
## NOTE: `mysql_native_password` will be deprecated in future mysql version and it is used here for compatibility with previous version. If you want to use the new default authentication method set it to `caching_sha2_password`.
##
defaultAuthenticationPlugin: ""
## @param initdbScripts Dictionary of initdb scripts
## Specify dictionary of scripts to be run at first boot
## Example:
@@ -200,7 +204,7 @@ primary:
##
configuration: |-
[mysqld]
default_authentication_plugin=mysql_native_password
default_authentication_plugin={{- .Values.auth.defaultAuthPlugin | default "mysql_native_password" }}
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
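A minimal override exercising the new setting might look like the sketch below; it assumes the documented `auth.defaultAuthenticationPlugin` parameter is the value consumed by the primary configuration template above, and uses `caching_sha2_password`, the value the NOTE suggests when moving off `mysql_native_password`.

```yaml
# values-auth-plugin.yaml -- illustrative sketch, not shipped with the chart
auth:
  # Rendered into the [mysqld] default_authentication_plugin directive shown above
  defaultAuthenticationPlugin: caching_sha2_password
```

It would then be applied with something like `helm install my-release -f values-auth-plugin.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/mysql`.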
@ -1009,9 +1013,9 @@ volumePermissions:
|
|||
## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup`
|
||||
##
|
||||
enabled: false
|
||||
## @param volumePermissions.image.registry Init container volume-permissions image registry
|
||||
## @param volumePermissions.image.repository Init container volume-permissions image repository
|
||||
## @param volumePermissions.image.tag Init container volume-permissions image tag (immutable tags are recommended)
|
||||
## @param volumePermissions.image.registry [default: REGISTRY_NAME] Init container volume-permissions image registry
|
||||
## @param volumePermissions.image.repository [default: REPOSITORY_NAME/os-shell] Init container volume-permissions image repository
|
||||
## @skip volumePermissions.image.tag Init container volume-permissions image tag (immutable tags are recommended)
|
||||
## @param volumePermissions.image.digest Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
|
||||
## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy
|
||||
## @param volumePermissions.image.pullSecrets Specify docker-registry secret names as an array
|
||||
|
@ -1043,9 +1047,9 @@ metrics:
|
|||
## @param metrics.enabled Start a side-car prometheus exporter
|
||||
##
|
||||
enabled: false
|
||||
## @param metrics.image.registry Exporter image registry
|
||||
## @param metrics.image.repository Exporter image repository
|
||||
## @param metrics.image.tag Exporter image tag (immutable tags are recommended)
|
||||
## @param metrics.image.registry [default: REGISTRY_NAME] Exporter image registry
|
||||
## @param metrics.image.repository [default: REPOSITORY_NAME/mysqld-exporter] Exporter image repository
|
||||
## @skip metrics.image.tag Exporter image tag (immutable tags are recommended)
|
||||
## @param metrics.image.digest Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
|
||||
## @param metrics.image.pullPolicy Exporter image pull policy
|
||||
## @param metrics.image.pullSecrets Specify docker-registry secret names as an array
|
||||
|
|
|
@@ -1,6 +1,6 @@
dependencies:
- name: common
repository: oci://registry-1.docker.io/bitnamicharts
version: 2.13.2
digest: sha256:551ae9c020597fd0a1d62967d9899a3c57a12e92f49e7a3967b6a187efdcaead
generated: "2023-10-11T19:24:47.809562539+02:00"
version: 2.13.3
digest: sha256:9a971689db0c66ea95ac2e911c05014c2b96c6077c991131ff84f2982f88fb83
generated: "2023-10-22T15:11:15.989938898Z"
@@ -6,7 +6,7 @@ annotations:
category: Infrastructure
images: |
- name: spark
image: docker.io/bitnami/spark:3.5.0-debian-11-r0
image: docker.io/bitnami/spark:3.5.0-debian-11-r10
licenses: Apache-2.0
apiVersion: v2
appVersion: 3.5.0

@@ -30,4 +30,4 @@ maintainers:
name: spark
sources:
- https://github.com/bitnami/charts/tree/main/bitnami/spark
version: 8.0.1
version: 8.0.2
@@ -11,9 +11,11 @@ Trademarks: This software listing is packaged by Bitnami. The respective tradema
## TL;DR

```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/spark
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/spark
```

> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

## Introduction

This chart bootstraps an [Apache Spark](https://github.com/bitnami/containers/tree/main/bitnami/spark) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
@ -34,9 +36,11 @@ Looking to use Apache Spark in production? Try [VMware Application Catalog](http
|
|||
To install the chart with the release name `my-release`:
|
||||
|
||||
```console
|
||||
helm install my-release oci://registry-1.docker.io/bitnamicharts/spark
|
||||
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/spark
|
||||
```
|
||||
|
||||
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
|
||||
|
||||
These commands deploy Apache Spark on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
|
||||
|
||||
> **Tip**: List all releases using `helm list`
|
||||
|
@ -82,16 +86,15 @@ The command removes all the Kubernetes components associated with the chart and
|
|||
|
||||
### Spark parameters
|
||||
|
||||
| Name | Description | Value |
|
||||
| ------------------- | ----------------------------------------------------------------------------------------------------- | -------------------- |
|
||||
| `image.registry` | Spark image registry | `docker.io` |
|
||||
| `image.repository` | Spark image repository | `bitnami/spark` |
|
||||
| `image.tag` | Spark image tag (immutable tags are recommended) | `3.5.0-debian-11-r0` |
|
||||
| `image.digest` | Spark image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `image.pullPolicy` | Spark image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `image.debug` | Enable image debug mode | `false` |
|
||||
| `hostNetwork` | Enable HOST Network | `false` |
|
||||
| Name | Description | Value |
|
||||
| ------------------- | ----------------------------------------------------------------------------------------------------- | ----------------------- |
|
||||
| `image.registry` | Spark image registry | `REGISTRY_NAME` |
|
||||
| `image.repository` | Spark image repository | `REPOSITORY_NAME/spark` |
|
||||
| `image.digest` | Spark image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
|
||||
| `image.pullPolicy` | Spark image pull policy | `IfNotPresent` |
|
||||
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
|
||||
| `image.debug` | Enable image debug mode | `false` |
|
||||
| `hostNetwork` | Enable HOST Network | `false` |
|
||||
|
||||
### Spark master parameters
|
||||
|
||||
|
@ -331,17 +334,20 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
|
|||
|
||||
```console
|
||||
helm install my-release \
|
||||
--set master.webPort=8081 oci://registry-1.docker.io/bitnamicharts/spark
|
||||
--set master.webPort=8081 oci://REGISTRY_NAME/REPOSITORY_NAME/spark
|
||||
```
|
||||
|
||||
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
|
||||
|
||||
The above command sets the spark master web port to `8081`.
|
||||
|
||||
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
|
||||
|
||||
```console
|
||||
helm install my-release -f values.yaml oci://registry-1.docker.io/bitnamicharts/spark
|
||||
helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/spark
|
||||
```
|
||||
|
||||
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
|
||||
> **Tip**: You can use the default [values.yaml](values.yaml)
|
||||
|
||||
## Configuration and installation details
|
||||
|
|
|
@@ -2,7 +2,7 @@ annotations:
category: Infrastructure
licenses: Apache-2.0
apiVersion: v2
appVersion: 2.13.2
appVersion: 2.13.3
description: A Library Helm Chart for grouping common logic between bitnami charts.
  This chart is not deployable by itself.
home: https://bitnami.com

@@ -20,4 +20,4 @@ name: common
sources:
- https://github.com/bitnami/charts
type: library
version: 2.13.2
version: 2.13.3

@@ -34,8 +34,8 @@ Looking to use our applications in production? Try [VMware Application Catalog](

## Prerequisites

- Kubernetes 1.19+
- Helm 3.2.0+
- Kubernetes 1.23+
- Helm 3.8.0+

## Parameters

@@ -184,7 +184,7 @@ Returns true if PodSecurityPolicy is supported
{{/*
Returns true if AdmissionConfiguration is supported
*/}}
{{- define "common.capabilities.admisionConfiguration.supported" -}}
{{- define "common.capabilities.admissionConfiguration.supported" -}}
{{- if semverCompare ">=1.23-0" (include "common.capabilities.kubeVersion" .) -}}
{{- true -}}
{{- end -}}

@@ -193,7 +193,7 @@ Returns true if AdmissionConfiguration is supported
{{/*
Return the appropriate apiVersion for AdmissionConfiguration.
*/}}
{{- define "common.capabilities.admisionConfiguration.apiVersion" -}}
{{- define "common.capabilities.admissionConfiguration.apiVersion" -}}
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "apiserver.config.k8s.io/v1alpha1" -}}
{{- else if semverCompare "<1.25-0" (include "common.capabilities.kubeVersion" .) -}}
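As a consumer-side illustration of the corrected helper names (a hypothetical template fragment, not taken from any chart in this update), a chart depending on `bitnami/common` 2.13.3 would call the renamed definitions like this:

```yaml
{{- /* Hypothetical fragment in a dependent chart's templates */}}
{{- if (include "common.capabilities.admissionConfiguration.supported" .) }}
apiVersion: {{ include "common.capabilities.admissionConfiguration.apiVersion" . }}
kind: AdmissionConfiguration
{{- end }}
```

Any template still including the old misspelled `common.capabilities.admisionConfiguration.*` names would fail to render once those definitions are gone.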
@ -84,9 +84,9 @@ diagnosticMode:
|
|||
|
||||
## Bitnami Spark image version
|
||||
## ref: https://hub.docker.com/r/bitnami/spark/tags/
|
||||
## @param image.registry Spark image registry
|
||||
## @param image.repository Spark image repository
|
||||
## @param image.tag Spark image tag (immutable tags are recommended)
|
||||
## @param image.registry [default: REGISTRY_NAME] Spark image registry
|
||||
## @param image.repository [default: REPOSITORY_NAME/spark] Spark image repository
|
||||
## @skip image.tag Spark image tag (immutable tags are recommended)
|
||||
## @param image.digest Spark image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
|
||||
## @param image.pullPolicy Spark image pull policy
|
||||
## @param image.pullSecrets Specify docker-registry secret names as an array
|
||||
|
@ -95,7 +95,7 @@ diagnosticMode:
|
|||
image:
|
||||
registry: docker.io
|
||||
repository: bitnami/spark
|
||||
tag: 3.5.0-debian-11-r0
|
||||
tag: 3.5.0-debian-11-r10
|
||||
digest: ""
|
||||
## Specify a imagePullPolicy
|
||||
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
|
||||
|
|
|
@@ -1,5 +1,9 @@
# Datadog changelog

## 3.40.3

* Default `Agent` and `Cluster-Agent` to `7.48.1` version.

## 3.40.2

* Gate `PodSecurityPolicy` RBAC for k8s versions which no longer support this deprecated API.
@@ -19,4 +19,4 @@ name: datadog
sources:
- https://app.datadoghq.com/account/settings#agent/kubernetes
- https://github.com/DataDog/datadog-agent
version: 3.40.2
version: 3.40.3
@@ -1,6 +1,6 @@
# Datadog

![Version: 3.40.2](https://img.shields.io/badge/Version-3.40.2-informational?style=flat-square) ![AppVersion: 7](https://img.shields.io/badge/AppVersion-7-informational?style=flat-square)
![Version: 3.40.3](https://img.shields.io/badge/Version-3.40.3-informational?style=flat-square) ![AppVersion: 7](https://img.shields.io/badge/AppVersion-7-informational?style=flat-square)

[Datadog](https://www.datadoghq.com/) is a hosted infrastructure monitoring platform. This chart adds the Datadog Agent to all nodes in your cluster via a DaemonSet. It also optionally depends on the [kube-state-metrics chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics). For more information about monitoring Kubernetes with Datadog, please refer to the [Datadog documentation website](https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/).

@ -450,7 +450,7 @@ helm install <RELEASE_NAME> \
|
|||
| agents.image.pullPolicy | string | `"IfNotPresent"` | Datadog Agent image pull policy |
|
||||
| agents.image.pullSecrets | list | `[]` | Datadog Agent repository pullSecret (ex: specify docker registry credentials) |
|
||||
| agents.image.repository | string | `nil` | Override default registry + image.name for Agent |
|
||||
| agents.image.tag | string | `"7.48.0"` | Define the Agent version to use |
|
||||
| agents.image.tag | string | `"7.48.1"` | Define the Agent version to use |
|
||||
| agents.image.tagSuffix | string | `""` | Suffix to append to Agent tag |
|
||||
| agents.localService.forceLocalServiceEnabled | bool | `false` | Force the creation of the internal traffic policy service to target the agent running on the local node. By default, the internal traffic service is created only on Kubernetes 1.22+ where the feature became beta and enabled by default. This option allows to force the creation of the internal traffic service on kubernetes 1.21 where the feature was alpha and required a feature gate to be explicitly enabled. |
|
||||
| agents.localService.overrideName | string | `""` | Name of the internal traffic service to target the agent running on the local node |
|
||||
|
@ -514,7 +514,7 @@ helm install <RELEASE_NAME> \
|
|||
| clusterAgent.image.pullPolicy | string | `"IfNotPresent"` | Cluster Agent image pullPolicy |
|
||||
| clusterAgent.image.pullSecrets | list | `[]` | Cluster Agent repository pullSecret (ex: specify docker registry credentials) |
|
||||
| clusterAgent.image.repository | string | `nil` | Override default registry + image.name for Cluster Agent |
|
||||
| clusterAgent.image.tag | string | `"7.48.0"` | Cluster Agent image tag to use |
|
||||
| clusterAgent.image.tag | string | `"7.48.1"` | Cluster Agent image tag to use |
|
||||
| clusterAgent.livenessProbe | object | Every 15s / 6 KO / 1 OK | Override default Cluster Agent liveness probe settings |
|
||||
| clusterAgent.metricsProvider.aggregator | string | `"avg"` | Define the aggregator the cluster agent will use to process the metrics. The options are (avg, min, max, sum) |
|
||||
| clusterAgent.metricsProvider.createReaderRbac | bool | `true` | Create `external-metrics-reader` RBAC automatically (to allow HPA to read data from Cluster Agent) |
|
||||
|
@ -564,7 +564,7 @@ helm install <RELEASE_NAME> \
|
|||
| clusterChecksRunner.image.pullPolicy | string | `"IfNotPresent"` | Datadog Agent image pull policy |
|
||||
| clusterChecksRunner.image.pullSecrets | list | `[]` | Datadog Agent repository pullSecret (ex: specify docker registry credentials) |
|
||||
| clusterChecksRunner.image.repository | string | `nil` | Override default registry + image.name for Cluster Check Runners |
|
||||
| clusterChecksRunner.image.tag | string | `"7.48.0"` | Define the Agent version to use |
|
||||
| clusterChecksRunner.image.tag | string | `"7.48.1"` | Define the Agent version to use |
|
||||
| clusterChecksRunner.image.tagSuffix | string | `""` | Suffix to append to Agent tag |
|
||||
| clusterChecksRunner.livenessProbe | object | Every 15s / 6 KO / 1 OK | Override default agent liveness probe settings |
|
||||
| clusterChecksRunner.networkPolicy.create | bool | `false` | If true, create a NetworkPolicy for the cluster checks runners. DEPRECATED. Use datadog.networkPolicy.create instead |
|
||||
|
|
|
@@ -841,7 +841,7 @@ clusterAgent:
name: cluster-agent

# clusterAgent.image.tag -- Cluster Agent image tag to use
tag: 7.48.0
tag: 7.48.1

# clusterAgent.image.digest -- Cluster Agent image digest to use, takes precedence over tag if specified
digest: ""

@@ -1249,7 +1249,7 @@ agents:
name: agent

# agents.image.tag -- Define the Agent version to use
tag: 7.48.0
tag: 7.48.1

# agents.image.digest -- Define Agent image digest to use, takes precedence over tag if specified
digest: ""

@@ -1717,7 +1717,7 @@ clusterChecksRunner:
name: agent

# clusterChecksRunner.image.tag -- Define the Agent version to use
tag: 7.48.0
tag: 7.48.1

# clusterChecksRunner.image.digest -- Define Agent image digest to use, takes precedence over tag if specified
digest: ""
@@ -4,7 +4,7 @@ annotations:
catalog.cattle.io/kube-version: '>=1.19.0-0'
catalog.cattle.io/release-name: dynatrace-operator
apiVersion: v2
appVersion: 0.14.0
appVersion: 0.14.1
description: The Dynatrace Operator Helm chart for Kubernetes and OpenShift
home: https://www.dynatrace.com/
icon: https://assets.dynatrace.com/global/resources/Signet_Logo_RGB_CP_512x512px.png

@@ -20,4 +20,4 @@ name: dynatrace-operator
sources:
- https://github.com/Dynatrace/dynatrace-operator
type: application
version: 0.14.0
version: 0.14.1
@@ -4,7 +4,7 @@ annotations:
catalog.cattle.io/kube-version: '>= 1.19.0-0'
catalog.cattle.io/release-name: external-secrets
apiVersion: v2
appVersion: v0.9.6
appVersion: v0.9.7
description: External secret management for Kubernetes
home: https://github.com/external-secrets/external-secrets
icon: https://raw.githubusercontent.com/external-secrets/external-secrets/main/assets/eso-logo-large.png

@@ -17,4 +17,4 @@ maintainers:
name: mcavoyk
name: external-secrets
type: application
version: 0.9.6
version: 0.9.7
@@ -4,7 +4,7 @@

[//]: # (README.md generated by gotmpl. DO NOT EDIT.)

![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.9.6](https://img.shields.io/badge/Version-0.9.6-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.9.7](https://img.shields.io/badge/Version-0.9.7-informational?style=flat-square)

External secret management for Kubernetes

@ -7,8 +7,8 @@ should match snapshot of default values:
|
|||
app.kubernetes.io/instance: RELEASE-NAME
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/name: external-secrets-cert-controller
|
||||
app.kubernetes.io/version: v0.9.6
|
||||
helm.sh/chart: external-secrets-0.9.6
|
||||
app.kubernetes.io/version: v0.9.7
|
||||
helm.sh/chart: external-secrets-0.9.7
|
||||
name: RELEASE-NAME-external-secrets-cert-controller
|
||||
namespace: NAMESPACE
|
||||
spec:
|
||||
|
@ -24,8 +24,8 @@ should match snapshot of default values:
|
|||
app.kubernetes.io/instance: RELEASE-NAME
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/name: external-secrets-cert-controller
|
||||
app.kubernetes.io/version: v0.9.6
|
||||
helm.sh/chart: external-secrets-0.9.6
|
||||
app.kubernetes.io/version: v0.9.7
|
||||
helm.sh/chart: external-secrets-0.9.7
|
||||
spec:
|
||||
automountServiceAccountToken: true
|
||||
containers:
|
||||
|
@ -38,7 +38,7 @@ should match snapshot of default values:
|
|||
- --secret-namespace=NAMESPACE
|
||||
- --metrics-addr=:8080
|
||||
- --healthz-addr=:8081
|
||||
image: ghcr.io/external-secrets/external-secrets:v0.9.6
|
||||
image: ghcr.io/external-secrets/external-secrets:v0.9.7
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: cert-controller
|
||||
ports:
|
||||
|
|
|
@ -7,8 +7,8 @@ should match snapshot of default values:
|
|||
app.kubernetes.io/instance: RELEASE-NAME
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/name: external-secrets
|
||||
app.kubernetes.io/version: v0.9.6
|
||||
helm.sh/chart: external-secrets-0.9.6
|
||||
app.kubernetes.io/version: v0.9.7
|
||||
helm.sh/chart: external-secrets-0.9.7
|
||||
name: RELEASE-NAME-external-secrets
|
||||
namespace: NAMESPACE
|
||||
spec:
|
||||
|
@ -24,14 +24,14 @@ should match snapshot of default values:
|
|||
app.kubernetes.io/instance: RELEASE-NAME
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/name: external-secrets
|
||||
app.kubernetes.io/version: v0.9.6
|
||||
helm.sh/chart: external-secrets-0.9.6
|
||||
app.kubernetes.io/version: v0.9.7
|
||||
helm.sh/chart: external-secrets-0.9.7
|
||||
spec:
|
||||
automountServiceAccountToken: true
|
||||
containers:
|
||||
- args:
|
||||
- --concurrent=1
|
||||
image: ghcr.io/external-secrets/external-secrets:v0.9.6
|
||||
image: ghcr.io/external-secrets/external-secrets:v0.9.7
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: external-secrets
|
||||
ports:
|
||||
|
|
|
@ -7,8 +7,8 @@ should match snapshot of default values:
|
|||
app.kubernetes.io/instance: RELEASE-NAME
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/name: external-secrets-webhook
|
||||
app.kubernetes.io/version: v0.9.6
|
||||
helm.sh/chart: external-secrets-0.9.6
|
||||
app.kubernetes.io/version: v0.9.7
|
||||
helm.sh/chart: external-secrets-0.9.7
|
||||
name: RELEASE-NAME-external-secrets-webhook
|
||||
namespace: NAMESPACE
|
||||
spec:
|
||||
|
@ -24,8 +24,8 @@ should match snapshot of default values:
|
|||
app.kubernetes.io/instance: RELEASE-NAME
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/name: external-secrets-webhook
|
||||
app.kubernetes.io/version: v0.9.6
|
||||
helm.sh/chart: external-secrets-0.9.6
|
||||
app.kubernetes.io/version: v0.9.7
|
||||
helm.sh/chart: external-secrets-0.9.7
|
||||
spec:
|
||||
automountServiceAccountToken: true
|
||||
containers:
|
||||
|
@ -37,7 +37,7 @@ should match snapshot of default values:
|
|||
- --check-interval=5m
|
||||
- --metrics-addr=:8080
|
||||
- --healthz-addr=:8081
|
||||
image: ghcr.io/external-secrets/external-secrets:v0.9.6
|
||||
image: ghcr.io/external-secrets/external-secrets:v0.9.7
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: webhook
|
||||
ports:
|
||||
|
@ -81,8 +81,8 @@ should match snapshot of default values:
|
|||
app.kubernetes.io/instance: RELEASE-NAME
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/name: external-secrets-webhook
|
||||
app.kubernetes.io/version: v0.9.6
|
||||
app.kubernetes.io/version: v0.9.7
|
||||
external-secrets.io/component: webhook
|
||||
helm.sh/chart: external-secrets-0.9.6
|
||||
helm.sh/chart: external-secrets-0.9.7
|
||||
name: RELEASE-NAME-external-secrets-webhook
|
||||
namespace: NAMESPACE
|
||||
|
|
|
@@ -12,6 +12,10 @@ Use the following links to reference issues, PRs, and commits prior to v2.6.0.
The changelog until v1.5.7 was auto-generated based on git commits.
Those entries include a reference to the git commit to be able to get more details.

## 4.8.2

Add the ability to modify `retentionTimeout` and `waitForPodSec` default value in JCasC

## 4.8.1

Reintroduces changes from 4.7.0 (reverted in 4.7.1), with additional fixes:

@@ -43,7 +47,6 @@ Runs `config-reload` as an init container, in addition to the sidecar container,

Change jenkins-test image label to match the other jenkins images


## 4.6.5

Update Jenkins image and appVersion to jenkins lts release version 2.414.2
@@ -49,4 +49,4 @@ sources:
- https://github.com/jenkinsci/docker-inbound-agent
- https://github.com/maorfr/kube-tasks
- https://github.com/jenkinsci/configuration-as-code-plugin
version: 4.8.1
version: 4.8.2
@@ -297,6 +297,22 @@ agent:
```
This will change the configuration of the kubernetes "cloud" (as called by jenkins) that is created automatically as part of this helm chart.

### Change container cleanup timeout API
For tasks that use very large images, this timeout can be increased to avoid early termination of the task while the Kubernetes pod is still deploying.
```yaml
agent:
  retentionTimeout: "32"
```
This will change the configuration of the kubernetes "cloud" (as called by jenkins) that is created automatically as part of this helm chart.

### Change seconds to wait for pod to be running
This will change how long Jenkins will wait (seconds) for pod to be in running state.
```yaml
agent:
  waitForPodSec: "32"
```
This will change the configuration of the kubernetes "cloud" (as called by jenkins) that is created automatically as part of this helm chart.

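Taken together, a cluster that routinely pulls very large agent images might raise both knobs at once. The following is an illustrative values fragment, not part of the chart's documentation; the numbers are placeholders, and the defaults (5 minutes and 600 seconds) come from the chart's parameter table.

```yaml
agent:
  # Minutes before the Kubernetes plugin reaps an idle agent pod (chart default: 5)
  retentionTimeout: "32"
  # Seconds Jenkins waits for the agent pod to reach the Running state (chart default: 600)
  waitForPodSec: "900"
```

Applied with, for example, `helm upgrade my-jenkins jenkins/jenkins --reuse-values -f agent-timeouts.yaml` (release and repository names are assumptions).
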
### Mounting Volumes into Agent Pods

Your Jenkins Agents will run as pods, and it's possible to inject volumes where needed:
@@ -314,6 +314,8 @@ The following tables list the configurable parameters of the Jenkins chart and t
| `agent.kubernetesConnectTimeout` | The connection timeout in seconds for connections to Kubernetes API. Minimum value is 5. | 5 |
| `agent.kubernetesReadTimeout` | The read timeout in seconds for connections to Kubernetes API. Minimum value is 15. | 15 |
| `agent.maxRequestsPerHostStr` | The maximum concurrent connections to Kubernetes API | 32 |
| `agent.retentionTimeout` | Time in minutes after which the Kubernetes cloud plugin will clean up an idle worker that has not already terminated | 5 |
| `agent.waitForPodSec` | Seconds to wait for pod to be running | 600 |
| `agent.podLabels` | Custom Pod labels (an object with `label-key: label-value` pairs) | Not set |
| `agent.jnlpregistry` | Custom docker registry used for to get agent jnlp image | Not set |

@@ -165,6 +165,8 @@ jenkins:
{{- end }}
{{- end }}
maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }}
retentionTimeout: {{ .Values.agent.retentionTimeout | quote }}
waitForPodSec: {{ .Values.agent.waitForPodSec | quote }}
name: "{{ .Values.controller.cloudName }}"
namespace: "{{ template "jenkins.agent.namespace" . }}"
serverUrl: "{{ .Values.kubernetesURL }}"
@@ -630,6 +630,8 @@ agent:
kubernetesConnectTimeout: 5
kubernetesReadTimeout: 15
maxRequestsPerHostStr: "32"
retentionTimeout: 5
waitForPodSec: 600
namespace:
# private registry for agent image
jnlpregistry:
@@ -6,4 +6,4 @@ dependencies:
repository: https://charts.redpanda.com
version: 0.1.7
digest: sha256:2be209fa1660b3c8a030bb35e9e7fa25dcb81aa456ce7a73c2ab1ae6eebb3d04
generated: "2023-10-20T19:56:08.921760698Z"
generated: "2023-10-23T14:58:28.424100698Z"

@@ -37,4 +37,4 @@ name: redpanda
sources:
- https://github.com/redpanda-data/helm-charts
type: application
version: 5.6.25
version: 5.6.27
@@ -102,3 +102,21 @@ podAntiAffinity:
podAntiAffinity: {{ toYaml .Values.affinity.podAntiAffinity | nindent 2 }}
{{- end }}
{{- end -}}

{{/*
statefulset-checksum-annotation calculates a checksum that is used
as the value for the annotation, "checksum/conifg". When this value
changes, kube-controller-manager will roll the pods.

Append any additional dependencies that require the pods to restart
to the $dependencies list.
*/}}
{{- define "statefulset-checksum-annotation" -}}
{{- $dependencies := list -}}
{{- $dependencies = append $dependencies (include "configmap-content-no-seed" .) -}}
{{- if .Values.external.enabled -}}
{{- $dependencies = append $dependencies (dig "domain" "" .Values.external) -}}
{{- $dependencies = append $dependencies (dig "addresses" "" .Values.external) -}}
{{- end -}}
{{- toJson $dependencies | sha256sum -}}
{{- end -}}
@@ -53,7 +53,7 @@ spec:
labels: {{ (include "statefulset-pod-labels" .) | nindent 8 }}
redpanda.com/poddisruptionbudget: {{ template "redpanda.fullname" . }}
annotations:
checksum/config: {{ include "configmap-content-no-seed" . | sha256sum }}
config.redpanda.com/checksum: {{ include "statefulset-checksum-annotation" . }}
{{- with $.Values.statefulset.annotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
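One way to observe the effect of the new helper without deploying is to render the chart twice with different external settings and compare the checksum. This is a hedged sketch: it assumes the Redpanda chart repository was added under the `redpanda` alias and that external access is enabled in your values.

```console
# Render twice with different external config; the checksum annotation should differ,
# which is what causes the StatefulSet pods to roll.
helm template redpanda redpanda/redpanda --set external.enabled=true,external.domain=old.example.com \
  | grep 'config.redpanda.com/checksum'
helm template redpanda redpanda/redpanda --set external.enabled=true,external.domain=new.example.com \
  | grep 'config.redpanda.com/checksum'
```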
@ -206,6 +206,14 @@
|
|||
}
|
||||
}
|
||||
},
|
||||
"secretRef": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"caEnabled": {
|
||||
"type": "boolean"
|
||||
},
|
||||
|
@ -1294,9 +1302,6 @@
|
|||
"cert": {
|
||||
"type": "string"
|
||||
},
|
||||
"secretRef": {
|
||||
"type": "string"
|
||||
},
|
||||
"requireClientAuth": {
|
||||
"type": "boolean"
|
||||
}
|
||||
|
@ -1319,9 +1324,6 @@
|
|||
"cert": {
|
||||
"type": "string"
|
||||
},
|
||||
"secretRef": {
|
||||
"type": "string"
|
||||
},
|
||||
"requireClientAuth": {
|
||||
"type": "boolean"
|
||||
}
|
||||
|
@ -1417,9 +1419,6 @@
|
|||
"cert": {
|
||||
"type": "string"
|
||||
},
|
||||
"secretRef": {
|
||||
"type": "string"
|
||||
},
|
||||
"requireClientAuth": {
|
||||
"type": "boolean"
|
||||
}
|
||||
|
@ -1442,9 +1441,6 @@
|
|||
"cert": {
|
||||
"type": "string"
|
||||
},
|
||||
"secretRef": {
|
||||
"type": "string"
|
||||
},
|
||||
"requireClientAuth": {
|
||||
"type": "boolean"
|
||||
}
|
||||
|
|
File diff suppressed because it is too large
|
@ -1,16 +1,22 @@
|
|||
annotations:
|
||||
artifacthub.io/changes: "- \"chore(release): \U0001F680 publish v24.0.0\"\n- \"fix:
|
||||
http3 support broken when advertisedPort set\"\n- \"fix: tracing.opentelemetry.tls
|
||||
is optional for all values\"\n- \"chore(deps): update docker.io/helmunittest/helm-unittest
|
||||
docker tag to v3.12.2\"\n- \"chore(tests): \U0001F527 fix typo on tracing test\"\n-
|
||||
\"fix: \U0001F4A5 BREAKING CHANGE on healthchecks and traefik port\"\n- \"feat:
|
||||
multi namespace RBAC manifests\"\n"
|
||||
artifacthub.io/changes: "- \"feat: ✨ add healthcheck ingressRoute\"\n- \"feat: :boom:
|
||||
support http redirections and http challenges with cert-manager\"\n- \"feat: :boom:
|
||||
rework and allow update of namespace policy for Gateway\"\n- \"fix: disable ClusterRole
|
||||
and ClusterRoleBinding when not needed\"\n- \"fix: detect correctly v3 version
|
||||
when using sha in `image.tag`\"\n- \"fix: allow updateStrategy.rollingUpdate.maxUnavailable
|
||||
to be passed in as an int or string\"\n- \"fix: add missing separator in crds\"\n-
|
||||
\"fix: add Prometheus scraping annotations only if serviceMonitor not created\"\n-
|
||||
\"docs: Fix typo in the default values file\"\n- \"chore: remove label whitespace
|
||||
at TLSOption\"\n- \"chore(release): \U0001F680 publish v25.0.0\"\n- \"chore(deps):
|
||||
update traefik docker tag to v2.10.5\"\n- \"chore(deps): update docker.io/helmunittest/helm-unittest
|
||||
docker tag to v3.12.3\"\n- \"chore(ci): \U0001F527 \U0001F477 add e2e test when
|
||||
releasing\"\n"
|
||||
catalog.cattle.io/certified: partner
|
||||
catalog.cattle.io/display-name: Traefik Proxy
|
||||
catalog.cattle.io/kube-version: '>=1.16.0-0'
|
||||
catalog.cattle.io/release-name: traefik
|
||||
apiVersion: v2
|
||||
appVersion: v2.10.4
|
||||
appVersion: v2.10.5
|
||||
description: A Traefik based Kubernetes ingress controller
|
||||
home: https://traefik.io/
|
||||
icon: https://raw.githubusercontent.com/traefik/traefik/v2.3/docs/content/assets/img/traefik.logo.png
|
||||
|
@ -35,4 +41,4 @@ sources:
|
|||
- https://github.com/traefik/traefik
|
||||
- https://github.com/traefik/traefik-helm-chart
|
||||
type: application
|
||||
version: 24.0.0
|
||||
version: 25.0.0
|
||||
|
|
|
@ -0,0 +1,530 @@
|
|||
# Install as a DaemonSet
|
||||
|
||||
The default install uses a `Deployment`, but it's possible to use a `DaemonSet`:
|
||||
|
||||
```yaml
|
||||
deployment:
|
||||
kind: DaemonSet
|
||||
```
|
||||
|
||||
# Install in a dedicated namespace, with limited RBAC
|
||||
|
||||
The default install uses cluster-wide RBAC, but it can be restricted to the target namespace:
|
||||
|
||||
```yaml
|
||||
rbac:
|
||||
namespaced: true
|
||||
```
|
||||
|
||||
# Install with auto-scaling
|
||||
|
||||
When enabling [HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
|
||||
to adjust the replica count according to CPU usage, you'll need to set resources and nullify replicas:
|
||||
|
||||
```yaml
|
||||
deployment:
|
||||
replicas: null
|
||||
resources:
|
||||
requests:
|
||||
cpu: "100m"
|
||||
memory: "50Mi"
|
||||
limits:
|
||||
cpu: "300m"
|
||||
memory: "150Mi"
|
||||
autoscaling:
|
||||
enabled: true
|
||||
maxReplicas: 2
|
||||
metrics:
|
||||
- type: Resource
|
||||
resource:
|
||||
name: cpu
|
||||
target:
|
||||
type: Utilization
|
||||
averageUtilization: 80
|
||||
```
|
||||
|
||||
# Access Traefik dashboard without exposing it
|
||||
|
||||
This HelmChart does not expose the Traefik dashboard by default, for security reasons.
|
||||
Thus, there are multiple ways to expose the dashboard.
|
||||
For instance, dashboard access can be achieved through a port-forward:
|
||||
|
||||
```bash
|
||||
kubectl port-forward $(kubectl get pods --selector "app.kubernetes.io/name=traefik" --output=name) 9000:9000
|
||||
```
|
||||
|
||||
Accessible with the url: http://127.0.0.1:9000/dashboard/
|
||||
|
||||
# Publish and protect Traefik Dashboard with basic Auth
|
||||
|
||||
To expose the dashboard in a secure way as [recommended](https://doc.traefik.io/traefik/operations/dashboard/#dashboard-router-rule)
|
||||
in the documentation, it may be useful to override the router rule to specify
|
||||
a domain to match, or accept requests on the root path (/) in order to redirect
|
||||
them to /dashboard/.
|
||||
|
||||
```yaml
|
||||
# Create an IngressRoute for the dashboard
|
||||
ingressRoute:
|
||||
dashboard:
|
||||
enabled: true
|
||||
# Custom match rule with host domain
|
||||
matchRule: Host(`traefik-dashboard.example.com`)
|
||||
entryPoints: ["websecure"]
|
||||
# Add custom middlewares : authentication and redirection
|
||||
middlewares:
|
||||
- name: traefik-dashboard-auth
|
||||
|
||||
# Create the custom middlewares used by the IngressRoute dashboard (can also be created in another way).
|
||||
# /!\ Yes, you need to replace "changeme" password with a better one. /!\
|
||||
extraObjects:
|
||||
- apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: traefik-dashboard-auth-secret
|
||||
type: kubernetes.io/basic-auth
|
||||
stringData:
|
||||
username: admin
|
||||
password: changeme
|
||||
|
||||
- apiVersion: traefik.io/v1alpha1
|
||||
kind: Middleware
|
||||
metadata:
|
||||
name: traefik-dashboard-auth
|
||||
spec:
|
||||
basicAuth:
|
||||
secret: traefik-dashboard-auth-secret
|
||||
```
|
||||
|
||||
# Publish and protect Traefik Dashboard with an Ingress
|
||||
|
||||
To expose the dashboard without an IngressRoute, it's more complicated and less
|
||||
secure. You'll need to create an internal Service exposing the Traefik API on the
|
||||
special _traefik_ entrypoint.
|
||||
|
||||
You'll need to double check:
|
||||
1. The Service selector matches your setup.
|
||||
2. The Middleware annotation on the Ingress: _default_ should be replaced with Traefik's namespace.
|
||||
|
||||
```yaml
|
||||
ingressRoute:
|
||||
dashboard:
|
||||
enabled: false
|
||||
additionalArguments:
|
||||
- "--api.insecure=true"
|
||||
# Create the service, middleware and Ingress used to expose the dashboard (can also be created in another way).
|
||||
# /!\ Yes, you need to replace "changeme" password with a better one. /!\
|
||||
extraObjects:
|
||||
- apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: traefik-api
|
||||
spec:
|
||||
type: ClusterIP
|
||||
selector:
|
||||
app.kubernetes.io/name: traefik
|
||||
app.kubernetes.io/instance: traefik-default
|
||||
ports:
|
||||
- port: 8080
|
||||
name: traefik
|
||||
targetPort: 9000
|
||||
protocol: TCP
|
||||
|
||||
- apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: traefik-dashboard-auth-secret
|
||||
type: kubernetes.io/basic-auth
|
||||
stringData:
|
||||
username: admin
|
||||
password: changeme
|
||||
|
||||
- apiVersion: traefik.io/v1alpha1
|
||||
kind: Middleware
|
||||
metadata:
|
||||
name: traefik-dashboard-auth
|
||||
spec:
|
||||
basicAuth:
|
||||
secret: traefik-dashboard-auth-secret
|
||||
|
||||
- apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: traefik-dashboard
|
||||
annotations:
|
||||
traefik.ingress.kubernetes.io/router.entrypoints: websecure
|
||||
traefik.ingress.kubernetes.io/router.middlewares: default-traefik-dashboard-auth@kubernetescrd
|
||||
spec:
|
||||
rules:
|
||||
- host: traefik-dashboard.example.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: traefik-api
|
||||
port:
|
||||
name: traefik
|
||||
```
|
||||
|
||||
|
||||
# Install on AWS
|
||||
|
||||
It can use [native AWS support](https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support) on Kubernetes
|
||||
|
||||
```yaml
|
||||
service:
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-type: nlb
|
||||
```
|
||||
|
||||
Or if the [AWS LB controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#legacy-cloud-provider) is installed:
|
||||
```yaml
|
||||
service:
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
|
||||
```
|
||||
|
||||
# Install on GCP
|
||||
|
||||
A [regional IP with a Service](https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#use_a_service) can be used
|
||||
```yaml
|
||||
service:
|
||||
spec:
|
||||
loadBalancerIP: "1.2.3.4"
|
||||
```
|
||||
|
||||
Or a [global IP on Ingress](https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#use_an_ingress)
|
||||
```yaml
|
||||
service:
|
||||
type: NodePort
|
||||
extraObjects:
|
||||
- apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: traefik
|
||||
annotations:
|
||||
kubernetes.io/ingress.global-static-ip-name: "myGlobalIpName"
|
||||
spec:
|
||||
defaultBackend:
|
||||
service:
|
||||
name: traefik
|
||||
port:
|
||||
number: 80
|
||||
```
|
||||
|
||||
Or a [global IP on a Gateway](https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-gateways) with continuous HTTPS encryption.
|
||||
|
||||
```yaml
|
||||
ports:
|
||||
websecure:
|
||||
appProtocol: HTTPS # Hint for Google L7 load balancer
|
||||
service:
|
||||
type: ClusterIP
|
||||
extraObjects:
|
||||
- apiVersion: gateway.networking.k8s.io/v1beta1
|
||||
kind: Gateway
|
||||
metadata:
|
||||
name: traefik
|
||||
annotations:
|
||||
networking.gke.io/certmap: "myCertificateMap"
|
||||
spec:
|
||||
gatewayClassName: gke-l7-global-external-managed
|
||||
addresses:
|
||||
- type: NamedAddress
|
||||
value: "myGlobalIPName"
|
||||
listeners:
|
||||
- name: https
|
||||
protocol: HTTPS
|
||||
port: 443
|
||||
- apiVersion: gateway.networking.k8s.io/v1beta1
|
||||
kind: HTTPRoute
|
||||
metadata:
|
||||
name: traefik
|
||||
spec:
|
||||
parentRefs:
|
||||
- kind: Gateway
|
||||
name: traefik
|
||||
rules:
|
||||
- backendRefs:
|
||||
- name: traefik
|
||||
port: 443
|
||||
- apiVersion: networking.gke.io/v1
|
||||
kind: HealthCheckPolicy
|
||||
metadata:
|
||||
name: traefik
|
||||
spec:
|
||||
default:
|
||||
config:
|
||||
type: HTTP
|
||||
httpHealthCheck:
|
||||
port: 9000
|
||||
requestPath: /ping
|
||||
targetRef:
|
||||
group: ""
|
||||
kind: Service
|
||||
name: traefik
|
||||
```
|
||||
|
||||
# Install on Azure
|
||||
|
||||
A [static IP on a resource group](https://learn.microsoft.com/en-us/azure/aks/static-ip) can be used:
|
||||
|
||||
```yaml
|
||||
service:
|
||||
spec:
|
||||
loadBalancerIP: "1.2.3.4"
|
||||
annotations:
|
||||
service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
|
||||
```
|
||||
|
||||
# Use HTTP3
|
||||
|
||||
By default, it will use a load balancer with mixed protocols on the `websecure`
|
||||
entrypoint. Mixed-protocol load balancers are available since Kubernetes v1.20 and in beta as of v1.24.
|
||||
Availability may depend on your Kubernetes provider.
|
||||
|
||||
When using TCP and UDP with a single service, you may encounter [this issue](https://github.com/kubernetes/kubernetes/issues/47249#issuecomment-587960741) from Kubernetes.
|
||||
If you want to avoid this issue, you can set `ports.websecure.http3.advertisedPort`
|
||||
to a value other than 443:
|
||||
|
||||
```yaml
|
||||
ports:
|
||||
websecure:
|
||||
http3:
|
||||
enabled: true
|
||||
```
|
||||
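To also work around the mixed TCP/UDP Service issue mentioned above, the advertised HTTP/3 port can be overridden; a minimal sketch (8443 is just an example value):

```yaml
ports:
  websecure:
    http3:
      enabled: true
      advertisedPort: 8443   # advertise a port other than 443, as described above
```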
|
||||
# Use ProxyProtocol on Digital Ocean
|
||||
|
||||
PROXY protocol is a protocol for sending client connection information, such as origin IP addresses and port numbers, to the final backend server, rather than discarding it at the load balancer.
|
||||
|
||||
```yaml
|
||||
service:
|
||||
enabled: true
|
||||
type: LoadBalancer
|
||||
annotations:
|
||||
# This will tell DigitalOcean to enable the proxy protocol.
|
||||
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
|
||||
spec:
|
||||
# This is the default and should stay as cluster to keep the DO health checks working.
|
||||
externalTrafficPolicy: Cluster
|
||||
|
||||
additionalArguments:
|
||||
# Tell Traefik to only trust incoming headers from the Digital Ocean Load Balancers.
|
||||
- "--entryPoints.web.proxyProtocol.trustedIPs=127.0.0.1/32,10.120.0.0/16"
|
||||
- "--entryPoints.websecure.proxyProtocol.trustedIPs=127.0.0.1/32,10.120.0.0/16"
|
||||
# Also whitelist the source of headers to trust, the private IPs on the load balancers displayed on the networking page of DO.
|
||||
- "--entryPoints.web.forwardedHeaders.trustedIPs=127.0.0.1/32,10.120.0.0/16"
|
||||
- "--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,10.120.0.0/16"
|
||||
```
|
||||
|
||||
# Enable plugin storage
|
||||
|
||||
This chart follows common security practices: it runs as non-root with a read-only root filesystem.
|
||||
When enabling a plugin which needs storage, you have to add that storage to the deployment.
|
||||
|
||||
Here is a simple example with CrowdSec. You may want to replace it with your plugin, or see the complete CrowdSec example [here](https://github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin/blob/main/exemples/kubernetes/README.md).
|
||||
|
||||
```yaml
|
||||
deployment:
|
||||
additionalVolumes:
|
||||
- name: plugins
|
||||
additionalVolumeMounts:
|
||||
- name: plugins
|
||||
mountPath: /plugins-storage
|
||||
additionalArguments:
|
||||
- "--experimental.plugins.bouncer.moduleName=github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
|
||||
- "--experimental.plugins.bouncer.version=v1.1.9"
|
||||
```
|
||||
|
||||
# Use Traefik native Let's Encrypt integration, without cert-manager
|
||||
|
||||
In Traefik Proxy, ACME certificates are stored in a JSON file.
|
||||
|
||||
This file needs to have 0600 permissions, meaning only the owner of the file has read and write access to it.
|
||||
By default, Kubernetes recursively changes ownership and permissions for the content of each volume.
|
||||
|
||||
Therefore, an initContainer can be used to avoid permission issues on this sensitive file.
|
||||
See [#396](https://github.com/traefik/traefik-helm-chart/issues/396) for more details.
|
||||
|
||||
```yaml
|
||||
persistence:
|
||||
enabled: true
|
||||
storageClass: xxx
|
||||
certResolvers:
|
||||
letsencrypt:
|
||||
dnsChallenge:
|
||||
provider: cloudflare
|
||||
storage: /data/acme.json
|
||||
env:
|
||||
- name: CF_DNS_API_TOKEN
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: yyy
|
||||
key: zzz
|
||||
deployment:
|
||||
initContainers:
|
||||
- name: volume-permissions
|
||||
image: busybox:latest
|
||||
command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
|
||||
volumeMounts:
|
||||
- mountPath: /data
|
||||
name: data
|
||||
```
|
||||
|
||||
This example needs a CloudFlare token in a Kubernetes `Secret` and a working `StorageClass`.
|
||||
|
||||
See [the list of supported providers](https://doc.traefik.io/traefik/https/acme/#providers) for others.
|
||||
|
||||
# Provide default certificate with cert-manager and CloudFlare DNS
|
||||
|
||||
Setup:
|
||||
|
||||
* cert-manager installed in `cert-manager` namespace
|
||||
* A Cloudflare account with a DNS zone
|
||||
|
||||
**Step 1**: Create `Secret` and `Issuer` needed by `cert-manager` with your API Token.
|
||||
See [cert-manager documentation](https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/)
|
||||
for creating this token with the needed permissions:
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: cloudflare
|
||||
namespace: traefik
|
||||
type: Opaque
|
||||
stringData:
|
||||
api-token: XXX
|
||||
---
|
||||
apiVersion: cert-manager.io/v1
|
||||
kind: Issuer
|
||||
metadata:
|
||||
name: cloudflare
|
||||
namespace: traefik
|
||||
spec:
|
||||
acme:
|
||||
server: https://acme-v02.api.letsencrypt.org/directory
|
||||
email: email@example.com
|
||||
privateKeySecretRef:
|
||||
name: cloudflare-key
|
||||
solvers:
|
||||
- dns01:
|
||||
cloudflare:
|
||||
apiTokenSecretRef:
|
||||
name: cloudflare
|
||||
key: api-token
|
||||
```
|
||||
|
||||
**Step 2**: Create `Certificate` in traefik namespace
|
||||
|
||||
```yaml
|
||||
apiVersion: cert-manager.io/v1
|
||||
kind: Certificate
|
||||
metadata:
|
||||
name: wildcard-example-com
|
||||
namespace: traefik
|
||||
spec:
|
||||
secretName: wildcard-example-com-tls
|
||||
dnsNames:
|
||||
- "example.com"
|
||||
- "*.example.com"
|
||||
issuerRef:
|
||||
name: cloudflare
|
||||
kind: Issuer
|
||||
```
|
||||
|
||||
**Step 3**: Check that it's ready
|
||||
|
||||
```bash
|
||||
kubectl get certificate -n traefik
|
||||
```
|
||||
|
||||
If needed, the logs of the cert-manager pod can give you more information:
|
||||
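For instance, standard `kubectl` commands can be used to inspect the Certificate and the controller logs (the `deploy/cert-manager` name assumes a default cert-manager installation):

```bash
kubectl describe certificate wildcard-example-com -n traefik
kubectl logs -n cert-manager deploy/cert-manager
```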
|
||||
**Step 4**: Use it in the default TLS Store in the **values.yaml** file for this Helm Chart
|
||||
|
||||
```yaml
|
||||
tlsStore:
|
||||
default:
|
||||
defaultCertificate:
|
||||
secretName: wildcard-example-com-tls
|
||||
```
|
||||
|
||||
**Step 5**: Enjoy. All your `IngressRoute` objects now use this certificate by default.
|
||||
|
||||
They should use the websecure entrypoint, like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: traefik.io/v1alpha1
|
||||
kind: IngressRoute
|
||||
metadata:
|
||||
name: example-com-tls
|
||||
spec:
|
||||
entryPoints:
|
||||
- websecure
|
||||
routes:
|
||||
- match: Host(`test.example.com`)
|
||||
kind: Rule
|
||||
services:
|
||||
- name: XXXX
|
||||
port: 80
|
||||
```
|
||||
|
||||
# Use this Chart as a dependency of your own chart
|
||||
|
||||
|
||||
First, let's create a default Helm Chart, with Traefik as a dependency.
|
||||
```bash
|
||||
helm create foo
|
||||
cd foo
|
||||
echo "
|
||||
dependencies:
|
||||
- name: traefik
|
||||
version: "24.0.0"
|
||||
repository: "https://traefik.github.io/charts"
|
||||
" >> Chart.yaml
|
||||
```
|
||||
|
||||
Second, let's tune some values like enabling HPA:
|
||||
|
||||
```bash
|
||||
cat <<-EOF >> values.yaml
|
||||
traefik:
|
||||
autoscaling:
|
||||
enabled: true
|
||||
maxReplicas: 3
|
||||
EOF
|
||||
```
|
||||
|
||||
Third, one can see if it works as expected:
|
||||
```bash
|
||||
helm dependency update
|
||||
helm dependency build
|
||||
helm template . | grep -A 14 -B 3 Horizontal
|
||||
```
|
||||
|
||||
It should produce this output:
|
||||
|
||||
```yaml
|
||||
---
|
||||
# Source: foo/charts/traefik/templates/hpa.yaml
|
||||
apiVersion: autoscaling/v2
|
||||
kind: HorizontalPodAutoscaler
|
||||
metadata:
|
||||
name: release-name-traefik
|
||||
namespace: flux-system
|
||||
labels:
|
||||
app.kubernetes.io/name: traefik
|
||||
app.kubernetes.io/instance: release-name-flux-system
|
||||
helm.sh/chart: traefik-24.0.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
spec:
|
||||
scaleTargetRef:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: release-name-traefik
|
||||
maxReplicas: 3
|
||||
```
|
|
@ -24,7 +24,7 @@ Accordingly, the encouraged approach to fulfill your needs:
|
|||
1. Override the default Traefik configuration values ([yaml file or cli](https://helm.sh/docs/chart_template_guide/values_files/))
|
||||
2. Append your own configurations (`kubectl apply -f myconf.yaml`)
|
||||
|
||||
If needed, one may use [extraObjects](./traefik/tests/values/extra.yaml) or extend this HelmChart [as a Subchart](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)
|
||||
If needed, one may use [extraObjects](./traefik/tests/values/extra.yaml) or extend this HelmChart [as a Subchart](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/). In the [examples](EXAMPLES.md), one can see how to use this Chart as a dependency.
|
||||
|
||||
## Installing
|
||||
|
||||
|
@ -43,6 +43,15 @@ Due to changes in CRD version support, the following versions of the chart are u
|
|||
| Chart v10.0.0 and above | | [x] | [x] |
|
||||
| Chart v22.0.0 and above | | | [x] |
|
||||
|
||||
### CRDs Support of Traefik Proxy
|
||||
|
||||
Due to the change of the Traefik CRDs API Group from `containo.us` to `traefik.io`, this Chart installs both CRD API Groups on the following versions:
|
||||
|
||||
| | `containo.us` | `traefik.io` |
|
||||
|-------------------------|-----------------------------|------------------------|
|
||||
| Chart v22.0.0 and below | [x] | |
|
||||
| Chart v23.0.0 and above | [x] | [x] |
|
||||
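To see which of the two API Groups are present on a given cluster, a plain `kubectl` query is enough:

```bash
kubectl get crds | grep -E 'traefik\.(io|containo\.us)'
```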
|
||||
### Deploying Traefik
|
||||
|
||||
```bash
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
# traefik
|
||||
|
||||
![Version: 23.2.0](https://img.shields.io/badge/Version-23.2.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v2.10.4](https://img.shields.io/badge/AppVersion-v2.10.4-informational?style=flat-square)
|
||||
![Version: 25.0.0](https://img.shields.io/badge/Version-25.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v2.10.5](https://img.shields.io/badge/AppVersion-v2.10.5-informational?style=flat-square)
|
||||
|
||||
A Traefik based Kubernetes ingress controller
|
||||
|
||||
|
@ -54,8 +54,7 @@ Kubernetes: `>=1.16.0-0`
|
|||
| env | list | `[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}]` | Environment variables to be passed to Traefik's binary |
|
||||
| envFrom | list | `[]` | Environment variables to be passed to Traefik's binary from configMaps or secrets |
|
||||
| experimental.kubernetesGateway.enabled | bool | `false` | Enable traefik experimental GatewayClass CRD |
|
||||
| experimental.kubernetesGateway.gateway.enabled | bool | `true` | Enable traefik regular kubernetes gateway |
|
||||
| experimental.plugins | object | `{"enabled":false}` | Enable traefik version 3 enabled: false |
|
||||
| experimental.plugins | object | `{"enabled":false}` | Enable traefik version 3 enabled: false |
|
||||
| experimental.plugins.enabled | bool | `false` | Enable traefik experimental plugins |
|
||||
| extraObjects | list | `[]` | Extra objects to deploy (value evaluated as a template) In some cases, it can avoid the need for additional, extended or adhoc deployments. See #595 for more details and traefik/tests/values/extra.yaml for example. |
|
||||
| globalArguments | list | `["--global.checknewversion","--global.sendanonymoususage"]` | Global command arguments to be passed to all traefik's pods |
|
||||
|
@ -72,6 +71,13 @@ Kubernetes: `>=1.16.0-0`
|
|||
| ingressRoute.dashboard.matchRule | string | `"PathPrefix(`/dashboard`) || PathPrefix(`/api`)"` | The router match rule used for the dashboard ingressRoute |
|
||||
| ingressRoute.dashboard.middlewares | list | `[]` | Additional ingressRoute middlewares (e.g. for authentication) |
|
||||
| ingressRoute.dashboard.tls | object | `{}` | TLS options (e.g. secret containing certificate) |
|
||||
| ingressRoute.healthcheck.annotations | object | `{}` | Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class) |
|
||||
| ingressRoute.healthcheck.enabled | bool | `false` | Create an IngressRoute for the healthcheck probe |
|
||||
| ingressRoute.healthcheck.entryPoints | list | `["traefik"]` | Specify the allowed entrypoints to use for the healthcheck ingress route, (e.g. traefik, web, websecure). By default, it's using traefik entrypoint, which is not exposed. |
|
||||
| ingressRoute.healthcheck.labels | object | `{}` | Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels) |
|
||||
| ingressRoute.healthcheck.matchRule | string | `"PathPrefix(`/ping`)"` | The router match rule used for the healthcheck ingressRoute |
|
||||
| ingressRoute.healthcheck.middlewares | list | `[]` | Additional ingressRoute middlewares (e.g. for authentication) |
|
||||
| ingressRoute.healthcheck.tls | object | `{}` | TLS options (e.g. secret containing certificate) |
|
||||
| livenessProbe.failureThreshold | int | `3` | The number of consecutive failures allowed before considering the probe as failed. |
|
||||
| livenessProbe.initialDelaySeconds | int | `2` | The number of seconds to wait before starting the first probe. |
|
||||
| livenessProbe.periodSeconds | int | `10` | The number of seconds to wait between consecutive probes. |
|
||||
|
@ -128,7 +134,7 @@ Kubernetes: `>=1.16.0-0`
|
|||
| providers.kubernetesCRD.namespaces | list | `[]` | Array of namespaces to watch. If left empty, Traefik watches all namespaces. |
|
||||
| providers.kubernetesIngress.allowEmptyServices | bool | `false` | Allows to return 503 when there is no endpoints available |
|
||||
| providers.kubernetesIngress.allowExternalNameServices | bool | `false` | Allows to reference ExternalName services in Ingress |
|
||||
| providers.kubernetesIngress.enabled | bool | `true` | Load Kubernetes IngressRoute provider |
|
||||
| providers.kubernetesIngress.enabled | bool | `true` | Load Kubernetes Ingress provider |
|
||||
| providers.kubernetesIngress.namespaces | list | `[]` | Array of namespaces to watch. If left empty, Traefik watches all namespaces. |
|
||||
| providers.kubernetesIngress.publishedService.enabled | bool | `false` | |
|
||||
| rbac | object | `{"enabled":true,"namespaced":false}` | Whether Role Based Access Control objects like roles and rolebindings should be created |
|
||||
|
@ -154,7 +160,7 @@ Kubernetes: `>=1.16.0-0`
|
|||
| tlsOptions | object | `{}` | TLS Options are created as TLSOption CRDs https://doc.traefik.io/traefik/https/tls/#tls-options When using `labelSelector`, you'll need to set labels on tlsOption accordingly. Example: tlsOptions: default: labels: {} sniStrict: true preferServerCipherSuites: true customOptions: labels: {} curvePreferences: - CurveP521 - CurveP384 |
|
||||
| tlsStore | object | `{}` | TLS Store are created as TLSStore CRDs. This is useful if you want to set a default certificate https://doc.traefik.io/traefik/https/tls/#default-certificate Example: tlsStore: default: defaultCertificate: secretName: tls-cert |
|
||||
| tolerations | list | `[]` | Tolerations allow the scheduler to schedule pods with matching taints. |
|
||||
| topologySpreadConstraints | list | `[]` | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains. |
|
||||
| topologySpreadConstraints | list | `[]` | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains. |
|
||||
| tracing | object | `{}` | https://doc.traefik.io/traefik/observability/tracing/overview/ |
|
||||
| updateStrategy.rollingUpdate.maxSurge | int | `1` | |
|
||||
| updateStrategy.rollingUpdate.maxUnavailable | int | `0` | |
|
||||
|
@ -162,4 +168,4 @@ Kubernetes: `>=1.16.0-0`
|
|||
| volumes | list | `[]` | Add volumes to the traefik pod. The volume name will be passed to tpl. This can be used to mount a cert pair or a configmap that holds a config.toml file. After the volume has been mounted, add the configs into traefik by using the `additionalArguments` list below, eg: `additionalArguments: - "--providers.file.filename=/config/dynamic.toml" - "--ping" - "--ping.entrypoint=web"` |
|
||||
|
||||
----------------------------------------------
|
||||
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)
|
||||
Autogenerated from chart metadata using [helm-docs v1.11.3](https://github.com/norwoodj/helm-docs/releases/v1.11.3)
|
||||
|
|
|
@ -1,3 +1,5 @@
|
|||
|
||||
---
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
|
|
|
@ -124,3 +124,8 @@ Renders a complete tree, even values that contains template.
|
|||
{{- tpl (.value | toYaml) .context }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "imageVersion" -}}
|
||||
{{ (split "@" (default $.Chart.AppVersion $.Values.image.tag))._0 }}
|
||||
{{- end -}}
|
||||
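This new helper keeps the semver comparisons below working when `image.tag` is pinned by digest: everything after the first `@` is dropped. A hypothetical value illustrating the behaviour (the digest is made up):

```yaml
image:
  # "imageVersion" renders "v3.0.0-beta3", so the chart treats it as Traefik v3
  tag: "v3.0.0-beta3@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
```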
|
||||
|
|
|
@ -5,7 +5,7 @@
|
|||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.metrics }}
|
||||
{{- if .Values.metrics.prometheus }}
|
||||
{{- if and (.Values.metrics.prometheus) (not .Values.metrics.prometheus.serviceMonitor) }}
|
||||
prometheus.io/scrape: "true"
|
||||
prometheus.io/path: "/metrics"
|
||||
prometheus.io/port: {{ quote (index .Values.ports .Values.metrics.prometheus.entryPoint).port }}
|
||||
|
@ -142,7 +142,7 @@
|
|||
{{- if $config }}
|
||||
- "--entrypoints.{{$name}}.address=:{{ $config.port }}/{{ default "tcp" $config.protocol | lower }}"
|
||||
{{- with $config.asDefault }}
|
||||
{{- if semverCompare "<3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag) }}
|
||||
{{- if semverCompare "<3.0.0-0" (include "imageVersion" $) }}
|
||||
{{- fail "ERROR: Default entrypoints are only available on Traefik v3. Please set `image.tag` to `v3.x`." }}
|
||||
{{- end }}
|
||||
- "--entrypoints.{{$name}}.asDefault={{ . }}"
|
||||
|
@ -298,7 +298,7 @@
|
|||
{{- end }}
|
||||
|
||||
{{- with .Values.metrics.openTelemetry }}
|
||||
{{- if semverCompare "<3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag) }}
|
||||
{{- if semverCompare "<3.0.0-0" (include "imageVersion" $) }}
|
||||
{{- fail "ERROR: OpenTelemetry features are only available on Traefik v3. Please set `image.tag` to `v3.x`." }}
|
||||
{{- end }}
|
||||
- "--metrics.openTelemetry=true"
|
||||
|
@ -357,7 +357,7 @@
|
|||
{{- if .Values.tracing }}
|
||||
|
||||
{{- if .Values.tracing.openTelemetry }}
|
||||
{{- if semverCompare "<3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag) }}
|
||||
{{- if semverCompare "<3.0.0-0" (include "imageVersion" $) }}
|
||||
{{- fail "ERROR: OpenTelemetry features are only available on Traefik v3. Please set `image.tag` to `v3.x`." }}
|
||||
{{- end }}
|
||||
- "--tracing.openTelemetry=true"
|
||||
|
@ -563,9 +563,15 @@
|
|||
{{- range $entrypoint, $config := $.Values.ports }}
|
||||
{{- if $config }}
|
||||
{{- if $config.redirectTo }}
|
||||
{{- $toPort := index $.Values.ports $config.redirectTo }}
|
||||
{{- if eq (typeOf $config.redirectTo) "string" }}
|
||||
{{- fail "ERROR: Syntax of `ports.web.redirectTo` has changed to `ports.web.redirectTo.port`. Details in PR #934." }}
|
||||
{{- end }}
|
||||
{{- $toPort := index $.Values.ports $config.redirectTo.port }}
|
||||
- "--entrypoints.{{ $entrypoint }}.http.redirections.entryPoint.to=:{{ $toPort.exposedPort }}"
|
||||
- "--entrypoints.{{ $entrypoint }}.http.redirections.entryPoint.scheme=https"
|
||||
{{- if $config.redirectTo.priority }}
|
||||
- "--entrypoints.{{ $entrypoint }}.http.redirections.entryPoint.priority={{ $config.redirectTo.priority }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if $config.middlewares }}
|
||||
- "--entrypoints.{{ $entrypoint }}.http.middlewares={{ join "," $config.middlewares }}"
|
||||
|
@ -591,10 +597,10 @@
|
|||
{{- end }}
|
||||
{{- if $config.http3 }}
|
||||
{{- if $config.http3.enabled }}
|
||||
{{- if semverCompare "<3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag)}}
|
||||
{{- if semverCompare "<3.0.0-0" (include "imageVersion" $)}}
|
||||
- "--experimental.http3=true"
|
||||
{{- end }}
|
||||
{{- if semverCompare ">=2.6.0-0" (default $.Chart.AppVersion $.Values.image.tag)}}
|
||||
{{- if semverCompare ">=2.6.0-0" (include "imageVersion" $)}}
|
||||
- "--entrypoints.{{ $entrypoint }}.http3"
|
||||
{{- else }}
|
||||
- "--entrypoints.{{ $entrypoint }}.enableHTTP3=true"
|
||||
|
|
|
@ -9,9 +9,13 @@
|
|||
{{- if eq (default .Chart.AppVersion .Values.image.tag) "latest" }}
|
||||
{{- fail "\n\n ERROR: latest tag should not be used" }}
|
||||
{{- end }}
|
||||
{{- if eq (.Values.updateStrategy.type) "RollingUpdate" }}
|
||||
{{- if and (lt .Values.updateStrategy.rollingUpdate.maxUnavailable 1.0) (.Values.hostNetwork) }}
|
||||
{{- fail "maxUnavailable should be greater than 1 when using hostNetwork." }}
|
||||
{{- with .Values.updateStrategy }}
|
||||
{{- if eq (.type) "RollingUpdate" }}
|
||||
{{- if not (contains "%" (toString .rollingUpdate.maxUnavailable)) }}
|
||||
{{- if and ($.Values.hostNetwork) (lt (float64 .rollingUpdate.maxUnavailable) 1.0) }}
|
||||
{{- fail "maxUnavailable should be greater than 1 when using hostNetwork." }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
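The relaxed validation above accepts `maxUnavailable` either as a number or as a percentage string; with `hostNetwork: true` it still has to allow at least one pod to go down. A values sketch under that assumption:

```yaml
hostNetwork: true
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # or a percentage string such as "34%"
    maxSurge: 0         # avoid host-port conflicts while rolling
```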
|
||||
|
|
|
@ -1,5 +1,4 @@
|
|||
{{- if .Values.experimental.kubernetesGateway.enabled }}
|
||||
{{- if .Values.experimental.kubernetesGateway.gateway.enabled }}
|
||||
---
|
||||
apiVersion: gateway.networking.k8s.io/v1alpha2
|
||||
kind: Gateway
|
||||
|
@ -8,7 +7,7 @@ metadata:
|
|||
namespace: {{ default (include "traefik.namespace" .) .Values.experimental.kubernetesGateway.namespace }}
|
||||
labels:
|
||||
{{- include "traefik.labels" . | nindent 4 }}
|
||||
{{- with .Values.experimental.kubernetesGateway.gateway.annotations }}
|
||||
{{- with .Values.experimental.kubernetesGateway.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
|
@ -18,7 +17,11 @@ spec:
|
|||
- name: web
|
||||
port: {{ .Values.ports.web.port }}
|
||||
protocol: HTTP
|
||||
|
||||
{{- with .Values.experimental.kubernetesGateway.namespacePolicy }}
|
||||
allowedRoutes:
|
||||
namespaces:
|
||||
from: {{ . }}
|
||||
{{- end }}
|
||||
{{- if .Values.experimental.kubernetesGateway.certificate }}
|
||||
- name: websecure
|
||||
port: {{ $.Values.ports.websecure.port }}
|
||||
|
@ -29,5 +32,4 @@ spec:
|
|||
group: {{ .Values.experimental.kubernetesGateway.certificate.group }}
|
||||
kind: {{ .Values.experimental.kubernetesGateway.certificate.kind }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,36 @@
|
|||
{{- if .Values.ingressRoute.healthcheck.enabled -}}
|
||||
apiVersion: traefik.io/v1alpha1
|
||||
kind: IngressRoute
|
||||
metadata:
|
||||
name: {{ template "traefik.fullname" . }}-healthcheck
|
||||
namespace: {{ template "traefik.namespace" . }}
|
||||
annotations:
|
||||
{{- with .Values.ingressRoute.healthcheck.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "traefik.labels" . | nindent 4 }}
|
||||
{{- with .Values.ingressRoute.healthcheck.labels }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
entryPoints:
|
||||
{{- range .Values.ingressRoute.healthcheck.entryPoints }}
|
||||
- {{ . }}
|
||||
{{- end }}
|
||||
routes:
|
||||
- match: {{ .Values.ingressRoute.healthcheck.matchRule }}
|
||||
kind: Rule
|
||||
services:
|
||||
- name: ping@internal
|
||||
kind: TraefikService
|
||||
{{- with .Values.ingressRoute.healthcheck.middlewares }}
|
||||
middlewares:
|
||||
{{- toYaml . | nindent 6 }}
|
||||
{{- end -}}
|
||||
|
||||
{{- with .Values.ingressRoute.healthcheck.tls }}
|
||||
tls:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
|
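The new healthcheck `IngressRoute` template above is driven entirely by values; a minimal sketch exposing `ping@internal` on an already-exposed entrypoint (the host rule is only an example):

```yaml
ingressRoute:
  healthcheck:
    enabled: true
    entryPoints: ["web"]
    matchRule: Host(`status.example.com`) && PathPrefix(`/ping`)
```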
@ -1,5 +1,5 @@
|
|||
{{- if .Values.ingressClass.enabled -}}
|
||||
{{- if (semverCompare "<2.3.0" (.Chart.AppVersion)) -}}
|
||||
{{- if (semverCompare "<2.3.0" (include "imageVersion" $)) -}}
|
||||
{{- fail "ERROR: IngressClass cannot be used with Traefik < 2.3.0" -}}
|
||||
{{- end -}}
|
||||
{{- if semverCompare ">=1.19.0-0" .Capabilities.KubeVersion.Version -}}
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
{{- if .Values.rbac.enabled -}}
|
||||
{{- if and .Values.rbac.enabled (or .Values.providers.kubernetesIngress.enabled (not .Values.rbac.namespaced)) -}}
|
||||
---
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
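With the condition above, a fully namespaced install that only uses the CRD provider no longer creates a ClusterRole. A values sketch under that assumption:

```yaml
rbac:
  enabled: true
  namespaced: true
providers:
  kubernetesCRD:
    enabled: true
  kubernetesIngress:
    enabled: false   # the Ingress provider still requires cluster-wide access
```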
|
@ -45,7 +45,7 @@ rules:
|
|||
{{- if .Values.providers.kubernetesCRD.enabled }}
|
||||
- apiGroups:
|
||||
- traefik.io
|
||||
{{- if semverCompare "<3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag) }}
|
||||
{{- if semverCompare "<3.0.0-0" (include "imageVersion" $) }}
|
||||
- traefik.containo.us
|
||||
{{- end }}
|
||||
resources:
|
||||
|
@ -58,7 +58,7 @@ rules:
|
|||
- tlsstores
|
||||
- traefikservices
|
||||
- serverstransports
|
||||
{{- if semverCompare ">=3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag) }}
|
||||
{{- if semverCompare ">=3.0.0-0" (include "imageVersion" $) }}
|
||||
- serverstransporttcps
|
||||
{{- end }}
|
||||
verbs:
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
{{- if .Values.rbac.enabled -}}
|
||||
{{- if and .Values.rbac.enabled (or .Values.providers.kubernetesIngress.enabled (not .Values.rbac.namespaced)) -}}
|
||||
---
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
|
|
|
@ -44,7 +44,7 @@ rules:
|
|||
{{- if (and (has . $CRDNamespaces) $.Values.providers.kubernetesCRD.enabled) }}
|
||||
- apiGroups:
|
||||
- traefik.io
|
||||
{{- if semverCompare "<3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag) }}
|
||||
{{- if semverCompare "<3.0.0-0" (include "imageVersion" $) }}
|
||||
- traefik.containo.us
|
||||
{{- end }}
|
||||
resources:
|
||||
|
@ -57,7 +57,7 @@ rules:
|
|||
- tlsstores
|
||||
- traefikservices
|
||||
- serverstransports
|
||||
{{- if semverCompare ">=3.0.0-0" (default $.Chart.AppVersion $.Values.image.tag) }}
|
||||
{{- if semverCompare ">=3.0.0-0" (include "imageVersion" $) }}
|
||||
- serverstransporttcps
|
||||
{{- end }}
|
||||
verbs:
|
||||
|
|
|
@ -8,7 +8,7 @@ metadata:
|
|||
{{- include "traefik.labels" $ | nindent 4 }}
|
||||
{{- with $config.labels }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- with $config.alpnProtocols }}
|
||||
alpnProtocols:
|
||||
|
|
|
@ -45,60 +45,60 @@ deployment:
|
|||
podLabels: {}
|
||||
# -- Additional containers (e.g. for metric offloading sidecars)
|
||||
additionalContainers: []
|
||||
# https://docs.datadoghq.com/developers/dogstatsd/unix_socket/?tab=host
|
||||
# - name: socat-proxy
|
||||
# image: alpine/socat:1.0.5
|
||||
# args: ["-s", "-u", "udp-recv:8125", "unix-sendto:/socket/socket"]
|
||||
# volumeMounts:
|
||||
# - name: dsdsocket
|
||||
# mountPath: /socket
|
||||
# https://docs.datadoghq.com/developers/dogstatsd/unix_socket/?tab=host
|
||||
# - name: socat-proxy
|
||||
# image: alpine/socat:1.0.5
|
||||
# args: ["-s", "-u", "udp-recv:8125", "unix-sendto:/socket/socket"]
|
||||
# volumeMounts:
|
||||
# - name: dsdsocket
|
||||
# mountPath: /socket
|
||||
# -- Additional volumes available for use with initContainers and additionalContainers
|
||||
additionalVolumes: []
|
||||
# - name: dsdsocket
|
||||
# hostPath:
|
||||
# path: /var/run/statsd-exporter
|
||||
# - name: dsdsocket
|
||||
# hostPath:
|
||||
# path: /var/run/statsd-exporter
|
||||
# -- Additional initContainers (e.g. for setting file permission as shown below)
|
||||
initContainers: []
|
||||
# The "volume-permissions" init container is required if you run into permission issues.
|
||||
# Related issue: https://github.com/traefik/traefik-helm-chart/issues/396
|
||||
# - name: volume-permissions
|
||||
# image: busybox:latest
|
||||
# command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
|
||||
# securityContext:
|
||||
# runAsNonRoot: true
|
||||
# runAsGroup: 65532
|
||||
# runAsUser: 65532
|
||||
# volumeMounts:
|
||||
# - name: data
|
||||
# mountPath: /data
|
||||
# The "volume-permissions" init container is required if you run into permission issues.
|
||||
# Related issue: https://github.com/traefik/traefik-helm-chart/issues/396
|
||||
# - name: volume-permissions
|
||||
# image: busybox:latest
|
||||
# command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
|
||||
# securityContext:
|
||||
# runAsNonRoot: true
|
||||
# runAsGroup: 65532
|
||||
# runAsUser: 65532
|
||||
# volumeMounts:
|
||||
# - name: data
|
||||
# mountPath: /data
|
||||
# -- Use process namespace sharing
|
||||
shareProcessNamespace: false
|
||||
# -- Custom pod DNS policy. Apply if `hostNetwork: true`
|
||||
# dnsPolicy: ClusterFirstWithHostNet
|
||||
dnsConfig: {}
|
||||
# nameservers:
|
||||
# - 192.0.2.1 # this is an example
|
||||
# searches:
|
||||
# - ns1.svc.cluster-domain.example
|
||||
# - my.dns.search.suffix
|
||||
# options:
|
||||
# - name: ndots
|
||||
# value: "2"
|
||||
# - name: edns0
|
||||
# nameservers:
|
||||
# - 192.0.2.1 # this is an example
|
||||
# searches:
|
||||
# - ns1.svc.cluster-domain.example
|
||||
# - my.dns.search.suffix
|
||||
# options:
|
||||
# - name: ndots
|
||||
# value: "2"
|
||||
# - name: edns0
|
||||
# -- Additional imagePullSecrets
|
||||
imagePullSecrets: []
|
||||
# - name: myRegistryKeySecretName
|
||||
# - name: myRegistryKeySecretName
|
||||
# -- Pod lifecycle actions
|
||||
lifecycle: {}
|
||||
# preStop:
|
||||
# exec:
|
||||
# command: ["/bin/sh", "-c", "sleep 40"]
|
||||
# postStart:
|
||||
# httpGet:
|
||||
# path: /ping
|
||||
# port: 9000
|
||||
# host: localhost
|
||||
# scheme: HTTP
|
||||
# preStop:
|
||||
# exec:
|
||||
# command: ["/bin/sh", "-c", "sleep 40"]
|
||||
# postStart:
|
||||
# httpGet:
|
||||
# path: /ping
|
||||
# port: 9000
|
||||
# host: localhost
|
||||
# scheme: HTTP
|
||||
|
||||
# -- Pod disruption budget
|
||||
podDisruptionBudget:
|
||||
|
@ -116,9 +116,9 @@ ingressClass:
|
|||
|
||||
# Traefik experimental features
|
||||
experimental:
|
||||
#This value is no longer used, set the image.tag to a semver higher than 3.0, e.g. "v3.0.0-beta3"
|
||||
#v3:
|
||||
# -- Enable traefik version 3
|
||||
# This value is no longer used, set the image.tag to a semver higher than 3.0, e.g. "v3.0.0-beta3"
|
||||
# v3:
|
||||
# -- Enable traefik version 3
|
||||
# enabled: false
|
||||
plugins:
|
||||
# -- Enable traefik experimental plugins
|
||||
|
@ -126,9 +126,9 @@ experimental:
|
|||
kubernetesGateway:
|
||||
# -- Enable traefik experimental GatewayClass CRD
|
||||
enabled: false
|
||||
gateway:
|
||||
# -- Enable traefik regular kubernetes gateway
|
||||
enabled: true
|
||||
## Routes are restricted to namespace of the gateway by default.
|
||||
## https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.FromNamespaces
|
||||
# namespacePolicy: All
|
||||
# certificate:
|
||||
# group: "core"
|
||||
# kind: "Secret"
|
||||
|
@ -159,6 +159,22 @@ ingressRoute:
|
|||
middlewares: []
|
||||
# -- TLS options (e.g. secret containing certificate)
|
||||
tls: {}
|
||||
healthcheck:
|
||||
# -- Create an IngressRoute for the healthcheck probe
|
||||
enabled: false
|
||||
# -- Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
|
||||
annotations: {}
|
||||
# -- Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
|
||||
labels: {}
|
||||
# -- The router match rule used for the healthcheck ingressRoute
|
||||
matchRule: PathPrefix(`/ping`)
|
||||
# -- Specify the allowed entrypoints to use for the healthcheck ingress route, (e.g. traefik, web, websecure).
|
||||
# By default, it's using traefik entrypoint, which is not exposed.
|
||||
entryPoints: ["traefik"]
|
||||
# -- Additional ingressRoute middlewares (e.g. for authentication)
|
||||
middlewares: []
|
||||
# -- TLS options (e.g. secret containing certificate)
|
||||
tls: {}
|
||||
|
||||
updateStrategy:
|
||||
# -- Customize updateStrategy: RollingUpdate or OnDelete
|
||||
|
@ -204,10 +220,10 @@ providers:
|
|||
# labelSelector: environment=production,method=traefik
|
||||
# -- Array of namespaces to watch. If left empty, Traefik watches all namespaces.
|
||||
namespaces: []
|
||||
# - "default"
|
||||
# - "default"
|
||||
|
||||
kubernetesIngress:
|
||||
# -- Load Kubernetes IngressRoute provider
|
||||
# -- Load Kubernetes Ingress provider
|
||||
enabled: true
|
||||
# -- Allows to reference ExternalName services in Ingress
|
||||
allowExternalNameServices: false
|
||||
|
@ -217,7 +233,7 @@ providers:
|
|||
# labelSelector: environment=production,method=traefik
|
||||
# -- Array of namespaces to watch. If left empty, Traefik watches all namespaces.
|
||||
namespaces: []
|
||||
# - "default"
|
||||
# - "default"
|
||||
# IP used for Kubernetes Ingress endpoints
|
||||
publishedService:
|
||||
enabled: false
|
||||
|
@ -243,9 +259,9 @@ volumes: []
|
|||
|
||||
# -- Additional volumeMounts to add to the Traefik container
|
||||
additionalVolumeMounts: []
|
||||
# -- For instance when using a logshipper for access logs
|
||||
# - name: traefik-logs
|
||||
# mountPath: /var/log/traefik
|
||||
# -- For instance when using a logshipper for access logs
|
||||
# - name: traefik-logs
|
||||
# mountPath: /var/log/traefik
|
||||
|
||||
logs:
|
||||
general:
|
||||
|
@ -270,26 +286,26 @@ logs:
|
|||
## Filtering
|
||||
# -- https://docs.traefik.io/observability/access-logs/#filtering
|
||||
filters: {}
|
||||
# statuscodes: "200,300-302"
|
||||
# retryattempts: true
|
||||
# minduration: 10ms
|
||||
# statuscodes: "200,300-302"
|
||||
# retryattempts: true
|
||||
# minduration: 10ms
|
||||
fields:
|
||||
general:
|
||||
# -- Available modes: keep, drop, redact.
|
||||
defaultmode: keep
|
||||
# -- Names of the fields to limit.
|
||||
names: {}
|
||||
## Examples:
|
||||
# ClientUsername: drop
|
||||
## Examples:
|
||||
# ClientUsername: drop
|
||||
headers:
|
||||
# -- Available modes: keep, drop, redact.
|
||||
defaultmode: drop
|
||||
# -- Names of the headers to limit.
|
||||
names: {}
|
||||
## Examples:
|
||||
# User-Agent: redact
|
||||
# Authorization: drop
|
||||
# Content-Type: keep
|
||||
## Examples:
|
||||
# User-Agent: redact
|
||||
# Authorization: drop
|
||||
# Content-Type: keep
|
||||
|
||||
metrics:
|
||||
## -- Prometheus is enabled by default.
|
||||
|
@ -308,118 +324,118 @@ metrics:
|
|||
## When manualRouting is true, it disables the default internal router in
|
||||
## order to allow creating a custom router for prometheus@internal service.
|
||||
# manualRouting: true
|
||||
# datadog:
|
||||
# ## Address instructs exporter to send metrics to datadog-agent at this address.
|
||||
# address: "127.0.0.1:8125"
|
||||
# ## The interval used by the exporter to push metrics to datadog-agent. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## The prefix to use for metrics collection. Default="traefik"
|
||||
# # prefix: traefik
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# influxdb:
|
||||
# ## Address instructs exporter to send metrics to influxdb at this address.
|
||||
# address: localhost:8089
|
||||
# ## InfluxDB's address protocol (udp or http). Default="udp"
|
||||
# protocol: udp
|
||||
# ## InfluxDB database used when protocol is http. Default=""
|
||||
# # database: ""
|
||||
# ## InfluxDB retention policy used when protocol is http. Default=""
|
||||
# # retentionPolicy: ""
|
||||
# ## InfluxDB username (only with http). Default=""
|
||||
# # username: ""
|
||||
# ## InfluxDB password (only with http). Default=""
|
||||
# # password: ""
|
||||
# ## The interval used by the exporter to push metrics to influxdb. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## Additional labels (influxdb tags) on all metrics.
|
||||
# # additionalLabels:
|
||||
# # env: production
|
||||
# # foo: bar
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# influxdb2:
|
||||
# ## Address instructs exporter to send metrics to influxdb v2 at this address.
|
||||
# address: localhost:8086
|
||||
# ## Token with which to connect to InfluxDB v2.
|
||||
# token: xxx
|
||||
# ## Organisation where metrics will be stored.
|
||||
# org: ""
|
||||
# ## Bucket where metrics will be stored.
|
||||
# bucket: ""
|
||||
# ## The interval used by the exporter to push metrics to influxdb. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## Additional labels (influxdb tags) on all metrics.
|
||||
# # additionalLabels:
|
||||
# # env: production
|
||||
# # foo: bar
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# statsd:
|
||||
# ## Address instructs exporter to send metrics to statsd at this address.
|
||||
# address: localhost:8125
|
||||
# ## The interval used by the exporter to push metrics to influxdb. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## The prefix to use for metrics collection. Default="traefik"
|
||||
# # prefix: traefik
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# openTelemetry:
|
||||
# ## Address of the OpenTelemetry Collector to send metrics to.
|
||||
# address: "localhost:4318"
|
||||
# ## Enable metrics on entry points.
|
||||
# addEntryPointsLabels: true
|
||||
# ## Enable metrics on routers.
|
||||
# addRoutersLabels: true
|
||||
# ## Enable metrics on services.
|
||||
# addServicesLabels: true
|
||||
# ## Explicit boundaries for Histogram data points.
|
||||
# explicitBoundaries:
|
||||
# - "0.1"
|
||||
# - "0.3"
|
||||
# - "1.2"
|
||||
# - "5.0"
|
||||
# ## Additional headers sent with metrics by the reporter to the OpenTelemetry Collector.
|
||||
# headers:
|
||||
# foo: bar
|
||||
# test: test
|
||||
# ## Allows reporter to send metrics to the OpenTelemetry Collector without using a secured protocol.
|
||||
# insecure: true
|
||||
# ## Interval at which metrics are sent to the OpenTelemetry Collector.
|
||||
# pushInterval: 10s
|
||||
# ## Allows to override the default URL path used for sending metrics. This option has no effect when using gRPC transport.
|
||||
# path: /foo/v1/traces
|
||||
# ## Defines the TLS configuration used by the reporter to send metrics to the OpenTelemetry Collector.
|
||||
# tls:
|
||||
# ## The path to the certificate authority, it defaults to the system bundle.
|
||||
# ca: path/to/ca.crt
|
||||
# ## The path to the public certificate. When using this option, setting the key option is required.
|
||||
# cert: path/to/foo.cert
|
||||
# ## The path to the private key. When using this option, setting the cert option is required.
|
||||
# key: path/to/key.key
|
||||
# ## If set to true, the TLS connection accepts any certificate presented by the server regardless of the hostnames it covers.
|
||||
# insecureSkipVerify: true
|
||||
# ## This instructs the reporter to send metrics to the OpenTelemetry Collector using gRPC.
|
||||
# grpc: true
|
||||
# datadog:
|
||||
# ## Address instructs exporter to send metrics to datadog-agent at this address.
|
||||
# address: "127.0.0.1:8125"
|
||||
# ## The interval used by the exporter to push metrics to datadog-agent. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## The prefix to use for metrics collection. Default="traefik"
|
||||
# # prefix: traefik
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# influxdb:
|
||||
# ## Address instructs exporter to send metrics to influxdb at this address.
|
||||
# address: localhost:8089
|
||||
# ## InfluxDB's address protocol (udp or http). Default="udp"
|
||||
# protocol: udp
|
||||
# ## InfluxDB database used when protocol is http. Default=""
|
||||
# # database: ""
|
||||
# ## InfluxDB retention policy used when protocol is http. Default=""
|
||||
# # retentionPolicy: ""
|
||||
# ## InfluxDB username (only with http). Default=""
|
||||
# # username: ""
|
||||
# ## InfluxDB password (only with http). Default=""
|
||||
# # password: ""
|
||||
# ## The interval used by the exporter to push metrics to influxdb. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## Additional labels (influxdb tags) on all metrics.
|
||||
# # additionalLabels:
|
||||
# # env: production
|
||||
# # foo: bar
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# influxdb2:
|
||||
# ## Address instructs exporter to send metrics to influxdb v2 at this address.
|
||||
# address: localhost:8086
|
||||
# ## Token with which to connect to InfluxDB v2.
|
||||
# token: xxx
|
||||
# ## Organisation where metrics will be stored.
|
||||
# org: ""
|
||||
# ## Bucket where metrics will be stored.
|
||||
# bucket: ""
|
||||
# ## The interval used by the exporter to push metrics to influxdb. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## Additional labels (influxdb tags) on all metrics.
|
||||
# # additionalLabels:
|
||||
# # env: production
|
||||
# # foo: bar
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# statsd:
|
||||
# ## Address instructs exporter to send metrics to statsd at this address.
|
||||
# address: localhost:8125
|
||||
# ## The interval used by the exporter to push metrics to influxdb. Default=10s
|
||||
# # pushInterval: 30s
|
||||
# ## The prefix to use for metrics collection. Default="traefik"
|
||||
# # prefix: traefik
|
||||
# ## Enable metrics on entry points. Default=true
|
||||
# # addEntryPointsLabels: false
|
||||
# ## Enable metrics on routers. Default=false
|
||||
# # addRoutersLabels: true
|
||||
# ## Enable metrics on services. Default=true
|
||||
# # addServicesLabels: false
|
||||
# openTelemetry:
#   ## Address of the OpenTelemetry Collector to send metrics to.
#   address: "localhost:4318"
#   ## Enable metrics on entry points.
#   addEntryPointsLabels: true
#   ## Enable metrics on routers.
#   addRoutersLabels: true
#   ## Enable metrics on services.
#   addServicesLabels: true
#   ## Explicit boundaries for Histogram data points.
#   explicitBoundaries:
#     - "0.1"
#     - "0.3"
#     - "1.2"
#     - "5.0"
#   ## Additional headers sent with metrics by the reporter to the OpenTelemetry Collector.
#   headers:
#     foo: bar
#     test: test
#   ## Allows reporter to send metrics to the OpenTelemetry Collector without using a secured protocol.
#   insecure: true
#   ## Interval at which metrics are sent to the OpenTelemetry Collector.
#   pushInterval: 10s
#   ## Allows to override the default URL path used for sending metrics. This option has no effect when using gRPC transport.
#   path: /foo/v1/traces
#   ## Defines the TLS configuration used by the reporter to send metrics to the OpenTelemetry Collector.
#   tls:
#     ## The path to the certificate authority, it defaults to the system bundle.
#     ca: path/to/ca.crt
#     ## The path to the public certificate. When using this option, setting the key option is required.
#     cert: path/to/foo.cert
#     ## The path to the private key. When using this option, setting the cert option is required.
#     key: path/to/key.key
#     ## If set to true, the TLS connection accepts any certificate presented by the server regardless of the hostnames it covers.
#     insecureSkipVerify: true
#   ## This instructs the reporter to send metrics to the OpenTelemetry Collector using gRPC.
#   grpc: true

## -- enable optional CRDs for Prometheus Operator
##
## Create a dedicated metrics service for use with ServiceMonitor
# service:
#   enabled: false
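For reference, a minimal sketch of what an enabled OpenTelemetry metrics exporter could look like with the keys documented above; the collector address is a placeholder, not a chart default:

```yaml
# Sketch only -- assumes an OTLP/HTTP collector is reachable at this (hypothetical) address.
metrics:
  openTelemetry:
    address: "otel-collector.monitoring:4318"
    insecure: true
    pushInterval: 10s
    addEntryPointsLabels: true
    addServicesLabels: true
```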
@@ -470,55 +486,55 @@ metrics:
## Tracing
# -- https://doc.traefik.io/traefik/observability/tracing/overview/
tracing: {}
# openTelemetry: # traefik v3+ only
#   grpc: {}
#   insecure: true
#   address: localhost:4317
# instana:
#   localAgentHost: 127.0.0.1
#   localAgentPort: 42699
#   logLevel: info
#   enableAutoProfile: true
# datadog:
#   localAgentHostPort: 127.0.0.1:8126
#   debug: false
#   globalTag: ""
#   prioritySampling: false
# jaeger:
#   samplingServerURL: http://localhost:5778/sampling
#   samplingType: const
#   samplingParam: 1.0
#   localAgentHostPort: 127.0.0.1:6831
#   gen128Bit: false
#   propagation: jaeger
#   traceContextHeaderName: uber-trace-id
#   disableAttemptReconnecting: true
#   collector:
#     endpoint: ""
#     user: ""
#     password: ""
# zipkin:
#   httpEndpoint: http://localhost:9411/api/v2/spans
#   sameSpan: false
#   id128Bit: true
#   sampleRate: 1.0
# haystack:
#   localAgentHost: 127.0.0.1
#   localAgentPort: 35000
#   globalTag: ""
#   traceIDHeaderName: ""
#   parentIDHeaderName: ""
#   spanIDHeaderName: ""
#   baggagePrefixHeaderName: ""
# elastic:
#   serverURL: http://localhost:8200
#   secretToken: ""
#   serviceEnvironment: ""
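Similarly, a hedged sketch of enabling the OpenTelemetry tracer with the commented keys above (Traefik v3+ only, as the comment notes); the collector address is a placeholder:

```yaml
# Sketch only -- assumes an OTLP/gRPC collector at this (hypothetical) address; traefik v3+ only.
tracing:
  openTelemetry:
    address: otel-collector.observability:4317
    insecure: true
    grpc: {}
```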
# -- Global command arguments to be passed to all traefik's pods
globalArguments:
  - "--global.checknewversion"
  - "--global.sendanonymoususage"

#
# Configure Traefik static configuration
@@ -531,14 +547,14 @@ additionalArguments: []
# -- Environment variables to be passed to Traefik's binary
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  # - name: SOME_VAR
  #   value: some-var-value
  # - name: SOME_VAR_FROM_CONFIG_MAP
@@ -600,7 +616,10 @@ ports:
# Port Redirections
# Added in 2.2, you can make permanent redirects via entrypoints.
# https://docs.traefik.io/routing/entrypoints/#redirection
# redirectTo: websecure
# redirectTo:
#   port: websecure
#   (Optional)
#   priority: 10
#
# Trust forwarded headers information (X-Forwarded-*).
# forwardedHeaders:
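The hunk above replaces the old single-line form (`redirectTo: websecure`) with a structured form. A minimal sketch of the new syntax, assuming the chart's default `web`/`websecure` entry points:

```yaml
# Sketch only -- structured redirect syntax shown in the diff above.
ports:
  web:
    redirectTo:
      port: websecure
      priority: 10   # optional
```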
@@ -638,14 +657,14 @@ ports:
# advertisedPort: 4443
#
## -- Trust forwarded headers information (X-Forwarded-*).
#forwardedHeaders:
#  trustedIPs: []
#  insecure: false
# forwardedHeaders:
#   trustedIPs: []
#   insecure: false
#
## -- Enable the Proxy Protocol header parsing for the entry point
#proxyProtocol:
#  trustedIPs: []
#  insecure: false
# proxyProtocol:
#   trustedIPs: []
#   insecure: false
#
## Set TLS at the entrypoint
## https://doc.traefik.io/traefik/routing/entrypoints/#tls
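As a hedged illustration of the keys above, trusting a private range for forwarded headers and PROXY protocol on an entry point (the entry point name and CIDR are placeholders):

```yaml
# Sketch only -- the CIDR below stands in for the upstream proxy's address range.
ports:
  websecure:
    forwardedHeaders:
      trustedIPs:
        - 10.0.0.0/8
    proxyProtocol:
      trustedIPs:
        - 10.0.0.0/8
```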
@@ -728,16 +747,16 @@ service:
  # -- Additional entries here will be added to the service spec.
  # -- Cannot contain type, selector or ports entries.
  spec: {}
  # externalTrafficPolicy: Cluster
  # loadBalancerIP: "1.2.3.4"
  # clusterIP: "2.3.4.5"
  loadBalancerSourceRanges: []
  # - 192.168.0.1/32
  # - 172.16.0.0/16
  ## -- Class of the load balancer implementation
  # loadBalancerClass: service.k8s.aws/nlb
  externalIPs: []
  # - 1.2.3.4
  ## One of SingleStack, PreferDualStack, or RequireDualStack.
  # ipFamilyPolicy: SingleStack
  ## List of IP families (e.g. IPv4 and/or IPv6).
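For instance, a minimal sketch restricting access to the LoadBalancer and preserving client source IPs; the values are examples, not chart defaults:

```yaml
# Sketch only -- example policy and allow-list, not chart defaults.
service:
  spec:
    externalTrafficPolicy: Local   # keep client source IPs
  loadBalancerSourceRanges:
    - 203.0.113.0/24               # placeholder allow-list
```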
@@ -789,7 +808,7 @@ persistence:
  # It can be used to store TLS certificates, see `storage` in certResolvers
  enabled: false
  name: data
  # existingClaim: ""
  accessMode: ReadWriteOnce
  size: 128Mi
  # storageClass: ""
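A minimal sketch of turning persistence on (for example, to keep ACME/TLS certificates across restarts); the storage class is left to the cluster default:

```yaml
# Sketch only -- enables the data volume documented above.
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 128Mi
  # storageClass: ""   # unset -> cluster default
```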
@@ -852,12 +871,12 @@ serviceAccountAnnotations: {}
# -- The resources parameter defines CPU and memory requirements and limits for Traefik's containers.
resources: {}
# requests:
#   cpu: "100m"
#   memory: "50Mi"
# limits:
#   cpu: "300m"
#   memory: "150Mi"

# -- This example pod anti-affinity forces the scheduler to put traefik pods
# -- on nodes where no other traefik pods are scheduled.
index.yaml
@@ -17071,6 +17071,43 @@ entries:
    - assets/weka/csi-wekafsplugin-0.6.400.tgz
    version: 0.6.400
  datadog:
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Datadog
      catalog.cattle.io/kube-version: '>=1.10-0'
      catalog.cattle.io/release-name: datadog
    apiVersion: v1
    appVersion: "7"
    created: "2023-10-23T16:59:26.354662828Z"
    dependencies:
    - condition: clusterAgent.metricsProvider.useDatadogMetrics
      name: datadog-crds
      repository: https://helm.datadoghq.com
      tags:
      - install-crds
      version: 1.0.1
    - condition: datadog.kubeStateMetricsEnabled
      name: kube-state-metrics
      repository: https://prometheus-community.github.io/helm-charts
      version: 2.13.2
    description: Datadog Agent
    digest: 63a2e87ec6d0d4b535fe881b1f022b870b3beebaf8e9a0d09c7e5fe1304942ed
    home: https://www.datadoghq.com
    icon: https://datadog-live.imgix.net/img/dd_logo_70x75.png
    keywords:
    - monitoring
    - alerting
    - metric
    maintainers:
    - email: support@datadoghq.com
      name: Datadog
    name: datadog
    sources:
    - https://app.datadoghq.com/account/settings#agent/kubernetes
    - https://github.com/DataDog/datadog-agent
    urls:
    - assets/datadog/datadog-3.40.3.tgz
    version: 3.40.3
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Datadog
@@ -20419,6 +20456,33 @@ entries:
    - assets/dynatrace/dynatrace-oneagent-operator-0.8.000.tgz
    version: 0.8.000
  dynatrace-operator:
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Dynatrace Operator
      catalog.cattle.io/kube-version: '>=1.19.0-0'
      catalog.cattle.io/release-name: dynatrace-operator
    apiVersion: v2
    appVersion: 0.14.1
    created: "2023-10-23T16:59:26.637306615Z"
    description: The Dynatrace Operator Helm chart for Kubernetes and OpenShift
    digest: 93f3adf9a657070163b844381471a11986a768bae933a42c3e26a433118e7a41
    home: https://www.dynatrace.com/
    icon: https://assets.dynatrace.com/global/resources/Signet_Logo_RGB_CP_512x512px.png
    kubeVersion: '>=1.19.0-0'
    maintainers:
    - email: marcell.sevcsik@dynatrace.com
      name: 0sewa0
    - email: christoph.muellner@dynatrace.com
      name: chrismuellner
    - email: lukas.hinterreiter@dynatrace.com
      name: luhi-DT
    name: dynatrace-operator
    sources:
    - https://github.com/Dynatrace/dynatrace-operator
    type: application
    urls:
    - assets/dynatrace/dynatrace-operator-0.14.1.tgz
    version: 0.14.1
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Dynatrace Operator
@@ -20949,6 +21013,30 @@ entries:
    - assets/elastic/elasticsearch-7.17.3.tgz
    version: 7.17.3
  external-secrets:
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: External Secrets Operator
      catalog.cattle.io/kube-version: '>= 1.19.0-0'
      catalog.cattle.io/release-name: external-secrets
    apiVersion: v2
    appVersion: v0.9.7
    created: "2023-10-23T16:59:26.795360626Z"
    description: External secret management for Kubernetes
    digest: f1ca3a52b1582600d723b5e36c47d35dd279712a80500e4764390362e3dd8e65
    home: https://github.com/external-secrets/external-secrets
    icon: https://raw.githubusercontent.com/external-secrets/external-secrets/main/assets/eso-logo-large.png
    keywords:
    - kubernetes-external-secrets
    - secrets
    kubeVersion: '>= 1.19.0-0'
    maintainers:
    - email: kellinmcavoy@gmail.com
      name: mcavoyk
    name: external-secrets
    type: application
    urls:
    - assets/external-secrets/external-secrets-0.9.7.tgz
    version: 0.9.7
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: External Secrets Operator
@@ -26946,6 +27034,62 @@ entries:
    - assets/jaeger/jaeger-operator-2.36.0.tgz
    version: 2.36.0
  jenkins:
  - annotations:
      artifacthub.io/category: integration-delivery
      artifacthub.io/images: |
        - name: jenkins
          image: jenkins/jenkins:2.414.3-jdk11
        - name: k8s-sidecar
          image: kiwigrid/k8s-sidecar:1.24.4
        - name: inbound-agent
          image: jenkins/inbound-agent:3107.v665000b_51092-15
        - name: backup
          image: maorfr/kube-tasks:0.2.0
      artifacthub.io/license: Apache-2.0
      artifacthub.io/links: |
        - name: Chart Source
          url: https://github.com/jenkinsci/helm-charts/tree/main/charts/jenkins
        - name: Jenkins
          url: https://www.jenkins.io/
        - name: support
          url: https://github.com/jenkinsci/helm-charts/issues
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Jenkins
      catalog.cattle.io/kube-version: '>=1.14-0'
      catalog.cattle.io/release-name: jenkins
    apiVersion: v2
    appVersion: 2.414.3
    created: "2023-10-23T16:59:28.039726875Z"
    description: Jenkins - Build great things at any scale! The leading open source
      automation server, Jenkins provides over 1800 plugins to support building, deploying
      and automating any project.
    digest: cc972114025dcc8aa03943ea14d38004647bd31b3c5277f4e34f84922563b575
    home: https://jenkins.io/
    icon: https://get.jenkins.io/art/jenkins-logo/logo.svg
    keywords:
    - jenkins
    - ci
    - devops
    maintainers:
    - email: maor.friedman@redhat.com
      name: maorfr
    - email: mail@torstenwalter.de
      name: torstenwalter
    - email: garridomota@gmail.com
      name: mogaal
    - email: wmcdona89@gmail.com
      name: wmcdona89
    - email: timjacomb1@gmail.com
      name: timja
    name: jenkins
    sources:
    - https://github.com/jenkinsci/jenkins
    - https://github.com/jenkinsci/docker-inbound-agent
    - https://github.com/maorfr/kube-tasks
    - https://github.com/jenkinsci/configuration-as-code-plugin
    urls:
    - assets/jenkins/jenkins-4.8.2.tgz
    version: 4.8.2
  - annotations:
      artifacthub.io/category: integration-delivery
      artifacthub.io/images: |
@@ -38725,6 +38869,50 @@ entries:
    - assets/minio/minio-operator-4.4.1700.tgz
    version: 4.4.1700
  mysql:
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: MySQL
      catalog.cattle.io/kube-version: '>=1.19-0'
      catalog.cattle.io/release-name: mysql
      category: Database
      images: |
        - name: mysql
          image: docker.io/bitnami/mysql:8.0.34-debian-11-r75
        - name: mysqld-exporter
          image: docker.io/bitnami/mysqld-exporter:0.15.0-debian-11-r70
        - name: os-shell
          image: docker.io/bitnami/os-shell:11-debian-11-r90
      licenses: Apache-2.0
    apiVersion: v2
    appVersion: 8.0.34
    created: "2023-10-23T16:59:22.03631377Z"
    dependencies:
    - name: common
      repository: file://./charts/common
      tags:
      - bitnami-common
      version: 2.x.x
    description: MySQL is a fast, reliable, scalable, and easy to use open source
      relational database system. Designed to handle mission-critical, heavy-load
      production applications.
    digest: 5984febb35fdbb7918c7a5f56e94df8aaa740d1ca412bb4a86e310d7b1f7166e
    home: https://bitnami.com
    icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png
    keywords:
    - mysql
    - database
    - sql
    - cluster
    - high availability
    maintainers:
    - name: VMware, Inc.
      url: https://github.com/bitnami/charts
    name: mysql
    sources:
    - https://github.com/bitnami/charts/tree/main/bitnami/mysql
    urls:
    - assets/bitnami/mysql-9.13.0.tgz
    version: 9.13.0
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: MySQL
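Index entries like the one above are what `helm repo add` consumes; the paths under `urls` resolve relative to the repository root. A rough usage sketch (the repository name and URL below are placeholders, not taken from this PR):

```console
helm repo add partner-charts https://charts.example.com    # placeholder URL
helm repo update
helm install my-mysql partner-charts/mysql --version 9.13.0
```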
@@ -50930,6 +51118,50 @@ entries:
    - assets/bitnami/redis-17.3.7.tgz
    version: 17.3.7
  redpanda:
  - annotations:
      artifacthub.io/images: |
        - name: redpanda
          image: docker.redpanda.com/redpandadata/redpanda:v23.2.12
        - name: busybox
          image: busybox:latest
        - name: mintel/docker-alpine-bash-curl-jq
          image: mintel/docker-alpine-bash-curl-jq:latest
      artifacthub.io/license: Apache-2.0
      artifacthub.io/links: |
        - name: Documentation
          url: https://docs.redpanda.com
        - name: "Helm (>= 3.6.0)"
          url: https://helm.sh/docs/intro/install/
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Redpanda
      catalog.cattle.io/kube-version: '>=1.21-0'
      catalog.cattle.io/release-name: redpanda
    apiVersion: v2
    appVersion: v23.2.12
    created: "2023-10-23T16:59:33.734985097Z"
    dependencies:
    - condition: console.enabled
      name: console
      repository: file://./charts/console
      version: '>=0.5 <1.0'
    - condition: connectors.enabled
      name: connectors
      repository: file://./charts/connectors
      version: '>=0.1.2 <1.0'
    description: Redpanda is the real-time engine for modern apps.
    digest: cf79a763405d7f987f547b1e10fd2a7d687d647ccbb1bca712d5abfc8d985175
    icon: https://images.ctfassets.net/paqvtpyf8rwu/3cYHw5UzhXCbKuR24GDFGO/73fb682e6157d11c10d5b2b5da1d5af0/skate-stand-panda.svg
    kubeVersion: '>=1.21-0'
    maintainers:
    - name: redpanda-data
      url: https://github.com/orgs/redpanda-data/people
    name: redpanda
    sources:
    - https://github.com/redpanda-data/helm-charts
    type: application
    urls:
    - assets/redpanda/redpanda-5.6.27.tgz
    version: 5.6.27
  - annotations:
      artifacthub.io/images: |
        - name: redpanda
@@ -55304,6 +55536,43 @@ entries:
    - assets/shipa/shipa-1.4.0.tgz
    version: 1.4.0
  spark:
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Apache Spark
      catalog.cattle.io/kube-version: '>=1.19-0'
      catalog.cattle.io/release-name: spark
      category: Infrastructure
      images: |
        - name: spark
          image: docker.io/bitnami/spark:3.5.0-debian-11-r10
      licenses: Apache-2.0
    apiVersion: v2
    appVersion: 3.5.0
    created: "2023-10-23T16:59:23.268287728Z"
    dependencies:
    - name: common
      repository: file://./charts/common
      tags:
      - bitnami-common
      version: 2.x.x
    description: Apache Spark is a high-performance engine for large-scale computing
      tasks, such as data processing, machine learning and real-time data streaming.
      It includes APIs for Java, Python, Scala and R.
    digest: 9b011187b47f4f2a214cbfe6082ac23f318a18359dbbdabc5513a1bacb6d1609
    home: https://bitnami.com
    icon: https://www.apache.org/logos/res/spark/default.png
    keywords:
    - apache
    - spark
    maintainers:
    - name: VMware, Inc.
      url: https://github.com/bitnami/charts
    name: spark
    sources:
    - https://github.com/bitnami/charts/tree/main/bitnami/spark
    urls:
    - assets/bitnami/spark-8.0.2.tgz
    version: 8.0.2
  - annotations:
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Apache Spark
@@ -61865,6 +62134,54 @@ entries:
    - assets/bitnami/tomcat-10.4.9.tgz
    version: 10.4.9
  traefik:
  - annotations:
      artifacthub.io/changes: "- \"feat: ✨ add healthcheck ingressRoute\"\n- \"feat:
        :boom: support http redirections and http challenges with cert-manager\"\n-
        \"feat: :boom: rework and allow update of namespace policy for Gateway\"\n-
        \"fix: disable ClusterRole and ClusterRoleBinding when not needed\"\n- \"fix:
        detect correctly v3 version when using sha in `image.tag`\"\n- \"fix: allow
        updateStrategy.rollingUpdate.maxUnavailable to be passed in as an int or string\"\n-
        \"fix: add missing separator in crds\"\n- \"fix: add Prometheus scraping annotations
        only if serviceMonitor not created\"\n- \"docs: Fix typo in the default values
        file\"\n- \"chore: remove label whitespace at TLSOption\"\n- \"chore(release):
        \U0001F680 publish v25.0.0\"\n- \"chore(deps): update traefik docker tag to
        v2.10.5\"\n- \"chore(deps): update docker.io/helmunittest/helm-unittest docker
        tag to v3.12.3\"\n- \"chore(ci): \U0001F527 \U0001F477 add e2e test when releasing\"\n"
      catalog.cattle.io/certified: partner
      catalog.cattle.io/display-name: Traefik Proxy
      catalog.cattle.io/kube-version: '>=1.16.0-0'
      catalog.cattle.io/release-name: traefik
    apiVersion: v2
    appVersion: v2.10.5
    created: "2023-10-23T16:59:34.54536469Z"
    description: A Traefik based Kubernetes ingress controller
    digest: 97f681bcb9a9b8cbe9c88ffc8b524152f18d87319d279d24fe0e466f0c200ee1
    home: https://traefik.io/
    icon: https://raw.githubusercontent.com/traefik/traefik/v2.3/docs/content/assets/img/traefik.logo.png
    keywords:
    - traefik
    - ingress
    - networking
    kubeVersion: '>=1.16.0-0'
    maintainers:
    - email: emile@vauge.com
      name: emilevauge
    - email: daniel.tomcej@gmail.com
      name: dtomcej
    - email: ldez@traefik.io
      name: ldez
    - email: michel.loiseleur@traefik.io
      name: mloiseleur
    - email: charlie.haley@traefik.io
      name: charlie-haley
    name: traefik
    sources:
    - https://github.com/traefik/traefik
    - https://github.com/traefik/traefik-helm-chart
    type: application
    urls:
    - assets/traefik/traefik-25.0.0.tgz
    version: 25.0.0
  - annotations:
      artifacthub.io/changes: "- \"chore(release): \U0001F680 publish v24.0.0\"\n-
        \"fix: http3 support broken when advertisedPort set\"\n- \"fix: tracing.opentelemetry.tls