diff --git a/assets/argo/argo-cd-5.13.7.tgz b/assets/argo/argo-cd-5.13.7.tgz new file mode 100644 index 000000000..b8ec06b57 Binary files /dev/null and b/assets/argo/argo-cd-5.13.7.tgz differ diff --git a/assets/asserts/asserts-1.10.0.tgz b/assets/asserts/asserts-1.10.0.tgz new file mode 100644 index 000000000..12b36e32e Binary files /dev/null and b/assets/asserts/asserts-1.10.0.tgz differ diff --git a/assets/bitnami/airflow-14.0.3.tgz b/assets/bitnami/airflow-14.0.3.tgz new file mode 100644 index 000000000..a36cbe9bf Binary files /dev/null and b/assets/bitnami/airflow-14.0.3.tgz differ diff --git a/assets/bitnami/mariadb-11.3.5.tgz b/assets/bitnami/mariadb-11.3.5.tgz new file mode 100644 index 000000000..d61b673d6 Binary files /dev/null and b/assets/bitnami/mariadb-11.3.5.tgz differ diff --git a/assets/bitnami/mysql-9.4.3.tgz b/assets/bitnami/mysql-9.4.3.tgz new file mode 100644 index 000000000..41ca72e40 Binary files /dev/null and b/assets/bitnami/mysql-9.4.3.tgz differ diff --git a/assets/bitnami/postgresql-12.1.1.tgz b/assets/bitnami/postgresql-12.1.1.tgz new file mode 100644 index 000000000..8c351b84b Binary files /dev/null and b/assets/bitnami/postgresql-12.1.1.tgz differ diff --git a/assets/bitnami/redis-17.3.9.tgz b/assets/bitnami/redis-17.3.9.tgz new file mode 100644 index 000000000..a9e80f5aa Binary files /dev/null and b/assets/bitnami/redis-17.3.9.tgz differ diff --git a/assets/bitnami/spark-6.3.10.tgz b/assets/bitnami/spark-6.3.10.tgz new file mode 100644 index 000000000..a292392a5 Binary files /dev/null and b/assets/bitnami/spark-6.3.10.tgz differ diff --git a/assets/bitnami/wordpress-15.2.14.tgz b/assets/bitnami/wordpress-15.2.14.tgz new file mode 100644 index 000000000..9ad7d2b90 Binary files /dev/null and b/assets/bitnami/wordpress-15.2.14.tgz differ diff --git a/assets/datadog/datadog-3.3.0.tgz b/assets/datadog/datadog-3.3.0.tgz new file mode 100644 index 000000000..916d7778f Binary files /dev/null and b/assets/datadog/datadog-3.3.0.tgz differ diff --git a/assets/gitlab/gitlab-6.5.4.tgz b/assets/gitlab/gitlab-6.5.4.tgz new file mode 100644 index 000000000..2e89ec3ec Binary files /dev/null and b/assets/gitlab/gitlab-6.5.4.tgz differ diff --git a/assets/harbor/harbor-1.10.2.tgz b/assets/harbor/harbor-1.10.2.tgz new file mode 100644 index 000000000..4f65f4782 Binary files /dev/null and b/assets/harbor/harbor-1.10.2.tgz differ diff --git a/assets/intel/intel-device-plugins-operator-0.25.0.tgz b/assets/intel/intel-device-plugins-operator-0.25.0.tgz new file mode 100644 index 000000000..60a4bbed5 Binary files /dev/null and b/assets/intel/intel-device-plugins-operator-0.25.0.tgz differ diff --git a/assets/kong/kong-2.13.1.tgz b/assets/kong/kong-2.13.1.tgz new file mode 100644 index 000000000..efdab8a0f Binary files /dev/null and b/assets/kong/kong-2.13.1.tgz differ diff --git a/assets/kongmesh/kuma-0.8.101.tgz b/assets/kuma/kuma-0.8.101.tgz similarity index 100% rename from assets/kongmesh/kuma-0.8.101.tgz rename to assets/kuma/kuma-0.8.101.tgz diff --git a/assets/kuma/kuma-2.0.0.tgz b/assets/kuma/kuma-2.0.0.tgz new file mode 100644 index 000000000..11f0b7f36 Binary files /dev/null and b/assets/kuma/kuma-2.0.0.tgz differ diff --git a/assets/nats/nats-0.18.2.tgz b/assets/nats/nats-0.18.2.tgz new file mode 100644 index 000000000..5ea8b7da1 Binary files /dev/null and b/assets/nats/nats-0.18.2.tgz differ diff --git a/charts/argo/argo-cd/5.13.7/.helmignore b/charts/argo/argo-cd/5.13.7/.helmignore new file mode 100644 index 000000000..3a063296a --- /dev/null +++ 
b/charts/argo/argo-cd/5.13.7/.helmignore @@ -0,0 +1,4 @@ +/*.tgz +output +ci/ +*.gotmpl diff --git a/charts/argo/argo-cd/5.13.7/Chart.lock b/charts/argo/argo-cd/5.13.7/Chart.lock new file mode 100644 index 000000000..7a069ce95 --- /dev/null +++ b/charts/argo/argo-cd/5.13.7/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: redis-ha + repository: https://dandydeveloper.github.io/charts/ + version: 4.22.3 +digest: sha256:ef6269e4e073dad10c230ccfb069fc013608111c895c5e7568450bb3967cf195 +generated: "2022-11-03T12:04:33.673857+09:00" diff --git a/charts/argo/argo-cd/5.13.7/Chart.yaml b/charts/argo/argo-cd/5.13.7/Chart.yaml new file mode 100644 index 000000000..7c6ec14cf --- /dev/null +++ b/charts/argo/argo-cd/5.13.7/Chart.yaml @@ -0,0 +1,31 @@ +annotations: + artifacthub.io/changes: | + - "[Fixed]: Use raw json format for cluster credentials for Vault plugin compatibility" + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: Argo CD + catalog.cattle.io/kube-version: '>=1.22.0-0' + catalog.cattle.io/release-name: argo-cd +apiVersion: v2 +appVersion: v2.5.2 +dependencies: +- condition: redis-ha.enabled + name: redis-ha + repository: file://./charts/redis-ha + version: 4.22.3 +description: A Helm chart for Argo CD, a declarative, GitOps continuous delivery tool + for Kubernetes. +home: https://github.com/argoproj/argo-helm +icon: https://argo-cd.readthedocs.io/en/stable/assets/logo.png +keywords: +- argoproj +- argocd +- gitops +kubeVersion: '>=1.22.0-0' +maintainers: +- name: argoproj + url: https://argoproj.github.io/ +name: argo-cd +sources: +- https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd +- https://github.com/argoproj/argo-cd +version: 5.13.7 diff --git a/charts/argo/argo-cd/5.13.7/README.md b/charts/argo/argo-cd/5.13.7/README.md new file mode 100644 index 000000000..38ee58264 --- /dev/null +++ b/charts/argo/argo-cd/5.13.7/README.md @@ -0,0 +1,1103 @@ +# Argo CD Chart + +A Helm chart for Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes. + +Source code can be found here: + +* +* + +This is a **community maintained** chart. This chart installs [argo-cd](https://argo-cd.readthedocs.io/en/stable/), a declarative, GitOps continuous delivery tool for Kubernetes. + +The default installation is intended to be similar to the provided Argo CD [releases](https://github.com/argoproj/argo-cd/releases). + +If you want to avoid including sensitive information unencrypted (clear text) in your version control, make use of the [declarative set up](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/) of Argo CD. +For instance, rather than adding repositories and their keys in your Helm values, you could deploy [SealedSecrets](https://github.com/bitnami-labs/sealed-secrets) with contents as seen in this [repositories section](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories) or any other secrets manager service (i.e. HashiCorp Vault, AWS/GCP Secrets Manager, etc.). + +## High Availability + +This chart installs the non-HA version of Argo CD by default. If you want to run Argo CD in HA mode, you can use one of the example values in the next sections. +Please also have a look into the upstream [Operator Manual regarding High Availability](https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/) to understand how scaling of Argo CD works in detail. + +> **Warning:** +> You need at least 3 worker nodes as the HA mode of redis enforces Pods to run on separate nodes. 
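+
+As a quick sanity check before enabling `redis-ha` (an illustrative command, not part of the chart), you can count the nodes in the cluster:
+
+```bash
+# Counts all nodes; note this includes control-plane nodes, which may not
+# schedule the Redis pods, so at least 3 schedulable workers are needed.
+kubectl get nodes --no-headers | wc -l
+```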
+
+### HA mode with autoscaling
+
+```yaml
+redis-ha:
+  enabled: true
+
+controller:
+  replicas: 1
+
+server:
+  autoscaling:
+    enabled: true
+    minReplicas: 2
+
+repoServer:
+  autoscaling:
+    enabled: true
+    minReplicas: 2
+
+applicationSet:
+  replicaCount: 2
+```
+
+### HA mode without autoscaling
+
+```yaml
+redis-ha:
+  enabled: true
+
+controller:
+  replicas: 1
+
+server:
+  replicas: 2
+
+repoServer:
+  replicas: 2
+
+applicationSet:
+  replicaCount: 2
+```
+
+### Synchronizing Changes from Original Repository
+
+In the original [Argo CD repository](https://github.com/argoproj/argo-cd/) a [`manifests/install.yaml`](https://github.com/argoproj/argo-cd/blob/master/manifests/install.yaml) is generated using `kustomize`. It is the basis for the installation as [described in the docs](https://argo-cd.readthedocs.io/en/stable/getting_started/#1-install-argo-cd).
+
+When installing Argo CD using this Helm chart, the user should get a similar experience and configuration. Hence, it makes sense to aim for a similar output of rendered `.yaml` resources when calling `helm template` with the default settings in `values.yaml`.
+
+To update the templates and default settings in `values.yaml`, it can be helpful to inspect the diff of `manifests/install.yaml` between two versions. This can be done directly on GitHub by comparing the tags and looking at `manifests/install.yaml`:
+
+https://github.com/argoproj/argo-cd/compare/v1.8.7...v2.0.0#files_bucket
+
+Or clone the repository and run a local `git diff`:
+
+```bash
+git clone https://github.com/argoproj/argo-cd.git
+cd argo-cd
+git diff v1.8.7 v2.0.0 -- manifests/install.yaml
+```
+
+Changes to the `CustomResourceDefinition` resources can be applied by copying them 1:1 from the [`manifests/crds` folder](https://github.com/argoproj/argo-cd/tree/master/manifests/crds) into this [`charts/argo-cd/templates/crds` folder](https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd/templates/crds).
+
+## Upgrading
+
+### Custom resource definitions
+
+Some users would prefer to install the CRDs _outside_ of the chart. You can disable the CRD installation of this chart by using `--set crds.install=false` when installing the chart.
+
+Helm cannot upgrade custom resource definitions in the `/crds` folder [by design](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations). Starting with 5.2.0, the CRDs have been moved to `/templates` to address this design decision.
+
+If you are using an Argo CD chart version prior to 5.2.0, or have elected to manage the Argo CD CRDs outside of the chart, please use `kubectl` to upgrade the CRDs manually from the [templates/crds](templates/crds/) folder or via the manifests from the upstream project repo:
+
+```bash
+kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref="
+
+# Eg. version v2.4.9
+kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=v2.4.9"
+```
+
+### 5.10.0
+
+This version hardens security by configuring default container security contexts.
+The change aligns the chart with the [supported versions](https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/#supported-versions) upstream and adds a requirement for a minimum Kubernetes version >= 1.22.
+
+### 5.5.20
+
+This version moved the API version templates into a dedicated helper. If you are using these in your umbrella chart, please migrate your templates to the pattern `argo-cd.apiVersion.`.
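+
+For illustration only, this is roughly how an umbrella chart template might call such a helper; the `ingress` suffix here is a hypothetical example, and the actual helper names should be checked against this chart's helper templates:
+
+```yaml
+# Hypothetical usage in an umbrella chart template; "argo-cd.apiVersion.ingress"
+# is an assumed helper name, not taken verbatim from this chart.
+apiVersion: {{ include "argo-cd.apiVersion.ingress" . }}
+kind: Ingress
+```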
+
+### 5.5.0
+
+This version introduces a new `configs.params` section that replaces command-line arguments for containers.
+Please refer to the documentation in values.yaml for migrating the configuration.
+
+### 5.2.0
+
+Custom resource definitions were moved to the `templates` folder so they can be managed by Helm.
+
+To adopt already created CRDs, please use the following commands:
+
+```bash
+YOUR_ARGOCD_NAMESPACE="" # e.g. argo-cd
+YOUR_ARGOCD_RELEASENAME="" # e.g. argo-cd
+
+for crd in "applications.argoproj.io" "applicationsets.argoproj.io" "argocdextensions.argoproj.io" "appprojects.argoproj.io"; do
+  kubectl label --overwrite crd $crd app.kubernetes.io/managed-by=Helm
+  kubectl annotate --overwrite crd $crd meta.helm.sh/release-namespace="$YOUR_ARGOCD_NAMESPACE"
+  kubectl annotate --overwrite crd $crd meta.helm.sh/release-name="$YOUR_ARGOCD_RELEASENAME"
+done
+```
+
+### 5.0.0
+
+This version **removes support for**:
+
+- deprecated repository credentials (parameter `configs.repositoryCredentials`)
+- the option to run the application controller as a Deployment
+- the parameters `server.additionalApplications` and `server.additionalProjects`
+
+Please carefully read the following section if you are using these parameters!
+
+So that Applications and Projects can be upgraded safely across the CRD upgrade, `server.additionalApplications` and `server.additionalProjects` were moved to [argocd-apps](../argocd-apps).
+
+If you are using `server.additionalApplications` or `server.additionalProjects`, you can migrate to [argocd-apps](../argocd-apps) as follows:
+
+1. Add the [helm.sh/resource-policy annotation](https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource) to avoid the resources being removed when upgrading the Helm chart
+
+You can keep your existing Applications and Projects by adding `"helm.sh/resource-policy": keep` to `additionalAnnotations` under the `server.additionalApplications` and `server.additionalProjects` blocks, and running `helm upgrade`.
+
+e.g.:
+
+```yaml
+server:
+  additionalApplications:
+  - name: guestbook
+    namespace: argocd
+    additionalLabels: {}
+    additionalAnnotations:
+      "helm.sh/resource-policy": keep # <-- add this
+    finalizers:
+    - resources-finalizer.argocd.argoproj.io
+    project: guestbook
+    source:
+      repoURL: https://github.com/argoproj/argocd-example-apps.git
+      targetRevision: HEAD
+      path: guestbook
+      directory:
+        recurse: true
+    destination:
+      server: https://kubernetes.default.svc
+      namespace: guestbook
+    syncPolicy:
+      automated:
+        prune: false
+        selfHeal: false
+    ignoreDifferences:
+    - group: apps
+      kind: Deployment
+      jsonPointers:
+      - /spec/replicas
+    info:
+    - name: url
+      value: https://argoproj.github.io/
+```
+
+You can also keep these resources by running the following scripts.
+
+```bash
+# keep Applications
+for app in "guestbook"; do
+  kubectl annotate --overwrite application $app helm.sh/resource-policy=keep
+done
+
+# keep Projects
+for project in "guestbook"; do
+  kubectl annotate --overwrite appproject $project helm.sh/resource-policy=keep
+done
+```
+
+2. Upgrade the argo-cd Helm chart to v5.0.0
+
+3. Remove the keep [helm.sh/resource-policy annotation](https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource)
+
+```bash
+# delete annotations from Applications
+for app in "guestbook"; do
+  kubectl annotate --overwrite application $app helm.sh/resource-policy-
+done
+
+# delete annotations from Projects
+for project in "guestbook"; do
+  kubectl annotate --overwrite appproject $project helm.sh/resource-policy-
+done
+```
+
+4. Adopt existing resources to [argocd-apps](../argocd-apps)
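+
+For step 4, a rough sketch of how the same Application could be declared through the separate argocd-apps chart; the `applications` values key and field layout are assumed to mirror the example above and should be checked against that chart's values.yaml:
+
+```yaml
+# Hypothetical argocd-apps values; structure assumed, not taken from this chart.
+applications:
+- name: guestbook
+  namespace: argocd
+  project: guestbook
+  source:
+    repoURL: https://github.com/argoproj/argocd-example-apps.git
+    targetRevision: HEAD
+    path: guestbook
+  destination:
+    server: https://kubernetes.default.svc
+    namespace: guestbook
+```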
+
+### 4.9.0
+
+This version starts to use the upstream image with the applicationset binary. The start command was changed from `applicationset-controller` to `argocd-applicationset-controller`.
+
+### 4.3.*
+
+With this minor version, the notification notifier's `service.slack` is no longer configured by default.
+
+### 4.0.0 and above
+
+This Helm chart version deploys Argo CD v2.3. The Argo CD Notifications and ApplicationSet are now part of Argo CD, so you no longer need to install them separately. The Notifications and ApplicationSet components **are bundled into the default** Argo CD installation.
+Please read the [v2.2 to 2.3 upgrade instructions] in the upstream repository.
+
+### 3.13.0
+
+This release removes the flag `--staticassets` from the argocd server as it has been dropped upstream. If this flag needs to be enabled, e.g. for older releases of Argo CD, it can be passed via the `server.extraArgs` field.
+
+### 3.10.2
+
+Argo CD has recently deprecated the flag `--staticassets`, and from chart version `3.10.2` it has been disabled by default.
+It can be re-enabled by setting `server.staticAssets.enabled` to true.
+
+### 3.8.1
+
+This bugfix version potentially introduces a rename (and recreation) of one or more ServiceAccounts. It _only happens_ when you use one of these customizations:
+
+```yaml
+# Case 1) - only happens when you do not specify a custom name (repoServer.serviceAccount.name)
+repoServer:
+  serviceAccount:
+    create: true
+
+# Case 2)
+controller:
+  serviceAccount:
+    name: "" # or
+
+# Case 3)
+dex:
+  serviceAccount:
+    name: "" # or
+
+# Case 4)
+server:
+  serviceAccount:
+    name: "" # or
+```
+
+Please check whether you are affected by one of these cases **before you upgrade**, especially when you use **cloud IAM roles for service accounts** (e.g. IRSA on AWS or Workload Identity for GKE).
+
+### 3.2.*
+
+With this minor version we introduced the evaluation of the ingress manifest (depending on the capabilities version). See this [Pull Request](https://github.com/argoproj/argo-helm/pull/637).
+[Issue 703](https://github.com/argoproj/argo-helm/issues/703) reported that the capabilities evaluation is **not handled correctly when deploying the chart via an Argo CD instance**,
+especially when deploying on clusters running a version prior to `1.19` (which lacks `Ingress` in apiVersion `networking.k8s.io/v1`).
+
+If you are running a cluster version prior to `1.19`, you can avoid this issue by directly installing chart version `3.6.0` and setting `kubeVersionOverride` like:
+
+```yaml
+kubeVersionOverride: "1.18.0"
+```
+
+Then you should no longer encounter this issue.
+
+### 3.0.0 and above
+
+The chart's Helm apiVersion switched to `v2`, which requires Helm `3.0.0` or above to install. [Read More](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) on how to migrate your release from Helm 2 to Helm 3.
+
+### 2.14.7 and above
+
+The `matchLabels` key in the Argo CD Application Controller is no longer hard-coded. Note that labels are immutable, so caution should be exercised when making changes to this resource.
+
+### 2.10.x to 2.11.0
+
+The application controller is now available as a `StatefulSet` when the `controller.enableStatefulSet` flag is set to true. Depending on your Helm deployment, this may cause downtime or be a breaking change if enabled when using HA; it will become the default in 3.x.
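+
+A minimal values sketch for opting in to the flag described above (only relevant in this version range, since it later becomes the default):
+
+```yaml
+# Run the application controller as a StatefulSet instead of a Deployment
+controller:
+  enableStatefulSet: true
+```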
+ +### 1.8.7 to 2.x.x + +`controller.extraArgs`, `repoServer.extraArgs` and `server.extraArgs` are now arrays of strings instead of a map + +What was + +```yaml +server: + extraArgs: + insecure: "" +``` + +is now + +```yaml +server: + extraArgs: + - --insecure +``` + +## Prerequisites + +- Kubernetes: `>=1.22.0-0` +- Helm v3.0.0+ + +## Installing the Chart + +To install the chart with the release name `my-release`: + +```console +$ helm repo add argo https://argoproj.github.io/argo-helm +"argo" has been added to your repositories + +$ helm install my-release argo/argo-cd +NAME: my-release +... +``` + +## General parameters + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| apiVersionOverrides.autoscaling | string | `""` | String to override apiVersion of autoscaling rendered by this helm chart | +| apiVersionOverrides.certmanager | string | `""` | String to override apiVersion of cert-manager resources rendered by this helm chart | +| apiVersionOverrides.cloudgoogle | string | `""` | String to override apiVersion of GKE resources rendered by this helm chart | +| apiVersionOverrides.ingress | string | `""` | String to override apiVersion of ingresses rendered by this helm chart | +| apiVersionOverrides.pdb | string | `""` | String to override apiVersion of pod disruption budgets rendered by this helm chart | +| crds.annotations | object | `{}` | Annotations to be added to all CRDs | +| crds.install | bool | `true` | Install and upgrade CRDs | +| crds.keep | bool | `true` | Keep CRDs on chart uninstall | +| createAggregateRoles | bool | `false` | Create clusterroles that extend existing clusterroles to interact with argo-cd crds | +| extraObjects | list | `[]` | Array of extra K8s manifests to deploy | +| fullnameOverride | string | `""` | String to fully override `"argo-cd.fullname"` | +| kubeVersionOverride | string | `""` | Override the Kubernetes version, which is used to evaluate certain manifests | +| nameOverride | string | `"argocd"` | Provide a name in place of `argocd` | +| openshift.enabled | bool | `false` | enables using arbitrary uid for argo repo server | + +## Global Configs + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| global.additionalLabels | object | `{}` | Common labels for the all resources | +| global.hostAliases | list | `[]` | Mapping between IP and hostnames that will be injected as entries in the pod's hosts files | +| global.image.imagePullPolicy | string | `"IfNotPresent"` | If defined, a imagePullPolicy applied to all Argo CD deployments | +| global.image.repository | string | `"quay.io/argoproj/argocd"` | If defined, a repository applied to all Argo CD deployments | +| global.image.tag | string | `""` | Overrides the global Argo CD image tag whose default is the chart appVersion | +| global.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry | +| global.logging.format | string | `"text"` | Set the global logging format. Either: `text` or `json` | +| global.logging.level | string | `"info"` | Set the global logging level. 
One of: `debug`, `info`, `warn` or `error` | +| global.networkPolicy.create | bool | `false` | Create NetworkPolicy objects for all components | +| global.networkPolicy.defaultDenyIngress | bool | `false` | Default deny all ingress traffic | +| global.podAnnotations | object | `{}` | Annotations for the all deployed pods | +| global.podLabels | object | `{}` | Labels for the all deployed pods | +| global.revisionHistoryLimit | int | `3` | Number of old deployment ReplicaSets to retain. The rest will be garbage collected. | +| global.securityContext | object | `{}` (See [values.yaml]) | Toggle and define pod-level security context. | + +## Argo CD Configs + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| configs.clusterCredentials | list | `[]` (See [values.yaml]) | Provide one or multiple [external cluster credentials] | +| configs.cm."admin.enabled" | bool | `true` | Enable local admin user | +| configs.cm."application.instanceLabelKey" | string | Defaults to app.kubernetes.io/instance | The name of tracking label used by Argo CD for resource pruning | +| configs.cm."exec.enabled" | bool | `false` | Enable exec feature in Argo UI | +| configs.cm."server.rbac.log.enforce.enable" | bool | `false` | Enable logs RBAC enforcement | +| configs.cm."timeout.hard.reconciliation" | int | `0` | Timeout to refresh application data as well as target manifests cache | +| configs.cm."timeout.reconciliation" | string | `"180s"` | Timeout to discover if a new manifests version got published to the repository | +| configs.cm.annotations | object | `{}` | Annotations to be added to argocd-cm configmap | +| configs.cm.create | bool | `true` | Create the argocd-cm configmap for [Declarative setup] | +| configs.cm.url | string | `""` | Argo CD's externally facing base URL (optional). Required when configuring SSO | +| configs.credentialTemplates | object | `{}` | Repository credentials to be used as Templates for other repos | +| configs.credentialTemplatesAnnotations | object | `{}` | Annotations to be added to `configs.credentialTemplates` Secret | +| configs.gpgKeys | object | `{}` (See [values.yaml]) | [GnuPG](https://argo-cd.readthedocs.io/en/stable/user-guide/gpg-verification/) keys to add to the key ring | +| configs.gpgKeysAnnotations | object | `{}` | GnuPG key ring annotations | +| configs.knownHosts.data.ssh_known_hosts | string | See [values.yaml] | Known Hosts | +| configs.knownHostsAnnotations | object | `{}` | Known Hosts configmap annotations | +| configs.params."controller.operation.processors" | int | `10` | Number of application operation processors | +| configs.params."controller.repo.server.timeout.seconds" | int | `60` | Repo server RPC call timeout seconds. | +| configs.params."controller.self.heal.timeout.seconds" | int | `5` | Specifies timeout between application self heal attempts | +| configs.params."controller.status.processors" | int | `20` | Number of application status processors | +| configs.params."otlp.address" | string | `""` | Open-Telemetry collector address: (e.g. "otel-collector:4317") | +| configs.params."reposerver.parallelism.limit" | int | `0` | Limit on number of concurrent manifests generate requests. Any value less the 1 means no limit. | +| configs.params."server.basehref" | string | `"/"` | Value for base href in index.html. 
Used if Argo CD is running behind reverse proxy under subpath different from / | +| configs.params."server.disable.auth" | bool | `false` | Disable Argo CD RBAC for user authentication | +| configs.params."server.enable.gzip" | bool | `false` | Enable GZIP compression | +| configs.params."server.insecure" | bool | `false` | Run server without TLS | +| configs.params."server.rootpath" | string | `""` | Used if Argo CD is running behind reverse proxy under subpath different from / | +| configs.params."server.staticassets" | string | `"/shared/app"` | Directory path that contains additional static assets | +| configs.params."server.x.frame.options" | string | `"sameorigin"` | Set X-Frame-Options header in HTTP responses to value. To disable, set to "". | +| configs.params.annotations | object | `{}` | Annotations to be added to the argocd-cmd-params-cm ConfigMap | +| configs.rbac."policy.csv" | string | `''` (See [values.yaml]) | File containing user-defined policies and role definitions. | +| configs.rbac."policy.default" | string | `""` | The name of the default role which Argo CD will falls back to, when authorizing API requests (optional). If omitted or empty, users may be still be able to login, but will see no apps, projects, etc... | +| configs.rbac.annotations | object | `{}` | Annotations to be added to argocd-rbac-cm configmap | +| configs.rbac.create | bool | `true` | Create the argocd-rbac-cm configmap with ([Argo CD RBAC policy]) definitions. If false, it is expected the configmap will be created by something else. Argo CD will not work if there is no configmap created with the name above. | +| configs.rbac.scopes | string | `"[groups]"` | OIDC scopes to examine during rbac enforcement (in addition to `sub` scope). The scope value can be a string, or a list of strings. | +| configs.repositories | object | `{}` | Repositories list to be used by applications | +| configs.repositoriesAnnotations | object | `{}` | Annotations to be added to `configs.repositories` Secret | +| configs.secret.annotations | object | `{}` | Annotations to be added to argocd-secret | +| configs.secret.argocdServerAdminPassword | string | `""` | Bcrypt hashed admin password | +| configs.secret.argocdServerAdminPasswordMtime | string | `""` (defaults to current time) | Admin password modification time. Eg. `"2006-01-02T15:04:05Z"` | +| configs.secret.bitbucketServerSecret | string | `""` | Shared secret for authenticating BitbucketServer webhook events | +| configs.secret.bitbucketUUID | string | `""` | UUID for authenticating Bitbucket webhook events | +| configs.secret.createSecret | bool | `true` | Create the argocd-secret | +| configs.secret.extra | object | `{}` | add additional secrets to be added to argocd-secret | +| configs.secret.githubSecret | string | `""` | Shared secret for authenticating GitHub webhook events | +| configs.secret.gitlabSecret | string | `""` | Shared secret for authenticating GitLab webhook events | +| configs.secret.gogsSecret | string | `""` | Shared secret for authenticating Gogs webhook events | +| configs.styles | string | `""` (See [values.yaml]) | Define custom [CSS styles] for your argo instance. This setting will automatically mount the provided CSS and reference it in the argo configuration. 
| +| configs.tlsCerts | object | See [values.yaml] | TLS certificate | +| configs.tlsCertsAnnotations | object | `{}` | TLS certificate configmap annotations | + +## Argo CD Controller + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| controller.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | +| controller.args | object | `{}` | DEPRECATED - Application controller commandline flags | +| controller.clusterAdminAccess.enabled | bool | `true` | Enable RBAC for local cluster deployments | +| controller.clusterRoleRules.enabled | bool | `false` | Enable custom rules for the application controller's ClusterRole resource | +| controller.clusterRoleRules.rules | list | `[]` | List of custom rules for the application controller's ClusterRole resource | +| controller.containerPort | int | `8082` | Application controller listening port | +| controller.containerSecurityContext | object | See [values.yaml] | Application controller container-level security context | +| controller.env | list | `[]` | Environment variables to pass to application controller | +| controller.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to application controller | +| controller.extraArgs | list | `[]` | Additional command line arguments to pass to application controller | +| controller.extraContainers | list | `[]` | Additional containers to be added to the application controller pod | +| controller.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the application controller | +| controller.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the application controller | +| controller.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the application controller | +| controller.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry | +| controller.initContainers | list | `[]` | Init containers to add to the application controller pod | +| controller.metrics.applicationLabels.enabled | bool | `false` | Enables additional labels in argocd_app_labels metric | +| controller.metrics.applicationLabels.labels | list | `[]` | Additional labels | +| controller.metrics.enabled | bool | `false` | Deploy metrics service | +| controller.metrics.rules.enabled | bool | `false` | Deploy a PrometheusRule for the application controller | +| controller.metrics.rules.spec | list | `[]` | PrometheusRule.Spec for the application controller | +| controller.metrics.service.annotations | object | `{}` | Metrics service annotations | +| controller.metrics.service.labels | object | `{}` | Metrics service labels | +| controller.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | +| controller.metrics.service.servicePort | int | `8082` | Metrics service port | +| controller.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | +| controller.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations | +| controller.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | +| controller.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | +| controller.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | +| 
controller.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace | +| controller.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | +| controller.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | +| controller.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | +| controller.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | +| controller.name | string | `"application-controller"` | Application controller name string | +| controller.nodeSelector | object | `{}` | [Node selector] | +| controller.pdb.annotations | object | `{}` | Annotations to be added to application controller pdb | +| controller.pdb.enabled | bool | `false` | Deploy a [PodDisruptionBudget] for the application controller | +| controller.pdb.labels | object | `{}` | Labels to be added to application controller pdb | +| controller.pdb.maxUnavailable | string | `""` | Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). | +| controller.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| controller.podAnnotations | object | `{}` | Annotations to be added to application controller pods | +| controller.podLabels | object | `{}` | Labels to be added to application controller pods | +| controller.priorityClassName | string | `""` | Priority class for the application controller pods | +| controller.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| controller.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| controller.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| controller.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| controller.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| controller.replicas | int | `1` | The number of application controller pods to run. Additional replicas will cause sharding of managed clusters across number of replicas. 
| +| controller.resources | object | `{}` | Resource limits and requests for the application controller pods | +| controller.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | +| controller.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account | +| controller.serviceAccount.create | bool | `true` | Create a service account for the application controller | +| controller.serviceAccount.labels | object | `{}` | Labels applied to created service account | +| controller.serviceAccount.name | string | `"argocd-application-controller"` | Service account name | +| controller.tolerations | list | `[]` | [Tolerations] for use with node taints | +| controller.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to the application controller | +| controller.volumeMounts | list | `[]` | Additional volumeMounts to the application controller main container | +| controller.volumes | list | `[]` | Additional volumes to the application controller pod | + +## Argo Repo Server + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| repoServer.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | +| repoServer.autoscaling.behavior | object | `{}` | Configures the scaling behavior of the target in both Up and Down directions. This is only available on HPA apiVersion `autoscaling/v2beta2` and newer | +| repoServer.autoscaling.enabled | bool | `false` | Enable Horizontal Pod Autoscaler ([HPA]) for the repo server | +| repoServer.autoscaling.maxReplicas | int | `5` | Maximum number of replicas for the repo server [HPA] | +| repoServer.autoscaling.minReplicas | int | `1` | Minimum number of replicas for the repo server [HPA] | +| repoServer.autoscaling.targetCPUUtilizationPercentage | int | `50` | Average CPU utilization percentage for the repo server [HPA] | +| repoServer.autoscaling.targetMemoryUtilizationPercentage | int | `50` | Average memory utilization percentage for the repo server [HPA] | +| repoServer.clusterAdminAccess.enabled | bool | `false` | Enable RBAC for local cluster deployments | +| repoServer.clusterRoleRules.enabled | bool | `false` | Enable custom rules for the Repo server's Cluster Role resource | +| repoServer.clusterRoleRules.rules | list | `[]` | List of custom rules for the Repo server's Cluster Role resource | +| repoServer.containerPort | int | `8081` | Configures the repo server port | +| repoServer.containerSecurityContext | object | See [values.yaml] | Repo server container-level security context | +| repoServer.env | list | `[]` | Environment variables to pass to repo server | +| repoServer.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to repo server | +| repoServer.extraArgs | list | `[]` | Additional command line arguments to pass to repo server | +| repoServer.extraContainers | list | `[]` | Additional containers to be added to the repo server pod | +| repoServer.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the repo server | +| repoServer.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the repo server | +| repoServer.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the repo server | +| repoServer.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry | +| 
repoServer.initContainers | list | `[]` | Init containers to add to the repo server pods | +| repoServer.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| repoServer.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| repoServer.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| repoServer.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| repoServer.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| repoServer.metrics.enabled | bool | `false` | Deploy metrics service | +| repoServer.metrics.service.annotations | object | `{}` | Metrics service annotations | +| repoServer.metrics.service.labels | object | `{}` | Metrics service labels | +| repoServer.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | +| repoServer.metrics.service.servicePort | int | `8084` | Metrics service port | +| repoServer.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | +| repoServer.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations | +| repoServer.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | +| repoServer.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | +| repoServer.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | +| repoServer.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace | +| repoServer.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | +| repoServer.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | +| repoServer.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | +| repoServer.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | +| repoServer.name | string | `"repo-server"` | Repo server name | +| repoServer.nodeSelector | object | `{}` | [Node selector] | +| repoServer.pdb.annotations | object | `{}` | Annotations to be added to repo server pdb | +| repoServer.pdb.enabled | bool | `false` | Deploy a [PodDisruptionBudget] for the repo server | +| repoServer.pdb.labels | object | `{}` | Labels to be added to repo server pdb | +| repoServer.pdb.maxUnavailable | string | `""` | Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). 
| +| repoServer.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| repoServer.podAnnotations | object | `{}` | Annotations to be added to repo server pods | +| repoServer.podLabels | object | `{}` | Labels to be added to repo server pods | +| repoServer.priorityClassName | string | `""` | Priority class for the repo server | +| repoServer.rbac | list | `[]` | Repo server rbac rules | +| repoServer.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| repoServer.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| repoServer.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| repoServer.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| repoServer.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| repoServer.replicas | int | `1` | The number of repo server pods to run | +| repoServer.resources | object | `{}` | Resource limits and requests for the repo server pods | +| repoServer.service.annotations | object | `{}` | Repo server service annotations | +| repoServer.service.labels | object | `{}` | Repo server service labels | +| repoServer.service.port | int | `8081` | Repo server service port | +| repoServer.service.portName | string | `"https-repo-server"` | Repo server service port name | +| repoServer.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | +| repoServer.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account | +| repoServer.serviceAccount.create | bool | `true` | Create repo server service account | +| repoServer.serviceAccount.labels | object | `{}` | Labels applied to created service account | +| repoServer.serviceAccount.name | string | `""` | Repo server service account name | +| repoServer.tolerations | list | `[]` | [Tolerations] for use with node taints | +| repoServer.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to the repo server | +| repoServer.volumeMounts | list | `[]` | Additional volumeMounts to the repo server main container | +| repoServer.volumes | list | `[]` | Additional volumes to the repo server pod | + +## Argo Server + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| server.GKEbackendConfig.enabled | bool | `false` | Enable BackendConfig custom resource for Google Kubernetes Engine | +| server.GKEbackendConfig.spec | object | `{}` | [BackendConfigSpec] | +| server.GKEfrontendConfig.enabled | bool | `false` | Enable FrontConfig custom resource for Google Kubernetes Engine | +| server.GKEfrontendConfig.spec | object | `{}` | [FrontendConfigSpec] | +| server.GKEmanagedCertificate.domains | list | `["argocd.example.com"]` | Domains for the Google Managed Certificate | +| server.GKEmanagedCertificate.enabled | bool | `false` | Enable ManagedCertificate custom resource for Google Kubernetes Engine. | +| server.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | +| server.autoscaling.behavior | object | `{}` | Configures the scaling behavior of the target in both Up and Down directions. 
This is only available on HPA apiVersion `autoscaling/v2beta2` and newer | +| server.autoscaling.enabled | bool | `false` | Enable Horizontal Pod Autoscaler ([HPA]) for the Argo CD server | +| server.autoscaling.maxReplicas | int | `5` | Maximum number of replicas for the Argo CD server [HPA] | +| server.autoscaling.minReplicas | int | `1` | Minimum number of replicas for the Argo CD server [HPA] | +| server.autoscaling.targetCPUUtilizationPercentage | int | `50` | Average CPU utilization percentage for the Argo CD server [HPA] | +| server.autoscaling.targetMemoryUtilizationPercentage | int | `50` | Average memory utilization percentage for the Argo CD server [HPA] | +| server.certificate.additionalHosts | list | `[]` | Certificate Subject Alternate Names (SANs) | +| server.certificate.domain | string | `"argocd.example.com"` | Certificate primary domain (commonName) | +| server.certificate.duration | string | `""` (defaults to 2160h = 90d if not specified) | The requested 'duration' (i.e. lifetime) of the certificate. | +| server.certificate.enabled | bool | `false` | Deploy a Certificate resource (requires cert-manager) | +| server.certificate.issuer.group | string | `""` | Certificate issuer group. Set if using an external issuer. Eg. `cert-manager.io` | +| server.certificate.issuer.kind | string | `""` | Certificate issuer kind. Either `Issuer` or `ClusterIssuer` | +| server.certificate.issuer.name | string | `""` | Certificate isser name. Eg. `letsencrypt` | +| server.certificate.privateKey.algorithm | string | `"RSA"` | Algorithm used to generate certificate private key. One of: `RSA`, `Ed25519` or `ECDSA` | +| server.certificate.privateKey.encoding | string | `"PKCS1"` | The private key cryptography standards (PKCS) encoding for private key. Either: `PCKS1` or `PKCS8` | +| server.certificate.privateKey.rotationPolicy | string | `"Never"` | Rotation policy of private key when certificate is re-issued. Either: `Never` or `Always` | +| server.certificate.privateKey.size | int | `2048` | Key bit size of the private key. If algorithm is set to `Ed25519`, size is ignored. | +| server.certificate.renewBefore | string | `""` (defaults to 360h = 15d if not specified) | How long before the expiry a certificate should be renewed. 
| +| server.certificate.secretName | string | `"argocd-server-tls"` | The name of the Secret that will be automatically created and managed by this Certificate resource | +| server.certificateSecret.annotations | object | `{}` | Annotations to be added to argocd-server-tls secret | +| server.certificateSecret.crt | string | `""` | Certificate data | +| server.certificateSecret.enabled | bool | `false` | Create argocd-server-tls secret | +| server.certificateSecret.key | string | `""` | Private Key of the certificate | +| server.certificateSecret.labels | object | `{}` | Labels to be added to argocd-server-tls secret | +| server.clusterAdminAccess.enabled | bool | `true` | Enable RBAC for local cluster deployments | +| server.containerPort | int | `8080` | Configures the server port | +| server.containerSecurityContext | object | See [values.yaml] | Server container-level security context | +| server.env | list | `[]` | Environment variables to pass to Argo CD server | +| server.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to Argo CD server | +| server.extensions.containerSecurityContext | object | See [values.yaml] | Server UI extensions container-level security context | +| server.extensions.enabled | bool | `false` | Enable support for Argo UI extensions | +| server.extensions.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy for extensions | +| server.extensions.image.repository | string | `"ghcr.io/argoproj-labs/argocd-extensions"` | Repository to use for extensions image | +| server.extensions.image.tag | string | `"v0.1.0"` | Tag to use for extensions image | +| server.extensions.resources | object | `{}` | Resource limits and requests for the argocd-extensions container | +| server.extraArgs | list | `[]` | Additional command line arguments to pass to Argo CD server | +| server.extraContainers | list | `[]` | Additional containers to be added to the server pod | +| server.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the Argo CD server | +| server.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the Argo CD server | +| server.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the Argo CD server | +| server.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry | +| server.ingress.annotations | object | `{}` | Additional ingress annotations | +| server.ingress.enabled | bool | `false` | Enable an ingress resource for the Argo CD server | +| server.ingress.extraPaths | list | `[]` | Additional ingress paths | +| server.ingress.hosts | list | `[]` | List of ingress hosts | +| server.ingress.https | bool | `false` | Uses `server.service.servicePortHttps` instead `server.service.servicePortHttp` | +| server.ingress.ingressClassName | string | `""` | Defines which ingress controller will implement the resource | +| server.ingress.labels | object | `{}` | Additional ingress labels | +| server.ingress.pathType | string | `"Prefix"` | Ingress path type. 
One of `Exact`, `Prefix` or `ImplementationSpecific` | +| server.ingress.paths | list | `["/"]` | List of ingress paths | +| server.ingress.tls | list | `[]` | Ingress TLS configuration | +| server.ingressGrpc.annotations | object | `{}` | Additional ingress annotations for dedicated [gRPC-ingress] | +| server.ingressGrpc.awsALB.backendProtocolVersion | string | `"HTTP2"` | Backend protocol version for the AWS ALB gRPC service | +| server.ingressGrpc.awsALB.serviceType | string | `"NodePort"` | Service type for the AWS ALB gRPC service | +| server.ingressGrpc.enabled | bool | `false` | Enable an ingress resource for the Argo CD server for dedicated [gRPC-ingress] | +| server.ingressGrpc.extraPaths | list | `[]` | Additional ingress paths for dedicated [gRPC-ingress] | +| server.ingressGrpc.hosts | list | `[]` | List of ingress hosts for dedicated [gRPC-ingress] | +| server.ingressGrpc.https | bool | `false` | Uses `server.service.servicePortHttps` instead `server.service.servicePortHttp` | +| server.ingressGrpc.ingressClassName | string | `""` | Defines which ingress controller will implement the resource [gRPC-ingress] | +| server.ingressGrpc.isAWSALB | bool | `false` | Setup up gRPC ingress to work with an AWS ALB | +| server.ingressGrpc.labels | object | `{}` | Additional ingress labels for dedicated [gRPC-ingress] | +| server.ingressGrpc.pathType | string | `"Prefix"` | Ingress path type for dedicated [gRPC-ingress]. One of `Exact`, `Prefix` or `ImplementationSpecific` | +| server.ingressGrpc.paths | list | `["/"]` | List of ingress paths for dedicated [gRPC-ingress] | +| server.ingressGrpc.tls | list | `[]` | Ingress TLS configuration for dedicated [gRPC-ingress] | +| server.initContainers | list | `[]` | Init containers to add to the server pod | +| server.lifecycle | object | `{}` | Specify postStart and preStop lifecycle hooks for your argo-cd-server container | +| server.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| server.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| server.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| server.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| server.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| server.metrics.enabled | bool | `false` | Deploy metrics service | +| server.metrics.service.annotations | object | `{}` | Metrics service annotations | +| server.metrics.service.labels | object | `{}` | Metrics service labels | +| server.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | +| server.metrics.service.servicePort | int | `8083` | Metrics service port | +| server.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | +| server.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations | +| server.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | +| server.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | +| server.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | +| server.metrics.serviceMonitor.namespace | string | `""` 
| Prometheus ServiceMonitor namespace | +| server.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | +| server.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | +| server.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | +| server.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | +| server.name | string | `"server"` | Argo CD server name | +| server.nodeSelector | object | `{}` | [Node selector] | +| server.pdb.annotations | object | `{}` | Annotations to be added to Argo CD server pdb | +| server.pdb.enabled | bool | `false` | Deploy a [PodDisruptionBudget] for the Argo CD server | +| server.pdb.labels | object | `{}` | Labels to be added to Argo CD server pdb | +| server.pdb.maxUnavailable | string | `""` | Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). | +| server.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| server.podAnnotations | object | `{}` | Annotations to be added to server pods | +| server.podLabels | object | `{}` | Labels to be added to server pods | +| server.priorityClassName | string | `""` | Priority class for the Argo CD server | +| server.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| server.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| server.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| server.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| server.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| server.replicas | int | `1` | The number of server pods to run | +| server.resources | object | `{}` | Resource limits and requests for the Argo CD server | +| server.route.annotations | object | `{}` | Openshift Route annotations | +| server.route.enabled | bool | `false` | Enable an OpenShift Route for the Argo CD server | +| server.route.hostname | string | `""` | Hostname of OpenShift Route | +| server.route.termination_policy | string | `"None"` | Termination policy of Openshift Route | +| server.route.termination_type | string | `"passthrough"` | Termination type of Openshift Route | +| server.service.annotations | object | `{}` | Server service annotations | +| server.service.externalIPs | list | `[]` | Server service external IPs | +| server.service.externalTrafficPolicy | string | `""` | Denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints | +| server.service.labels | object | `{}` | Server service labels | +| server.service.loadBalancerIP | string | `""` | LoadBalancer will get created with the IP specified in this field | +| server.service.loadBalancerSourceRanges | list | `[]` | Source IP ranges to allow access to service from | +| server.service.namedTargetPort | bool | `true` | Use named target port for argocd | +| server.service.nodePortHttp | int | `30080` | Server service http port for NodePort service type (only if `server.service.type` is set to "NodePort") | +| server.service.nodePortHttps | int | `30443` | Server 
service https port for NodePort service type (only if `server.service.type` is set to "NodePort") | +| server.service.servicePortHttp | int | `80` | Server service http port | +| server.service.servicePortHttpName | string | `"http"` | Server service http port name, can be used to route traffic via istio | +| server.service.servicePortHttps | int | `443` | Server service https port | +| server.service.servicePortHttpsName | string | `"https"` | Server service https port name, can be used to route traffic via istio | +| server.service.sessionAffinity | string | `""` | Used to maintain session affinity. Supports `ClientIP` and `None` | +| server.service.type | string | `"ClusterIP"` | Server service type | +| server.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | +| server.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account | +| server.serviceAccount.create | bool | `true` | Create server service account | +| server.serviceAccount.labels | object | `{}` | Labels applied to created service account | +| server.serviceAccount.name | string | `"argocd-server"` | Server service account name | +| server.tolerations | list | `[]` | [Tolerations] for use with node taints | +| server.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to the Argo CD server | +| server.volumeMounts | list | `[]` | Additional volumeMounts to the server main container | +| server.volumes | list | `[]` | Additional volumes to the server pod | + +### Using AWS ALB Ingress Controller With GRPC + +If you are using an AWS ALB Ingress controller, you will need to set `server.ingressGrpc.isAWSALB` to `true`. This will create a second service with the annotation `alb.ingress.kubernetes.io/backend-protocol-version: HTTP2` and modify the server ingress to add a condition annotation to route GRPC traffic to the new service. + +Example: + +```yaml +server: + ingress: + enabled: true + annotations: + alb.ingress.kubernetes.io/backend-protocol: HTTPS + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' + alb.ingress.kubernetes.io/scheme: internal + alb.ingress.kubernetes.io/target-type: ip + ingressGrpc: + enabled: true + isAWSALB: true + awsALB: + serviceType: ClusterIP +``` + +## Dex + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| dex.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | +| dex.certificateSecret.annotations | object | `{}` | Annotations to be added to argocd-dex-server-tls secret | +| dex.certificateSecret.ca | string | `""` | Certificate authority. Required for self-signed certificates. | +| dex.certificateSecret.crt | string | `""` | Certificate data. 
Must contain SANs of Dex service (ie: argocd-dex-server, argocd-dex-server.argo-cd.svc) | +| dex.certificateSecret.enabled | bool | `false` | Create argocd-dex-server-tls secret | +| dex.certificateSecret.key | string | `""` | Certificate private key | +| dex.certificateSecret.labels | object | `{}` | Labels to be added to argocd-dex-server-tls secret | +| dex.containerPortGrpc | int | `5557` | Container port for gRPC access | +| dex.containerPortHttp | int | `5556` | Container port for HTTP access | +| dex.containerPortMetrics | int | `5558` | Container port for metrics access | +| dex.containerSecurityContext | object | See [values.yaml] | Dex container-level security context | +| dex.enabled | bool | `true` | Enable dex | +| dex.env | list | `[]` | Environment variables to pass to the Dex server | +| dex.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to the Dex server | +| dex.extraArgs | list | `[]` | Additional command line arguments to pass to the Dex server | +| dex.extraContainers | list | `[]` | Additional containers to be added to the dex pod | +| dex.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Dex imagePullPolicy | +| dex.image.repository | string | `"ghcr.io/dexidp/dex"` | Dex image repository | +| dex.image.tag | string | `"v2.35.3"` | Dex image tag | +| dex.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry | +| dex.initContainers | list | `[]` | Init containers to add to the dex pod | +| dex.initImage.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Argo CD init image imagePullPolicy | +| dex.initImage.repository | string | `""` (defaults to global.image.repository) | Argo CD init image repository | +| dex.initImage.tag | string | `""` (defaults to global.image.tag) | Argo CD init image tag | +| dex.livenessProbe.enabled | bool | `false` | Enable Kubernetes liveness probe for Dex >= 2.28.0 | +| dex.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| dex.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| dex.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| dex.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| dex.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| dex.metrics.enabled | bool | `false` | Deploy metrics service | +| dex.metrics.service.annotations | object | `{}` | Metrics service annotations | +| dex.metrics.service.labels | object | `{}` | Metrics service labels | +| dex.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | +| dex.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | +| dex.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations | +| dex.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | +| dex.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | +| dex.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | +| dex.metrics.serviceMonitor.namespace | string | `""` | Prometheus 
ServiceMonitor namespace | +| dex.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | +| dex.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | +| dex.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | +| dex.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | +| dex.name | string | `"dex-server"` | Dex name | +| dex.nodeSelector | object | `{}` | [Node selector] | +| dex.pdb.annotations | object | `{}` | Annotations to be added to Dex server pdb | +| dex.pdb.enabled | bool | `false` | Deploy a [PodDisruptionBudget] for the Dex server | +| dex.pdb.labels | object | `{}` | Labels to be added to Dex server pdb | +| dex.pdb.maxUnavailable | string | `""` | Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). | +| dex.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| dex.podAnnotations | object | `{}` | Annotations to be added to the Dex server pods | +| dex.podLabels | object | `{}` | Labels to be added to the Dex server pods | +| dex.priorityClassName | string | `""` | Priority class for dex | +| dex.readinessProbe.enabled | bool | `false` | Enable Kubernetes readiness probe for Dex >= 2.28.0 | +| dex.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| dex.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| dex.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| dex.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| dex.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| dex.resources | object | `{}` | Resource limits and requests for dex | +| dex.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | +| dex.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account | +| dex.serviceAccount.create | bool | `true` | Create dex service account | +| dex.serviceAccount.name | string | `"argocd-dex-server"` | Dex service account name | +| dex.servicePortGrpc | int | `5557` | Service port for gRPC access | +| dex.servicePortGrpcName | string | `"grpc"` | Service port name for gRPC access | +| dex.servicePortHttp | int | `5556` | Service port for HTTP access | +| dex.servicePortHttpName | string | `"http"` | Service port name for HTTP access | +| dex.servicePortMetrics | int | `5558` | Service port for metrics access | +| dex.tolerations | list | `[]` | [Tolerations] for use with node taints | +| dex.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to dex | +| dex.volumeMounts | list | `[]` | Additional volumeMounts to the dex main container | +| dex.volumes | list | `[]` | Additional volumes to the dex pod | + +## Redis + +### Option 1 - Single Redis instance (default option) + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| redis.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | +| redis.containerPort | int | `6379` | Redis container port | +| 
redis.containerSecurityContext | object | See [values.yaml] | Redis container-level security context | +| redis.enabled | bool | `true` | Enable redis | +| redis.env | list | `[]` | Environment variables to pass to the Redis server | +| redis.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to the Redis server | +| redis.extraArgs | list | `[]` | Additional command line arguments to pass to redis-server | +| redis.extraContainers | list | `[]` | Additional containers to be added to the redis pod | +| redis.image.imagePullPolicy | string | `"IfNotPresent"` | Redis imagePullPolicy | +| redis.image.repository | string | `"public.ecr.aws/docker/library/redis"` | Redis repository | +| redis.image.tag | string | `"7.0.5-alpine"` | Redis tag | +| redis.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry | +| redis.initContainers | list | `[]` | Init containers to add to the redis pod | +| redis.metrics.containerPort | int | `9121` | Port to use for redis-exporter sidecar | +| redis.metrics.containerSecurityContext | object | See [values.yaml] | Redis exporter security context | +| redis.metrics.enabled | bool | `false` | Deploy metrics service and redis-exporter sidecar | +| redis.metrics.image.imagePullPolicy | string | `"IfNotPresent"` | redis-exporter image PullPolicy | +| redis.metrics.image.repository | string | `"public.ecr.aws/bitnami/redis-exporter"` | redis-exporter image repository | +| redis.metrics.image.tag | string | `"1.26.0-debian-10-r2"` | redis-exporter image tag | +| redis.metrics.resources | object | `{}` | Resource limits and requests for redis-exporter sidecar | +| redis.metrics.service.annotations | object | `{}` | Metrics service annotations | +| redis.metrics.service.clusterIP | string | `"None"` | Metrics service clusterIP. 
`None` makes a "headless service" (no virtual IP) | +| redis.metrics.service.labels | object | `{}` | Metrics service labels | +| redis.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | +| redis.metrics.service.servicePort | int | `9121` | Metrics service port | +| redis.metrics.service.type | string | `"ClusterIP"` | Metrics service type | +| redis.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | +| redis.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations | +| redis.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | +| redis.metrics.serviceMonitor.interval | string | `"30s"` | Interval at which metrics should be scraped | +| redis.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | +| redis.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace | +| redis.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | +| redis.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | +| redis.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | +| redis.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | +| redis.name | string | `"redis"` | Redis name | +| redis.nodeSelector | object | `{}` | [Node selector] | +| redis.pdb.annotations | object | `{}` | Annotations to be added to Redis pdb | +| redis.pdb.enabled | bool | `false` | Deploy a [PodDisruptionBudget] for the Redis | +| redis.pdb.labels | object | `{}` | Labels to be added to Redis pdb | +| redis.pdb.maxUnavailable | string | `""` | Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). 
| +| redis.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| redis.podAnnotations | object | `{}` | Annotations to be added to the Redis server pods | +| redis.podLabels | object | `{}` | Labels to be added to the Redis server pods | +| redis.priorityClassName | string | `""` | Priority class for redis | +| redis.resources | object | `{}` | Resource limits and requests for redis | +| redis.securityContext | object | See [values.yaml] | Redis pod-level security context | +| redis.service.annotations | object | `{}` | Redis service annotations | +| redis.service.labels | object | `{}` | Additional redis service labels | +| redis.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | +| redis.serviceAccount.automountServiceAccountToken | bool | `false` | Automount API credentials for the Service Account | +| redis.serviceAccount.create | bool | `false` | Create a service account for the redis pod | +| redis.serviceAccount.name | string | `""` | Service account name for redis pod | +| redis.servicePort | int | `6379` | Redis service port | +| redis.tolerations | list | `[]` | [Tolerations] for use with node taints | +| redis.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to redis | +| redis.volumeMounts | list | `[]` | Additional volumeMounts to the redis container | +| redis.volumes | list | `[]` | Additional volumes to the redis pod | + +### Option 2 - Redis HA + +This option uses the following third-party chart to bootstrap a clustered Redis: https://github.com/DandyDeveloper/charts/tree/master/charts/redis-ha. +For all available configuration options, please read upstream README and/or chart source. +The main options are listed here: + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| redis-ha.enabled | bool | `false` | Enables the Redis HA subchart and disables the custom Redis single node deployment | +| redis-ha.exporter.enabled | bool | `true` | If `true`, the prometheus exporter sidecar is enabled | +| redis-ha.haproxy.enabled | bool | `true` | Enabled HAProxy LoadBalancing/Proxy | +| redis-ha.haproxy.metrics.enabled | bool | `true` | HAProxy enable prometheus metric scraping | +| redis-ha.image.tag | string | `"7.0.5-alpine"` | Redis tag | +| redis-ha.persistentVolume.enabled | bool | `false` | Configures persistency on Redis nodes | +| redis-ha.redis.config | object | See [values.yaml] | Any valid redis config options in this section will be applied to each server (see `redis-ha` chart) | +| redis-ha.redis.config.save | string | `'""'` | Will save the DB if both the given number of seconds and the given number of write operations against the DB occurred. 
`""` is disabled | +| redis-ha.redis.masterGroupName | string | `"argocd"` | Redis convention for naming the cluster group: must match `^[\\w-\\.]+$` and can be templated | +| redis-ha.topologySpreadConstraints.enabled | bool | `false` | Enable Redis HA topology spread constraints | +| redis-ha.topologySpreadConstraints.maxSkew | string | `""` (defaults to `1`) | Max skew of pods tolerated | +| redis-ha.topologySpreadConstraints.topologyKey | string | `""` (defaults to `topology.kubernetes.io/zone`) | Topology key for spread | +| redis-ha.topologySpreadConstraints.whenUnsatisfiable | string | `""` (defaults to `ScheduleAnyway`) | Enforcement policy, hard or soft | +| redis-ha.exporter.image | string | `nil` (follows subchart default) | Exporter image | +| redis-ha.exporter.tag | string | `nil` (follows subchart default) | Exporter tag | +| redis-ha.haproxy.image.repository | string | `nil` (follows subchart default) | HAProxy Image Repository | +| redis-ha.haproxy.image.tag | string | `nil` (follows subchart default) | HAProxy Image Tag | +| redis-ha.image.repository | string | `nil` (follows subchart default) | Redis image repository | + +### Option 3 - External Redis + +If you want to use an existing Redis (eg. a managed service from a cloud provider), you can use these parameters: + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| externalRedis.existingSecret | string | `""` | The name of an existing secret with Redis credentials (must contain key `redis-password`). When it's set, the `externalRedis.password` parameter is ignored | +| externalRedis.host | string | `""` | External Redis server host | +| externalRedis.password | string | `""` | External Redis password | +| externalRedis.port | int | `6379` | External Redis server port | +| externalRedis.secretAnnotations | object | `{}` | External Redis Secret annotations | +| externalRedis.username | string | `""` | External Redis username | + +## ApplicationSet + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| applicationSet.affinity | object | `{}` | Assign custom [affinity] rules | +| applicationSet.args.debug | bool | `false` | Print debug logs | +| applicationSet.args.dryRun | bool | `false` | Enable dry run mode | +| applicationSet.args.enableLeaderElection | bool | `false` | The default leader election setting | +| applicationSet.args.metricsAddr | string | `":8080"` | The default metric address | +| applicationSet.args.policy | string | `"sync"` | How application is synced between the generator and the cluster | +| applicationSet.args.probeBindAddr | string | `":8081"` | The default health check port | +| applicationSet.containerSecurityContext | object | See [values.yaml] | ApplicationSet controller container-level security context | +| applicationSet.enabled | bool | `true` | Enable ApplicationSet controller | +| applicationSet.extraArgs | list | `[]` | List of extra cli args to add | +| applicationSet.extraContainers | list | `[]` | Additional containers to be added to the applicationset controller pod | +| applicationSet.extraEnv | list | `[]` | Environment variables to pass to the controller | +| applicationSet.extraEnvFrom | list | `[]` (See [values.yaml]) | envFrom to pass to the controller | +| applicationSet.extraVolumeMounts | list | `[]` | List of extra mounts to add (normally used with extraVolumes) | +| applicationSet.extraVolumes | list | `[]` | List of extra volumes to add | +| applicationSet.image.imagePullPolicy | string | `""` (defaults to 
global.image.imagePullPolicy) | Image pull policy for the application set controller | +| applicationSet.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the application set controller | +| applicationSet.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the application set controller | +| applicationSet.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | If defined, uses a Secret to pull an image from a private Docker registry or repository. | +| applicationSet.livenessProbe.enabled | bool | `false` | Enable Kubernetes liveness probe for ApplicationSet controller | +| applicationSet.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| applicationSet.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| applicationSet.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| applicationSet.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| applicationSet.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| applicationSet.logFormat | string | `""` (defaults to global.logging.format) | ApplicationSet controller log format. Either `text` or `json` | +| applicationSet.logLevel | string | `""` (defaults to global.logging.level) | ApplicationSet controller log level. One of: `debug`, `info`, `warn`, `error` | +| applicationSet.metrics.enabled | bool | `false` | Deploy metrics service | +| applicationSet.metrics.service.annotations | object | `{}` | Metrics service annotations | +| applicationSet.metrics.service.labels | object | `{}` | Metrics service labels | +| applicationSet.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | +| applicationSet.metrics.service.servicePort | int | `8085` | Metrics service port | +| applicationSet.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | +| applicationSet.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations | +| applicationSet.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | +| applicationSet.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | +| applicationSet.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | +| applicationSet.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace | +| applicationSet.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | +| applicationSet.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | +| applicationSet.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | +| applicationSet.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | +| applicationSet.name | string | `"applicationset-controller"` | Application Set controller name string | +| applicationSet.nodeSelector | object | `{}` | [Node selector] | +| applicationSet.pdb.annotations | object | `{}` | Annotations to be added to ApplicationSet controller pdb | +| applicationSet.pdb.enabled | 
bool | `false` | Deploy a [PodDisruptionBudget] for the ApplicationSet controller | +| applicationSet.pdb.labels | object | `{}` | Labels to be added to ApplicationSet controller pdb | +| applicationSet.pdb.maxUnavailable | string | `""` | Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). | +| applicationSet.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| applicationSet.podAnnotations | object | `{}` | Annotations for the controller pods | +| applicationSet.podLabels | object | `{}` | Labels for the controller pods | +| applicationSet.priorityClassName | string | `""` | If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default. | +| applicationSet.readinessProbe.enabled | bool | `false` | Enable Kubernetes liveness probe for ApplicationSet controller | +| applicationSet.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | +| applicationSet.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | +| applicationSet.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | +| applicationSet.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | +| applicationSet.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | +| applicationSet.replicaCount | int | `1` | The number of ApplicationSet controller pods to run | +| applicationSet.resources | object | `{}` | Resource limits and requests for the controller pods. | +| applicationSet.service.annotations | object | `{}` | Application set service annotations | +| applicationSet.service.labels | object | `{}` | Application set service labels | +| applicationSet.service.port | int | `7000` | Application set service port | +| applicationSet.service.portName | string | `"webhook"` | Application set service port name | +| applicationSet.serviceAccount.annotations | object | `{}` | Annotations to add to the service account | +| applicationSet.serviceAccount.create | bool | `true` | Specifies whether a service account should be created | +| applicationSet.serviceAccount.labels | object | `{}` | Labels applied to created service account | +| applicationSet.serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template | +| applicationSet.tolerations | list | `[]` | [Tolerations] for use with node taints | +| applicationSet.webhook.ingress.annotations | object | `{}` | Additional ingress annotations | +| applicationSet.webhook.ingress.enabled | bool | `false` | Enable an ingress resource for Webhooks | +| applicationSet.webhook.ingress.extraPaths | list | `[]` | Additional ingress paths | +| applicationSet.webhook.ingress.hosts | list | `[]` | List of ingress hosts | +| applicationSet.webhook.ingress.ingressClassName | string | `""` | Defines which ingress controller will implement the resource | +| applicationSet.webhook.ingress.labels | object | `{}` | Additional ingress labels | +| applicationSet.webhook.ingress.pathType | string | `"Prefix"` | Ingress path type. 
One of `Exact`, `Prefix` or `ImplementationSpecific` | +| applicationSet.webhook.ingress.paths | list | `["/api/webhook"]` | List of ingress paths | +| applicationSet.webhook.ingress.tls | list | `[]` | Ingress TLS configuration | + +## Notifications + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| notifications.affinity | object | `{}` | Assign custom [affinity] rules | +| notifications.argocdUrl | string | `nil` | Argo CD dashboard url; used in place of {{.context.argocdUrl}} in templates | +| notifications.bots.slack.affinity | object | `{}` | Assign custom [affinity] rules | +| notifications.bots.slack.containerSecurityContext | object | See [values.yaml] | Slack bot container-level security Context | +| notifications.bots.slack.enabled | bool | `false` | Enable slack bot | +| notifications.bots.slack.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the Slack bot | +| notifications.bots.slack.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the Slack bot | +| notifications.bots.slack.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the Slack bot | +| notifications.bots.slack.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry | +| notifications.bots.slack.nodeSelector | object | `{}` | [Node selector] | +| notifications.bots.slack.pdb.annotations | object | `{}` | Annotations to be added to Slack bot pdb | +| notifications.bots.slack.pdb.enabled | bool | `false` | Deploy a [PodDisruptionBudget] for the Slack bot | +| notifications.bots.slack.pdb.labels | object | `{}` | Labels to be added to Slack bot pdb | +| notifications.bots.slack.pdb.maxUnavailable | string | `""` | Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). | +| notifications.bots.slack.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| notifications.bots.slack.resources | object | `{}` | Resource limits and requests for the Slack bot | +| notifications.bots.slack.service.annotations | object | `{}` | Service annotations for Slack bot | +| notifications.bots.slack.service.port | int | `80` | Service port for Slack bot | +| notifications.bots.slack.service.type | string | `"LoadBalancer"` | Service type for Slack bot | +| notifications.bots.slack.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | +| notifications.bots.slack.serviceAccount.create | bool | `true` | Specifies whether a service account should be created | +| notifications.bots.slack.serviceAccount.name | string | `"argocd-notifications-bot"` | The name of the service account to use. 
| +| notifications.bots.slack.tolerations | list | `[]` | [Tolerations] for use with node taints | +| notifications.cm.create | bool | `true` | Whether helm chart creates controller config map | +| notifications.containerSecurityContext | object | See [values.yaml] | Notification controller container-level security Context | +| notifications.context | object | `{}` | Define user-defined context | +| notifications.enabled | bool | `true` | Enable notifications controller | +| notifications.extraArgs | list | `[]` | Extra arguments to provide to the controller | +| notifications.extraEnv | list | `[]` | Additional container environment variables | +| notifications.extraEnvFrom | list | `[]` (See [values.yaml]) | envFrom to pass to the controller | +| notifications.extraVolumeMounts | list | `[]` | List of extra mounts to add (normally used with extraVolumes) | +| notifications.extraVolumes | list | `[]` | List of extra volumes to add | +| notifications.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the notifications controller | +| notifications.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the notifications controller | +| notifications.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the notifications controller | +| notifications.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry | +| notifications.logFormat | string | `""` (defaults to global.logging.format) | Application controller log format. Either `text` or `json` | +| notifications.logLevel | string | `""` (defaults to global.logging.level) | Application controller log level. One of: `debug`, `info`, `warn`, `error` | +| notifications.metrics.enabled | bool | `false` | Enables prometheus metrics server | +| notifications.metrics.port | int | `9001` | Metrics port | +| notifications.metrics.service.annotations | object | `{}` | Metrics service annotations | +| notifications.metrics.service.labels | object | `{}` | Metrics service labels | +| notifications.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | +| notifications.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | +| notifications.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations | +| notifications.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | +| notifications.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | +| notifications.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | +| notifications.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | +| notifications.name | string | `"notifications-controller"` | Notifications controller name string | +| notifications.nodeSelector | object | `{}` | [Node selector] | +| notifications.notifiers | object | See [values.yaml] | Configures notification services such as slack, email or custom webhook | +| notifications.pdb.annotations | object | `{}` | Annotations to be added to notifications controller pdb | +| notifications.pdb.enabled | bool | `false` | Deploy a [PodDisruptionBudget] for the notifications controller | +| notifications.pdb.labels | object | `{}` | Labels to be added to notifications controller pdb | +| notifications.pdb.maxUnavailable | string | `""` | 
Number of pods that are unavailble after eviction as number or percentage (eg.: 50%). | +| notifications.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) | +| notifications.podAnnotations | object | `{}` | Annotations to be applied to the controller Pods | +| notifications.podLabels | object | `{}` | Labels to be applied to the controller Pods | +| notifications.priorityClassName | string | `""` | Priority class for the controller pods | +| notifications.resources | object | `{}` | Resource limits and requests for the controller | +| notifications.secret.annotations | object | `{}` | key:value pairs of annotations to be added to the secret | +| notifications.secret.create | bool | `true` | Whether helm chart creates controller secret | +| notifications.secret.items | object | `{}` | Generic key:value pairs to be inserted into the secret | +| notifications.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | +| notifications.serviceAccount.create | bool | `true` | Specifies whether a service account should be created | +| notifications.serviceAccount.labels | object | `{}` | Labels applied to created service account | +| notifications.serviceAccount.name | string | `"argocd-notifications-controller"` | The name of the service account to use. | +| notifications.subscriptions | list | `[]` | Contains centrally managed global application subscriptions | +| notifications.templates | object | `{}` | The notification template is used to generate the notification content | +| notifications.tolerations | list | `[]` | [Tolerations] for use with node taints | +| notifications.triggers | object | `{}` | The trigger defines the condition when the notification should be sent | + +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs](https://github.com/norwoodj/helm-docs) + +[Argo CD RBAC policy]: https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/ +[affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ +[BackendConfigSpec]: https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig#backendconfigspec_v1beta1_cloudgooglecom +[CSS styles]: https://argo-cd.readthedocs.io/en/stable/operator-manual/custom-styles/ +[external cluster credentials]: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters +[FrontendConfigSpec]: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#configuring_ingress_features_through_frontendconfig_parameters +[Declarative setup]: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup +[gRPC-ingress]: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/ +[HPA]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ +[MetricRelabelConfigs]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs +[Node selector]: https://kubernetes.io/docs/user-guide/node-selection/ +[PodDisruptionBudget]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets +[probe]: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes +[RelabelConfigs]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config +[Tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ +[TopologySpreadConstraints]: 
https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ +[values.yaml]: values.yaml +[v2.2 to 2.3 upgrade instructions]: https://github.com/argoproj/argo-cd/blob/v2.3.0/docs/operator-manual/upgrading/2.2-2.3.md diff --git a/charts/argo/argo-cd/5.13.7/charts/redis-ha/Chart.yaml b/charts/argo/argo-cd/5.13.7/charts/redis-ha/Chart.yaml new file mode 100644 index 000000000..567db2ecd --- /dev/null +++ b/charts/argo/argo-cd/5.13.7/charts/redis-ha/Chart.yaml @@ -0,0 +1,21 @@ +apiVersion: v2 +appVersion: 7.0.4 +description: This Helm chart provides a highly available Redis implementation with + a master/slave configuration and uses Sentinel sidecars for failover management +home: http://redis.io/ +icon: https://upload.wikimedia.org/wikipedia/en/thumb/6/6b/Redis_Logo.svg/1200px-Redis_Logo.svg.png +keywords: +- redis +- keyvalue +- database +maintainers: +- email: salimsalaues@gmail.com + name: ssalaues +- email: aaron.layfield@gmail.com + name: dandydeveloper +name: redis-ha +sources: +- https://redis.io/download +- https://github.com/DandyDeveloper/charts/blob/master/charts/redis-ha +- https://github.com/oliver006/redis_exporter +version: 4.22.3 diff --git a/charts/argo/argo-cd/5.13.7/charts/redis-ha/README.md b/charts/argo/argo-cd/5.13.7/charts/redis-ha/README.md new file mode 100644 index 000000000..5be76f3e3 --- /dev/null +++ b/charts/argo/argo-cd/5.13.7/charts/redis-ha/README.md @@ -0,0 +1,396 @@ +# Redis + +[Redis](http://redis.io/) is an advanced key-value cache and store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, sorted sets, bitmaps and hyperloglogs. + +## TL;DR + +```bash +helm repo add dandydev https://dandydeveloper.github.io/charts +helm install dandydev/redis-ha +``` + +By default this chart installs 3 pods in total: + +* one pod containing a redis master and a sentinel container (optional prometheus metrics exporter sidecar available) +* two pods each containing a redis slave and a sentinel container (optional prometheus metrics exporter sidecars available) + +## Introduction + +This chart bootstraps a [Redis](https://redis.io) highly available master/slave statefulset in a [Kubernetes](http://kubernetes.io) cluster using the Helm package manager. + +## Prerequisites + +* Kubernetes 1.8+ with Beta APIs enabled +* PV provisioner support in the underlying infrastructure + +## Upgrading the Chart + +Please note that there have been a number of changes simplifying the redis management strategy (for better failover and elections) in the 3.x version of this chart. These changes allow the use of official [redis](https://hub.docker.com/_/redis/) images that do not require special RBAC or ServiceAccount roles. As a result, when upgrading from version >=2.0.1 to >=3.0.0 of this chart, `Role`, `RoleBinding`, and `ServiceAccount` resources should be deleted manually. + +### Upgrading the chart from 3.x to 4.x + +Starting from version `4.x`, the HAProxy sidecar prometheus-exporter has been removed and replaced by the embedded [HAProxy metrics endpoint](https://github.com/haproxy/haproxy/tree/master/contrib/prometheus-exporter). As a result, when upgrading from version 3.x to 4.x, the `haproxy.exporter` section should be removed and `haproxy.metrics` needs to be configured to fit your needs. 
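+
+For example, a minimal `values.yaml` sketch (an illustration built only from the `haproxy.metrics.*` keys documented in the configuration table below, not an excerpt from this chart's docs) that enables the embedded metrics endpoint after the upgrade could look like this:
+
+```yaml
+haproxy:
+  metrics:
+    enabled: true                  # expose the embedded HAProxy prometheus metrics endpoint
+    port: 9101                     # metrics scraping port (9101 is the chart default)
+    portName: http-exporter-port   # port name used by the metrics service
+    scrapePath: /metrics           # telemetry path scraped by Prometheus
+```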
+ +## Installing the Chart + +To install the chart: + +```bash +helm repo add dandydev https://dandydeveloper.github.io/charts +helm install dandydev/redis-ha +``` + +The command deploys Redis on the Kubernetes cluster in the default configuration. By default this chart installs one master pod containing a redis master container and a sentinel container, along with 2 redis slave pods each containing their own sentinel sidecars. The [configuration](#configuration) section lists the parameters that can be configured during installation. + +> **Tip**: List all releases using `helm list` + +## Uninstalling the Chart + +To uninstall/delete the deployment: + +```bash +helm delete <release-name> +``` + +The command removes all the Kubernetes components associated with the chart and deletes the release. + +## Configuration + +The following table lists the configurable parameters of the Redis chart and their default values. + +| Parameter                 | Description                                                                                                                                                                                                 | Default                                                                                      | +|:--------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------| +| `image.repository`        | Redis image repository | `redis` | +| `image.tag`               | Redis image tag | `6.2.5-alpine` | +| `image.pullPolicy`        | Redis image pull policy | `IfNotPresent` | +| `imagePullSecrets`        | Reference to one or more secrets to be used when pulling redis images | [] | +| `tag`                     | Redis tag | `6.2.5-alpine` | +| `replicas`                | Number of redis master/slave pods | `3` | +| `podManagementPolicy`     | The statefulset pod management policy | `OrderedReady` | +| `ro_replicas`             | Comma separated list of slaves which never get promoted to be master. Count starts with 0. Allowed values 1-9. e.g. 3,4 - the 3rd and 4th redis slaves never make it to be master, where master is index 0. 
| ``| +| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` | +| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the redis-ha.fullname template | +| `serviceAccount.automountToken` | Opt in/out of automounting API credentials into container | `false` | +| `serviceAnnotations` | Annotations to set on Redis HA service | `null` | +| `serviceLabels` | Labels to set on Redis HA service | `{}` | +| `rbac.create` | Create and use RBAC resources | `true` | +| `redis.port` | Port to access the redis service | `6379` | +| `redis.tlsPort` | TLS Port to access the redis service |``| +| `redis.tlsReplication` | Configures redis with tls-replication parameter, if true sets "tls-replication yes" in redis.conf |``| +| `redis.authClients` | It is possible to disable client side certificates authentication when "authClients" is set to "no" |``| +| `redis.livenessProbe.initialDelaySeconds` | Initial delay in seconds for liveness probe | `30` | +| `redis.livenessProbe.periodSeconds` | Period in seconds after which liveness probe will be repeated | `15` | +| `redis.livenessProbe.timeoutSeconds` | Timeout seconds for liveness probe | `15` | +| `redis.livenessProbe.successThreshold` | Success threshold for liveness probe | `1` | +| `redis.livenessProbe.failureThreshold` | Failure threshold for liveness probe | `5` | +| `redis.readinessProbe.initialDelaySeconds` | Initial delay in seconds for readiness probe | `30` | +| `redis.readinessProbe.periodSeconds` | Period in seconds after which readiness probe will be repeated | `15` | +| `redis.readinessProbe.timeoutSeconds` | Timeout seconds for readiness probe | `15` | +| `redis.readinessProbe.successThreshold` | Success threshold for readiness probe | `1` | +| `redis.readinessProbe.failureThreshold` | Failure threshold for readiness probe | `5` | +| `redis.masterGroupName` | Redis convention for naming the cluster group: must match `^[\\w-\\.]+$` and can be templated | `mymaster` | +| `redis.disableCommands` | Array with commands to disable | `["FLUSHDB","FLUSHALL"]` | +| `redis.config` | Any valid redis config options in this section will be applied to each server (see below) | see values.yaml | +| `redis.customConfig` | Allows for custom redis.conf files to be applied. 
If this is used then `redis.config` is ignored |``| +| `redis.resources` | CPU/Memory for master/slave nodes resource requests/limits | `{}` | +| `redis.lifecycle` | Container Lifecycle Hooks for redis container | see values.yaml | +| `redis.annotations` | Annotations for the redis statefulset | `{}` | +| `redis.updateStategy.type`| Update strategy for redis statefulSet | `RollingUpdate` | +| `redis.extraVolumeMounts` | Extra volume mounts for Redis container | `[]` | +| `sentinel.port` | Port to access the sentinel service | `26379` | +| `sentinel.bind` | Configure the 'bind' directive to bind to a list of network interfaces | `` | +| `sentinel.tlsPort` | TLS Port to access the sentinel service |``| +| `sentinel.tlsReplication` | Configures sentinel with tls-replication parameter, if true sets "tls-replication yes" in sentinel.conf |``| +| `sentinel.authClients` | It is possible to disable client side certificates authentication when "authClients" is set to "no" |``| +| `sentinel.livenessProbe.initialDelaySeconds` | Initial delay in seconds for liveness probe | `30` | +| `sentinel.livenessProbe.periodSeconds` | Period in seconds after which liveness probe will be repeated | `15` | +| `sentinel.livenessProbe.timeoutSeconds` | Timeout seconds for liveness probe | `15` | +| `sentinel.livenessProbe.successThreshold` | Success threshold for liveness probe | `1` | +| `sentinel.livenessProbe.failureThreshold` | Failure threshold for liveness probe | `5` | +| `sentinel.readinessProbe.initialDelaySeconds` | Initial delay in seconds for readiness probe | `30` | +| `sentinel.readinessProbe.periodSeconds` | Period in seconds after which readiness probe will be repeated | `15` | +| `sentinel.readinessProbe.timeoutSeconds` | Timeout seconds for readiness probe | `15` | +| `sentinel.readinessProbe.successThreshold` | Success threshold for readiness probe | `3` | +| `sentinel.readinessProbe.failureThreshold` | Failure threshold for readiness probe | `5` | +| `sentinel.auth` | Enables or disables sentinel AUTH (Requires `sentinel.password` to be set) | `false` | +| `sentinel.password` | A password that configures a `requirepass` in the conf parameters (Requires `sentinel.auth: enabled`) |``| +| `sentinel.existingSecret` | An existing secret containing a key defined by `sentinel.authKey` that configures `requirepass` in the conf parameters (Requires `sentinel.auth: enabled`, cannot be used in conjunction with `.Values.sentinel.password`) |``| +| `sentinel.authKey` | The key holding the sentinel password in an existing secret. | `sentinel-password` | +| `sentinel.quorum` | Minimum number of servers necessary to maintain quorum | `2` | +| `sentinel.config` | Valid sentinel config options in this section will be applied as config options to each sentinel (see below) | see values.yaml | +| `sentinel.customConfig` | Allows for custom sentinel.conf files to be applied. 
If this is used then `sentinel.config` is ignored |``| +| `sentinel.resources` | CPU/Memory for sentinel node resource requests/limits | `{}` | +| `sentinel.lifecycle` | Container Lifecycle Hooks for sentinel container | `{}` | +| `sentinel.extraVolumeMounts` | Extra volume mounts for Sentinel container | `[]` | +| `init.resources` | CPU/Memory for init Container node resource requests/limits | `{}` | +| `auth` | Enables or disables redis AUTH (Requires `redisPassword` to be set) | `false` | +| `redisPassword` | A password that configures a `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`) |``| +| `authKey` | The key holding the redis password in an existing secret. | `auth` | +| `existingSecret` | An existing secret containing a key defined by `authKey` that configures `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`, cannot be used in conjunction with `.Values.redisPassword`) |``| +| `nodeSelector` | Node labels for pod assignment | `{}` | +| `tolerations` | Toleration labels for pod assignment | `[]` | +| `hardAntiAffinity` | Whether the Redis server pods should be forced to run on separate nodes. | `true` | +| `additionalAffinities` | Additional affinities to add to the Redis server pods. | `{}` | +| `securityContext` | Security context to be added to the Redis StatefulSet. | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}` | +| `containerSecurityContext` | Security context to be added to the Redis containers. | `{ runAsNonRoot: true, allowPrivilegeEscalation: false, seccompProfile: { type: RuntimeDefault }, capabilities: { drop: [ "ALL" ] }` | +| `affinity` | Override all other affinity settings with a string. | `""` | +| `labels` | Labels for the Redis pod. | `{}` | +| `configmap.labels` | Labels for the Redis configmap. | `{}` | +| `configmapTest.image.repository` | Repository of the configmap shellcheck test image. | `koalaman/shellcheck` | +| `configmapTest.image.tag` | Tag of the configmap shellcheck test image. | `v0.5.0` | +| `configmapTest.resources` | Resources for the ConfigMap tests. | `{}` | +| `persistentVolume.size` | Size for the volume | 10Gi | +| `persistentVolume.annotations` | Annotations for the volume | `{}` | +| `persistentVolume.labels` | Labels for the volume | `{}` | +| `emptyDir` | Configuration of `emptyDir`, used only if persistentVolume is disabled and no hostPath specified | `{}` | +| `exporter.enabled` | If `true`, the prometheus exporter sidecar is enabled | `false` | +| `exporter.image` | Exporter image | `oliver006/redis_exporter` | +| `exporter.tag` | Exporter tag | `v1.27.0` | +| `exporter.port` | Exporter port | `9121` | +| `exporter.portName` | Exporter port name | `exporter-port` | +| `exporter.address` | Redis instance Hostname/Address Exists to circumvent some issues with issues in IPv6 hostname resolution | `localhost` | +| `exporter.annotations` | Prometheus scrape annotations | `{prometheus.io/path: /metrics, prometheus.io/port: "9121", prometheus.io/scrape: "true"}` | +| `exporter.extraArgs` | Additional args for the exporter | `{}` | +| `exporter.script` | A custom custom Lua script that will be mounted to exporter for collection of custom metrics. Creates a ConfigMap and sets env var `REDIS_EXPORTER_SCRIPT`. 
| | +| `exporter.serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` | +| `exporter.serviceMonitor.namespace` | Namespace the service monitor is created in | `default` | +| `exporter.serviceMonitor.interval` | Scrape interval, If not set, the Prometheus default scrape interval is used | `nil` | +| `exporter.serviceMonitor.telemetryPath` | Path to redis-exporter telemetry-path | `/metrics` | +| `exporter.serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` | +| `exporter.serviceMonitor.timeout` | How long until a scrape request times out. If not set, the Prometheus default scape timeout is used | `nil` | +| `haproxy.enabled` | Enabled HAProxy LoadBalancing/Proxy | `false` | +| `haproxy.replicas` | Number of HAProxy instances | `3` | +| `haproxy.servicePort` | Modify HAProxy service port | `6379` | +| `haproxy.containerPort` | Modify HAProxy deployment container port | `6379` +| `haproxy.image.repository`| HAProxy Image Repository | `haproxy` | +| `haproxy.image.tag` | HAProxy Image Tag | `2.4.2` | +| `haproxy.image.pullPolicy`| HAProxy Image PullPolicy | `IfNotPresent` | +| `haproxy.imagePullSecrets`| Reference to one or more secrets to be used when pulling haproxy images | [] | +| `haproxy.tls.enabled` | If "true" this will enable TLS termination on haproxy | `false` +| `haproxy.tls.secretName` | Secret containing the .pem file | `""` +| `haproxy.tls.certMountPath` | Path to mount the secret that contains the certificates. haproxy | `false` +| `haproxy.tls.secretName` | Secret containing the .pem file | `""` +| `haproxy.annotations` | HAProxy template annotations | `{}` | +| `haproxy.customConfig` | Allows for custom config-haproxy.cfg file to be applied. If this is used then default config will be overwriten |``| +| `haproxy.extraConfig` | Allows to place any additional configuration section to add to the default config-haproxy.cfg |``| +| `haproxy.resources` | HAProxy resources | `{}` | +| `haproxy.emptyDir` | Configuration of `emptyDir` | `{}` | +| `haproxy.labels` | Labels for the HAProxy pod | `{}` | +| `haproxy.service.type` | HAProxy service type "ClusterIP", "LoadBalancer" or "NodePort" | `ClusterIP` | +| `haproxy.service.nodePort` | HAProxy service nodePort value (haproxy.service.type must be NodePort) | not set | +| `haproxy.image.serviceAccountName`| HAProxy serviceAccountName | `default` +| `haproxy.service.externalTrafficPolicy`| HAProxy service externalTrafficPolicy value (haproxy.service.type must be LoadBalancer) | not set | +| `haproxy.service.annotations` | HAProxy service annotations | `{}` | +| `haproxy.service.labels` | HAProxy service labels | `{}` | +| `haproxy.stickyBalancing` | HAProxy sticky load balancing to Redis nodes. Helps with connections shutdown. 
| `false` | +| `haproxy.hapreadport.enable` | Enable a read only port for redis slaves | `false` | +| `haproxy.hapreadport.port` | Haproxy port for read only redis slaves | `6380` | +| `haproxy.metrics.enabled` | HAProxy enable prometheus metric scraping | `false` | +| `haproxy.metrics.port` | HAProxy prometheus metrics scraping port | `9101` | +| `haproxy.metrics.portName` | HAProxy metrics scraping port name | `http-exporter-port` | +| `haproxy.metrics.scrapePath` | HAProxy prometheus metrics scraping port | `/metrics` | +| `haproxy.metrics.serviceMonitor.enabled` | Use servicemonitor from prometheus operator for HAProxy metrics | `false` | +| `haproxy.metrics.serviceMonitor.namespace` | Namespace the service monitor for HAProxy metrics is created in | `default` | +| `haproxy.metrics.serviceMonitor.interval` | Scrape interval, If not set, the Prometheus default scrape interval is used | `nil` | +| `haproxy.metrics.serviceMonitor.telemetryPath` | Path to HAProxy metrics telemetry-path | `/metrics` | +| `haproxy.metrics.serviceMonitor.labels` | Labels for the HAProxy metrics servicemonitor passed to Prometheus Operator | `{}` | +| `haproxy.metrics.serviceMonitor.timeout` | How long until a scrape request times out. If not set, the Prometheus default scape timeout is used | `nil` | +| `haproxy.init.resources` | Extra init resources | `{}` | +| `haproxy.timeout.connect` | haproxy.cfg `timeout connect` setting | `4s` | +| `haproxy.timeout.server` | haproxy.cfg `timeout server` setting | `30s` | +| `haproxy.timeout.client` | haproxy.cfg `timeout client` setting | `30s` | +| `haproxy.timeout.check` | haproxy.cfg `timeout check` setting | `2s` | +| `haproxy.checkInterval` | haproxy.cfg `check inter` setting | `1s` | +| `haproxy.checkFall` | haproxy.cfg `check fall` setting | `1` | +| `haproxy.priorityClassName` | priorityClassName for `haproxy` deployment | not set | +| `haproxy.securityContext` | Security context to be added to the HAProxy deployment. | `{runAsUser: 99, fsGroup: 99, runAsNonRoot: true}` | +| `haproxy.containerSecurityContext` | Security context to be added to the HAProxy containers. | `{ runAsNonRoot: true, allowPrivilegeEscalation: false, seccompProfile: { type: RuntimeDefault }, capabilities: { drop: [ "ALL" ] }` | +| `haproxy.hardAntiAffinity` | Whether the haproxy pods should be forced to run on separate nodes. | `true` | +| `haproxy.affinity` | Override all other haproxy affinity settings with a string. | `""` | +| `haproxy.additionalAffinities` | Additional affinities to add to the haproxy server pods. | `{}` | +| `haproxy.tests.resources` | Pod resources for the tests against HAProxy. | `{}` | +| `haproxy.IPv6.enabled` | Disables certain binding options to support non-IPv6 environments. 
| `true` | +| `podDisruptionBudget` | Pod Disruption Budget rules | `{}` | +| `nameOverride` | Override the chart name | `""` | +| `fullnameOverride` | Fully override the release name and chart name | `""` | +| `priorityClassName` | priorityClassName for `redis-ha-statefulset` | not set | +| `hostPath.path` | Use this path on the host for data storage | not set | +| `hostPath.chown` | Run an init-container as root to set ownership on the hostPath | `true` | +| `sysctlImage.enabled` | Enable an init container to modify Kernel settings | `false` | +| `sysctlImage.command` | sysctlImage command to execute | [] | +| `sysctlImage.registry` | sysctlImage Init container registry | `docker.io` | +| `sysctlImage.repository` | sysctlImage Init container name | `busybox` | +| `sysctlImage.tag` | sysctlImage Init container tag | `1.31.1` | +| `sysctlImage.pullPolicy` | sysctlImage Init container pull policy | `Always` | +| `sysctlImage.mountHostSys`| Mount the host `/sys` folder to `/host-sys` | `false` | +| `sysctlImage.resources` | sysctlImage resources | `{}` | +| `schedulerName` | Alternate scheduler name | `nil` | +| `tls.secretName` | The name of secret if you want to use your own TLS certificates. The secret should contains keys named by "tls.certFile" - the certificate, "tls.keyFile" - the private key, "tls.caCertFile" - the certificate of CA and "tls.dhParamsFile" - the dh parameter file | ``| +| `tls.certFile` | Name of certificate file | `redis.crt` | +| `tls.keyFile` | Name of key file | `redis.key` | +| `tls.dhParamsFile` | Name of Diffie-Hellman (DH) key exchange parameters file |`` | +| `tls.caCertFile` | Name of CA certificate file | `ca.crt` | +| `restore.s3.source` | Restore init container - AWS S3 location of dump - i.e. s3://bucket/dump.rdb | `false` | +| `restore.existingSecret` | Set to true to use existingSecret for the AWS S3 or SSH credentials | `false` | +| `topologySpreadConstraints.enabled` | Enable topology spread constraints |`false`| +| `topologySpreadConstraints.maxSkew` | Max skew of pods tolerated |`1`| +| `topologySpreadConstraints.topologyKey` | Topology key for spread |`topology.kubernetes.io/zone`| +| `topologySpreadConstraints.whenUnsatisfiable` | Enforcement policy, hard or soft |`ScheduleAnyway`| +| `restore.s3.access_key` | Restore init container - AWS AWS_ACCESS_KEY_ID to access restore.s3.source |``| +| `restore.s3.secret_key` | Restore init container - AWS AWS_SECRET_ACCESS_KEY to access restore.s3.source |``| +| `restore.s3.region` | Restore init container - AWS AWS_REGION to access restore.s3.source |``| +| `restore.ssh.source` | Restore init container - SSH scp location of dump - i.e. user@server:/path/dump.rdb | `false` | +| `restore.ssh.key` | Restore init container - SSH private key to scp restore.ssh.source to init container. Key should be in one line separated with \n. i.e. 
-----BEGIN RSA PRIVATE KEY-----\n...\n...\n-----END RSA PRIVATE KEY----- |`` | +| `extraContainers` | Extra containers to include in StatefulSet |`[]`| +| `extraInitContainers` | Extra init containers to include in StatefulSet |`[]`| +| `extraVolumes` | Extra volumes to include in StatefulSet |`[]`| +| `extraLabels` | Labels that should be applied to all created resources |`{}`| +| `networkPolicy.enabled` | Create NetworkPolicy for Redis StatefulSet pods |`false`| +| `networkPolicy.labels` | Labels for NetworkPolicy |`{}`| +| `networkPolicy.annotations` | Annotations for NetworkPolicy |`{}`| +| `networkPolicy.ingressRules[].selectors` | Label selector query to define resources for this ingress rule |`[]`| +| `networkPolicy.ingressRules[].ports` | The destination ports for the ingress rule |`[{port: redis.port, protocol: TCP}, {port: sentinel.port, protocol: TCP}]`| +| `networkPolicy.egressRules[].selectors` | Label selector query to define resources for this egress rule |`[]`| +| `networkPolicy.egressRules[].ports` | The destination ports for the egress rule |``| +| `splitBrainDetection.interval` | Interval between redis sentinel and server split brain checks (in seconds) |`60`| +| `splitBrainDetection.resources` | splitBrainDetection resources |`{}`| + +Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, + +```bash +$ helm repo add dandydev https://dandydeveloper.github.io/charts +$ helm install \ + --set image=redis \ + --set tag=5.0.5-alpine \ + dandydev/redis-ha +``` + +The above command sets the Redis server within `default` namespace. + +Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example, + +```bash +helm install -f values.yaml dandydev/redis-ha +``` + +> **Tip**: You can use the default [values.yaml](values.yaml) + +## Custom Redis and Sentinel config options + +This chart allows for most redis or sentinel config options to be passed as a key value pair through the `values.yaml` under `redis.config` and `sentinel.config`. See links below for all available options. + +[Example redis.conf](http://download.redis.io/redis-stable/redis.conf) +[Example sentinel.conf](http://download.redis.io/redis-stable/sentinel.conf) + +For example `repl-timeout 60` would be added to the `redis.config` section of the `values.yaml` as: + +```yml + repl-timeout: "60" +``` + +Note: + +1. Some config options should be renamed by redis version,e.g.: + + ```yml + # In redis 5.x,see https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf + min-replicas-to-write: 1 + min-replicas-max-lag: 5 + + # In redis 4.x and redis 3.x,see https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf and https://raw.githubusercontent.com/antirez/redis/3.0/redis.conf + min-slaves-to-write 1 + min-slaves-max-lag 5 + ``` + +Sentinel options supported must be in the the `sentinel