Argo CD Chart
A Helm chart for Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes.
This is a community-maintained chart. It installs argo-cd, a declarative, GitOps continuous delivery tool for Kubernetes.
The default installation is intended to be similar to the provided Argo CD releases.
If you want to avoid including sensitive information unencrypted (clear text) in your version control, make use of the declarative setup of Argo CD. For instance, rather than adding repositories and their keys in your Helm values, you could deploy SealedSecrets with contents as seen in the repositories section, or use any other secrets manager service (e.g. HashiCorp Vault, AWS/GCP Secrets Manager, etc.).
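As an illustration, a declarative repository entry in the Helm values might look like the following sketch (the repository name and URL are hypothetical; the actual credentials would live in a separately managed secret such as a SealedSecret, not in `values.yaml`):

```yaml
configs:
  repositories:
    # hypothetical private repository entry
    private-repo:
      url: https://github.com/example/private-repo.git
      name: private-repo
```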
High Availability
This chart installs the non-HA version of Argo CD by default. If you want to run Argo CD in HA mode, you can use one of the example values in the next sections. Please also have a look into the upstream Operator Manual regarding High Availability to understand how scaling of Argo CD works in detail.
Warning: You need at least 3 worker nodes, as the HA mode of Redis enforces pods to run on separate nodes.
HA mode with autoscaling
```yaml
redis-ha:
  enabled: true

controller:
  replicas: 1

server:
  autoscaling:
    enabled: true
    minReplicas: 2

repoServer:
  autoscaling:
    enabled: true
    minReplicas: 2

applicationSet:
  replicaCount: 2
```
HA mode without autoscaling
```yaml
redis-ha:
  enabled: true

controller:
  replicas: 1

server:
  replicas: 2

repoServer:
  replicas: 2

applicationSet:
  replicaCount: 2
```
Synchronizing Changes from Original Repository
In the original Argo CD repository a `manifests/install.yaml` is generated using `kustomize`. It is the basis for the installation described in the docs.

When installing Argo CD using this Helm chart, the user should get a similar experience and configuration rolled out. Hence, it makes sense to aim for a similar output of rendered YAML resources when calling `helm template` with the default settings in `values.yaml`.

To update the templates and default settings in `values.yaml`, it may come in handy to look up the diff of `manifests/install.yaml` between two versions. This can either be done directly via GitHub by looking for `manifests/install.yaml`:
https://github.com/argoproj/argo-cd/compare/v1.8.7...v2.0.0#files_bucket
Or clone the repository and run a local `git diff`:

```shell
git clone https://github.com/argoproj/argo-cd.git
cd argo-cd
git diff v1.8.7 v2.0.0 -- manifests/install.yaml
```
Changes in the `CustomResourceDefinition` resources can be fixed easily by copying 1:1 from the `manifests/crds` folder into this chart's `templates/crds` folder.
Custom resource definitions
Some users would prefer to install the CRDs outside of the chart. You can disable the CRD installation of this chart by using `--set crds.install=false` when installing the chart.

Helm cannot upgrade custom resource definitions in the `<chart>/crds` folder by design. Starting with 5.2.0, the CRDs have been moved to `<chart>/templates` to address this design decision.

If you are using an Argo CD chart version prior to 5.2.0, or have elected to manage the Argo CD CRDs outside of the chart, please use `kubectl` to upgrade CRDs manually from the `templates/crds` folder or via the manifests from the upstream project repo:

```shell
kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=<appVersion>"

# Eg. version v2.4.9
kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=v2.4.9"
```
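If you prefer to keep the CRD settings declaratively in your values instead of passing `--set` flags, the equivalent values fragment (per the `crds.*` parameters documented below) would look like:

```yaml
crds:
  install: false
  # keep: true (the default) leaves CRDs in place on chart uninstall
```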
Changelog
For a full list of changes, please check the ArtifactHub changelog.
Highlighted versions provide information about additional steps that should be performed by the user when upgrading to a newer version.
5.13.0
This version reduces the history limit for Argo CD deployment replicas to 3 to provide more visibility for Argo CD deployments that manage themselves. If you need more deployment revisions for rollbacks, set the `global.revisionHistoryLimit` parameter.
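For example, to retain more ReplicaSets for rollbacks (the value 10 here is an arbitrary illustration, not a recommendation):

```yaml
global:
  revisionHistoryLimit: 10
```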
5.12.0
This version deprecates the `configs.secret.argocdServerTlsConfig` option. Use `server.certificate` or `server.certificateSecret` to provide a custom TLS configuration for the Argo CD server.

If you terminate TLS on the ingress, please use the `argocd-server-tls` secret instead of the `argocd-secret` secret.
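A minimal sketch of providing a pre-created certificate via `server.certificateSecret`, assuming it mirrors the `repoServer.certificateSecret` fields documented below (the certificate contents are placeholders, not real data):

```yaml
server:
  certificateSecret:
    enabled: true
    crt: |
      -----BEGIN CERTIFICATE-----
      (placeholder certificate data)
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      (placeholder private key data)
      -----END PRIVATE KEY-----
```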
5.10.0
This version hardens security by configuring default container security contexts and adds a hard requirement for Kubernetes 1.22+ to work properly. The change aligns the chart with the versions officially supported by the upstream project.
5.7.0
This version introduces new `configs.cm` and `configs.rbac` sections that replace `server.config` and `server.rbacConfig` respectively.

Please move your current configuration to the new place. The Argo CD RBAC config now also sets defaults in the `argocd-rbac-cm` ConfigMap.

If you have manually created this ConfigMap, please ensure templating is disabled so you will not lose your changes.
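A sketch of the migration, using the `url` key as an illustrative setting (your actual keys will differ):

```yaml
# Before 5.7.0
server:
  config:
    url: https://argocd.example.com

# From 5.7.0 on
configs:
  cm:
    url: https://argocd.example.com
```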
5.5.20
This version moved the API version templates into a dedicated helper. If you are using these in your umbrella chart, please migrate your templates to the pattern `argo-cd.apiVersion.<component>`.
5.5.0
This version introduces a new `configs.params` section that replaces command line arguments for containers.
Please refer to the documentation in values.yaml for migrating the configuration.
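As an illustration, using the server's `--insecure` flag (check values.yaml for the authoritative mapping of flags to parameter keys):

```yaml
# Before 5.5.0: flag passed as a container argument
server:
  extraArgs:
    - --insecure

# From 5.5.0 on: expressed via the shared parameters section
configs:
  params:
    server.insecure: true
```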
5.2.0
Custom resource definitions were moved to the `templates` folder so they can be managed by Helm.

To adopt already created CRDs, please use the following command:

```shell
YOUR_ARGOCD_NAMESPACE="" # e.g. argo-cd
YOUR_ARGOCD_RELEASENAME="" # e.g. argo-cd

for crd in "applications.argoproj.io" "applicationsets.argoproj.io" "argocdextensions.argoproj.io" "appprojects.argoproj.io"; do
  kubectl label --overwrite crd $crd app.kubernetes.io/managed-by=Helm
  kubectl annotate --overwrite crd $crd meta.helm.sh/release-namespace="$YOUR_ARGOCD_NAMESPACE"
  kubectl annotate --overwrite crd $crd meta.helm.sh/release-name="$YOUR_ARGOCD_RELEASENAME"
done
```
5.0.0
This version removes support for:

- deprecated repository credentials (parameter `configs.repositoryCredentials`)
- the option to run the application controller as a Deployment
- the parameters `server.additionalApplications` and `server.additionalProjects`

Please carefully read the following section if you are using these parameters!

In order to upgrade Applications and Projects safely against the CRDs' upgrade, `server.additionalApplications` and `server.additionalProjects` were moved to argocd-apps.

If you are using `server.additionalApplications` or `server.additionalProjects`, you can migrate to argocd-apps as follows:

- Add the `helm.sh/resource-policy` annotation to avoid resources being removed when upgrading the Helm chart. You can keep your existing resources by adding `"helm.sh/resource-policy": keep` to `additionalAnnotations`, under the `server.additionalApplications` and `server.additionalProjects` blocks, and running `helm upgrade`. E.g.:
```yaml
server:
  additionalApplications:
    - name: guestbook
      namespace: argocd
      additionalLabels: {}
      additionalAnnotations:
        "helm.sh/resource-policy": keep # <-- add this
      finalizers:
        - resources-finalizer.argocd.argoproj.io
      project: guestbook
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps.git
        targetRevision: HEAD
        path: guestbook
        directory:
          recurse: true
      destination:
        server: https://kubernetes.default.svc
        namespace: guestbook
      syncPolicy:
        automated:
          prune: false
          selfHeal: false
      ignoreDifferences:
        - group: apps
          kind: Deployment
          jsonPointers:
            - /spec/replicas
      info:
        - name: url
          value: https://argoproj.github.io/
```
You can also keep your existing resources by running the following scripts:

```shell
# keep Applications
for app in "guestbook"; do
  kubectl annotate --overwrite application $app helm.sh/resource-policy=keep
done

# keep Projects
for project in "guestbook"; do
  kubectl annotate --overwrite appproject $project helm.sh/resource-policy=keep
done
```

- Upgrade the argo-cd Helm chart to v5.0.0

- Remove the `helm.sh/resource-policy: keep` annotation:

```shell
# delete annotations from Applications
for app in "guestbook"; do
  kubectl annotate --overwrite application $app helm.sh/resource-policy-
done

# delete annotations from Projects
for project in "guestbook"; do
  kubectl annotate --overwrite appproject $project helm.sh/resource-policy-
done
```
- Adopt existing resources to argocd-apps
4.9.0
This version starts to use the upstream image with the applicationset binary. The start command was changed from `applicationset-controller` to `argocd-applicationset-controller`.
4.3.*
With this minor version, the notifications `service.slack` is no longer configured by default.
4.0.0 and above
This Helm chart version deploys Argo CD v2.3. Argo CD Notifications and ApplicationSet are now part of Argo CD; you no longer need to install them separately, as both components are bundled into the default Argo CD installation. Please read the v2.2 to v2.3 upgrade instructions in the upstream repository.
3.13.0
This release removes the `--staticassets` flag from the argocd-server, as it has been dropped upstream. If this flag needs to be enabled, e.g. for older releases of Argo CD, it can be passed via the `server.extraArgs` field.
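A sketch of re-adding the flag through `server.extraArgs` for an older Argo CD release (the path shown is the chart's documented default for `server.staticassets`; adjust as needed):

```yaml
server:
  extraArgs:
    - --staticassets
    - /shared/app
```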
3.10.2
Argo CD has recently deprecated the flag `--staticassets`, and from chart version 3.10.2 it is disabled by default.
It can be re-enabled by setting `server.staticAssets.enabled` to `true`.
3.8.1
This bugfix version potentially introduces a rename (and recreation) of one or more ServiceAccounts. It only happens when you use one of these customizations:

```yaml
# Case 1) - only happens when you do not specify a custom name (repoServer.serviceAccount.name)
repoServer:
  serviceAccount:
    create: true

# Case 2)
controller:
  serviceAccount:
    name: "" # or <nil>

# Case 3)
dex:
  serviceAccount:
    name: "" # or <nil>

# Case 4)
server:
  serviceAccount:
    name: "" # or <nil>
```

Please check whether you are affected by one of these cases before you upgrade, especially when you use cloud IAM roles for service accounts (e.g. IRSA on AWS or Workload Identity for GKE).
3.2.*
With this minor version we introduced the evaluation for the ingress manifest (depending on the capabilities version); see the Pull Request.

Issue 703 reported that the capabilities evaluation is not handled correctly when deploying the chart via an Argo CD instance, especially when deploying on clusters running a version prior to 1.19 (which lack `Ingress` in apiVersion `networking.k8s.io/v1`).

If you are running a cluster version prior to 1.19, you can avoid this issue by directly installing chart version 3.6.0 and setting `kubeVersionOverride` like:

```yaml
kubeVersionOverride: "1.18.0"
```

Then you should no longer encounter this issue.
3.0.0 and above
The Helm apiVersion switched to `v2`. Requires Helm `3.0.0` or above to install. Read more on how to migrate your release from Helm 2 to Helm 3.
2.14.7 and above
The `matchLabels` key in the Argo CD Application Controller is no longer hard-coded. Note that labels are immutable, so caution should be exercised when making changes to this resource.
2.10.x to 2.11.0
The application controller is now available as a `StatefulSet` when the `controller.enableStatefulSet` flag is set to `true`. Depending on your Helm deployment, this may be a downtime or breaking change if enabled when using HA, and it will become the default in 3.x.
1.8.7 to 2.x.x
`controller.extraArgs`, `repoServer.extraArgs` and `server.extraArgs` are now arrays of strings instead of a map.

What was:

```yaml
server:
  extraArgs:
    insecure: ""
```

is now:

```yaml
server:
  extraArgs:
    - --insecure
```
Prerequisites
- Kubernetes: `>=1.22.0-0`
- Helm v3.0.0+
Installing the Chart
To install the chart with the release name `my-release`:

```shell
$ helm repo add argo https://argoproj.github.io/argo-helm
"argo" has been added to your repositories

$ helm install my-release argo/argo-cd
NAME: my-release
...
```
General parameters
Key | Type | Default | Description |
---|---|---|---|
apiVersionOverrides.autoscaling | string | "" | String to override apiVersion of autoscaling rendered by this helm chart |
apiVersionOverrides.certmanager | string | "" | String to override apiVersion of cert-manager resources rendered by this helm chart |
apiVersionOverrides.cloudgoogle | string | "" | String to override apiVersion of GKE resources rendered by this helm chart |
apiVersionOverrides.ingress | string | "" | String to override apiVersion of ingresses rendered by this helm chart |
apiVersionOverrides.pdb | string | "" | String to override apiVersion of pod disruption budgets rendered by this helm chart |
crds.annotations | object | {} | Annotations to be added to all CRDs |
crds.install | bool | true | Install and upgrade CRDs |
crds.keep | bool | true | Keep CRDs on chart uninstall |
createAggregateRoles | bool | false | Create clusterroles that extend existing clusterroles to interact with argo-cd crds |
extraObjects | list | [] | Array of extra K8s manifests to deploy |
fullnameOverride | string | "" | String to fully override "argo-cd.fullname" |
kubeVersionOverride | string | "" | Override the Kubernetes version, which is used to evaluate certain manifests |
nameOverride | string | "argocd" | Provide a name in place of argocd |
openshift.enabled | bool | false | Enables using arbitrary uid for argo repo server |
Global Configs
Key | Type | Default | Description |
---|---|---|---|
global.additionalLabels | object | {} | Common labels for all resources |
global.hostAliases | list | [] | Mapping between IP and hostnames that will be injected as entries in the pod's hosts files |
global.image.imagePullPolicy | string | "IfNotPresent" | If defined, an imagePullPolicy applied to all Argo CD deployments |
global.image.repository | string | "quay.io/argoproj/argocd" | If defined, a repository applied to all Argo CD deployments |
global.image.tag | string | "" | Overrides the global Argo CD image tag whose default is the chart appVersion |
global.imagePullSecrets | list | [] | Secrets with credentials to pull images from a private registry |
global.logging.format | string | "text" | Set the global logging format. Either: text or json |
global.logging.level | string | "info" | Set the global logging level. One of: debug, info, warn or error |
global.networkPolicy.create | bool | false | Create NetworkPolicy objects for all components |
global.networkPolicy.defaultDenyIngress | bool | false | Default deny all ingress traffic |
global.podAnnotations | object | {} | Annotations for all deployed pods |
global.podLabels | object | {} | Labels for all deployed pods |
global.revisionHistoryLimit | int | 3 | Number of old deployment ReplicaSets to retain. The rest will be garbage collected. |
global.securityContext | object | {} (See values.yaml) | Toggle and define pod-level security context. |
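As an illustration of combining a few of the global parameters above into a values fragment (the label key/value is a hypothetical example):

```yaml
global:
  logging:
    format: json
    level: warn
  podLabels:
    team: platform # hypothetical label
```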
Argo CD Configs
Key | Type | Default | Description |
---|---|---|---|
configs.clusterCredentials | list | [] (See values.yaml) | Provide one or multiple external cluster credentials |
configs.cm."admin.enabled" | bool | true | Enable local admin user |
configs.cm."application.instanceLabelKey" | string | Defaults to app.kubernetes.io/instance | The name of tracking label used by Argo CD for resource pruning |
configs.cm."exec.enabled" | bool | false | Enable exec feature in Argo UI |
configs.cm."server.rbac.log.enforce.enable" | bool | false | Enable logs RBAC enforcement |
configs.cm."timeout.hard.reconciliation" | int | 0 | Timeout to refresh application data as well as target manifests cache |
configs.cm."timeout.reconciliation" | string | "180s" | Timeout to discover if a new manifests version got published to the repository |
configs.cm.annotations | object | {} | Annotations to be added to argocd-cm configmap |
configs.cm.create | bool | true | Create the argocd-cm configmap for declarative setup |
configs.cm.url | string | "" | Argo CD's externally facing base URL (optional). Required when configuring SSO |
configs.credentialTemplates | object | {} | Repository credentials to be used as Templates for other repos |
configs.credentialTemplatesAnnotations | object | {} | Annotations to be added to configs.credentialTemplates Secret |
configs.gpg.annotations | object | {} | Annotations to be added to argocd-gpg-keys-cm configmap |
configs.gpg.keys | object | {} (See values.yaml) | GnuPG public keys to add to the keyring |
configs.knownHosts.data.ssh_known_hosts | string | See values.yaml | Known Hosts |
configs.knownHostsAnnotations | object | {} | Known Hosts configmap annotations |
configs.params."controller.operation.processors" | int | 10 | Number of application operation processors |
configs.params."controller.repo.server.timeout.seconds" | int | 60 | Repo server RPC call timeout seconds. |
configs.params."controller.self.heal.timeout.seconds" | int | 5 | Specifies timeout between application self heal attempts |
configs.params."controller.status.processors" | int | 20 | Number of application status processors |
configs.params."otlp.address" | string | "" | OpenTelemetry collector address (e.g. "otel-collector:4317") |
configs.params."reposerver.parallelism.limit" | int | 0 | Limit on number of concurrent manifest generate requests. Any value less than 1 means no limit. |
configs.params."server.basehref" | string | "/" | Value for base href in index.html. Used if Argo CD is running behind a reverse proxy under a subpath different from / |
configs.params."server.disable.auth" | bool | false | Disable Argo CD RBAC for user authentication |
configs.params."server.enable.gzip" | bool | false | Enable GZIP compression |
configs.params."server.insecure" | bool | false | Run server without TLS |
configs.params."server.rootpath" | string | "" | Used if Argo CD is running behind a reverse proxy under a subpath different from / |
configs.params."server.staticassets" | string | "/shared/app" | Directory path that contains additional static assets |
configs.params."server.x.frame.options" | string | "sameorigin" | Set X-Frame-Options header in HTTP responses to value. To disable, set to "". |
configs.params.annotations | object | {} | Annotations to be added to the argocd-cmd-params-cm ConfigMap |
configs.rbac."policy.csv" | string | '' (See values.yaml) | File containing user-defined policies and role definitions. |
configs.rbac."policy.default" | string | "" | The name of the default role which Argo CD will fall back to when authorizing API requests (optional). If omitted or empty, users may still be able to log in, but will see no apps, projects, etc. |
configs.rbac.annotations | object | {} | Annotations to be added to argocd-rbac-cm configmap |
configs.rbac.create | bool | true | Create the argocd-rbac-cm configmap with (Argo CD RBAC policy) definitions. If false, it is expected the configmap will be created by something else. Argo CD will not work if there is no configmap created with the name above. |
configs.rbac.scopes | string | "[groups]" | OIDC scopes to examine during rbac enforcement (in addition to sub scope). The scope value can be a string, or a list of strings. |
configs.repositories | object | {} | Repositories list to be used by applications |
configs.repositoriesAnnotations | object | {} | Annotations to be added to configs.repositories Secret |
configs.secret.annotations | object | {} | Annotations to be added to argocd-secret |
configs.secret.argocdServerAdminPassword | string | "" | Bcrypt hashed admin password |
configs.secret.argocdServerAdminPasswordMtime | string | "" (defaults to current time) | Admin password modification time. Eg. "2006-01-02T15:04:05Z" |
configs.secret.bitbucketServerSecret | string | "" | Shared secret for authenticating BitbucketServer webhook events |
configs.secret.bitbucketUUID | string | "" | UUID for authenticating Bitbucket webhook events |
configs.secret.createSecret | bool | true | Create the argocd-secret |
configs.secret.extra | object | {} | Add additional secrets to be added to argocd-secret |
configs.secret.githubSecret | string | "" | Shared secret for authenticating GitHub webhook events |
configs.secret.gitlabSecret | string | "" | Shared secret for authenticating GitLab webhook events |
configs.secret.gogsSecret | string | "" | Shared secret for authenticating Gogs webhook events |
configs.styles | string | "" (See values.yaml) | Define custom CSS styles for your argo instance. This setting will automatically mount the provided CSS and reference it in the argo configuration. |
configs.tlsCerts | object | See values.yaml | TLS certificate |
configs.tlsCertsAnnotations | object | {} | TLS certificate configmap annotations |
Argo CD Controller
Key | Type | Default | Description |
---|---|---|---|
controller.affinity | object | {} | Assign custom affinity rules to the deployment |
controller.args | object | {} | DEPRECATED - Application controller commandline flags |
controller.clusterAdminAccess.enabled | bool | true | Enable RBAC for local cluster deployments |
controller.clusterRoleRules.enabled | bool | false | Enable custom rules for the application controller's ClusterRole resource |
controller.clusterRoleRules.rules | list | [] | List of custom rules for the application controller's ClusterRole resource |
controller.containerPort | int | 8082 | Application controller listening port |
controller.containerSecurityContext | object | See values.yaml | Application controller container-level security context |
controller.env | list | [] | Environment variables to pass to application controller |
controller.envFrom | list | [] (See values.yaml) | envFrom to pass to application controller |
controller.extraArgs | list | [] | Additional command line arguments to pass to application controller |
controller.extraContainers | list | [] | Additional containers to be added to the application controller pod |
controller.image.imagePullPolicy | string | "" (defaults to global.image.imagePullPolicy) | Image pull policy for the application controller |
controller.image.repository | string | "" (defaults to global.image.repository) | Repository to use for the application controller |
controller.image.tag | string | "" (defaults to global.image.tag) | Tag to use for the application controller |
controller.imagePullSecrets | list | [] (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry |
controller.initContainers | list | [] | Init containers to add to the application controller pod |
controller.metrics.applicationLabels.enabled | bool | false | Enables additional labels in argocd_app_labels metric |
controller.metrics.applicationLabels.labels | list | [] | Additional labels |
controller.metrics.enabled | bool | false | Deploy metrics service |
controller.metrics.rules.enabled | bool | false | Deploy a PrometheusRule for the application controller |
controller.metrics.rules.spec | list | [] | PrometheusRule.Spec for the application controller |
controller.metrics.service.annotations | object | {} | Metrics service annotations |
controller.metrics.service.labels | object | {} | Metrics service labels |
controller.metrics.service.portName | string | "http-metrics" | Metrics service port name |
controller.metrics.service.servicePort | int | 8082 | Metrics service port |
controller.metrics.serviceMonitor.additionalLabels | object | {} | Prometheus ServiceMonitor labels |
controller.metrics.serviceMonitor.annotations | object | {} | Prometheus ServiceMonitor annotations |
controller.metrics.serviceMonitor.enabled | bool | false | Enable a prometheus ServiceMonitor |
controller.metrics.serviceMonitor.interval | string | "30s" | Prometheus ServiceMonitor interval |
controller.metrics.serviceMonitor.metricRelabelings | list | [] | Prometheus MetricRelabelConfigs to apply to samples before ingestion |
controller.metrics.serviceMonitor.namespace | string | "" | Prometheus ServiceMonitor namespace |
controller.metrics.serviceMonitor.relabelings | list | [] | Prometheus RelabelConfigs to apply to samples before scraping |
controller.metrics.serviceMonitor.scheme | string | "" | Prometheus ServiceMonitor scheme |
controller.metrics.serviceMonitor.selector | object | {} | Prometheus ServiceMonitor selector |
controller.metrics.serviceMonitor.tlsConfig | object | {} | Prometheus ServiceMonitor tlsConfig |
controller.name | string | "application-controller" | Application controller name string |
controller.nodeSelector | object | {} | Node selector |
controller.pdb.annotations | object | {} | Annotations to be added to application controller pdb |
controller.pdb.enabled | bool | false | Deploy a PodDisruptionBudget for the application controller |
controller.pdb.labels | object | {} | Labels to be added to application controller pdb |
controller.pdb.maxUnavailable | string | "" | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%). |
controller.pdb.minAvailable | string | "" (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
controller.podAnnotations | object | {} | Annotations to be added to application controller pods |
controller.podLabels | object | {} | Labels to be added to application controller pods |
controller.priorityClassName | string | "" | Priority class for the application controller pods |
controller.readinessProbe.failureThreshold | int | 3 | Minimum consecutive failures for the probe to be considered failed after having succeeded |
controller.readinessProbe.initialDelaySeconds | int | 10 | Number of seconds after the container has started before probe is initiated |
controller.readinessProbe.periodSeconds | int | 10 | How often (in seconds) to perform the probe |
controller.readinessProbe.successThreshold | int | 1 | Minimum consecutive successes for the probe to be considered successful after having failed |
controller.readinessProbe.timeoutSeconds | int | 1 | Number of seconds after which the probe times out |
controller.replicas | int | 1 | The number of application controller pods to run. Additional replicas will cause sharding of managed clusters across number of replicas. |
controller.resources | object | {} | Resource limits and requests for the application controller pods |
controller.serviceAccount.annotations | object | {} | Annotations applied to created service account |
controller.serviceAccount.automountServiceAccountToken | bool | true | Automount API credentials for the Service Account |
controller.serviceAccount.create | bool | true | Create a service account for the application controller |
controller.serviceAccount.labels | object | {} | Labels applied to created service account |
controller.serviceAccount.name | string | "argocd-application-controller" | Service account name |
controller.tolerations | list | [] | Tolerations for use with node taints |
controller.topologySpreadConstraints | list | [] | Assign custom TopologySpreadConstraints rules to the application controller |
controller.volumeMounts | list | [] | Additional volumeMounts to the application controller main container |
controller.volumes | list | [] | Additional volumes to the application controller pod |
Argo Repo Server
Key | Type | Default | Description |
---|---|---|---|
repoServer.affinity | object | {} |
Assign custom affinity rules to the deployment |
repoServer.autoscaling.behavior | object | {} |
Configures the scaling behavior of the target in both Up and Down directions. This is only available on HPA apiVersion autoscaling/v2beta2 and newer |
repoServer.autoscaling.enabled | bool | false |
Enable Horizontal Pod Autoscaler (HPA) for the repo server |
repoServer.autoscaling.maxReplicas | int | 5 |
Maximum number of replicas for the repo server HPA |
repoServer.autoscaling.minReplicas | int | 1 |
Minimum number of replicas for the repo server HPA |
repoServer.autoscaling.targetCPUUtilizationPercentage | int | 50 |
Average CPU utilization percentage for the repo server HPA |
repoServer.autoscaling.targetMemoryUtilizationPercentage | int | 50 |
Average memory utilization percentage for the repo server HPA |
repoServer.certificateSecret.annotations | object | {} |
Annotations to be added to argocd-repo-server-tls secret |
repoServer.certificateSecret.ca | string | "" |
Certificate authority. Required for self-signed certificates. |
repoServer.certificateSecret.crt | string | "" |
Certificate data. Must contain SANs of Repo service (ie: argocd-repo-server, argocd-repo-server.argo-cd.svc) |
repoServer.certificateSecret.enabled | bool | false |
Create argocd-repo-server-tls secret |
repoServer.certificateSecret.key | string | "" |
Certificate private key |
repoServer.certificateSecret.labels | object | {} |
Labels to be added to argocd-repo-server-tls secret |
repoServer.clusterAdminAccess.enabled | bool | false |
Enable RBAC for local cluster deployments |
repoServer.clusterRoleRules.enabled | bool | false |
Enable custom rules for the Repo server's Cluster Role resource |
repoServer.clusterRoleRules.rules | list | [] |
List of custom rules for the Repo server's Cluster Role resource |
repoServer.containerPort | int | 8081 |
Configures the repo server port |
repoServer.containerSecurityContext | object | See values.yaml | Repo server container-level security context |
repoServer.env | list | [] |
Environment variables to pass to repo server |
repoServer.envFrom | list | [] (See values.yaml) |
envFrom to pass to repo server |
repoServer.extraArgs | list | [] |
Additional command line arguments to pass to repo server |
repoServer.extraContainers | list | [] |
Additional containers to be added to the repo server pod |
repoServer.image.imagePullPolicy | string | "" (defaults to global.image.imagePullPolicy) |
Image pull policy for the repo server |
repoServer.image.repository | string | "" (defaults to global.image.repository) |
Repository to use for the repo server |
repoServer.image.tag | string | "" (defaults to global.image.tag) |
Tag to use for the repo server |
repoServer.imagePullSecrets | list | [] (defaults to global.imagePullSecrets) |
Secrets with credentials to pull images from a private registry |
repoServer.initContainers | list | [] |
Init containers to add to the repo server pods |
repoServer.livenessProbe.failureThreshold | int | 3 | Minimum consecutive failures for the probe to be considered failed after having succeeded |
repoServer.livenessProbe.initialDelaySeconds | int | 10 | Number of seconds after the container has started before probe is initiated |
repoServer.livenessProbe.periodSeconds | int | 10 | How often (in seconds) to perform the probe |
repoServer.livenessProbe.successThreshold | int | 1 | Minimum consecutive successes for the probe to be considered successful after having failed |
repoServer.livenessProbe.timeoutSeconds | int | 1 | Number of seconds after which the probe times out |
repoServer.metrics.enabled | bool | false | Deploy metrics service |
repoServer.metrics.service.annotations | object | {} | Metrics service annotations |
repoServer.metrics.service.labels | object | {} | Metrics service labels |
repoServer.metrics.service.portName | string | "http-metrics" | Metrics service port name |
repoServer.metrics.service.servicePort | int | 8084 | Metrics service port |
repoServer.metrics.serviceMonitor.additionalLabels | object | {} | Prometheus ServiceMonitor labels |
repoServer.metrics.serviceMonitor.annotations | object | {} | Prometheus ServiceMonitor annotations |
repoServer.metrics.serviceMonitor.enabled | bool | false | Enable a prometheus ServiceMonitor |
repoServer.metrics.serviceMonitor.interval | string | "30s" | Prometheus ServiceMonitor interval |
repoServer.metrics.serviceMonitor.metricRelabelings | list | [] | Prometheus MetricRelabelConfigs to apply to samples before ingestion |
repoServer.metrics.serviceMonitor.namespace | string | "" | Prometheus ServiceMonitor namespace |
repoServer.metrics.serviceMonitor.relabelings | list | [] | Prometheus RelabelConfigs to apply to samples before scraping |
repoServer.metrics.serviceMonitor.scheme | string | "" | Prometheus ServiceMonitor scheme |
repoServer.metrics.serviceMonitor.selector | object | {} | Prometheus ServiceMonitor selector |
repoServer.metrics.serviceMonitor.tlsConfig | object | {} | Prometheus ServiceMonitor tlsConfig |
repoServer.name | string | "repo-server" | Repo server name |
repoServer.nodeSelector | object | {} | Node selector |
repoServer.pdb.annotations | object | {} | Annotations to be added to repo server pdb |
repoServer.pdb.enabled | bool | false | Deploy a PodDisruptionBudget for the repo server |
repoServer.pdb.labels | object | {} | Labels to be added to repo server pdb |
repoServer.pdb.maxUnavailable | string | "" | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%) |
repoServer.pdb.minAvailable | string | "" (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
repoServer.podAnnotations | object | {} | Annotations to be added to repo server pods |
repoServer.podLabels | object | {} | Labels to be added to repo server pods |
repoServer.priorityClassName | string | "" | Priority class for the repo server |
repoServer.rbac | list | [] | Repo server rbac rules |
repoServer.readinessProbe.failureThreshold | int | 3 | Minimum consecutive failures for the probe to be considered failed after having succeeded |
repoServer.readinessProbe.initialDelaySeconds | int | 10 | Number of seconds after the container has started before probe is initiated |
repoServer.readinessProbe.periodSeconds | int | 10 | How often (in seconds) to perform the probe |
repoServer.readinessProbe.successThreshold | int | 1 | Minimum consecutive successes for the probe to be considered successful after having failed |
repoServer.readinessProbe.timeoutSeconds | int | 1 | Number of seconds after which the probe times out |
repoServer.replicas | int | 1 | The number of repo server pods to run |
repoServer.resources | object | {} | Resource limits and requests for the repo server pods |
repoServer.service.annotations | object | {} | Repo server service annotations |
repoServer.service.labels | object | {} | Repo server service labels |
repoServer.service.port | int | 8081 | Repo server service port |
repoServer.service.portName | string | "https-repo-server" | Repo server service port name |
repoServer.serviceAccount.annotations | object | {} | Annotations applied to created service account |
repoServer.serviceAccount.automountServiceAccountToken | bool | true | Automount API credentials for the Service Account |
repoServer.serviceAccount.create | bool | true | Create repo server service account |
repoServer.serviceAccount.labels | object | {} | Labels applied to created service account |
repoServer.serviceAccount.name | string | "" | Repo server service account name |
repoServer.tolerations | list | [] | Tolerations for use with node taints |
repoServer.topologySpreadConstraints | list | [] | Assign custom TopologySpreadConstraints rules to the repo server |
repoServer.volumeMounts | list | [] | Additional volumeMounts to the repo server main container |
repoServer.volumes | list | [] | Additional volumes to the repo server pod |
Argo Server
Key | Type | Default | Description |
---|---|---|---|
server.GKEbackendConfig.enabled | bool | false | Enable BackendConfig custom resource for Google Kubernetes Engine |
server.GKEbackendConfig.spec | object | {} | BackendConfigSpec |
server.GKEfrontendConfig.enabled | bool | false | Enable FrontendConfig custom resource for Google Kubernetes Engine |
server.GKEfrontendConfig.spec | object | {} | FrontendConfigSpec |
server.GKEmanagedCertificate.domains | list | ["argocd.example.com"] | Domains for the Google Managed Certificate |
server.GKEmanagedCertificate.enabled | bool | false | Enable ManagedCertificate custom resource for Google Kubernetes Engine |
server.affinity | object | {} | Assign custom affinity rules to the deployment |
server.autoscaling.behavior | object | {} | Configures the scaling behavior of the target in both Up and Down directions. This is only available on HPA apiVersion autoscaling/v2beta2 and newer |
server.autoscaling.enabled | bool | false | Enable Horizontal Pod Autoscaler (HPA) for the Argo CD server |
server.autoscaling.maxReplicas | int | 5 | Maximum number of replicas for the Argo CD server HPA |
server.autoscaling.minReplicas | int | 1 | Minimum number of replicas for the Argo CD server HPA |
server.autoscaling.targetCPUUtilizationPercentage | int | 50 | Average CPU utilization percentage for the Argo CD server HPA |
server.autoscaling.targetMemoryUtilizationPercentage | int | 50 | Average memory utilization percentage for the Argo CD server HPA |
server.certificate.additionalHosts | list | [] | Certificate Subject Alternate Names (SANs) |
server.certificate.domain | string | "argocd.example.com" | Certificate primary domain (commonName) |
server.certificate.duration | string | "" (defaults to 2160h = 90d if not specified) | The requested 'duration' (i.e. lifetime) of the certificate |
server.certificate.enabled | bool | false | Deploy a Certificate resource (requires cert-manager) |
server.certificate.issuer.group | string | "" | Certificate issuer group. Set if using an external issuer. Eg. cert-manager.io |
server.certificate.issuer.kind | string | "" | Certificate issuer kind. Either Issuer or ClusterIssuer |
server.certificate.issuer.name | string | "" | Certificate issuer name. Eg. letsencrypt |
server.certificate.privateKey.algorithm | string | "RSA" | Algorithm used to generate certificate private key. One of: RSA, Ed25519 or ECDSA |
server.certificate.privateKey.encoding | string | "PKCS1" | The private key cryptography standards (PKCS) encoding for the private key. Either: PKCS1 or PKCS8 |
server.certificate.privateKey.rotationPolicy | string | "Never" | Rotation policy of private key when certificate is re-issued. Either: Never or Always |
server.certificate.privateKey.size | int | 2048 | Key bit size of the private key. If algorithm is set to Ed25519, size is ignored |
server.certificate.renewBefore | string | "" (defaults to 360h = 15d if not specified) | How long before the expiry a certificate should be renewed |
server.certificate.secretName | string | "argocd-server-tls" | The name of the Secret that will be automatically created and managed by this Certificate resource |
server.certificateSecret.annotations | object | {} | Annotations to be added to argocd-server-tls secret |
server.certificateSecret.crt | string | "" | Certificate data |
server.certificateSecret.enabled | bool | false | Create argocd-server-tls secret |
server.certificateSecret.key | string | "" | Private key of the certificate |
server.certificateSecret.labels | object | {} | Labels to be added to argocd-server-tls secret |
server.clusterAdminAccess.enabled | bool | true | Enable RBAC for local cluster deployments |
server.containerPort | int | 8080 | Configures the server port |
server.containerSecurityContext | object | See values.yaml | Server container-level security context |
server.env | list | [] | Environment variables to pass to Argo CD server |
server.envFrom | list | [] (See values.yaml) | envFrom to pass to Argo CD server |
server.extensions.containerSecurityContext | object | See values.yaml | Server UI extensions container-level security context |
server.extensions.enabled | bool | false | Enable support for Argo UI extensions |
server.extensions.image.imagePullPolicy | string | "IfNotPresent" | Image pull policy for extensions |
server.extensions.image.repository | string | "ghcr.io/argoproj-labs/argocd-extensions" | Repository to use for extensions image |
server.extensions.image.tag | string | "v0.1.0" | Tag to use for extensions image |
server.extensions.resources | object | {} | Resource limits and requests for the argocd-extensions container |
server.extraArgs | list | [] | Additional command line arguments to pass to Argo CD server |
server.extraContainers | list | [] | Additional containers to be added to the server pod |
server.image.imagePullPolicy | string | "" (defaults to global.image.imagePullPolicy) | Image pull policy for the Argo CD server |
server.image.repository | string | "" (defaults to global.image.repository) | Repository to use for the Argo CD server |
server.image.tag | string | "" (defaults to global.image.tag) | Tag to use for the Argo CD server |
server.imagePullSecrets | list | [] (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry |
server.ingress.annotations | object | {} | Additional ingress annotations |
server.ingress.enabled | bool | false | Enable an ingress resource for the Argo CD server |
server.ingress.extraPaths | list | [] | Additional ingress paths |
server.ingress.hosts | list | [] | List of ingress hosts |
server.ingress.https | bool | false | Uses server.service.servicePortHttps instead of server.service.servicePortHttp |
server.ingress.ingressClassName | string | "" | Defines which ingress controller will implement the resource |
server.ingress.labels | object | {} | Additional ingress labels |
server.ingress.pathType | string | "Prefix" | Ingress path type. One of Exact, Prefix or ImplementationSpecific |
server.ingress.paths | list | ["/"] | List of ingress paths |
server.ingress.tls | list | [] | Ingress TLS configuration |
server.ingressGrpc.annotations | object | {} | Additional ingress annotations for dedicated gRPC-ingress |
server.ingressGrpc.awsALB.backendProtocolVersion | string | "HTTP2" | Backend protocol version for the AWS ALB gRPC service |
server.ingressGrpc.awsALB.serviceType | string | "NodePort" | Service type for the AWS ALB gRPC service |
server.ingressGrpc.enabled | bool | false | Enable an ingress resource for the Argo CD server for dedicated gRPC-ingress |
server.ingressGrpc.extraPaths | list | [] | Additional ingress paths for dedicated gRPC-ingress |
server.ingressGrpc.hosts | list | [] | List of ingress hosts for dedicated gRPC-ingress |
server.ingressGrpc.https | bool | false | Uses server.service.servicePortHttps instead of server.service.servicePortHttp |
server.ingressGrpc.ingressClassName | string | "" | Defines which ingress controller will implement the gRPC-ingress resource |
server.ingressGrpc.isAWSALB | bool | false | Set up the gRPC ingress to work with an AWS ALB |
server.ingressGrpc.labels | object | {} | Additional ingress labels for dedicated gRPC-ingress |
server.ingressGrpc.pathType | string | "Prefix" | Ingress path type for dedicated gRPC-ingress. One of Exact, Prefix or ImplementationSpecific |
server.ingressGrpc.paths | list | ["/"] | List of ingress paths for dedicated gRPC-ingress |
server.ingressGrpc.tls | list | [] | Ingress TLS configuration for dedicated gRPC-ingress |
server.initContainers | list | [] | Init containers to add to the server pod |
server.lifecycle | object | {} | Specify postStart and preStop lifecycle hooks for your argo-cd-server container |
server.livenessProbe.failureThreshold | int | 3 | Minimum consecutive failures for the probe to be considered failed after having succeeded |
server.livenessProbe.initialDelaySeconds | int | 10 | Number of seconds after the container has started before probe is initiated |
server.livenessProbe.periodSeconds | int | 10 | How often (in seconds) to perform the probe |
server.livenessProbe.successThreshold | int | 1 | Minimum consecutive successes for the probe to be considered successful after having failed |
server.livenessProbe.timeoutSeconds | int | 1 | Number of seconds after which the probe times out |
server.metrics.enabled | bool | false | Deploy metrics service |
server.metrics.service.annotations | object | {} | Metrics service annotations |
server.metrics.service.labels | object | {} | Metrics service labels |
server.metrics.service.portName | string | "http-metrics" | Metrics service port name |
server.metrics.service.servicePort | int | 8083 | Metrics service port |
server.metrics.serviceMonitor.additionalLabels | object | {} | Prometheus ServiceMonitor labels |
server.metrics.serviceMonitor.annotations | object | {} | Prometheus ServiceMonitor annotations |
server.metrics.serviceMonitor.enabled | bool | false | Enable a prometheus ServiceMonitor |
server.metrics.serviceMonitor.interval | string | "30s" | Prometheus ServiceMonitor interval |
server.metrics.serviceMonitor.metricRelabelings | list | [] | Prometheus MetricRelabelConfigs to apply to samples before ingestion |
server.metrics.serviceMonitor.namespace | string | "" | Prometheus ServiceMonitor namespace |
server.metrics.serviceMonitor.relabelings | list | [] | Prometheus RelabelConfigs to apply to samples before scraping |
server.metrics.serviceMonitor.scheme | string | "" | Prometheus ServiceMonitor scheme |
server.metrics.serviceMonitor.selector | object | {} | Prometheus ServiceMonitor selector |
server.metrics.serviceMonitor.tlsConfig | object | {} | Prometheus ServiceMonitor tlsConfig |
server.name | string | "server" | Argo CD server name |
server.nodeSelector | object | {} | Node selector |
server.pdb.annotations | object | {} | Annotations to be added to Argo CD server pdb |
server.pdb.enabled | bool | false | Deploy a PodDisruptionBudget for the Argo CD server |
server.pdb.labels | object | {} | Labels to be added to Argo CD server pdb |
server.pdb.maxUnavailable | string | "" | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%) |
server.pdb.minAvailable | string | "" (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
server.podAnnotations | object | {} | Annotations to be added to server pods |
server.podLabels | object | {} | Labels to be added to server pods |
server.priorityClassName | string | "" | Priority class for the Argo CD server |
server.readinessProbe.failureThreshold | int | 3 | Minimum consecutive failures for the probe to be considered failed after having succeeded |
server.readinessProbe.initialDelaySeconds | int | 10 | Number of seconds after the container has started before probe is initiated |
server.readinessProbe.periodSeconds | int | 10 | How often (in seconds) to perform the probe |
server.readinessProbe.successThreshold | int | 1 | Minimum consecutive successes for the probe to be considered successful after having failed |
server.readinessProbe.timeoutSeconds | int | 1 | Number of seconds after which the probe times out |
server.replicas | int | 1 | The number of server pods to run |
server.resources | object | {} | Resource limits and requests for the Argo CD server |
server.route.annotations | object | {} | OpenShift Route annotations |
server.route.enabled | bool | false | Enable an OpenShift Route for the Argo CD server |
server.route.hostname | string | "" | Hostname of the OpenShift Route |
server.route.termination_policy | string | "None" | Termination policy of the OpenShift Route |
server.route.termination_type | string | "passthrough" | Termination type of the OpenShift Route |
server.service.annotations | object | {} | Server service annotations |
server.service.externalIPs | list | [] | Server service external IPs |
server.service.externalTrafficPolicy | string | "" | Denotes whether this Service desires to route external traffic to node-local or cluster-wide endpoints |
server.service.labels | object | {} | Server service labels |
server.service.loadBalancerIP | string | "" | The LoadBalancer will be created with the IP specified in this field |
server.service.loadBalancerSourceRanges | list | [] | Source IP ranges allowed to access the service |
server.service.namedTargetPort | bool | true | Use named target port for argocd |
server.service.nodePortHttp | int | 30080 | Server service http port for NodePort service type (only if server.service.type is set to "NodePort") |
server.service.nodePortHttps | int | 30443 | Server service https port for NodePort service type (only if server.service.type is set to "NodePort") |
server.service.servicePortHttp | int | 80 | Server service http port |
server.service.servicePortHttpName | string | "http" | Server service http port name, can be used to route traffic via istio |
server.service.servicePortHttps | int | 443 | Server service https port |
server.service.servicePortHttpsName | string | "https" | Server service https port name, can be used to route traffic via istio |
server.service.sessionAffinity | string | "" | Used to maintain session affinity. Supports ClientIP and None |
server.service.type | string | "ClusterIP" | Server service type |
server.serviceAccount.annotations | object | {} | Annotations applied to created service account |
server.serviceAccount.automountServiceAccountToken | bool | true | Automount API credentials for the Service Account |
server.serviceAccount.create | bool | true | Create server service account |
server.serviceAccount.labels | object | {} | Labels applied to created service account |
server.serviceAccount.name | string | "argocd-server" | Server service account name |
server.tolerations | list | [] | Tolerations for use with node taints |
server.topologySpreadConstraints | list | [] | Assign custom TopologySpreadConstraints rules to the Argo CD server |
server.volumeMounts | list | [] | Additional volumeMounts to the server main container |
server.volumes | list | [] | Additional volumes to the server pod |
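
As a minimal sketch of how the server.ingress.* parameters above fit together in a values file (the hostname, ingress class, and TLS secret name are placeholders, not chart defaults):

```yaml
server:
  ingress:
    enabled: true
    ingressClassName: nginx          # assumes an NGINX ingress controller is installed
    hosts:
      - argocd.example.com           # placeholder hostname
    paths:
      - /
    pathType: Prefix
    tls:
      - hosts:
          - argocd.example.com
        secretName: argocd-server-tls   # placeholder TLS secret
```

Setting server.ingress.https to true additionally switches the backend to server.service.servicePortHttps.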
Using AWS ALB Ingress Controller With GRPC
If you are using an AWS ALB ingress controller, you will need to set server.ingressGrpc.isAWSALB to true. This creates a second service with the annotation alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 and modifies the server ingress, adding a condition annotation that routes gRPC traffic to the new service.
Example:
```yaml
server:
  ingress:
    enabled: true
    annotations:
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
  ingressGrpc:
    enabled: true
    isAWSALB: true
    awsALB:
      serviceType: ClusterIP
```
Dex
Key | Type | Default | Description |
---|---|---|---|
dex.affinity | object | {} | Assign custom affinity rules to the deployment |
dex.certificateSecret.annotations | object | {} | Annotations to be added to argocd-dex-server-tls secret |
dex.certificateSecret.ca | string | "" | Certificate authority. Required for self-signed certificates |
dex.certificateSecret.crt | string | "" | Certificate data. Must contain SANs of Dex service (ie: argocd-dex-server, argocd-dex-server.argo-cd.svc) |
dex.certificateSecret.enabled | bool | false | Create argocd-dex-server-tls secret |
dex.certificateSecret.key | string | "" | Certificate private key |
dex.certificateSecret.labels | object | {} | Labels to be added to argocd-dex-server-tls secret |
dex.containerPortGrpc | int | 5557 | Container port for gRPC access |
dex.containerPortHttp | int | 5556 | Container port for HTTP access |
dex.containerPortMetrics | int | 5558 | Container port for metrics access |
dex.containerSecurityContext | object | See values.yaml | Dex container-level security context |
dex.enabled | bool | true | Enable dex |
dex.env | list | [] | Environment variables to pass to the Dex server |
dex.envFrom | list | [] (See values.yaml) | envFrom to pass to the Dex server |
dex.extraArgs | list | [] | Additional command line arguments to pass to the Dex server |
dex.extraContainers | list | [] | Additional containers to be added to the dex pod |
dex.image.imagePullPolicy | string | "" (defaults to global.image.imagePullPolicy) | Dex imagePullPolicy |
dex.image.repository | string | "ghcr.io/dexidp/dex" | Dex image repository |
dex.image.tag | string | "v2.35.3" | Dex image tag |
dex.imagePullSecrets | list | [] (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry |
dex.initContainers | list | [] | Init containers to add to the dex pod |
dex.initImage.imagePullPolicy | string | "" (defaults to global.image.imagePullPolicy) | Argo CD init image imagePullPolicy |
dex.initImage.repository | string | "" (defaults to global.image.repository) | Argo CD init image repository |
dex.initImage.tag | string | "" (defaults to global.image.tag) | Argo CD init image tag |
dex.livenessProbe.enabled | bool | false | Enable Kubernetes liveness probe for Dex >= 2.28.0 |
dex.livenessProbe.failureThreshold | int | 3 | Minimum consecutive failures for the probe to be considered failed after having succeeded |
dex.livenessProbe.initialDelaySeconds | int | 10 | Number of seconds after the container has started before probe is initiated |
dex.livenessProbe.periodSeconds | int | 10 | How often (in seconds) to perform the probe |
dex.livenessProbe.successThreshold | int | 1 | Minimum consecutive successes for the probe to be considered successful after having failed |
dex.livenessProbe.timeoutSeconds | int | 1 | Number of seconds after which the probe times out |
dex.metrics.enabled | bool | false | Deploy metrics service |
dex.metrics.service.annotations | object | {} | Metrics service annotations |
dex.metrics.service.labels | object | {} | Metrics service labels |
dex.metrics.service.portName | string | "http-metrics" | Metrics service port name |
dex.metrics.serviceMonitor.additionalLabels | object | {} | Prometheus ServiceMonitor labels |
dex.metrics.serviceMonitor.annotations | object | {} | Prometheus ServiceMonitor annotations |
dex.metrics.serviceMonitor.enabled | bool | false | Enable a prometheus ServiceMonitor |
dex.metrics.serviceMonitor.interval | string | "30s" | Prometheus ServiceMonitor interval |
dex.metrics.serviceMonitor.metricRelabelings | list | [] | Prometheus MetricRelabelConfigs to apply to samples before ingestion |
dex.metrics.serviceMonitor.namespace | string | "" | Prometheus ServiceMonitor namespace |
dex.metrics.serviceMonitor.relabelings | list | [] | Prometheus RelabelConfigs to apply to samples before scraping |
dex.metrics.serviceMonitor.scheme | string | "" | Prometheus ServiceMonitor scheme |
dex.metrics.serviceMonitor.selector | object | {} | Prometheus ServiceMonitor selector |
dex.metrics.serviceMonitor.tlsConfig | object | {} | Prometheus ServiceMonitor tlsConfig |
dex.name | string | "dex-server" | Dex name |
dex.nodeSelector | object | {} | Node selector |
dex.pdb.annotations | object | {} | Annotations to be added to Dex server pdb |
dex.pdb.enabled | bool | false | Deploy a PodDisruptionBudget for the Dex server |
dex.pdb.labels | object | {} | Labels to be added to Dex server pdb |
dex.pdb.maxUnavailable | string | "" | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%) |
dex.pdb.minAvailable | string | "" (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
dex.podAnnotations | object | {} | Annotations to be added to the Dex server pods |
dex.podLabels | object | {} | Labels to be added to the Dex server pods |
dex.priorityClassName | string | "" | Priority class for dex |
dex.readinessProbe.enabled | bool | false | Enable Kubernetes readiness probe for Dex >= 2.28.0 |
dex.readinessProbe.failureThreshold | int | 3 | Minimum consecutive failures for the probe to be considered failed after having succeeded |
dex.readinessProbe.initialDelaySeconds | int | 10 | Number of seconds after the container has started before probe is initiated |
dex.readinessProbe.periodSeconds | int | 10 | How often (in seconds) to perform the probe |
dex.readinessProbe.successThreshold | int | 1 | Minimum consecutive successes for the probe to be considered successful after having failed |
dex.readinessProbe.timeoutSeconds | int | 1 | Number of seconds after which the probe times out |
dex.resources | object | {} | Resource limits and requests for dex |
dex.serviceAccount.annotations | object | {} | Annotations applied to created service account |
dex.serviceAccount.automountServiceAccountToken | bool | true | Automount API credentials for the Service Account |
dex.serviceAccount.create | bool | true | Create dex service account |
dex.serviceAccount.name | string | "argocd-dex-server" | Dex service account name |
dex.servicePortGrpc | int | 5557 | Service port for gRPC access |
dex.servicePortGrpcName | string | "grpc" | Service port name for gRPC access |
dex.servicePortHttp | int | 5556 | Service port for HTTP access |
dex.servicePortHttpName | string | "http" | Service port name for HTTP access |
dex.servicePortMetrics | int | 5558 | Service port for metrics access |
dex.tolerations | list | [] | Tolerations for use with node taints |
dex.topologySpreadConstraints | list | [] | Assign custom TopologySpreadConstraints rules to dex |
dex.volumeMounts | list | [] | Additional volumeMounts to the dex main container |
dex.volumes | list | [] | Additional volumes to the dex pod |
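
For example, the dex.metrics.* parameters above can be combined to expose Dex metrics to Prometheus (a sketch; the ServiceMonitor requires the Prometheus Operator CRDs to be installed in the cluster):

```yaml
dex:
  metrics:
    enabled: true            # deploy the Dex metrics service
    serviceMonitor:
      enabled: true          # requires Prometheus Operator CRDs
      interval: 30s
```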
Redis
Option 1 - Single Redis instance (default option)
Key | Type | Default | Description |
---|---|---|---|
redis.affinity | object | {} | Assign custom affinity rules to the deployment |
redis.containerPort | int | 6379 | Redis container port |
redis.containerSecurityContext | object | See values.yaml | Redis container-level security context |
redis.enabled | bool | true | Enable redis |
redis.env | list | [] | Environment variables to pass to the Redis server |
redis.envFrom | list | [] (See values.yaml) | envFrom to pass to the Redis server |
redis.extraArgs | list | [] | Additional command line arguments to pass to redis-server |
redis.extraContainers | list | [] | Additional containers to be added to the redis pod |
redis.image.imagePullPolicy | string | "IfNotPresent" | Redis imagePullPolicy |
redis.image.repository | string | "public.ecr.aws/docker/library/redis" | Redis repository |
redis.image.tag | string | "7.0.5-alpine" | Redis tag |
redis.imagePullSecrets | list | [] (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry |
redis.initContainers | list | [] | Init containers to add to the redis pod |
redis.metrics.containerPort | int | 9121 | Port to use for redis-exporter sidecar |
redis.metrics.containerSecurityContext | object | See values.yaml | Redis exporter security context |
redis.metrics.enabled | bool | false | Deploy metrics service and redis-exporter sidecar |
redis.metrics.image.imagePullPolicy | string | "IfNotPresent" | redis-exporter image PullPolicy |
redis.metrics.image.repository | string | "public.ecr.aws/bitnami/redis-exporter" | redis-exporter image repository |
redis.metrics.image.tag | string | "1.26.0-debian-10-r2" | redis-exporter image tag |
redis.metrics.resources | object | {} | Resource limits and requests for redis-exporter sidecar |
redis.metrics.service.annotations | object | {} | Metrics service annotations |
redis.metrics.service.clusterIP | string | "None" | Metrics service clusterIP. None makes a "headless service" (no virtual IP) |
redis.metrics.service.labels | object | {} | Metrics service labels |
redis.metrics.service.portName | string | "http-metrics" | Metrics service port name |
redis.metrics.service.servicePort | int | 9121 | Metrics service port |
redis.metrics.service.type | string | "ClusterIP" | Metrics service type |
redis.metrics.serviceMonitor.additionalLabels | object | {} | Prometheus ServiceMonitor labels |
redis.metrics.serviceMonitor.annotations | object | {} | Prometheus ServiceMonitor annotations |
redis.metrics.serviceMonitor.enabled | bool | false | Enable a prometheus ServiceMonitor |
redis.metrics.serviceMonitor.interval | string | "30s" | Interval at which metrics should be scraped |
redis.metrics.serviceMonitor.metricRelabelings | list | [] | Prometheus MetricRelabelConfigs to apply to samples before ingestion |
redis.metrics.serviceMonitor.namespace | string | "" | Prometheus ServiceMonitor namespace |
redis.metrics.serviceMonitor.relabelings | list | [] | Prometheus RelabelConfigs to apply to samples before scraping |
redis.metrics.serviceMonitor.scheme | string | "" | Prometheus ServiceMonitor scheme |
redis.metrics.serviceMonitor.selector | object | {} | Prometheus ServiceMonitor selector |
redis.metrics.serviceMonitor.tlsConfig | object | {} | Prometheus ServiceMonitor tlsConfig |
redis.name | string | "redis" | Redis name |
redis.nodeSelector | object | {} | Node selector |
redis.pdb.annotations | object | {} | Annotations to be added to Redis pdb |
redis.pdb.enabled | bool | false | Deploy a PodDisruptionBudget for Redis |
redis.pdb.labels | object | {} | Labels to be added to Redis pdb |
redis.pdb.maxUnavailable | string | "" | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%) |
redis.pdb.minAvailable | string | "" (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
redis.podAnnotations | object | {} | Annotations to be added to the Redis server pods |
redis.podLabels | object | {} | Labels to be added to the Redis server pods |
redis.priorityClassName | string | "" | Priority class for redis |
redis.resources | object | {} | Resource limits and requests for redis |
redis.securityContext | object | See values.yaml | Redis pod-level security context |
redis.service.annotations | object | {} | Redis service annotations |
redis.service.labels | object | {} | Additional redis service labels |
redis.serviceAccount.annotations | object | {} | Annotations applied to created service account |
redis.serviceAccount.automountServiceAccountToken | bool | false | Automount API credentials for the Service Account |
redis.serviceAccount.create | bool | false | Create a service account for the redis pod |
redis.serviceAccount.name | string | "" | Service account name for redis pod |
redis.servicePort | int | 6379 | Redis service port |
redis.tolerations | list | [] | Tolerations for use with node taints |
redis.topologySpreadConstraints | list | [] | Assign custom TopologySpreadConstraints rules to redis |
redis.volumeMounts | list | [] | Additional volumeMounts to the redis container |
redis.volumes | list | [] | Additional volumes to the redis pod |
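
As an illustration, the single Redis instance can be given a PodDisruptionBudget and resource requests via the values above (the numbers below are placeholders for sizing to your workload, not chart recommendations):

```yaml
redis:
  pdb:
    enabled: true
    minAvailable: 1        # keep the single replica protected from voluntary eviction
  resources:
    requests:
      cpu: 100m            # placeholder sizing
      memory: 128Mi
```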
Option 2 - Redis HA
This option uses the following third-party chart to bootstrap a clustered Redis: https://github.com/DandyDeveloper/charts/tree/master/charts/redis-ha. For all available configuration options, please read the upstream README and/or chart source. The main options are listed here:
Key | Type | Default | Description |
---|---|---|---|
redis-ha.enabled | bool | false |
Enables the Redis HA subchart and disables the custom Redis single node deployment |
redis-ha.exporter.enabled | bool | true |
If true , the prometheus exporter sidecar is enabled |
redis-ha.haproxy.enabled | bool | true |
Enabled HAProxy LoadBalancing/Proxy |
redis-ha.haproxy.metrics.enabled | bool | true |
HAProxy enable prometheus metric scraping |
redis-ha.image.tag | string | "7.0.5-alpine" |
Redis tag |
redis-ha.persistentVolume.enabled | bool | false |
Configures persistency on Redis nodes |
redis-ha.redis.config | object | See values.yaml | Any valid redis config options in this section will be applied to each server (see redis-ha chart) |
redis-ha.redis.config.save | string | '""' |
Will save the DB if both the given number of seconds and the given number of write operations against the DB occurred. "" is disabled |
redis-ha.redis.masterGroupName | string | "argocd" |
Redis convention for naming the cluster group: must match ^[\\w-\\.]+$ and can be templated |
redis-ha.topologySpreadConstraints.enabled | bool | false |
Enable Redis HA topology spread constraints |
redis-ha.topologySpreadConstraints.maxSkew | string | "" (defaults to 1 ) |
Max skew of pods tolerated |
redis-ha.topologySpreadConstraints.topologyKey | string | "" (defaults to topology.kubernetes.io/zone ) |
Topology key for spread |
redis-ha.topologySpreadConstraints.whenUnsatisfiable | string | "" (defaults to ScheduleAnyway ) |
Enforcement policy, hard or soft |
redis-ha.exporter.image | string | nil (follows subchart default) |
Exporter image |
redis-ha.exporter.tag | string | nil (follows subchart default) |
Exporter tag |
redis-ha.haproxy.image.repository | string | nil (follows subchart default) |
HAProxy Image Repository |
redis-ha.haproxy.image.tag | string | nil (follows subchart default) |
HAProxy Image Tag |
redis-ha.image.repository | string | nil (follows subchart default) |
Redis image repository |
### Option 3 - External Redis

If you want to use an existing Redis (e.g. a managed service from a cloud provider), you can use these parameters:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| externalRedis.existingSecret | string | `""` | The name of an existing secret with Redis credentials (must contain key `redis-password`). When it's set, the `externalRedis.password` parameter is ignored |
| externalRedis.host | string | `""` | External Redis server host |
| externalRedis.password | string | `""` | External Redis password |
| externalRedis.port | int | `6379` | External Redis server port |
| externalRedis.secretAnnotations | object | `{}` | External Redis Secret annotations |
| externalRedis.username | string | `""` | External Redis username |
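For instance, to point the chart at a managed Redis while keeping the password out of version control, reference a pre-created secret instead of setting `externalRedis.password`. The hostname and secret name below are placeholders; the secret must contain the key `redis-password`:

```yaml
externalRedis:
  # Placeholder endpoint of your managed Redis instance
  host: my-redis.example.com
  port: 6379
  # Pre-created secret with key `redis-password`; externalRedis.password is ignored when set
  existingSecret: argocd-redis-credentials
```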
## ApplicationSet

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| applicationSet.affinity | object | `{}` | Assign custom affinity rules |
| applicationSet.args.debug | bool | `false` | Print debug logs |
| applicationSet.args.dryRun | bool | `false` | Enable dry run mode |
| applicationSet.args.enableLeaderElection | bool | `false` | The default leader election setting |
| applicationSet.args.metricsAddr | string | `":8080"` | The default metric address |
| applicationSet.args.policy | string | `"sync"` | How application is synced between the generator and the cluster |
| applicationSet.args.probeBindAddr | string | `":8081"` | The default health check port |
| applicationSet.containerSecurityContext | object | See values.yaml | ApplicationSet controller container-level security context |
| applicationSet.enabled | bool | `true` | Enable ApplicationSet controller |
| applicationSet.extraArgs | list | `[]` | List of extra cli args to add |
| applicationSet.extraContainers | list | `[]` | Additional containers to be added to the ApplicationSet controller pod |
| applicationSet.extraEnv | list | `[]` | Environment variables to pass to the controller |
| applicationSet.extraEnvFrom | list | `[]` (See values.yaml) | envFrom to pass to the controller |
| applicationSet.extraVolumeMounts | list | `[]` | List of extra mounts to add (normally used with extraVolumes) |
| applicationSet.extraVolumes | list | `[]` | List of extra volumes to add |
| applicationSet.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the ApplicationSet controller |
| applicationSet.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the ApplicationSet controller |
| applicationSet.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the ApplicationSet controller |
| applicationSet.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | If defined, uses a Secret to pull an image from a private Docker registry or repository |
| applicationSet.livenessProbe.enabled | bool | `false` | Enable Kubernetes liveness probe for the ApplicationSet controller |
| applicationSet.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded |
| applicationSet.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before probe is initiated |
| applicationSet.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the probe |
| applicationSet.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the probe to be considered successful after having failed |
| applicationSet.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the probe times out |
| applicationSet.logFormat | string | `""` (defaults to global.logging.format) | ApplicationSet controller log format. Either `text` or `json` |
| applicationSet.logLevel | string | `""` (defaults to global.logging.level) | ApplicationSet controller log level. One of: `debug`, `info`, `warn`, `error` |
| applicationSet.metrics.enabled | bool | `false` | Deploy metrics service |
| applicationSet.metrics.service.annotations | object | `{}` | Metrics service annotations |
| applicationSet.metrics.service.labels | object | `{}` | Metrics service labels |
| applicationSet.metrics.service.portName | string | `"http-metrics"` | Metrics service port name |
| applicationSet.metrics.service.servicePort | int | `8085` | Metrics service port |
| applicationSet.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels |
| applicationSet.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations |
| applicationSet.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor |
| applicationSet.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval |
| applicationSet.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus MetricRelabelConfigs to apply to samples before ingestion |
| applicationSet.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace |
| applicationSet.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus RelabelConfigs to apply to samples before scraping |
| applicationSet.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme |
| applicationSet.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector |
| applicationSet.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig |
| applicationSet.name | string | `"applicationset-controller"` | ApplicationSet controller name string |
| applicationSet.nodeSelector | object | `{}` | Node selector |
| applicationSet.pdb.annotations | object | `{}` | Annotations to be added to ApplicationSet controller pdb |
| applicationSet.pdb.enabled | bool | `false` | Deploy a PodDisruptionBudget for the ApplicationSet controller |
| applicationSet.pdb.labels | object | `{}` | Labels to be added to ApplicationSet controller pdb |
| applicationSet.pdb.maxUnavailable | string | `""` | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%) |
| applicationSet.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
| applicationSet.podAnnotations | object | `{}` | Annotations for the controller pods |
| applicationSet.podLabels | object | `{}` | Labels for the controller pods |
| applicationSet.priorityClassName | string | `""` | If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default |
| applicationSet.readinessProbe.enabled | bool | `false` | Enable Kubernetes readiness probe for the ApplicationSet controller |
| applicationSet.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded |
| applicationSet.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before probe is initiated |
| applicationSet.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the probe |
| applicationSet.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the probe to be considered successful after having failed |
| applicationSet.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the probe times out |
| applicationSet.replicaCount | int | `1` | The number of ApplicationSet controller pods to run |
| applicationSet.resources | object | `{}` | Resource limits and requests for the controller pods |
| applicationSet.service.annotations | object | `{}` | ApplicationSet service annotations |
| applicationSet.service.labels | object | `{}` | ApplicationSet service labels |
| applicationSet.service.port | int | `7000` | ApplicationSet service port |
| applicationSet.service.portName | string | `"webhook"` | ApplicationSet service port name |
| applicationSet.serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| applicationSet.serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| applicationSet.serviceAccount.labels | object | `{}` | Labels applied to created service account |
| applicationSet.serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |
| applicationSet.tolerations | list | `[]` | Tolerations for use with node taints |
| applicationSet.webhook.ingress.annotations | object | `{}` | Additional ingress annotations |
| applicationSet.webhook.ingress.enabled | bool | `false` | Enable an ingress resource for webhooks |
| applicationSet.webhook.ingress.extraPaths | list | `[]` | Additional ingress paths |
| applicationSet.webhook.ingress.hosts | list | `[]` | List of ingress hosts |
| applicationSet.webhook.ingress.ingressClassName | string | `""` | Defines which ingress controller will implement the resource |
| applicationSet.webhook.ingress.labels | object | `{}` | Additional ingress labels |
| applicationSet.webhook.ingress.pathType | string | `"Prefix"` | Ingress path type. One of `Exact`, `Prefix` or `ImplementationSpecific` |
| applicationSet.webhook.ingress.paths | list | `["/api/webhook"]` | List of ingress paths |
| applicationSet.webhook.ingress.tls | list | `[]` | Ingress TLS configuration |
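As a sketch of how the options above fit together, the following values run two controller replicas, expose the webhook endpoint through an ingress, and enable the metrics service. The hostname and ingress class are placeholders; substitute your own:

```yaml
applicationSet:
  replicaCount: 2
  metrics:
    # Deploy the metrics service (port 8085 by default)
    enabled: true
  webhook:
    ingress:
      enabled: true
      # Placeholder ingress class and hostname
      ingressClassName: nginx
      hosts:
        - argocd-applicationset.example.com
      paths:
        - /api/webhook
```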
## Notifications

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| notifications.affinity | object | `{}` | Assign custom affinity rules |
| notifications.argocdUrl | string | `nil` | Argo CD dashboard URL; used in place of `{{.context.argocdUrl}}` in templates |
| notifications.bots.slack.affinity | object | `{}` | Assign custom affinity rules |
| notifications.bots.slack.containerSecurityContext | object | See values.yaml | Slack bot container-level security context |
| notifications.bots.slack.enabled | bool | `false` | Enable Slack bot |
| notifications.bots.slack.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the Slack bot |
| notifications.bots.slack.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the Slack bot |
| notifications.bots.slack.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the Slack bot |
| notifications.bots.slack.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry |
| notifications.bots.slack.nodeSelector | object | `{}` | Node selector |
| notifications.bots.slack.pdb.annotations | object | `{}` | Annotations to be added to Slack bot pdb |
| notifications.bots.slack.pdb.enabled | bool | `false` | Deploy a PodDisruptionBudget for the Slack bot |
| notifications.bots.slack.pdb.labels | object | `{}` | Labels to be added to Slack bot pdb |
| notifications.bots.slack.pdb.maxUnavailable | string | `""` | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%) |
| notifications.bots.slack.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
| notifications.bots.slack.resources | object | `{}` | Resource limits and requests for the Slack bot |
| notifications.bots.slack.service.annotations | object | `{}` | Service annotations for Slack bot |
| notifications.bots.slack.service.port | int | `80` | Service port for Slack bot |
| notifications.bots.slack.service.type | string | `"LoadBalancer"` | Service type for Slack bot |
| notifications.bots.slack.serviceAccount.annotations | object | `{}` | Annotations applied to created service account |
| notifications.bots.slack.serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| notifications.bots.slack.serviceAccount.name | string | `"argocd-notifications-bot"` | The name of the service account to use |
| notifications.bots.slack.tolerations | list | `[]` | Tolerations for use with node taints |
| notifications.cm.create | bool | `true` | Whether the Helm chart creates the controller config map |
| notifications.containerSecurityContext | object | See values.yaml | Notifications controller container-level security context |
| notifications.context | object | `{}` | Define user-defined context |
| notifications.enabled | bool | `true` | Enable notifications controller |
| notifications.extraArgs | list | `[]` | Extra arguments to provide to the controller |
| notifications.extraEnv | list | `[]` | Additional container environment variables |
| notifications.extraEnvFrom | list | `[]` (See values.yaml) | envFrom to pass to the controller |
| notifications.extraVolumeMounts | list | `[]` | List of extra mounts to add (normally used with extraVolumes) |
| notifications.extraVolumes | list | `[]` | List of extra volumes to add |
| notifications.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the notifications controller |
| notifications.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the notifications controller |
| notifications.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the notifications controller |
| notifications.imagePullSecrets | list | `[]` (defaults to global.imagePullSecrets) | Secrets with credentials to pull images from a private registry |
| notifications.logFormat | string | `""` (defaults to global.logging.format) | Notifications controller log format. Either `text` or `json` |
| notifications.logLevel | string | `""` (defaults to global.logging.level) | Notifications controller log level. One of: `debug`, `info`, `warn`, `error` |
| notifications.metrics.enabled | bool | `false` | Enables prometheus metrics server |
| notifications.metrics.port | int | `9001` | Metrics port |
| notifications.metrics.service.annotations | object | `{}` | Metrics service annotations |
| notifications.metrics.service.labels | object | `{}` | Metrics service labels |
| notifications.metrics.service.portName | string | `"http-metrics"` | Metrics service port name |
| notifications.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels |
| notifications.metrics.serviceMonitor.annotations | object | `{}` | Prometheus ServiceMonitor annotations |
| notifications.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor |
| notifications.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme |
| notifications.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector |
| notifications.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig |
| notifications.name | string | `"notifications-controller"` | Notifications controller name string |
| notifications.nodeSelector | object | `{}` | Node selector |
| notifications.notifiers | object | See values.yaml | Configures notification services such as Slack, email or custom webhook |
| notifications.pdb.annotations | object | `{}` | Annotations to be added to notifications controller pdb |
| notifications.pdb.enabled | bool | `false` | Deploy a PodDisruptionBudget for the notifications controller |
| notifications.pdb.labels | object | `{}` | Labels to be added to notifications controller pdb |
| notifications.pdb.maxUnavailable | string | `""` | Number of pods that are unavailable after eviction as number or percentage (eg.: 50%) |
| notifications.pdb.minAvailable | string | `""` (defaults to 0 if not specified) | Number of pods that are available after eviction as number or percentage (eg.: 50%) |
| notifications.podAnnotations | object | `{}` | Annotations to be applied to the controller Pods |
| notifications.podLabels | object | `{}` | Labels to be applied to the controller Pods |
| notifications.priorityClassName | string | `""` | Priority class for the controller pods |
| notifications.resources | object | `{}` | Resource limits and requests for the controller |
| notifications.secret.annotations | object | `{}` | key:value pairs of annotations to be added to the secret |
| notifications.secret.create | bool | `true` | Whether the Helm chart creates the controller secret |
| notifications.secret.items | object | `{}` | Generic key:value pairs to be inserted into the secret |
| notifications.serviceAccount.annotations | object | `{}` | Annotations applied to created service account |
| notifications.serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| notifications.serviceAccount.labels | object | `{}` | Labels applied to created service account |
| notifications.serviceAccount.name | string | `"argocd-notifications-controller"` | The name of the service account to use |
| notifications.subscriptions | list | `[]` | Contains centrally managed global application subscriptions |
| notifications.templates | object | `{}` | The notification template is used to generate the notification content |
| notifications.tolerations | list | `[]` | Tolerations for use with node taints |
| notifications.triggers | object | `{}` | The trigger defines the condition when the notification should be sent |
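A minimal sketch of how `notifiers`, `secret.items`, `templates` and `triggers` combine: a Slack notifier whose token is read from the chart-managed secret, plus one trigger and the template it sends. The trigger and template bodies follow the upstream argocd-notifications syntax; the URL, token placeholder, and message text are illustrative:

```yaml
notifications:
  # Placeholder dashboard URL, substituted for {{.context.argocdUrl}} in templates
  argocdUrl: https://argocd.example.com
  notifiers:
    service.slack: |
      token: $slack-token
  secret:
    items:
      # Illustrative placeholder; prefer a secrets manager over plain values files
      slack-token: <your-slack-bot-token>
  templates:
    template.app-deployed: |
      message: Application {{.app.metadata.name}} is now running new version.
  triggers:
    trigger.on-deployed: |
      - when: app.status.operationState.phase in ['Succeeded']
        send: [app-deployed]
```

Note that the `$slack-token` reference in the notifier is resolved against the key of the same name in the notifications secret.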
Autogenerated from chart metadata using helm-docs