Added chart versions:

  jenkins/jenkins:
    - 5.6.0
  linkerd/linkerd-control-plane:
    - 2024.9.2
  linkerd/linkerd-crds:
    - 2024.9.2
  speedscale/speedscale-operator:
    - 2.2.393
pull/1060/head
github-actions[bot] 2024-09-13 01:24:06 +00:00
parent c9fe72ab92
commit 0ead96e116
146 changed files with 27273 additions and 3 deletions

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

File diff suppressed because it is too large


@@ -0,0 +1,54 @@
annotations:
  artifacthub.io/category: integration-delivery
  artifacthub.io/changes: |
    - Helm chart is also now deployed on GitHub packages and can be installed from `oci://ghcr.io/jenkinsci/helm-charts/jenkins`
  artifacthub.io/images: |
    - name: jenkins
      image: docker.io/jenkins/jenkins:2.462.2-jdk17
    - name: k8s-sidecar
      image: docker.io/kiwigrid/k8s-sidecar:1.27.6
    - name: inbound-agent
      image: jenkins/inbound-agent:3261.v9c670a_4748a_9-1
  artifacthub.io/license: Apache-2.0
  artifacthub.io/links: |
    - name: Chart Source
      url: https://github.com/jenkinsci/helm-charts/tree/main/charts/jenkins
    - name: Jenkins
      url: https://www.jenkins.io/
    - name: support
      url: https://github.com/jenkinsci/helm-charts/issues
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: Jenkins
  catalog.cattle.io/kube-version: '>=1.14-0'
  catalog.cattle.io/release-name: jenkins
apiVersion: v2
appVersion: 2.462.2
description: 'Jenkins - Build great things at any scale! As the leading open source
  automation server, Jenkins provides over 1800 plugins to support building, deploying
  and automating any project. '
home: https://www.jenkins.io/
icon: file://assets/icons/jenkins.svg
keywords:
- jenkins
- ci
- devops
kubeVersion: '>=1.14-0'
maintainers:
- email: maor.friedman@redhat.com
  name: maorfr
- email: mail@torstenwalter.de
  name: torstenwalter
- email: garridomota@gmail.com
  name: mogaal
- email: wmcdona89@gmail.com
  name: wmcdona89
- email: timjacomb1@gmail.com
  name: timja
name: jenkins
sources:
- https://github.com/jenkinsci/jenkins
- https://github.com/jenkinsci/docker-inbound-agent
- https://github.com/maorfr/kube-tasks
- https://github.com/jenkinsci/configuration-as-code-plugin
type: application
version: 5.6.0


@@ -0,0 +1,706 @@
# Jenkins
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/jenkins)](https://artifacthub.io/packages/helm/jenkinsci/jenkins)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Releases downloads](https://img.shields.io/github/downloads/jenkinsci/helm-charts/total.svg)](https://github.com/jenkinsci/helm-charts/releases)
[![Join the chat at https://app.gitter.im/#/room/#jenkins-ci:matrix.org](https://badges.gitter.im/badge.svg)](https://app.gitter.im/#/room/#jenkins-ci:matrix.org)
[Jenkins](https://www.jenkins.io/) is the leading open source automation server, providing over 1800 plugins to support building, deploying, and automating any project.
This chart installs a Jenkins server which spawns agents on [Kubernetes](http://kubernetes.io) utilizing the [Jenkins Kubernetes plugin](https://plugins.jenkins.io/kubernetes/).
Inspired by the awesome work of [Carlos Sanchez](https://github.com/carlossg).
## Get Repository Info
```console
helm repo add jenkins https://charts.jenkins.io
helm repo update
```
_See [`helm repo`](https://helm.sh/docs/helm/helm_repo/) for command documentation._
## Install Chart
```console
# Helm 3
$ helm install [RELEASE_NAME] jenkins/jenkins [flags]
```
_See [configuration](#configuration) below._
_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
## Uninstall Chart
```console
# Helm 3
$ helm uninstall [RELEASE_NAME]
```
This removes all the Kubernetes components associated with the chart and deletes the release.
_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
## Upgrade Chart
```console
# Helm 3
$ helm upgrade [RELEASE_NAME] jenkins/jenkins [flags]
```
_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
Visit the chart's [CHANGELOG](https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/CHANGELOG.md) to view the chart's release history.
For migration between major versions, check the [migration guide](#migration-guide).
## Building weekly releases
The default charts target Long-Term-Support (LTS) releases of Jenkins.
To use other versions, the easiest way is to update the image tag to the version you want.
You can also rebuild the chart if you want the `appVersion` field to match.
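For example, a minimal values override pinning the controller image to a specific version tag might look like this (the tag shown is illustrative; substitute the weekly release you want):
```yaml
controller:
  image:
    # illustrative tag; pick the Jenkins version/JDK variant you need
    tag: "2.470-jdk17"
```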
## Configuration
See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing).
To see all configurable options with detailed comments, visit the chart's [values.yaml](https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/values.yaml), or run these configuration commands:
```console
# Helm 3
$ helm show values jenkins/jenkins
```
For a summary of all configurable options, see [VALUES_SUMMARY.md](https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/VALUES_SUMMARY.md).
### Configure Security Realm and Authorization Strategy
This chart configures a `securityRealm` and an `authorizationStrategy` as shown below:
```yaml
controller:
  JCasC:
    securityRealm: |-
      local:
        allowsSignup: false
        enableCaptcha: false
        users:
          - id: "${chart-admin-username}"
            name: "Jenkins Admin"
            password: "${chart-admin-password}"
    authorizationStrategy: |-
      loggedInUsersCanDoAnything:
        allowAnonymousRead: false
```
With the configuration above there is only a single user.
This is fine for getting started quickly, but it needs to be adjusted for any serious environment, so adjust it to suit your needs.
That could mean using LDAP / OIDC / etc. as the security realm and `globalMatrix` as the authorization strategy to configure more fine-grained permissions.
### Consider using a custom image
This chart allows the user to specify plugins which should be installed. However, for production use cases you should consider building a custom Jenkins image which has all required plugins pre-installed.
This way you can be sure which plugins Jenkins is using when starting up, and you avoid trouble in case of connectivity issues to the Jenkins update site.
The [docker repository](https://github.com/jenkinsci/docker) for the Jenkins image contains [documentation](https://github.com/jenkinsci/docker#preinstalling-plugins) on how to do it.
Here is an example of how that can be done:
```Dockerfile
FROM jenkins/jenkins:lts
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code
```
NOTE: If you want a reproducible build, specify a non-floating tag for the image, e.g. `jenkins/jenkins:2.249.3`, and pin plugin versions.
Once you have built the image and pushed it to your registry, you can specify it in your values file like this:
```yaml
controller:
  image: "registry/my-jenkins"
  tag: "v1.2.3"
  installPlugins: false
```
Notice: `installPlugins` is set to false to disable plugin download. In this case, the image `registry/my-jenkins:v1.2.3` must have the plugins specified as default value for [the `controller.installPlugins` directive](https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/VALUES_SUMMARY.md#jenkins-plugins) to ensure that the configuration side-car system works as expected.
In case you are using a private registry, you can use `imagePullSecretName` to specify the name of the secret to use when pulling the image:
```yaml
controller:
  image: "registry/my-jenkins"
  tag: "v1.2.3"
  imagePullSecretName: registry-secret
  installPlugins: false
```
### External URL Configuration
If you are using the ingress definitions provided by this chart via the `controller.ingress` block, the configured hostname will be the ingress hostname starting with `https://` or `http://` depending on the `tls` configuration.
The protocol can be overridden by specifying `controller.jenkinsUrlProtocol`.
If you are not using the provided ingress you can specify `controller.jenkinsUrl` to change the URL definition.
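For example, a minimal sketch overriding the URL settings (the hostname is illustrative):
```yaml
controller:
  # full URL to use when not relying on the chart's ingress definitions
  jenkinsUrl: "https://jenkins.example.com"
  # or only override the protocol used with the ingress hostname
  jenkinsUrlProtocol: "https"
```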
### Configuration as Code
Jenkins Configuration as Code (JCasC) is now a standard component in the Jenkins project.
To allow JCasC's configuration from the helm values, the plugin [`configuration-as-code`](https://plugins.jenkins.io/configuration-as-code/) must be installed in the Jenkins Controller's Docker image (which is the case by default as specified by the [default value of the directive `controller.installPlugins`](https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/VALUES_SUMMARY.md#jenkins-plugins)).
JCasC configuration is passed through Helm values under the key `controller.JCasC`.
The section ["Jenkins Configuration as Code (JCasC)" of the page "VALUES_SUMMARY.md"](https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/VALUES_SUMMARY.md#jenkins-configuration-as-code-jcasc) lists all the possible directives.
In particular, you may specify custom JCasC scripts by adding a sub-key under `controller.JCasC.configScripts` for each configuration area, where each corresponds to a plugin or a section of the UI.
The sub-keys (prior to `|` character) are only labels used to give the section a meaningful name.
The only restriction is that they must conform to the RFC 1123 definition of a DNS label, so they may only contain lowercase letters, numbers, and hyphens.
Each key will become the name of a configuration yaml file on the controller in `/var/jenkins_home/casc_configs` (by default) and will be processed by the Configuration as Code Plugin during Jenkins startup.
The lines after each `|` become the content of the configuration yaml file.
The first line after this is a JCasC root element, e.g. jenkins, credentials, etc.
Best reference is the Documentation link here: `https://<jenkins_url>/configuration-as-code`.
The example below sets a custom systemMessage:
```yaml
controller:
  JCasC:
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to our CI\CD server.
```
A more complex example that creates LDAP settings:
```yaml
controller:
  JCasC:
    configScripts:
      ldap-settings: |
        jenkins:
          securityRealm:
            ldap:
              configurations:
                - server: ldap.acme.com
                  rootDN: dc=acme,dc=uk
                  managerPasswordSecret: ${LDAP_PASSWORD}
                  groupMembershipStrategy:
                    fromUserRecord:
                      attributeName: "memberOf"
```
Keep in mind that the default configuration file already contains some values that you won't be able to override under the configScripts section.
For example, you cannot configure the Jenkins URL and the system admin email address like this, because it would result in a conflicting configuration error.
Incorrect:
```yaml
controller:
  JCasC:
    configScripts:
      jenkins-url: |
        unclassified:
          location:
            url: https://example.com/jenkins
            adminAddress: example@mail.com
```
Correct:
```yaml
controller:
  jenkinsUrl: https://example.com/jenkins
  jenkinsAdminEmail: example@mail.com
```
Further JCasC examples can be found [here](https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos).
#### Breaking out large Config as Code scripts
Jenkins Config as Code scripts can become quite large, and maintaining all of your scripts within one yaml file can be difficult. The Config as Code plugin itself suggests updating the `CASC_JENKINS_CONFIG` environment variable to be a comma-separated list of paths for the plugin to traverse, picking up the yaml files as needed.
However, under the Jenkins helm chart, this `CASC_JENKINS_CONFIG` value is maintained through the templates. A better solution is to split your `controller.JCasC.configScripts` into separate values files and provide each file during the helm install.
For example, you can have a values file (e.g. values_main.yaml) that defines the values described in the `VALUES_SUMMARY.md` for your Jenkins configuration:
```yaml
jenkins:
  controller:
    jenkinsUrlProtocol: https
    installPlugins: false
    ...
```
In a second file (e.g. values_jenkins_casc.yaml), you can define a section of your config scripts:
```yaml
jenkins:
  controller:
    JCasC:
      configScripts:
        jenkinsCasc: |
          jenkins:
            disableRememberMe: false
            mode: NORMAL
            ...
```
You can keep extending your config scripts by creating more files, so that not all config scripts are located in one yaml file (which makes maintenance easier):
values_jenkins_unclassified.yaml
```yaml
jenkins:
  controller:
    JCasC:
      configScripts:
        unclassifiedCasc: |
          unclassified:
            ...
```
When installing, you provide all relevant yaml files (e.g. `helm install -f values_main.yaml -f values_jenkins_casc.yaml -f values_jenkins_unclassified.yaml ...`). Instead of updating the `CASC_JENKINS_CONFIG` environment variable to include multiple paths, multiple CasC yaml files will be created in the same path, `/var/jenkins_home/casc_configs`.
#### Config as Code With or Without Auto-Reload
Config as Code changes (to `controller.JCasC.configScripts`) can either force a new pod to be created and only be applied at next startup, or can be auto-reloaded on-the-fly.
If you set `controller.sidecars.configAutoReload.enabled` to `true`, a second, auxiliary container will be installed into the Jenkins controller pod, known as a "sidecar".
This watches for changes to configScripts, copies the content onto the Jenkins file-system and issues a POST to `http://<jenkins_url>/reload-configuration-as-code` with a pre-shared key.
You can monitor this sidecar's logs using the command `kubectl logs <controller_pod> -c config-reload -f`.
If you want to enable auto-reload, you also need to configure RBAC, as the container which triggers the reload needs to watch the config maps:
```yaml
controller:
  sidecars:
    configAutoReload:
      enabled: true
rbac:
  create: true
```
### Allow Limited HTML Markup in User-Submitted Text
Some third-party systems (e.g. GitHub) use HTML-formatted data in their payload sent to a Jenkins webhook (e.g. URL of a pull-request being built).
To display such data as processed HTML instead of raw text set `controller.enableRawHtmlMarkupFormatter` to true.
This option requires installation of the [OWASP Markup Formatter Plugin (antisamy-markup-formatter)](https://plugins.jenkins.io/antisamy-markup-formatter/).
This plugin is **not** installed by default but may be added to `controller.additionalPlugins`.
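A minimal values sketch enabling the formatter and adding the plugin (the version placeholder must be replaced with a real plugin version):
```yaml
controller:
  enableRawHtmlMarkupFormatter: true
  additionalPlugins:
    # replace <version> with the antisamy-markup-formatter release you want
    - antisamy-markup-formatter:<version>
```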
### Change max connections to Kubernetes API
When using agents with containers other than JNLP, the Kubernetes plugin will communicate with those containers using the Kubernetes API. The following setting changes the maximum number of concurrent connections:
```yaml
agent:
  maxRequestsPerHostStr: "32"
```
This will change the configuration of the Kubernetes "cloud" (as it is called by Jenkins) that is created automatically as part of this helm chart.
### Change container cleanup timeout API
For tasks that use very large images, this timeout can be increased to avoid early termination of the task while the Kubernetes pod is still deploying.
```yaml
agent:
  retentionTimeout: "32"
```
This will change the configuration of the Kubernetes "cloud" (as it is called by Jenkins) that is created automatically as part of this helm chart.
### Change seconds to wait for pod to be running
This will change how long Jenkins waits (in seconds) for a pod to be in the running state.
```yaml
agent:
  waitForPodSec: "32"
```
This will change the configuration of the Kubernetes "cloud" (as it is called by Jenkins) that is created automatically as part of this helm chart.
### Mounting Volumes into Agent Pods
Your Jenkins Agents will run as pods, and it's possible to inject volumes where needed:
```yaml
agent:
  volumes:
    - type: Secret
      secretName: jenkins-mysecrets
      mountPath: /var/run/secrets/jenkins-mysecrets
```
The supported volume types are: `ConfigMap`, `EmptyDir`, `HostPath`, `Nfs`, `PVC`, `Secret`.
Each type supports a different set of configurable attributes, defined by [the corresponding Java class](https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes).
### NetworkPolicy
To make use of the NetworkPolicy resources created by default, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin).
[Install](#install-chart) helm chart with network policy enabled by setting `networkPolicy.enabled` to `true`.
You can use `controller.networkPolicy.internalAgents` and `controller.networkPolicy.externalAgents` stanzas for fine-grained controls over where internal/external agents can connect from.
Internal ones are allowed based on pod labels and (optionally) namespaces, and external ones are allowed based on IP ranges.
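As a rough sketch (the stanza sub-keys and label/CIDR values below are assumptions based on the chart's values.yaml layout; verify them against your chart version before use):
```yaml
networkPolicy:
  enabled: true
controller:
  networkPolicy:
    internalAgents:
      allowed: true
      # assumed sub-key: pod labels internal agents must carry
      podLabels:
        jenkins/agent: "true"
    externalAgents:
      # assumed sub-key: IP range external agents may connect from
      ipCIDR: "192.168.0.0/24"
```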
### Script approval list
`controller.scriptApproval` allows you to pass function signatures that will be allowed in pipelines.
Example:
```yaml
controller:
  scriptApproval:
    - "method java.util.Base64$Decoder decode java.lang.String"
    - "new java.lang.String byte[]"
    - "staticMethod java.util.Base64 getDecoder"
```
### Custom Labels
`controller.serviceLabels` can be used to add custom labels in `jenkins-controller-svc.yaml`.
For example:
```yaml
controller:
  serviceLabels:
    expose: true
```
### Persistence
The Jenkins image stores its persistent data under the `/var/jenkins_home` path of the container.
A dynamically managed Persistent Volume Claim is used to keep the data across deployments, by default.
This is known to work in GCE, AWS, and minikube. Alternatively, a previously configured Persistent Volume Claim can be used.
It is possible to mount several volumes using `persistence.volumes` and `persistence.mounts` parameters.
See additional `persistence` values using [configuration commands](#configuration).
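For instance, a sketch adding an extra volume via `persistence.volumes` and `persistence.mounts`, assuming these accept standard Kubernetes volume and volumeMount entries (names and paths are illustrative):
```yaml
persistence:
  enabled: true
  size: 20Gi            # illustrative size
  volumes:
    - name: extra-config
      configMap:
        name: jenkins-extra-config   # hypothetical ConfigMap
  mounts:
    - name: extra-config
      mountPath: /var/jenkins_extra
      readOnly: true
```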
#### Existing PersistentVolumeClaim
1. Create the PersistentVolume
2. Create the PersistentVolumeClaim
3. [Install](#install-chart) the chart, setting `persistence.existingClaim` to `PVC_NAME`
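A corresponding values snippet might look like this (the claim name is illustrative):
```yaml
persistence:
  # hypothetical pre-created PersistentVolumeClaim
  existingClaim: jenkins-home
```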
#### Long Volume Attach/Mount Times
Certain volume type and filesystem format combinations may experience long
attach/mount times, [10 or more minutes][K8S_VOLUME_TIMEOUT], when using
`fsGroup`. This issue may result in the following entries in the pod's event
history:
```console
Warning FailedMount 38m kubelet, aks-default-41587790-2 Unable to attach or mount volumes: unmounted volumes=[jenkins-home], unattached volumes=[plugins plugin-dir jenkins-token-rmq2g sc-config-volume tmp jenkins-home jenkins-config secrets-dir]: timed out waiting for the condition
```
In these cases, experiment with replacing `fsGroup` with
`supplementalGroups` in the pod's `securityContext`. This can be achieved by
setting the `controller.podSecurityContextOverride` Helm chart value to
something like:
```yaml
controller:
  podSecurityContextOverride:
    runAsNonRoot: true
    runAsUser: 1000
    supplementalGroups: [1000]
```
This issue has been reported on [azureDisk with ext4][K8S_VOLUME_TIMEOUT] and
on [Alibaba cloud][K8S_VOLUME_TIMEOUT_ALIBABA].
[K8S_VOLUME_TIMEOUT]: https://github.com/kubernetes/kubernetes/issues/67014
[K8S_VOLUME_TIMEOUT_ALIBABA]: https://github.com/kubernetes/kubernetes/issues/67014#issuecomment-698770511
#### Storage Class
It is possible to define which storage class to use, by setting `persistence.storageClass` to `[customStorageClass]`.
If set to a dash (`-`), dynamic provisioning is disabled.
If the storage class is set to null or left undefined (`""`), the default provisioner is used (gp2 on AWS, standard on GKE, AWS & OpenStack).
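For example (the class name is illustrative):
```yaml
persistence:
  # use "-" to disable dynamic provisioning, or "" for the default provisioner
  storageClass: "fast-ssd"
```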
### Additional Secrets
Additional secrets and additional existing secrets can be mounted into the Jenkins controller through the chart, using `controller.additionalSecrets` to create them or `controller.additionalExistingSecrets` to reference existing ones.
A common use case might be identity provider credentials if using an external LDAP or OIDC-based identity provider.
The secret may then be referenced in JCasC configuration (see [JCasC configuration](#configuration-as-code)).
`values.yaml` controller section, referencing mounted secrets:
```yaml
controller:
  # 'name' and 'keyName' are concatenated with a '-' in between, so for example:
  # an existing secret "secret-credentials" with a key named "github-password" is referenced in JCasC as ${secret-credentials-github-password}
  # 'name' and 'keyName' must be lowercase RFC 1123 labels: they may only contain lowercase alphanumeric characters or '-',
  # and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc')
  # With existingSecret, an existing secret "secret-credentials" with a key named "github-username" is referenced in JCasC as ${github-username};
  # when using existingSecret there is no need to specify the keyName under additionalExistingSecrets.
  existingSecret: secret-credentials
  additionalExistingSecrets:
    - name: secret-credentials
      keyName: github-username
    - name: secret-credentials
      keyName: github-password
    - name: secret-credentials
      keyName: token
  additionalSecrets:
    - name: client_id
      value: abc123
    - name: client_secret
      value: xyz999
  JCasC:
    securityRealm: |
      oic:
        clientId: ${client_id}
        clientSecret: ${client_secret}
        ...
    configScripts:
      jenkins-casc-configs: |
        credentials:
          system:
            domainCredentials:
              - credentials:
                  - string:
                      description: "github access token"
                      id: "github_app_token"
                      scope: GLOBAL
                      secret: ${secret-credentials-token}
                  - usernamePassword:
                      description: "github access username password"
                      id: "github_username_pass"
                      password: ${secret-credentials-github-password}
                      scope: GLOBAL
                      username: ${secret-credentials-github-username}
```
For more information, see [JCasC documentation](https://github.com/jenkinsci/configuration-as-code-plugin/blob/master/docs/features/secrets.adoc#kubernetes-secrets).
### Secret Claims from HashiCorp Vault
It's possible for this chart to generate `SecretClaim` resources in order to automatically create and maintain Kubernetes `Secrets` from HashiCorp [Vault](https://www.vaultproject.io/) via [`kube-vault-controller`](https://github.com/roboll/kube-vault-controller).
These `Secrets` can then be referenced in the same manner as Additional Secrets above.
This can be achieved by defining required Secret Claims within `controller.secretClaims`, as follows:
```yaml
controller:
  secretClaims:
    - name: jenkins-secret
      path: secret/path
    - name: jenkins-short-ttl
      path: secret/short-ttl-path
      renew: 60
```
### RBAC
RBAC is enabled by default. If you want to disable it you will need to set `rbac.create` to `false`.
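For example, to opt out of the chart's RBAC resources:
```yaml
rbac:
  create: false
```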
### Adding Custom Pod Templates
It is possible to add custom pod templates for the default configured kubernetes cloud.
Add a key under `agent.podTemplates` for each pod template. Each key (prior to `|` character) is just a label, and can be any value.
Keys are only used to give the pod template a meaningful name. The only restriction is they may only contain RFC 1123 (DNS label) characters: lowercase letters, numbers, and hyphens. Each pod template can contain multiple containers.
There's no need to add the _jnlp_ container since the kubernetes plugin will automatically inject it into the pod.
For this pod template configuration to be loaded, the following value must be set:
```yaml
controller.JCasC.defaultConfig: true
```
The example below creates a python pod template in the kubernetes cloud:
```yaml
agent:
  podTemplates:
    python: |
      - name: python
        label: jenkins-python
        serviceAccount: jenkins
        containers:
          - name: python
            image: python:3
            command: "/bin/sh -c"
            args: "cat"
            ttyEnabled: true
            privileged: true
            resourceRequestCpu: "400m"
            resourceRequestMemory: "512Mi"
            resourceLimitCpu: "1"
            resourceLimitMemory: "1024Mi"
```
Best reference is `https://<jenkins_url>/configuration-as-code/reference#Cloud-kubernetes`.
### Adding Pod Templates Using additionalAgents
`additionalAgents` may be used to configure additional kubernetes pod templates.
Each additional agent corresponds to `agent` in terms of the configurable values and inherits all values from `agent`, so you only need to specify values which differ.
For example:
```yaml
agent:
  podName: default
  customJenkinsLabels: default
  # set resources for additional agents to inherit
  resources:
    limits:
      cpu: "1"
      memory: "2048Mi"
additionalAgents:
  maven:
    podName: maven
    customJenkinsLabels: maven
    # An example of overriding the jnlp container
    # sideContainerName: jnlp
    image: jenkins/jnlp-agent-maven
    tag: latest
  python:
    podName: python
    customJenkinsLabels: python
    sideContainerName: python
    image: python
    tag: "3"
    command: "/bin/sh -c"
    args: "cat"
    TTYEnabled: true
```
### Ingress Configuration
This chart provides ingress resources configurable via the `controller.ingress` block.
The simplest configuration looks like the following:
```yaml
controller:
  ingress:
    enabled: true
    paths: []
    apiVersion: "extensions/v1beta1"
    hostName: jenkins.example.com
```
This snippet configures an ingress rule exposing Jenkins at `jenkins.example.com`.
You can define labels and annotations via `controller.ingress.labels` and `controller.ingress.annotations` respectively.
Additionally, you can configure the ingress tls via `controller.ingress.tls`.
By default, this ingress rule exposes all paths.
If needed, this can be overridden by specifying the desired paths in `controller.ingress.paths`.
If you want to configure a secondary ingress (e.g. you don't want the Jenkins instance exposed but still want to receive webhooks), you can configure `controller.secondaryingress`.
The secondary ingress doesn't expose anything by default and has to be configured via `controller.secondaryingress.paths`:
```yaml
controller:
  ingress:
    enabled: true
    apiVersion: "extensions/v1beta1"
    hostName: "jenkins.internal.example.com"
    annotations:
      kubernetes.io/ingress.class: "internal"
  secondaryingress:
    enabled: true
    apiVersion: "extensions/v1beta1"
    hostName: "jenkins-scm.example.com"
    annotations:
      kubernetes.io/ingress.class: "public"
    paths:
      - /github-webhook
```
## Prometheus Metrics
If you want to expose Prometheus metrics you need to install the [Jenkins Prometheus Metrics Plugin](https://github.com/jenkinsci/prometheus-plugin).
It will expose an endpoint (default `/prometheus`) with metrics that a Prometheus server can scrape.
If you have implemented [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), you can set `controller.prometheus.enabled` to `true` to configure a `ServiceMonitor` and `PrometheusRule`.
If you want to further adjust alerting rules, you can do so by configuring `controller.prometheus.alertingrules`.
If you have implemented Prometheus without using the operator, you can leave `controller.prometheus.enabled` set to `false`.
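A minimal sketch enabling the operator integration (alerting rules left empty here):
```yaml
controller:
  prometheus:
    enabled: true
    # optionally add custom PrometheusRule entries
    alertingrules: []
```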
### Running Behind a Forward Proxy
The controller pod uses an Init Container to install plugins etc. If you are behind a corporate proxy, it may be useful to set `controller.initContainerEnv` to add environment variables such as `http_proxy`, so that plugins can be downloaded.
Additionally, you may want to add env vars for the init container, the Jenkins container, and the JVM (`controller.javaOpts`):
```yaml
controller:
  initContainerEnv:
    - name: http_proxy
      value: "http://192.168.64.1:3128"
    - name: https_proxy
      value: "http://192.168.64.1:3128"
    - name: no_proxy
      value: ""
    - name: JAVA_OPTS
      value: "-Dhttps.proxyHost=proxy_host_name_without_protocol -Dhttps.proxyPort=3128"
  containerEnv:
    - name: http_proxy
      value: "http://192.168.64.1:3128"
    - name: https_proxy
      value: "http://192.168.64.1:3128"
  javaOpts: >-
    -Dhttp.proxyHost=192.168.64.1
    -Dhttp.proxyPort=3128
    -Dhttps.proxyHost=192.168.64.1
    -Dhttps.proxyPort=3128
```
### HTTPS Keystore Configuration
[This configuration](https://wiki.jenkins.io/pages/viewpage.action?pageId=135468777) enables Jenkins to use a keystore in order to serve HTTPS.
Here is the [value file section](https://wiki.jenkins.io/pages/viewpage.action?pageId=135468777#RunningJenkinswithnativeSSL/HTTPS-ConfigureJenkinstouseHTTPSandtheJKSkeystore) related to keystore configuration.
The keystore itself should be provided as the value of the `jenkinsKeyStoreBase64Encoded` key, in base64-encoded format. Once you have a `keystore.jks` file, simply run `cat keystore.jks | base64` and paste the output as the value of `jenkinsKeyStoreBase64Encoded`.
After enabling `httpsKeyStore.enable`, make sure that `httpPort` and `targetPort` are not the same, as `targetPort` will serve HTTPS.
Do not set `controller.httpsKeyStore.httpPort` to `-1`, because it will cause the readiness and liveness probes to fail.
If you already have a Kubernetes secret containing the keystore and its password, you can specify its name in `jenkinsHttpsJksSecretName`. Keep in mind that your secret must use the proper data key names: `jenkins-jks-file` (or override the key name using `jenkinsHttpsJksSecretKey`)
and `https-jks-password` (or override the key name using `jenkinsHttpsJksPasswordSecretKey`; additionally, you can have the password read from a different secret using `jenkinsHttpsJksPasswordSecretName`). Example:
```yaml
controller:
  httpsKeyStore:
    enable: true
    jenkinsHttpsJksSecretName: ''
    httpPort: 8081
    path: "/var/jenkins_keystore"
    fileName: "keystore.jks"
    password: "changeit"
    jenkinsKeyStoreBase64Encoded: ''
```
### AWS Security Group Policies
To create SecurityGroupPolicies set `awsSecurityGroupPolicies.enabled` to true and add your policies. Each policy requires a `name`, an array of `securityGroupIds`, and a `podSelector`. Example:
```yaml
awsSecurityGroupPolicies:
  enabled: true
  policies:
    - name: "jenkins-controller"
      securityGroupIds:
        - sg-123456789
      podSelector:
        matchExpressions:
          - key: app.kubernetes.io/component
            operator: In
            values:
              - jenkins-controller
```
### Agent Direct Connection
Set `directConnection` to `true` to allow agents to connect directly to a given TCP port without having to negotiate an HTTP(S) connection. This can allow you to have agent connections without an external HTTP(S) port. Example:
```yaml
agent:
  jenkinsTunnel: "jenkinsci-agent:50000"
  directConnection: true
```
## Migration Guide
### From stable repository
Upgrade an existing release from `stable/jenkins` to `jenkins/jenkins` seamlessly by ensuring you have the latest [repository info](#get-repository-info) and running the [upgrade commands](#upgrade-chart) specifying the `jenkins/jenkins` chart.
### Major Version Upgrades
Chart release versions follow [SemVer](../../CONTRIBUTING.md#versioning), where a MAJOR version change (example `1.0.0` -> `2.0.0`) indicates an incompatible breaking change needing manual actions.
See [UPGRADING.md](./UPGRADING.md) for a list of breaking changes.


@@ -0,0 +1,148 @@
# Upgrade Notes
## To 5.0.0
- `controller.image`, `controller.tag`, and `controller.tagLabel` have been removed. If you want to overwrite the image, you now need to configure any or all of the following (see the sketch after this list):
- `controller.image.registry`
- `controller.image.repository`
- `controller.image.tag`
- `controller.image.tagLabel`
- `controller.imagePullPolicy` has been removed. If you want to overwrite the pull policy you now need to configure `controller.image.pullPolicy`.
- `controller.sidecars.configAutoReload.image` has been removed. If you want to overwrite the configAutoReload image you now need to configure any or all of:
- `controller.sidecars.configAutoReload.image.registry`
- `controller.sidecars.configAutoReload.image.repository`
- `controller.sidecars.configAutoReload.image.tag`
- `controller.sidecars.other` has been renamed to `controller.sidecars.additionalSidecarContainers`.
- `agent.image` and `agent.tag` have been removed. If you want to overwrite the agent image you now need to configure any or all of:
- `agent.image.repository`
- `agent.image.tag`
- The registry can still be overwritten by `agent.jnlpregistry`
- `agent.additionalContainers[*].image` has been renamed to `agent.additionalContainers[*].image.repository`
- `agent.additionalContainers[*].tag` has been renamed to `agent.additionalContainers[*].image.tag`
- `additionalAgents.*.image` has been renamed to `additionalAgents.*.image.repository`
- `additionalAgents.*.tag` has been renamed to `additionalAgents.*.image.tag`
- `additionalClouds.*.additionalAgents.*.image` has been renamed to `additionalClouds.*.additionalAgents.*.image.repository`
- `additionalClouds.*.additionalAgents.*.tag` has been renamed to `additionalClouds.*.additionalAgents.*.image.tag`
- `helmtest.bats.image` has been split up to:
- `helmtest.bats.image.registry`
- `helmtest.bats.image.repository`
- `helmtest.bats.image.tag`
- `controller.adminUsername` and `controller.adminPassword` have been renamed to `controller.admin.username` and `controller.admin.password` respectively
- `controller.adminSecret` has been renamed to `controller.admin.createSecret`
- `backup.*` was unmaintained and has thus been removed. See the following page for alternatives: [Kubernetes Backup and Migrations](https://nubenetes.com/kubernetes-backup-migrations/).
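For example, an image override written against the old flat keys translates to the nested 5.x layout roughly like this (repository and tag are illustrative):
```yaml
# chart 4.x values (old flat keys)
controller:
  image: "jenkins/jenkins"
  tag: "2.440.1-jdk17"
---
# chart 5.x values (new nested keys)
controller:
  image:
    repository: "jenkins/jenkins"
    tag: "2.440.1-jdk17"
```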
## To 4.0.0
Removes automatic `remotingSecurity` setting when using a container tag older than `2.326` (introduced in [`3.11.7`](./CHANGELOG.md#3117)). If you're using a version older than `2.326`, you should explicitly set `.controller.legacyRemotingSecurityEnabled` to `true`.
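A minimal values snippet for that case:
```yaml
controller:
  legacyRemotingSecurityEnabled: true
```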
## To 3.0.0
* Check `securityRealm` and `authorizationStrategy` and adjust them.
Otherwise, your configured users and permissions will be overridden.
* You need to use helm version 3 as the `Chart.yaml` uses `apiVersion: v2`.
* All XML configuration options have been removed.
In case those are still in use you need to migrate to configuration as code.
The upgrade guide to 2.0.0 contains pointers on how to do that.
* Jenkins is now using a `StatefulSet` instead of a `Deployment`
* Terminology has been adjusted; that's also reflected in values.yaml.
The following values from `values.yaml` have been renamed:
* `master` => `controller`
* `master.useSecurity` => `controller.adminSecret`
* `master.slaveListenerPort` => `controller.agentListenerPort`
* `master.slaveHostPort` => `controller.agentListenerHostPort`
* `master.slaveKubernetesNamespace` => `agent.namespace`
* `master.slaveDefaultsProviderTemplate` => `agent.defaultsProviderTemplate`
* `master.slaveJenkinsUrl` => `agent.jenkinsUrl`
* `master.slaveJenkinsTunnel` => `agent.jenkinsTunnel`
* `master.slaveConnectTimeout` => `agent.kubernetesConnectTimeout`
* `master.slaveReadTimeout` => `agent.kubernetesReadTimeout`
* `master.slaveListenerServiceAnnotations` => `controller.agentListenerServiceAnnotations`
* `master.slaveListenerServiceType` => `controller.agentListenerServiceType`
* `master.slaveListenerLoadBalancerIP` => `controller.agentListenerLoadBalancerIP`
* `agent.slaveConnectTimeout` => `agent.connectTimeout`
* Removed values:
* `master.imageTag`: use `controller.image` and `controller.tag` instead
* `slave.imageTag`: use `agent.image` and `agent.tag` instead
## To 2.0.0
Configuration as Code is now the default, and the container no longer runs as root.
### Configuration as Code new default
Configuration is done via [Jenkins Configuration as Code Plugin](https://github.com/jenkinsci/configuration-as-code-plugin) by default.
That means that changes in values which result in a configuration change are always applied.
In contrast, the XML configuration was only applied during the first start and never altered.
:exclamation::exclamation::exclamation:
Attention:
This also means that if you manually altered the configuration, it will most likely be reset to what is configured by default.
It also applies to `securityRealm` and `authorizationStrategy` as they are also configured using configuration as code.
:exclamation::exclamation::exclamation:
### Image does not run as root anymore
It's not recommended to run containers in Kubernetes as `root`.
❗Attention: If you had not configured a different user before, you need to ensure that your image supports the configured user and group ID, and you also need to manually change the permissions of all files so that Jenkins is still able to use them.
### Summary of updated values
As version 2.0.0 only updates default values and nothing else, it's still possible to migrate to this version and opt out of some or all new defaults.
All you have to do is ensure the old values are set in your installation.
Here we show which values have changed and the previous default values:
```yaml
controller:
  runAsUser: 1000 # was unset before
  fsGroup: 1000 # was unset before
  JCasC:
    enabled: true # was false
    defaultConfig: true # was false
  sidecars:
    configAutoReload:
      enabled: true # was false
```
### Migration steps
Migration instructions heavily depend on your current setup.
So think of the list below more as a general guideline of what should be done.
- Ensure that the Jenkins image you are using contains a user with ID 1000 and a group with the same ID.
That's the case for the `jenkins/jenkins:lts` image, which the chart uses by default.
- Make a backup of your existing installation, especially the persistent volume.
- Ensure that you have the configuration as code plugin installed
- Export your current settings via the plugin:
`Manage Jenkins` -> `Configuration as Code` -> `Download Configuration`
- Prepare your values file for the update, e.g. add any additional Configuration as Code settings that you need.
The export taken from above might be a good starting point for this.
In addition, the [demos](https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos) from the plugin itself are quite useful.
- Test drive those settings on a separate installation.
- Put Jenkins into Quiet Down mode so that it does not accept new jobs:
`<JENKINS_URL>/quietDown`
- Change permissions of all files and folders to the new user and group ID:
```console
kubectl exec -it <jenkins_pod> -c jenkins -- /bin/bash
chown -R 1000:1000 /var/jenkins_home
```
- Update Jenkins
## To 1.0.0
Breaking changes:
- Values have been renamed to follow [helm recommended naming conventions](https://helm.sh/docs/chart_best_practices/#naming-conventions) so that all variables start with a lowercase letter and words are separated using camelCase
- All resources are now using [helm recommended standard labels](https://helm.sh/docs/chart_best_practices/#standard-labels)
As a result of the label changes, the selectors of the deployment have also been updated.
Those are immutable, so trying an update will cause an error like:
```console
Error: Deployment.apps "jenkins" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"jenkins-controller", "app.kubernetes.io/instance":"jenkins"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```
In order to upgrade, [uninstall](./README.md#uninstall-chart) the Jenkins Deployment before upgrading:


@@ -0,0 +1,317 @@
# Jenkins
## Configuration
The following tables list the configurable parameters of the Jenkins chart and their default values.
## Values
| Key | Type | Description | Default |
|:----|:-----|:---------|:------------|
| [additionalAgents](./values.yaml#L1195) | object | Configure additional | `{}` |
| [additionalClouds](./values.yaml#L1220) | object | | `{}` |
| [agent.TTYEnabled](./values.yaml#L1101) | bool | Allocate pseudo tty to the side container | `false` |
| [agent.additionalContainers](./values.yaml#L1148) | list | Add additional containers to the agents | `[]` |
| [agent.alwaysPullImage](./values.yaml#L994) | bool | Always pull agent container image before build | `false` |
| [agent.annotations](./values.yaml#L1144) | object | Annotations to apply to the pod | `{}` |
| [agent.args](./values.yaml#L1095) | string | Arguments passed to command to execute | `"${computer.jnlpmac} ${computer.name}"` |
| [agent.command](./values.yaml#L1093) | string | Command to execute when side container starts | `nil` |
| [agent.componentName](./values.yaml#L962) | string | | `"jenkins-agent"` |
| [agent.connectTimeout](./values.yaml#L1142) | int | Timeout in seconds for an agent to be online | `100` |
| [agent.containerCap](./values.yaml#L1103) | int | Max number of agents to launch | `10` |
| [agent.customJenkinsLabels](./values.yaml#L959) | list | Append Jenkins labels to the agent | `[]` |
| [agent.defaultsProviderTemplate](./values.yaml#L913) | string | The name of the pod template to use for providing default values | `""` |
| [agent.directConnection](./values.yaml#L965) | bool | | `false` |
| [agent.disableDefaultAgent](./values.yaml#L1166) | bool | Disable the default Jenkins Agent configuration | `false` |
| [agent.enabled](./values.yaml#L911) | bool | Enable Kubernetes plugin jnlp-agent podTemplate | `true` |
| [agent.envVars](./values.yaml#L1076) | list | Environment variables for the agent Pod | `[]` |
| [agent.garbageCollection.enabled](./values.yaml#L1110) | bool | When enabled, Jenkins will periodically check for orphan pods that have not been touched for the given timeout period and delete them. | `false` |
| [agent.garbageCollection.namespaces](./values.yaml#L1112) | string | Namespaces to look at for garbage collection, in addition to the default namespace defined for the cloud. One namespace per line. | `""` |
| [agent.garbageCollection.timeout](./values.yaml#L1117) | int | Timeout value for orphaned pods | `300` |
| [agent.hostNetworking](./values.yaml#L973) | bool | Enables the agent to use the host network | `false` |
| [agent.idleMinutes](./values.yaml#L1120) | int | Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it | `0` |
| [agent.image.repository](./values.yaml#L952) | string | Repository to pull the agent jnlp image from | `"jenkins/inbound-agent"` |
| [agent.image.tag](./values.yaml#L954) | string | Tag of the image to pull | `"3261.v9c670a_4748a_9-1"` |
| [agent.imagePullSecretName](./values.yaml#L961) | string | Name of the secret to be used to pull the image | `nil` |
| [agent.inheritYamlMergeStrategy](./values.yaml#L1140) | bool | Controls whether the defined yaml merge strategy will be inherited if another defined pod template is configured to inherit from the current one | `false` |
| [agent.jenkinsTunnel](./values.yaml#L929) | string | Overrides the Kubernetes Jenkins tunnel | `nil` |
| [agent.jenkinsUrl](./values.yaml#L925) | string | Overrides the Kubernetes Jenkins URL | `nil` |
| [agent.jnlpregistry](./values.yaml#L949) | string | Custom registry used to pull the agent jnlp image from | `nil` |
| [agent.kubernetesConnectTimeout](./values.yaml#L935) | int | The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5 | `5` |
| [agent.kubernetesReadTimeout](./values.yaml#L937) | int | The read timeout in seconds for connections to Kubernetes API. The minimum value is 15 | `15` |
| [agent.livenessProbe](./values.yaml#L984) | object | | `{}` |
| [agent.maxRequestsPerHostStr](./values.yaml#L939) | string | The maximum concurrent connections to Kubernetes API | `"32"` |
| [agent.namespace](./values.yaml#L945) | string | Namespace in which the Kubernetes agents should be launched | `nil` |
| [agent.nodeSelector](./values.yaml#L1087) | object | Node labels for pod assignment | `{}` |
| [agent.nodeUsageMode](./values.yaml#L957) | string | | `"NORMAL"` |
| [agent.podLabels](./values.yaml#L947) | object | Custom Pod labels (an object with `label-key: label-value` pairs) | `{}` |
| [agent.podName](./values.yaml#L1105) | string | Agent Pod base name | `"default"` |
| [agent.podRetention](./values.yaml#L1003) | string | | `"Never"` |
| [agent.podTemplates](./values.yaml#L1176) | object | Configures extra pod templates for the default kubernetes cloud | `{}` |
| [agent.privileged](./values.yaml#L967) | bool | Agent privileged container | `false` |
| [agent.resources](./values.yaml#L975) | object | Resources allocation (Requests and Limits) | `{"limits":{"cpu":"512m","memory":"512Mi"},"requests":{"cpu":"512m","memory":"512Mi"}}` |
| [agent.restrictedPssSecurityContext](./values.yaml#L1000) | bool | Set a restricted securityContext on jnlp containers | `false` |
| [agent.retentionTimeout](./values.yaml#L941) | int | Time in minutes after which the Kubernetes cloud plugin will clean up an idle worker that has not already terminated | `5` |
| [agent.runAsGroup](./values.yaml#L971) | string | Configure container group | `nil` |
| [agent.runAsUser](./values.yaml#L969) | string | Configure container user | `nil` |
| [agent.secretEnvVars](./values.yaml#L1080) | list | Mount a secret as environment variable | `[]` |
| [agent.serviceAccount](./values.yaml#L921) | string | Override the default service account | `serviceAccountAgent.name` if `agent.useDefaultServiceAccount` is `true` |
| [agent.showRawYaml](./values.yaml#L1007) | bool | | `true` |
| [agent.sideContainerName](./values.yaml#L1097) | string | Side container name | `"jnlp"` |
| [agent.skipTlsVerify](./values.yaml#L931) | bool | Disables the verification of the controller certificate on remote connection. This flag correspond to the "Disable https certificate check" flag in kubernetes plugin UI | `false` |
| [agent.usageRestricted](./values.yaml#L933) | bool | Enable the possibility to restrict the usage of this agent to specific folder. This flag correspond to the "Restrict pipeline support to authorized folders" flag in kubernetes plugin UI | `false` |
| [agent.useDefaultServiceAccount](./values.yaml#L917) | bool | Use `serviceAccountAgent.name` as the default value for defaults template `serviceAccount` | `true` |
| [agent.volumes](./values.yaml#L1014) | list | Additional volumes | `[]` |
| [agent.waitForPodSec](./values.yaml#L943) | int | Seconds to wait for pod to be running | `600` |
| [agent.websocket](./values.yaml#L964) | bool | Enables agent communication via websockets | `false` |
| [agent.workingDir](./values.yaml#L956) | string | Configure working directory for default agent | `"/home/jenkins/agent"` |
| [agent.workspaceVolume](./values.yaml#L1049) | object | Workspace volume (defaults to EmptyDir) | `{}` |
| [agent.yamlMergeStrategy](./values.yaml#L1138) | string | Defines how the raw yaml field gets merged with yaml definitions from inherited pod templates. Possible values: "merge" or "override" | `"override"` |
| [agent.yamlTemplate](./values.yaml#L1127) | string | The raw yaml of a Pod API Object to merge into the agent spec | `""` |
| [awsSecurityGroupPolicies.enabled](./values.yaml#L1346) | bool | | `false` |
| [awsSecurityGroupPolicies.policies[0].name](./values.yaml#L1348) | string | | `""` |
| [awsSecurityGroupPolicies.policies[0].podSelector](./values.yaml#L1350) | object | | `{}` |
| [awsSecurityGroupPolicies.policies[0].securityGroupIds](./values.yaml#L1349) | list | | `[]` |
| [checkDeprecation](./values.yaml#L1343) | bool | Checks if any deprecated values are used | `true` |
| [clusterZone](./values.yaml#L21) | string | Override the cluster name for FQDN resolving | `"cluster.local"` |
| [controller.JCasC.authorizationStrategy](./values.yaml#L539) | string | Jenkins Config as Code Authorization Strategy-section | `"loggedInUsersCanDoAnything:\n allowAnonymousRead: false"` |
| [controller.JCasC.configMapAnnotations](./values.yaml#L544) | object | Annotations for the JCasC ConfigMap | `{}` |
| [controller.JCasC.configScripts](./values.yaml#L513) | object | List of Jenkins Config as Code scripts | `{}` |
| [controller.JCasC.configUrls](./values.yaml#L510) | list | Remote URLs for configuration files. | `[]` |
| [controller.JCasC.defaultConfig](./values.yaml#L504) | bool | Enables default Jenkins configuration via configuration as code plugin | `true` |
| [controller.JCasC.overwriteConfiguration](./values.yaml#L508) | bool | Whether Jenkins Config as Code should overwrite any existing configuration | `false` |
| [controller.JCasC.security](./values.yaml#L520) | object | Jenkins Config as Code security-section | `{"apiToken":{"creationOfLegacyTokenEnabled":false,"tokenGenerationOnCreationEnabled":false,"usageStatisticsEnabled":true}}` |
| [controller.JCasC.securityRealm](./values.yaml#L528) | string | Jenkins Config as Code Security Realm-section | `"local:\n allowsSignup: false\n enableCaptcha: false\n users:\n - id: \"${chart-admin-username}\"\n name: \"Jenkins Admin\"\n password: \"${chart-admin-password}\""` |
| [controller.additionalExistingSecrets](./values.yaml#L465) | list | List of additional existing secrets to mount | `[]` |
| [controller.additionalPlugins](./values.yaml#L415) | list | List of plugins to install in addition to those listed in controller.installPlugins | `[]` |
| [controller.additionalSecrets](./values.yaml#L474) | list | List of additional secrets to create and mount | `[]` |
| [controller.admin.createSecret](./values.yaml#L91) | bool | Create secret for admin user | `true` |
| [controller.admin.existingSecret](./values.yaml#L94) | string | The name of an existing secret containing the admin credentials | `""` |
| [controller.admin.password](./values.yaml#L81) | string | Admin password created as a secret if `controller.admin.createSecret` is true | `<random password>` |
| [controller.admin.passwordKey](./values.yaml#L86) | string | The key in the existing admin secret containing the password | `"jenkins-admin-password"` |
| [controller.admin.userKey](./values.yaml#L84) | string | The key in the existing admin secret containing the username | `"jenkins-admin-user"` |
| [controller.admin.username](./values.yaml#L78) | string | Admin username created as a secret if `controller.admin.createSecret` is true | `"admin"` |
| [controller.affinity](./values.yaml#L666) | object | Affinity settings | `{}` |
| [controller.agentListenerEnabled](./values.yaml#L324) | bool | Create Agent listener service | `true` |
| [controller.agentListenerExternalTrafficPolicy](./values.yaml#L334) | string | Traffic Policy of for the agentListener service | `nil` |
| [controller.agentListenerHostPort](./values.yaml#L328) | string | Host port to listen for agents | `nil` |
| [controller.agentListenerLoadBalancerIP](./values.yaml#L364) | string | Static IP for the agentListener LoadBalancer | `nil` |
| [controller.agentListenerLoadBalancerSourceRanges](./values.yaml#L336) | list | Allowed inbound IP for the agentListener service | `["0.0.0.0/0"]` |
| [controller.agentListenerNodePort](./values.yaml#L330) | string | Node port to listen for agents | `nil` |
| [controller.agentListenerPort](./values.yaml#L326) | int | Listening port for agents | `50000` |
| [controller.agentListenerServiceAnnotations](./values.yaml#L359) | object | Annotations for the agentListener service | `{}` |
| [controller.agentListenerServiceType](./values.yaml#L356) | string | Defines how to expose the agentListener service | `"ClusterIP"` |
| [controller.backendconfig.annotations](./values.yaml#L769) | object | backendconfig annotations | `{}` |
| [controller.backendconfig.apiVersion](./values.yaml#L763) | string | backendconfig API version | `"extensions/v1beta1"` |
| [controller.backendconfig.enabled](./values.yaml#L761) | bool | Enables backendconfig | `false` |
| [controller.backendconfig.labels](./values.yaml#L767) | object | backendconfig labels | `{}` |
| [controller.backendconfig.name](./values.yaml#L765) | string | backendconfig name | `nil` |
| [controller.backendconfig.spec](./values.yaml#L771) | object | backendconfig spec | `{}` |
| [controller.cloudName](./values.yaml#L493) | string | Name of default cloud configuration. | `"kubernetes"` |
| [controller.clusterIp](./values.yaml#L223) | string | k8s service clusterIP. Only used if serviceType is ClusterIP | `nil` |
| [controller.componentName](./values.yaml#L34) | string | Used for label app.kubernetes.io/component | `"jenkins-controller"` |
| [controller.containerEnv](./values.yaml#L156) | list | Environment variables for Jenkins Container | `[]` |
| [controller.containerEnvFrom](./values.yaml#L153) | list | Environment variable sources for Jenkins Container | `[]` |
| [controller.containerSecurityContext](./values.yaml#L211) | object | Allow controlling the securityContext for the jenkins container | `{"allowPrivilegeEscalation":false,"readOnlyRootFilesystem":true,"runAsGroup":1000,"runAsUser":1000}` |
| [controller.csrf.defaultCrumbIssuer.enabled](./values.yaml#L345) | bool | Enable the default CSRF Crumb issuer | `true` |
| [controller.csrf.defaultCrumbIssuer.proxyCompatability](./values.yaml#L347) | bool | Enable proxy compatibility | `true` |
| [controller.customInitContainers](./values.yaml#L547) | list | Custom init-container specification in raw-yaml format | `[]` |
| [controller.customJenkinsLabels](./values.yaml#L68) | list | Append Jenkins labels to the controller | `[]` |
| [controller.disableRememberMe](./values.yaml#L59) | bool | Disable use of remember me | `false` |
| [controller.disabledAgentProtocols](./values.yaml#L339) | list | Disabled agent protocols | `["JNLP-connect","JNLP2-connect"]` |
| [controller.enableRawHtmlMarkupFormatter](./values.yaml#L435) | bool | Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter) | `false` |
| [controller.enableServiceLinks](./values.yaml#L130) | bool | | `false` |
| [controller.executorMode](./values.yaml#L65) | string | Sets the executor mode of the Jenkins node. Possible values are "NORMAL" or "EXCLUSIVE" | `"NORMAL"` |
| [controller.existingSecret](./values.yaml#L462) | string | | `nil` |
| [controller.extraPorts](./values.yaml#L394) | list | Optionally configure other ports to expose in the controller container | `[]` |
| [controller.fsGroup](./values.yaml#L192) | int | Deprecated in favor of `controller.podSecurityContextOverride`. uid that will be used for persistent volume. | `1000` |
| [controller.googlePodMonitor.enabled](./values.yaml#L832) | bool | | `false` |
| [controller.googlePodMonitor.scrapeEndpoint](./values.yaml#L837) | string | | `"/prometheus"` |
| [controller.googlePodMonitor.scrapeInterval](./values.yaml#L835) | string | | `"60s"` |
| [controller.healthProbes](./values.yaml#L254) | bool | Enable Kubernetes Probes configuration configured in `controller.probes` | `true` |
| [controller.hostAliases](./values.yaml#L785) | list | Allows for adding entries to Pod /etc/hosts | `[]` |
| [controller.hostNetworking](./values.yaml#L70) | bool | | `false` |
| [controller.httpsKeyStore.disableSecretMount](./values.yaml#L853) | bool | | `false` |
| [controller.httpsKeyStore.enable](./values.yaml#L844) | bool | Enables HTTPS keystore on jenkins controller | `false` |
| [controller.httpsKeyStore.fileName](./values.yaml#L861) | string | Jenkins keystore filename which will appear under controller.httpsKeyStore.path | `"keystore.jks"` |
| [controller.httpsKeyStore.httpPort](./values.yaml#L857) | int | HTTP Port that Jenkins should listen to along with HTTPS, it also serves as the liveness and readiness probes port. | `8081` |
| [controller.httpsKeyStore.jenkinsHttpsJksPasswordSecretKey](./values.yaml#L852) | string | Name of the key in the secret that contains the JKS password | `"https-jks-password"` |
| [controller.httpsKeyStore.jenkinsHttpsJksPasswordSecretName](./values.yaml#L850) | string | Name of the secret that contains the JKS password, if it is not in the same secret as the JKS file | `""` |
| [controller.httpsKeyStore.jenkinsHttpsJksSecretKey](./values.yaml#L848) | string | Name of the key in the secret that already has ssl keystore | `"jenkins-jks-file"` |
| [controller.httpsKeyStore.jenkinsHttpsJksSecretName](./values.yaml#L846) | string | Name of the secret that already has ssl keystore | `""` |
| [controller.httpsKeyStore.jenkinsKeyStoreBase64Encoded](./values.yaml#L866) | string | Base64 encoded Keystore content. Keystore must be converted to base64 then being pasted here | `nil` |
| [controller.httpsKeyStore.password](./values.yaml#L863) | string | Jenkins keystore password | `"password"` |
| [controller.httpsKeyStore.path](./values.yaml#L859) | string | Path of HTTPS keystore file | `"/var/jenkins_keystore"` |
| [controller.image.pullPolicy](./values.yaml#L47) | string | Controller image pull policy | `"Always"` |
| [controller.image.registry](./values.yaml#L37) | string | Controller image registry | `"docker.io"` |
| [controller.image.repository](./values.yaml#L39) | string | Controller image repository | `"jenkins/jenkins"` |
| [controller.image.tag](./values.yaml#L42) | string | Controller image tag override; i.e., tag: "2.440.1-jdk17" | `nil` |
| [controller.image.tagLabel](./values.yaml#L45) | string | Controller image tag label | `"jdk17"` |
| [controller.imagePullSecretName](./values.yaml#L49) | string | Controller image pull secret | `nil` |
| [controller.ingress.annotations](./values.yaml#L708) | object | Ingress annotations | `{}` |
| [controller.ingress.apiVersion](./values.yaml#L704) | string | Ingress API version | `"extensions/v1beta1"` |
| [controller.ingress.enabled](./values.yaml#L687) | bool | Enables ingress | `false` |
| [controller.ingress.hostName](./values.yaml#L721) | string | Ingress hostname | `nil` |
| [controller.ingress.labels](./values.yaml#L706) | object | Ingress labels | `{}` |
| [controller.ingress.path](./values.yaml#L717) | string | Ingress path | `nil` |
| [controller.ingress.paths](./values.yaml#L691) | list | Override for the default Ingress paths | `[]` |
| [controller.ingress.resourceRootUrl](./values.yaml#L723) | string | Hostname to serve assets from | `nil` |
| [controller.ingress.tls](./values.yaml#L725) | list | Ingress TLS configuration | `[]` |
| [controller.initConfigMap](./values.yaml#L452) | string | Name of the existing ConfigMap that contains init scripts | `nil` |
| [controller.initContainerEnv](./values.yaml#L147) | list | Environment variables for Init Container | `[]` |
| [controller.initContainerEnvFrom](./values.yaml#L143) | list | Environment variable sources for Init Container | `[]` |
| [controller.initContainerResources](./values.yaml#L134) | object | Resources allocation (Requests and Limits) for Init Container | `{}` |
| [controller.initScripts](./values.yaml#L448) | object | Map of groovy init scripts to be executed during Jenkins controller start | `{}` |
| [controller.initializeOnce](./values.yaml#L420) | bool | Initialize only on first installation. Ensures plugins do not get updated inadvertently. Requires `persistence.enabled` to be set to `true` | `false` |
| [controller.installLatestPlugins](./values.yaml#L409) | bool | Download the minimum required version or latest version of all dependencies | `true` |
| [controller.installLatestSpecifiedPlugins](./values.yaml#L412) | bool | Set to true to download the latest version of any plugin whose requested version is `latest` | `false` |
| [controller.installPlugins](./values.yaml#L401) | list | List of Jenkins plugins to install. If you don't want to install plugins, set it to `false` | `["kubernetes:4287.v73451380b_576","workflow-aggregator:600.vb_57cdd26fdd7","git:5.4.1","configuration-as-code:1850.va_a_8c31d3158b_"]` |
| [controller.javaOpts](./values.yaml#L162) | string | Append to `JAVA_OPTS` env var | `nil` |
| [controller.jenkinsAdminEmail](./values.yaml#L96) | string | Email address for the administrator of the Jenkins instance | `nil` |
| [controller.jenkinsHome](./values.yaml#L101) | string | Custom Jenkins home path | `"/var/jenkins_home"` |
| [controller.jenkinsOpts](./values.yaml#L164) | string | Append to `JENKINS_OPTS` env var | `nil` |
| [controller.jenkinsRef](./values.yaml#L106) | string | Custom Jenkins reference path | `"/usr/share/jenkins/ref"` |
| [controller.jenkinsUriPrefix](./values.yaml#L179) | string | Root URI Jenkins will be served on | `nil` |
| [controller.jenkinsUrl](./values.yaml#L174) | string | Set Jenkins URL if you are not using the ingress definitions provided by the chart | `nil` |
| [controller.jenkinsUrlProtocol](./values.yaml#L171) | string | Set protocol for Jenkins URL; `https` if `controller.ingress.tls`, `http` otherwise | `nil` |
| [controller.jenkinsWar](./values.yaml#L109) | string | | `"/usr/share/jenkins/jenkins.war"` |
| [controller.jmxPort](./values.yaml#L391) | string | Open a port, for JMX stats | `nil` |
| [controller.legacyRemotingSecurityEnabled](./values.yaml#L367) | bool | Whether legacy remoting security should be enabled | `false` |
| [controller.lifecycle](./values.yaml#L51) | object | Lifecycle specification for controller-container | `{}` |
| [controller.loadBalancerIP](./values.yaml#L382) | string | Optionally assign a known public LB IP | `nil` |
| [controller.loadBalancerSourceRanges](./values.yaml#L378) | list | Allowed inbound IP addresses | `["0.0.0.0/0"]` |
| [controller.markupFormatter](./values.yaml#L439) | string | YAML of the markup formatter to use | `"plainText"` |
| [controller.nodePort](./values.yaml#L229) | string | k8s node port. Only used if serviceType is NodePort | `nil` |
| [controller.nodeSelector](./values.yaml#L653) | object | Node labels for pod assignment | `{}` |
| [controller.numExecutors](./values.yaml#L62) | int | Set Number of executors | `0` |
| [controller.overwritePlugins](./values.yaml#L424) | bool | Overwrite installed plugins on start | `false` |
| [controller.overwritePluginsFromImage](./values.yaml#L428) | bool | Overwrite plugins that are already installed in the controller image | `true` |
| [controller.podAnnotations](./values.yaml#L674) | object | Annotations for controller pod | `{}` |
| [controller.podDisruptionBudget.annotations](./values.yaml#L318) | object | | `{}` |
| [controller.podDisruptionBudget.apiVersion](./values.yaml#L316) | string | Policy API version | `"policy/v1beta1"` |
| [controller.podDisruptionBudget.enabled](./values.yaml#L311) | bool | Enable Kubernetes Pod Disruption Budget configuration | `false` |
| [controller.podDisruptionBudget.labels](./values.yaml#L319) | object | | `{}` |
| [controller.podDisruptionBudget.maxUnavailable](./values.yaml#L321) | string | Number of pods that can be unavailable. Either an absolute number or a percentage | `"0"` |
| [controller.podLabels](./values.yaml#L247) | object | Custom Pod labels (an object with `label-key: label-value` pairs) | `{}` |
| [controller.podSecurityContextOverride](./values.yaml#L208) | string | Completely overwrites the contents of the pod security context, ignoring the values provided for `runAsUser`, `fsGroup`, and `securityContextCapabilities` | `nil` |
| [controller.priorityClassName](./values.yaml#L671) | string | The name of a `priorityClass` to apply to the controller pod | `nil` |
| [controller.probes.livenessProbe.failureThreshold](./values.yaml#L272) | int | Set the failure threshold for the liveness probe | `5` |
| [controller.probes.livenessProbe.httpGet.path](./values.yaml#L275) | string | Set the Pod's HTTP path for the liveness probe | `"{{ default \"\" .Values.controller.jenkinsUriPrefix }}/login"` |
| [controller.probes.livenessProbe.httpGet.port](./values.yaml#L277) | string | Set the Pod's HTTP port to use for the liveness probe | `"http"` |
| [controller.probes.livenessProbe.initialDelaySeconds](./values.yaml#L286) | string | Set the initial delay for the liveness probe in seconds | `nil` |
| [controller.probes.livenessProbe.periodSeconds](./values.yaml#L279) | int | Set the time interval between two liveness probes executions in seconds | `10` |
| [controller.probes.livenessProbe.timeoutSeconds](./values.yaml#L281) | int | Set the timeout for the liveness probe in seconds | `5` |
| [controller.probes.readinessProbe.failureThreshold](./values.yaml#L290) | int | Set the failure threshold for the readiness probe | `3` |
| [controller.probes.readinessProbe.httpGet.path](./values.yaml#L293) | string | Set the Pod's HTTP path for the readiness probe | `"{{ default \"\" .Values.controller.jenkinsUriPrefix }}/login"` |
| [controller.probes.readinessProbe.httpGet.port](./values.yaml#L295) | string | Set the Pod's HTTP port to use for the readiness probe | `"http"` |
| [controller.probes.readinessProbe.initialDelaySeconds](./values.yaml#L304) | string | Set the initial delay for the readiness probe in seconds | `nil` |
| [controller.probes.readinessProbe.periodSeconds](./values.yaml#L297) | int | Set the time interval between two readiness probes executions in seconds | `10` |
| [controller.probes.readinessProbe.timeoutSeconds](./values.yaml#L299) | int | Set the timeout for the readiness probe in seconds | `5` |
| [controller.probes.startupProbe.failureThreshold](./values.yaml#L259) | int | Set the failure threshold for the startup probe | `12` |
| [controller.probes.startupProbe.httpGet.path](./values.yaml#L262) | string | Set the Pod's HTTP path for the startup probe | `"{{ default \"\" .Values.controller.jenkinsUriPrefix }}/login"` |
| [controller.probes.startupProbe.httpGet.port](./values.yaml#L264) | string | Set the Pod's HTTP port to use for the startup probe | `"http"` |
| [controller.probes.startupProbe.periodSeconds](./values.yaml#L266) | int | Set the time interval between two startup probes executions in seconds | `10` |
| [controller.probes.startupProbe.timeoutSeconds](./values.yaml#L268) | int | Set the timeout for the startup probe in seconds | `5` |
| [controller.projectNamingStrategy](./values.yaml#L431) | string | | `"standard"` |
| [controller.prometheus.alertingRulesAdditionalLabels](./values.yaml#L818) | object | Additional labels to add to the PrometheusRule object | `{}` |
| [controller.prometheus.alertingrules](./values.yaml#L816) | list | Array of prometheus alerting rules | `[]` |
| [controller.prometheus.enabled](./values.yaml#L801) | bool | Enables prometheus service monitor | `false` |
| [controller.prometheus.metricRelabelings](./values.yaml#L828) | list | | `[]` |
| [controller.prometheus.prometheusRuleNamespace](./values.yaml#L820) | string | Set a custom namespace where to deploy PrometheusRule resource | `""` |
| [controller.prometheus.relabelings](./values.yaml#L826) | list | | `[]` |
| [controller.prometheus.scrapeEndpoint](./values.yaml#L811) | string | The endpoint prometheus should get metrics from | `"/prometheus"` |
| [controller.prometheus.scrapeInterval](./values.yaml#L807) | string | How often prometheus should scrape metrics | `"60s"` |
| [controller.prometheus.serviceMonitorAdditionalLabels](./values.yaml#L803) | object | Additional labels to add to the service monitor object | `{}` |
| [controller.prometheus.serviceMonitorNamespace](./values.yaml#L805) | string | Set a custom namespace where to deploy ServiceMonitor resource | `nil` |
| [controller.resources](./values.yaml#L115) | object | Resource allocation (Requests and Limits) | `{"limits":{"cpu":"2000m","memory":"4096Mi"},"requests":{"cpu":"50m","memory":"256Mi"}}` |
| [controller.route.annotations](./values.yaml#L780) | object | Route annotations | `{}` |
| [controller.route.enabled](./values.yaml#L776) | bool | Enables openshift route | `false` |
| [controller.route.labels](./values.yaml#L778) | object | Route labels | `{}` |
| [controller.route.path](./values.yaml#L782) | string | Route path | `nil` |
| [controller.runAsUser](./values.yaml#L189) | int | Deprecated in favor of `controller.podSecurityContextOverride`. UID that Jenkins runs with. | `1000` |
| [controller.schedulerName](./values.yaml#L649) | string | Name of the Kubernetes scheduler to use | `""` |
| [controller.scriptApproval](./values.yaml#L443) | list | List of groovy functions to approve | `[]` |
| [controller.secondaryingress.annotations](./values.yaml#L743) | object | | `{}` |
| [controller.secondaryingress.apiVersion](./values.yaml#L741) | string | | `"extensions/v1beta1"` |
| [controller.secondaryingress.enabled](./values.yaml#L735) | bool | | `false` |
| [controller.secondaryingress.hostName](./values.yaml#L750) | string | | `nil` |
| [controller.secondaryingress.labels](./values.yaml#L742) | object | | `{}` |
| [controller.secondaryingress.paths](./values.yaml#L738) | list | | `[]` |
| [controller.secondaryingress.tls](./values.yaml#L751) | string | | `nil` |
| [controller.secretClaims](./values.yaml#L486) | list | List of `SecretClaim` resources to create | `[]` |
| [controller.securityContextCapabilities](./values.yaml#L198) | object | | `{}` |
| [controller.serviceAnnotations](./values.yaml#L236) | object | Jenkins controller service annotations | `{}` |
| [controller.serviceExternalTrafficPolicy](./values.yaml#L233) | string | | `nil` |
| [controller.serviceLabels](./values.yaml#L242) | object | Labels for the Jenkins controller-service | `{}` |
| [controller.servicePort](./values.yaml#L225) | int | k8s service port | `8080` |
| [controller.serviceType](./values.yaml#L220) | string | k8s service type | `"ClusterIP"` |
| [controller.shareProcessNamespace](./values.yaml#L124) | bool | | `false` |
| [controller.sidecars.additionalSidecarContainers](./values.yaml#L631) | list | Configures additional sidecar container(s) for the Jenkins controller | `[]` |
| [controller.sidecars.configAutoReload.additionalVolumeMounts](./values.yaml#L577) | list | Enables additional volume mounts for the config auto-reload container | `[]` |
| [controller.sidecars.configAutoReload.containerSecurityContext](./values.yaml#L626) | object | Enable container security context | `{"allowPrivilegeEscalation":false,"readOnlyRootFilesystem":true}` |
| [controller.sidecars.configAutoReload.enabled](./values.yaml#L560) | bool | Enables Jenkins Config as Code auto-reload | `true` |
| [controller.sidecars.configAutoReload.env](./values.yaml#L608) | object | Environment variables for the Jenkins Config as Code auto-reload container | `{}` |
| [controller.sidecars.configAutoReload.envFrom](./values.yaml#L606) | list | Environment variable sources for the Jenkins Config as Code auto-reload container | `[]` |
| [controller.sidecars.configAutoReload.folder](./values.yaml#L619) | string | | `"/var/jenkins_home/casc_configs"` |
| [controller.sidecars.configAutoReload.image.registry](./values.yaml#L563) | string | Registry for the image that triggers the reload | `"docker.io"` |
| [controller.sidecars.configAutoReload.image.repository](./values.yaml#L565) | string | Repository of the image that triggers the reload | `"kiwigrid/k8s-sidecar"` |
| [controller.sidecars.configAutoReload.image.tag](./values.yaml#L567) | string | Tag for the image that triggers the reload | `"1.27.6"` |
| [controller.sidecars.configAutoReload.imagePullPolicy](./values.yaml#L568) | string | | `"IfNotPresent"` |
| [controller.sidecars.configAutoReload.logging](./values.yaml#L583) | object | Config auto-reload logging settings | `{"configuration":{"backupCount":3,"formatter":"JSON","logLevel":"INFO","logToConsole":true,"logToFile":false,"maxBytes":1024,"override":false}}` |
| [controller.sidecars.configAutoReload.logging.configuration.override](./values.yaml#L587) | bool | Enables custom log config using the settings below. | `false` |
| [controller.sidecars.configAutoReload.reqRetryConnect](./values.yaml#L601) | int | How many connection-related errors to retry on | `10` |
| [controller.sidecars.configAutoReload.resources](./values.yaml#L569) | object | | `{}` |
| [controller.sidecars.configAutoReload.scheme](./values.yaml#L596) | string | The scheme to use when connecting to the Jenkins configuration as code endpoint | `"http"` |
| [controller.sidecars.configAutoReload.skipTlsVerify](./values.yaml#L598) | bool | Skip TLS verification when connecting to the Jenkins configuration as code endpoint | `false` |
| [controller.sidecars.configAutoReload.sleepTime](./values.yaml#L603) | string | How many seconds to wait before updating config-maps/secrets (sets METHOD=SLEEP on the sidecar) | `nil` |
| [controller.sidecars.configAutoReload.sshTcpPort](./values.yaml#L617) | int | | `1044` |
| [controller.statefulSetAnnotations](./values.yaml#L676) | object | Annotations for controller StatefulSet | `{}` |
| [controller.statefulSetLabels](./values.yaml#L238) | object | Jenkins controller custom labels for the StatefulSet | `{}` |
| [controller.targetPort](./values.yaml#L227) | int | k8s target port | `8080` |
| [controller.terminationGracePeriodSeconds](./values.yaml#L659) | string | Set TerminationGracePeriodSeconds | `nil` |
| [controller.terminationMessagePath](./values.yaml#L661) | string | Set the termination message path | `nil` |
| [controller.terminationMessagePolicy](./values.yaml#L663) | string | Set the termination message policy | `nil` |
| [controller.testEnabled](./values.yaml#L840) | bool | Can be used to disable rendering controller test resources when using helm template | `true` |
| [controller.tolerations](./values.yaml#L657) | list | Toleration labels for pod assignment | `[]` |
| [controller.topologySpreadConstraints](./values.yaml#L683) | object | Topology spread constraints | `{}` |
| [controller.updateStrategy](./values.yaml#L680) | object | Update strategy for StatefulSet | `{}` |
| [controller.usePodSecurityContext](./values.yaml#L182) | bool | Enable pod security context (must be `true` if podSecurityContextOverride, runAsUser or fsGroup are set) | `true` |
| [credentialsId](./values.yaml#L27) | string | The Jenkins credentials to access the Kubernetes API server. For the default cluster it is not needed. | `nil` |
| [fullnameOverride](./values.yaml#L13) | string | Override the full resource names | `jenkins-(release-name)` or `jenkins` if the release-name is `jenkins` |
| [helmtest.bats.image.registry](./values.yaml#L1359) | string | Registry of the image used to test the framework | `"docker.io"` |
| [helmtest.bats.image.repository](./values.yaml#L1361) | string | Repository of the image used to test the framework | `"bats/bats"` |
| [helmtest.bats.image.tag](./values.yaml#L1363) | string | Tag of the image to test the framework | `"1.11.0"` |
| [kubernetesURL](./values.yaml#L24) | string | The URL of the Kubernetes API server | `"https://kubernetes.default"` |
| [nameOverride](./values.yaml#L10) | string | Override the resource name prefix | `Chart.Name` |
| [namespaceOverride](./values.yaml#L16) | string | Override the deployment namespace | `Release.Namespace` |
| [networkPolicy.apiVersion](./values.yaml#L1289) | string | NetworkPolicy ApiVersion | `"networking.k8s.io/v1"` |
| [networkPolicy.enabled](./values.yaml#L1284) | bool | Enable the creation of NetworkPolicy resources | `false` |
| [networkPolicy.externalAgents.except](./values.yaml#L1303) | list | A list of IP sub-ranges to be excluded from the allowlisted IP range | `[]` |
| [networkPolicy.externalAgents.ipCIDR](./values.yaml#L1301) | string | The IP range from which external agents are allowed to connect to controller, e.g., 172.17.0.0/16 | `nil` |
| [networkPolicy.internalAgents.allowed](./values.yaml#L1293) | bool | Allow internal agents (from the same cluster) to connect to controller. Agent pods will be filtered based on PodLabels | `true` |
| [networkPolicy.internalAgents.namespaceLabels](./values.yaml#L1297) | object | A map of labels (keys/values) that agents namespaces must have to be able to connect to controller | `{}` |
| [networkPolicy.internalAgents.podLabels](./values.yaml#L1295) | object | A map of labels (keys/values) that agent pods must have to be able to connect to controller | `{}` |
| [persistence.accessMode](./values.yaml#L1259) | string | The PVC access mode | `"ReadWriteOnce"` |
| [persistence.annotations](./values.yaml#L1255) | object | Annotations for the PVC | `{}` |
| [persistence.dataSource](./values.yaml#L1265) | object | Existing data source to clone PVC from | `{}` |
| [persistence.enabled](./values.yaml#L1239) | bool | Enable the use of a Jenkins PVC | `true` |
| [persistence.existingClaim](./values.yaml#L1245) | string | Provide the name of a PVC | `nil` |
| [persistence.labels](./values.yaml#L1257) | object | Labels for the PVC | `{}` |
| [persistence.mounts](./values.yaml#L1277) | list | Additional mounts | `[]` |
| [persistence.size](./values.yaml#L1261) | string | The size of the PVC | `"8Gi"` |
| [persistence.storageClass](./values.yaml#L1253) | string | Storage class for the PVC | `nil` |
| [persistence.subPath](./values.yaml#L1270) | string | SubPath for jenkins-home mount | `nil` |
| [persistence.volumes](./values.yaml#L1272) | list | Additional volumes | `[]` |
| [rbac.create](./values.yaml#L1309) | bool | Whether RBAC resources are created | `true` |
| [rbac.readSecrets](./values.yaml#L1311) | bool | Whether the Jenkins service account should be able to read Kubernetes secrets | `false` |
| [renderHelmLabels](./values.yaml#L30) | bool | Enables rendering of the helm.sh/chart label to the annotations | `true` |
| [serviceAccount.annotations](./values.yaml#L1321) | object | Configures annotations for the ServiceAccount | `{}` |
| [serviceAccount.create](./values.yaml#L1315) | bool | Configures if a ServiceAccount with this name should be created | `true` |
| [serviceAccount.extraLabels](./values.yaml#L1323) | object | Configures extra labels for the ServiceAccount | `{}` |
| [serviceAccount.imagePullSecretName](./values.yaml#L1325) | string | Controller ServiceAccount image pull secret | `nil` |
| [serviceAccount.name](./values.yaml#L1319) | string | | `nil` |
| [serviceAccountAgent.annotations](./values.yaml#L1336) | object | Configures annotations for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.create](./values.yaml#L1330) | bool | Configures if an agent ServiceAccount should be created | `false` |
| [serviceAccountAgent.extraLabels](./values.yaml#L1338) | object | Configures extra labels for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.imagePullSecretName](./values.yaml#L1340) | string | Agent ServiceAccount image pull secret | `nil` |
| [serviceAccountAgent.name](./values.yaml#L1334) | string | The name of the agent ServiceAccount to be used by access-controlled resources | `nil` |
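
As a quick orientation to the table above, the sketch below shows a minimal override file touching a few of the documented keys, to be passed to the release with `-f`/`--values`. The hostname, TLS secret name, and storage size are illustrative placeholders, not chart defaults.

```yaml
# Hypothetical override file, passed to the release with -f/--values
controller:
  jenkinsAdminEmail: admin@example.com   # placeholder address
  ingress:
    enabled: true
    hostName: jenkins.example.com        # illustrative hostname
    tls:
      - secretName: jenkins-tls          # assumes this TLS secret already exists
        hosts:
          - jenkins.example.com
persistence:
  enabled: true
  size: 20Gi                             # the documented default is 8Gi
```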

View File

@ -0,0 +1,28 @@
# Jenkins
## Configuration
The following table lists the configurable parameters of the Jenkins chart and their default values.
{{- define "chart.valueDefaultColumnRender" -}}
{{- $defaultValue := (trimAll "`" (default .Default .AutoDefault) | replace "\n" "") -}}
`{{- $defaultValue | replace "\n" "" -}}`
{{- end -}}
{{- define "chart.typeColumnRender" -}}
{{- .Type -}}
{{- end -}}
{{- define "chart.valueDescription" -}}
{{- default .Description .AutoDescription }}
{{- end -}}
{{- define "chart.valuesTable" -}}
| Key | Type | Description | Default |
|:----|:-----|:---------|:------------|
{{- range .Values }}
| [{{ .Key }}](./values.yaml#L{{ .LineNumber }}) | {{ template "chart.typeColumnRender" . }} | {{ template "chart.valueDescription" . }} | {{ template "chart.valueDefaultColumnRender" . }} |
{{- end }}
{{- end }}
{{ template "chart.valuesSection" . }}

View File

@ -0,0 +1,68 @@
{{- $prefix := .Values.controller.jenkinsUriPrefix | default "" -}}
{{- $url := "" -}}
1. Get your '{{ .Values.controller.admin.username }}' user password by running:
kubectl exec --namespace {{ template "jenkins.namespace" . }} -it svc/{{ template "jenkins.fullname" . }} -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
{{- if .Values.controller.ingress.hostName -}}
{{- if .Values.controller.ingress.tls -}}
{{- $url = print "https://" .Values.controller.ingress.hostName $prefix -}}
{{- else -}}
{{- $url = print "http://" .Values.controller.ingress.hostName $prefix -}}
{{- end }}
2. Visit {{ $url }}
{{- else }}
2. Get the Jenkins URL to visit by running these commands in the same shell:
{{- if contains "NodePort" .Values.controller.serviceType }}
export NODE_PORT=$(kubectl get --namespace {{ template "jenkins.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "jenkins.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ template "jenkins.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}")
{{- if .Values.controller.httpsKeyStore.enable -}}
{{- $url = print "https://$NODE_IP:$NODE_PORT" $prefix -}}
{{- else -}}
{{- $url = print "http://$NODE_IP:$NODE_PORT" $prefix -}}
{{- end }}
echo {{ $url }}
{{- else if contains "LoadBalancer" .Values.controller.serviceType }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc --namespace {{ template "jenkins.namespace" . }} -w {{ template "jenkins.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ template "jenkins.namespace" . }} {{ template "jenkins.fullname" . }} --template "{{ "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}" }}")
{{- if .Values.controller.httpsKeyStore.enable -}}
{{- $url = print "https://$SERVICE_IP:" .Values.controller.servicePort $prefix -}}
{{- else -}}
{{- $url = print "http://$SERVICE_IP:" .Values.controller.servicePort $prefix -}}
{{- end }}
echo {{ $url }}
{{- else if contains "ClusterIP" .Values.controller.serviceType -}}
{{- if .Values.controller.httpsKeyStore.enable -}}
{{- $url = print "https://127.0.0.1:" .Values.controller.servicePort $prefix -}}
{{- else -}}
{{- $url = print "http://127.0.0.1:" .Values.controller.servicePort $prefix -}}
{{- end }}
echo {{ $url }}
kubectl --namespace {{ template "jenkins.namespace" . }} port-forward svc/{{template "jenkins.fullname" . }} {{ .Values.controller.servicePort }}:{{ .Values.controller.servicePort }}
{{- end }}
{{- end }}
3. Login with the password from step 1 and the username: {{ .Values.controller.admin.username }}
4. Configure security realm and authorization strategy
5. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: {{ $url }}/configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/
{{ if and (eq .Values.controller.image.repository "jenkins/jenkins") (eq .Values.controller.image.registry "docker.io") }}
NOTE: Consider using a custom image with pre-installed plugins
{{- else if .Values.controller.installPlugins }}
NOTE: Consider disabling `installPlugins` if your image already contains plugins.
{{- end }}
{{- if .Values.persistence.enabled }}
{{- else }}
#################################################################################
###### WARNING: Persistence is disabled!!! You will lose your data when #####
###### the Jenkins pod is terminated. #####
#################################################################################
{{- end }}
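
The URL scheme printed above switches to `https` when `controller.httpsKeyStore.enable` is set. A minimal sketch of the values involved, assuming the keystore is supplied as base64; the password and keystore content below are placeholders:

```yaml
controller:
  httpsKeyStore:
    enable: true
    password: changeit                       # keystore password; the documented default is "password"
    jenkinsKeyStoreBase64Encoded: <base64>   # placeholder; paste the base64-encoded keystore.jks content here
```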

View File

@ -0,0 +1,684 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "jenkins.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Expand the label of the chart.
*/}}
{{- define "jenkins.label" -}}
{{- printf "%s-%s" (include "jenkins.name" .) .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts.
*/}}
{{- define "jenkins.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{- define "jenkins.agent.namespace" -}}
{{- if .Values.agent.namespace -}}
{{- tpl .Values.agent.namespace . -}}
{{- else -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "jenkins.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Returns the admin password
https://github.com/helm/charts/issues/5167#issuecomment-619137759
*/}}
{{- define "jenkins.password" -}}
{{- if .Values.controller.admin.password -}}
{{- .Values.controller.admin.password | b64enc | quote }}
{{- else -}}
{{- $secret := (lookup "v1" "Secret" .Release.Namespace (include "jenkins.fullname" .)).data -}}
{{- if $secret -}}
{{/*
Reusing current password since secret exists
*/}}
{{- index $secret ( .Values.controller.admin.passwordKey | default "jenkins-admin-password" ) -}}
{{- else -}}
{{/*
Generate new password
*/}}
{{- randAlphaNum 22 | b64enc | quote }}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Returns the Jenkins URL
*/}}
{{- define "jenkins.url" -}}
{{- if .Values.controller.jenkinsUrl }}
{{- .Values.controller.jenkinsUrl }}
{{- else }}
{{- if .Values.controller.ingress.hostName }}
{{- if .Values.controller.ingress.tls }}
{{- default "https" .Values.controller.jenkinsUrlProtocol }}://{{ tpl .Values.controller.ingress.hostName $ }}{{ default "" .Values.controller.jenkinsUriPrefix }}
{{- else }}
{{- default "http" .Values.controller.jenkinsUrlProtocol }}://{{ tpl .Values.controller.ingress.hostName $ }}{{ default "" .Values.controller.jenkinsUriPrefix }}
{{- end }}
{{- else }}
{{- default "http" .Values.controller.jenkinsUrlProtocol }}://{{ template "jenkins.fullname" . }}:{{.Values.controller.servicePort}}{{ default "" .Values.controller.jenkinsUriPrefix }}
{{- end}}
{{- end}}
{{- end -}}
{{/*
Returns configuration as code default config
*/}}
{{- define "jenkins.casc.defaults" -}}
jenkins:
{{- $configScripts := toYaml .Values.controller.JCasC.configScripts }}
{{- if and (.Values.controller.JCasC.authorizationStrategy) (not (contains "authorizationStrategy:" $configScripts)) }}
authorizationStrategy:
{{- tpl .Values.controller.JCasC.authorizationStrategy . | nindent 4 }}
{{- end }}
{{- if and (.Values.controller.JCasC.securityRealm) (not (contains "securityRealm:" $configScripts)) }}
securityRealm:
{{- tpl .Values.controller.JCasC.securityRealm . | nindent 4 }}
{{- end }}
disableRememberMe: {{ .Values.controller.disableRememberMe }}
{{- if .Values.controller.legacyRemotingSecurityEnabled }}
remotingSecurity:
enabled: true
{{- end }}
mode: {{ .Values.controller.executorMode }}
numExecutors: {{ .Values.controller.numExecutors }}
{{- if not (kindIs "invalid" .Values.controller.customJenkinsLabels) }}
labelString: "{{ join " " .Values.controller.customJenkinsLabels }}"
{{- end }}
{{- if .Values.controller.projectNamingStrategy }}
{{- if kindIs "string" .Values.controller.projectNamingStrategy }}
projectNamingStrategy: "{{ .Values.controller.projectNamingStrategy }}"
{{- else }}
projectNamingStrategy:
{{- toYaml .Values.controller.projectNamingStrategy | nindent 4 }}
{{- end }}
{{- end }}
markupFormatter:
{{- if .Values.controller.enableRawHtmlMarkupFormatter }}
rawHtml:
disableSyntaxHighlighting: true
{{- else }}
{{- toYaml .Values.controller.markupFormatter | nindent 4 }}
{{- end }}
clouds:
- kubernetes:
containerCapStr: "{{ .Values.agent.containerCap }}"
{{- if .Values.agent.garbageCollection.enabled }}
garbageCollection:
{{- if .Values.agent.garbageCollection.namespaces }}
namespaces: |-
{{- .Values.agent.garbageCollection.namespaces | nindent 10 }}
{{- end }}
timeout: "{{ .Values.agent.garbageCollection.timeout }}"
{{- end }}
{{- if .Values.agent.jnlpregistry }}
jnlpregistry: "{{ .Values.agent.jnlpregistry }}"
{{- end }}
defaultsProviderTemplate: "{{ .Values.agent.defaultsProviderTemplate }}"
connectTimeout: "{{ .Values.agent.kubernetesConnectTimeout }}"
readTimeout: "{{ .Values.agent.kubernetesReadTimeout }}"
{{- if .Values.agent.directConnection }}
directConnection: true
{{- else }}
{{- if .Values.agent.jenkinsUrl }}
jenkinsUrl: "{{ tpl .Values.agent.jenkinsUrl . }}"
{{- else }}
jenkinsUrl: "http://{{ template "jenkins.fullname" . }}.{{ template "jenkins.namespace" . }}.svc.{{.Values.clusterZone}}:{{.Values.controller.servicePort}}{{ default "" .Values.controller.jenkinsUriPrefix }}"
{{- end }}
{{- if not .Values.agent.websocket }}
{{- if .Values.agent.jenkinsTunnel }}
jenkinsTunnel: "{{ tpl .Values.agent.jenkinsTunnel . }}"
{{- else }}
jenkinsTunnel: "{{ template "jenkins.fullname" . }}-agent.{{ template "jenkins.namespace" . }}.svc.{{.Values.clusterZone}}:{{ .Values.controller.agentListenerPort }}"
{{- end }}
{{- else }}
webSocket: true
{{- end }}
{{- end }}
skipTlsVerify: {{ .Values.agent.skipTlsVerify | default false}}
usageRestricted: {{ .Values.agent.usageRestricted | default false}}
maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }}
retentionTimeout: {{ .Values.agent.retentionTimeout | quote }}
waitForPodSec: {{ .Values.agent.waitForPodSec | quote }}
name: "{{ .Values.controller.cloudName }}"
namespace: "{{ template "jenkins.agent.namespace" . }}"
restrictedPssSecurityContext: {{ .Values.agent.restrictedPssSecurityContext }}
serverUrl: "{{ .Values.kubernetesURL }}"
credentialsId: "{{ .Values.credentialsId }}"
{{- if .Values.agent.enabled }}
podLabels:
- key: "jenkins/{{ .Release.Name }}-{{ .Values.agent.componentName }}"
value: "true"
{{- range $key, $val := .Values.agent.podLabels }}
- key: {{ $key | quote }}
value: {{ $val | quote }}
{{- end }}
templates:
{{- if not .Values.agent.disableDefaultAgent }}
{{- include "jenkins.casc.podTemplate" . | nindent 8 }}
{{- end }}
{{- if .Values.additionalAgents }}
{{- /* save .Values.agent */}}
{{- $agent := .Values.agent }}
{{- range $name, $additionalAgent := .Values.additionalAgents }}
{{- $additionalContainersEmpty := and (hasKey $additionalAgent "additionalContainers") (empty $additionalAgent.additionalContainers) }}
{{- /* merge original .Values.agent into additional agent to ensure it at least has the default values */}}
{{- $additionalAgent := merge $additionalAgent $agent }}
{{- /* clear list of additional containers in case it is configured empty for this agent (merge might have overwritten that) */}}
{{- if $additionalContainersEmpty }}
{{- $_ := set $additionalAgent "additionalContainers" list }}
{{- end }}
{{- /* set .Values.agent to $additionalAgent */}}
{{- $_ := set $.Values "agent" $additionalAgent }}
{{- include "jenkins.casc.podTemplate" $ | nindent 8 }}
{{- end }}
{{- /* restore .Values.agent */}}
{{- $_ := set .Values "agent" $agent }}
{{- end }}
{{- if .Values.agent.podTemplates }}
{{- range $key, $val := .Values.agent.podTemplates }}
{{- tpl $val $ | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.additionalClouds }}
{{- /* save root */}}
{{- $oldRoot := deepCopy $ }}
{{- range $name, $additionalCloud := .Values.additionalClouds }}
{{- $newRoot := deepCopy $ }}
{{- /* clear additionalAgents from the copy if override set to `true` */}}
{{- if .additionalAgentsOverride }}
{{- $_ := set $newRoot.Values "additionalAgents" list}}
{{- end}}
{{- $newValues := merge $additionalCloud $newRoot.Values }}
{{- $_ := set $newRoot "Values" $newValues }}
{{- /* clear additionalClouds from the copy */}}
{{- $_ := set $newRoot.Values "additionalClouds" list }}
{{- with $newRoot}}
- kubernetes:
containerCapStr: "{{ .Values.agent.containerCap }}"
{{- if .Values.agent.jnlpregistry }}
jnlpregistry: "{{ .Values.agent.jnlpregistry }}"
{{- end }}
defaultsProviderTemplate: "{{ .Values.agent.defaultsProviderTemplate }}"
connectTimeout: "{{ .Values.agent.kubernetesConnectTimeout }}"
readTimeout: "{{ .Values.agent.kubernetesReadTimeout }}"
{{- if .Values.agent.directConnection }}
directConnection: true
{{- else }}
{{- if .Values.agent.jenkinsUrl }}
jenkinsUrl: "{{ tpl .Values.agent.jenkinsUrl . }}"
{{- else }}
jenkinsUrl: "http://{{ template "jenkins.fullname" . }}.{{ template "jenkins.namespace" . }}.svc.{{.Values.clusterZone}}:{{.Values.controller.servicePort}}{{ default "" .Values.controller.jenkinsUriPrefix }}"
{{- end }}
{{- if not .Values.agent.websocket }}
{{- if .Values.agent.jenkinsTunnel }}
jenkinsTunnel: "{{ tpl .Values.agent.jenkinsTunnel . }}"
{{- else }}
jenkinsTunnel: "{{ template "jenkins.fullname" . }}-agent.{{ template "jenkins.namespace" . }}.svc.{{.Values.clusterZone}}:{{ .Values.controller.agentListenerPort }}"
{{- end }}
{{- else }}
webSocket: true
{{- end }}
{{- end }}
skipTlsVerify: {{ .Values.agent.skipTlsVerify | default false}}
usageRestricted: {{ .Values.agent.usageRestricted | default false}}
maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }}
retentionTimeout: {{ .Values.agent.retentionTimeout | quote }}
waitForPodSec: {{ .Values.agent.waitForPodSec | quote }}
name: {{ $name | quote }}
namespace: "{{ template "jenkins.agent.namespace" . }}"
restrictedPssSecurityContext: {{ .Values.agent.restrictedPssSecurityContext }}
serverUrl: "{{ .Values.kubernetesURL }}"
credentialsId: "{{ .Values.credentialsId }}"
{{- if .Values.agent.enabled }}
podLabels:
- key: "jenkins/{{ .Release.Name }}-{{ .Values.agent.componentName }}"
value: "true"
{{- range $key, $val := .Values.agent.podLabels }}
- key: {{ $key | quote }}
value: {{ $val | quote }}
{{- end }}
templates:
{{- if not .Values.agent.disableDefaultAgent }}
{{- include "jenkins.casc.podTemplate" . | nindent 8 }}
{{- end }}
{{- if .Values.additionalAgents }}
{{- /* save .Values.agent */}}
{{- $agent := .Values.agent }}
{{- range $name, $additionalAgent := .Values.additionalAgents }}
{{- $additionalContainersEmpty := and (hasKey $additionalAgent "additionalContainers") (empty $additionalAgent.additionalContainers) }}
{{- /* merge original .Values.agent into additional agent to ensure it at least has the default values */}}
{{- $additionalAgent := merge $additionalAgent $agent }}
{{- /* clear list of additional containers in case it is configured empty for this agent (merge might have overwritten that) */}}
{{- if $additionalContainersEmpty }}
{{- $_ := set $additionalAgent "additionalContainers" list }}
{{- end }}
{{- /* set .Values.agent to $additionalAgent */}}
{{- $_ := set $.Values "agent" $additionalAgent }}
{{- include "jenkins.casc.podTemplate" $ | nindent 8 }}
{{- end }}
{{- /* restore .Values.agent */}}
{{- $_ := set .Values "agent" $agent }}
{{- end }}
{{- with .Values.agent.podTemplates }}
{{- range $key, $val := . }}
{{- tpl $val $ | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- /* restore root */}}
{{- $_ := set $ "Values" $oldRoot.Values }}
{{- end }}
{{- if .Values.controller.csrf.defaultCrumbIssuer.enabled }}
crumbIssuer:
standard:
excludeClientIPFromCrumb: {{ if .Values.controller.csrf.defaultCrumbIssuer.proxyCompatability }}true{{ else }}false{{- end }}
{{- end }}
{{- include "jenkins.casc.security" . }}
{{- with .Values.controller.scriptApproval }}
scriptApproval:
approvedSignatures:
{{- range $key, $val := . }}
- "{{ $val }}"
{{- end }}
{{- end }}
unclassified:
location:
{{- with .Values.controller.jenkinsAdminEmail }}
adminAddress: {{ . }}
{{- end }}
url: {{ template "jenkins.url" . }}
{{- end -}}
{{/*
Returns a name template to be used for jcasc configmaps, using
suffix passed in at call as index 0
*/}}
{{- define "jenkins.casc.configName" -}}
{{- $name := index . 0 -}}
{{- $root := index . 1 -}}
"{{- include "jenkins.fullname" $root -}}-jenkins-{{ $name }}"
{{- end -}}
{{/*
Returns kubernetes pod template configuration as code
*/}}
{{- define "jenkins.casc.podTemplate" -}}
- name: "{{ .Values.agent.podName }}"
namespace: "{{ template "jenkins.agent.namespace" . }}"
{{- if .Values.agent.annotations }}
annotations:
{{- range $key, $value := .Values.agent.annotations }}
- key: {{ $key }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
id: {{ sha256sum (toYaml .Values.agent) }}
containers:
- name: "{{ .Values.agent.sideContainerName }}"
alwaysPullImage: {{ .Values.agent.alwaysPullImage }}
args: "{{ .Values.agent.args | replace "$" "^$" }}"
{{- with .Values.agent.command }}
command: {{ . }}
{{- end }}
envVars:
- envVar:
{{- if .Values.agent.directConnection }}
key: "JENKINS_DIRECT_CONNECTION"
{{- if .Values.agent.jenkinsTunnel }}
value: "{{ tpl .Values.agent.jenkinsTunnel . }}"
{{- else }}
value: "{{ template "jenkins.fullname" . }}-agent.{{ template "jenkins.namespace" . }}.svc.{{.Values.clusterZone}}:{{ .Values.controller.agentListenerPort }}"
{{- end }}
{{- else }}
key: "JENKINS_URL"
{{- if .Values.agent.jenkinsUrl }}
value: {{ tpl .Values.agent.jenkinsUrl . }}
{{- else }}
value: "http://{{ template "jenkins.fullname" . }}.{{ template "jenkins.namespace" . }}.svc.{{.Values.clusterZone}}:{{.Values.controller.servicePort}}{{ default "/" .Values.controller.jenkinsUriPrefix }}"
{{- end }}
{{- end }}
image: "{{ .Values.agent.image.repository }}:{{ .Values.agent.image.tag }}"
{{- if .Values.agent.livenessProbe }}
livenessProbe:
execArgs: {{.Values.agent.livenessProbe.execArgs | quote}}
failureThreshold: {{.Values.agent.livenessProbe.failureThreshold}}
initialDelaySeconds: {{.Values.agent.livenessProbe.initialDelaySeconds}}
periodSeconds: {{.Values.agent.livenessProbe.periodSeconds}}
successThreshold: {{.Values.agent.livenessProbe.successThreshold}}
timeoutSeconds: {{.Values.agent.livenessProbe.timeoutSeconds}}
{{- end }}
privileged: "{{- if .Values.agent.privileged }}true{{- else }}false{{- end }}"
resourceLimitCpu: {{.Values.agent.resources.limits.cpu}}
resourceLimitMemory: {{.Values.agent.resources.limits.memory}}
{{- with .Values.agent.resources.limits.ephemeralStorage }}
resourceLimitEphemeralStorage: {{.}}
{{- end }}
resourceRequestCpu: {{.Values.agent.resources.requests.cpu}}
resourceRequestMemory: {{.Values.agent.resources.requests.memory}}
{{- with .Values.agent.resources.requests.ephemeralStorage }}
resourceRequestEphemeralStorage: {{.}}
{{- end }}
{{- with .Values.agent.runAsUser }}
runAsUser: {{ . }}
{{- end }}
{{- with .Values.agent.runAsGroup }}
runAsGroup: {{ . }}
{{- end }}
ttyEnabled: {{ .Values.agent.TTYEnabled }}
workingDir: {{ .Values.agent.workingDir }}
{{- range $additionalContainers := .Values.agent.additionalContainers }}
- name: "{{ $additionalContainers.sideContainerName }}"
alwaysPullImage: {{ $additionalContainers.alwaysPullImage | default $.Values.agent.alwaysPullImage }}
args: "{{ $additionalContainers.args | replace "$" "^$" }}"
{{- with $additionalContainers.command }}
command: {{ . }}
{{- end }}
envVars:
- envVar:
key: "JENKINS_URL"
{{- if $additionalContainers.jenkinsUrl }}
value: {{ tpl ($additionalContainers.jenkinsUrl) . }}
{{- else }}
value: "http://{{ template "jenkins.fullname" $ }}.{{ template "jenkins.namespace" $ }}.svc.{{ $.Values.clusterZone }}:{{ $.Values.controller.servicePort }}{{ default "/" $.Values.controller.jenkinsUriPrefix }}"
{{- end }}
image: "{{ $additionalContainers.image.repository }}:{{ $additionalContainers.image.tag }}"
{{- if $additionalContainers.livenessProbe }}
livenessProbe:
execArgs: {{$additionalContainers.livenessProbe.execArgs | quote}}
failureThreshold: {{$additionalContainers.livenessProbe.failureThreshold}}
initialDelaySeconds: {{$additionalContainers.livenessProbe.initialDelaySeconds}}
periodSeconds: {{$additionalContainers.livenessProbe.periodSeconds}}
successThreshold: {{$additionalContainers.livenessProbe.successThreshold}}
timeoutSeconds: {{$additionalContainers.livenessProbe.timeoutSeconds}}
{{- end }}
privileged: "{{- if $additionalContainers.privileged }}true{{- else }}false{{- end }}"
resourceLimitCpu: {{ if $additionalContainers.resources }}{{ $additionalContainers.resources.limits.cpu }}{{ else }}{{ $.Values.agent.resources.limits.cpu }}{{ end }}
resourceLimitMemory: {{ if $additionalContainers.resources }}{{ $additionalContainers.resources.limits.memory }}{{ else }}{{ $.Values.agent.resources.limits.memory }}{{ end }}
resourceRequestCpu: {{ if $additionalContainers.resources }}{{ $additionalContainers.resources.requests.cpu }}{{ else }}{{ $.Values.agent.resources.requests.cpu }}{{ end }}
resourceRequestMemory: {{ if $additionalContainers.resources }}{{ $additionalContainers.resources.requests.memory }}{{ else }}{{ $.Values.agent.resources.requests.memory }}{{ end }}
{{- if or $additionalContainers.runAsUser $.Values.agent.runAsUser }}
runAsUser: {{ $additionalContainers.runAsUser | default $.Values.agent.runAsUser }}
{{- end }}
{{- if or $additionalContainers.runAsGroup $.Values.agent.runAsGroup }}
runAsGroup: {{ $additionalContainers.runAsGroup | default $.Values.agent.runAsGroup }}
{{- end }}
ttyEnabled: {{ $additionalContainers.TTYEnabled | default $.Values.agent.TTYEnabled }}
workingDir: {{ $additionalContainers.workingDir | default $.Values.agent.workingDir }}
{{- end }}
{{- if or .Values.agent.envVars .Values.agent.secretEnvVars }}
envVars:
{{- range $index, $var := .Values.agent.envVars }}
- envVar:
key: {{ $var.name }}
value: {{ tpl $var.value $ }}
{{- end }}
{{- range $index, $var := .Values.agent.secretEnvVars }}
- secretEnvVar:
key: {{ $var.key }}
secretName: {{ $var.secretName }}
secretKey: {{ $var.secretKey }}
optional: {{ $var.optional | default false }}
{{- end }}
{{- end }}
idleMinutes: {{ .Values.agent.idleMinutes }}
instanceCap: 2147483647
{{- if .Values.agent.hostNetworking }}
hostNetwork: {{ .Values.agent.hostNetworking }}
{{- end }}
{{- if .Values.agent.imagePullSecretName }}
imagePullSecrets:
- name: {{ .Values.agent.imagePullSecretName }}
{{- end }}
label: "{{ .Release.Name }}-{{ .Values.agent.componentName }} {{ .Values.agent.customJenkinsLabels | join " " }}"
{{- if .Values.agent.nodeSelector }}
nodeSelector:
{{- $local := dict "first" true }}
{{- range $key, $value := .Values.agent.nodeSelector }}
{{- if $local.first }} {{ else }},{{ end }}
{{- $key }}={{ tpl $value $ }}
{{- $_ := set $local "first" false }}
{{- end }}
{{- end }}
nodeUsageMode: {{ quote .Values.agent.nodeUsageMode }}
podRetention: {{ .Values.agent.podRetention }}
showRawYaml: {{ .Values.agent.showRawYaml }}
{{- $asaname := default (include "jenkins.serviceAccountAgentName" .) .Values.agent.serviceAccount -}}
{{- if or (.Values.agent.useDefaultServiceAccount) (.Values.agent.serviceAccount) }}
serviceAccount: "{{ $asaname }}"
{{- end }}
slaveConnectTimeoutStr: "{{ .Values.agent.connectTimeout }}"
{{- if .Values.agent.volumes }}
volumes:
{{- range $index, $volume := .Values.agent.volumes }}
-{{- if (eq $volume.type "ConfigMap") }} configMapVolume:
{{- else if (eq $volume.type "EmptyDir") }} emptyDirVolume:
{{- else if (eq $volume.type "EphemeralVolume") }} genericEphemeralVolume:
{{- else if (eq $volume.type "HostPath") }} hostPathVolume:
{{- else if (eq $volume.type "Nfs") }} nfsVolume:
{{- else if (eq $volume.type "PVC") }} persistentVolumeClaim:
{{- else if (eq $volume.type "Secret") }} secretVolume:
{{- else }} {{ $volume.type }}:
{{- end }}
{{- range $key, $value := $volume }}
{{- if not (eq $key "type") }}
{{ $key }}: {{ if kindIs "string" $value }}{{ tpl $value $ | quote }}{{ else }}{{ $value }}{{ end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.workspaceVolume }}
workspaceVolume:
{{- if (eq .Values.agent.workspaceVolume.type "DynamicPVC") }}
dynamicPVC:
{{- else if (eq .Values.agent.workspaceVolume.type "EmptyDir") }}
emptyDirWorkspaceVolume:
{{- else if (eq .Values.agent.workspaceVolume.type "EphemeralVolume") }}
genericEphemeralVolume:
{{- else if (eq .Values.agent.workspaceVolume.type "HostPath") }}
hostPathWorkspaceVolume:
{{- else if (eq .Values.agent.workspaceVolume.type "Nfs") }}
nfsWorkspaceVolume:
{{- else if (eq .Values.agent.workspaceVolume.type "PVC") }}
persistentVolumeClaimWorkspaceVolume:
{{- else }}
{{ .Values.agent.workspaceVolume.type }}:
{{- end }}
{{- range $key, $value := .Values.agent.workspaceVolume }}
{{- if not (eq $key "type") }}
{{ $key }}: {{ if kindIs "string" $value }}{{ tpl $value $ | quote }}{{ else }}{{ $value }}{{ end }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.yamlTemplate }}
yaml: |-
{{- tpl (trim .Values.agent.yamlTemplate) . | nindent 4 }}
{{- end }}
yamlMergeStrategy: {{ .Values.agent.yamlMergeStrategy }}
inheritYamlMergeStrategy: {{ .Values.agent.inheritYamlMergeStrategy }}
{{- end -}}
{{- define "jenkins.kubernetes-version" -}}
{{- if .Values.controller.installPlugins -}}
{{- range .Values.controller.installPlugins -}}
{{- if hasPrefix "kubernetes:" . }}
{{- $split := splitList ":" . }}
{{- printf "%s" (index $split 1 ) -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "jenkins.casc.security" }}
security:
{{- with .Values.controller.JCasC }}
{{- if .security }}
{{- .security | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "jenkins.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "jenkins.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account for Jenkins agents to use
*/}}
{{- define "jenkins.serviceAccountAgentName" -}}
{{- if .Values.serviceAccountAgent.create -}}
{{ default (printf "%s-%s" (include "jenkins.fullname" .) "agent") .Values.serviceAccountAgent.name }}
{{- else -}}
{{ default "default" .Values.serviceAccountAgent.name }}
{{- end -}}
{{- end -}}
{{/*
Create a full tag name for controller image
*/}}
{{- define "controller.image.tag" -}}
{{- if .Values.controller.image.tagLabel -}}
{{- default (printf "%s-%s" .Chart.AppVersion .Values.controller.image.tagLabel) .Values.controller.image.tag -}}
{{- else -}}
{{- default .Chart.AppVersion .Values.controller.image.tag -}}
{{- end -}}
{{- end -}}
{{/*
Create the HTTP port for interacting with the controller
*/}}
{{- define "controller.httpPort" -}}
{{- if .Values.controller.httpsKeyStore.enable -}}
{{- .Values.controller.httpsKeyStore.httpPort -}}
{{- else -}}
{{- .Values.controller.targetPort -}}
{{- end -}}
{{- end -}}
{{- define "jenkins.configReloadContainer" -}}
{{- $root := index . 0 -}}
{{- $containerName := index . 1 -}}
{{- $containerType := index . 2 -}}
- name: {{ $containerName }}
image: "{{ $root.Values.controller.sidecars.configAutoReload.image.registry }}/{{ $root.Values.controller.sidecars.configAutoReload.image.repository }}:{{ $root.Values.controller.sidecars.configAutoReload.image.tag }}"
imagePullPolicy: {{ $root.Values.controller.sidecars.configAutoReload.imagePullPolicy }}
{{- if $root.Values.controller.sidecars.configAutoReload.containerSecurityContext }}
securityContext: {{- toYaml $root.Values.controller.sidecars.configAutoReload.containerSecurityContext | nindent 4 }}
{{- end }}
{{- if $root.Values.controller.sidecars.configAutoReload.envFrom }}
envFrom:
{{ (tpl (toYaml $root.Values.controller.sidecars.configAutoReload.envFrom) $root) | indent 4 }}
{{- end }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: LABEL
value: "{{ template "jenkins.fullname" $root }}-jenkins-config"
- name: FOLDER
value: "{{ $root.Values.controller.sidecars.configAutoReload.folder }}"
- name: NAMESPACE
value: '{{ $root.Values.controller.sidecars.configAutoReload.searchNamespace | default (include "jenkins.namespace" $root) }}'
{{- if eq $containerType "init" }}
- name: METHOD
value: "LIST"
{{- else if $root.Values.controller.sidecars.configAutoReload.sleepTime }}
- name: METHOD
value: "SLEEP"
- name: SLEEP_TIME
value: "{{ $root.Values.controller.sidecars.configAutoReload.sleepTime }}"
{{- end }}
{{- if eq $containerType "sidecar" }}
- name: REQ_URL
value: "{{- default "http" $root.Values.controller.sidecars.configAutoReload.scheme }}://localhost:{{- include "controller.httpPort" $root -}}{{- $root.Values.controller.jenkinsUriPrefix -}}/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
- name: REQ_METHOD
value: "POST"
- name: REQ_RETRY_CONNECT
value: "{{ $root.Values.controller.sidecars.configAutoReload.reqRetryConnect }}"
{{- if $root.Values.controller.sidecars.configAutoReload.skipTlsVerify }}
- name: REQ_SKIP_TLS_VERIFY
value: "true"
{{- end }}
{{- end }}
{{- if $root.Values.controller.sidecars.configAutoReload.env }}
{{- range $envVarItem := $root.Values.controller.sidecars.configAutoReload.env -}}
{{- if or (ne $containerType "init") (ne .name "METHOD") }}
{{- (tpl (toYaml (list $envVarItem)) $root) | nindent 4 }}
{{- end -}}
{{- end -}}
{{- end }}
{{- if $root.Values.controller.sidecars.configAutoReload.logging.configuration.override }}
- name: LOG_CONFIG
value: "{{ $root.Values.controller.jenkinsHome }}/auto-reload/auto-reload-config.yaml"
{{- end }}
resources:
{{ toYaml $root.Values.controller.sidecars.configAutoReload.resources | indent 4 }}
volumeMounts:
- name: sc-config-volume
mountPath: {{ $root.Values.controller.sidecars.configAutoReload.folder | quote }}
- name: jenkins-home
mountPath: {{ $root.Values.controller.jenkinsHome }}
{{- if $root.Values.persistence.subPath }}
subPath: {{ $root.Values.persistence.subPath }}
{{- end }}
{{- if $root.Values.controller.sidecars.configAutoReload.logging.configuration.override }}
- name: auto-reload-config
mountPath: {{ $root.Values.controller.jenkinsHome }}/auto-reload
- name: auto-reload-config-logs
mountPath: {{ $root.Values.controller.jenkinsHome }}/auto-reload-logs
{{- end }}
{{- if $root.Values.controller.sidecars.configAutoReload.additionalVolumeMounts }}
{{ (tpl (toYaml $root.Values.controller.sidecars.configAutoReload.additionalVolumeMounts) $root) | indent 4 }}
{{- end }}
{{- end -}}
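
The `jenkins.casc.defaults` helper above iterates over `additionalAgents` and `additionalClouds`, merging each entry over the base `agent` values (and, for clouds, over the root values). A rough sketch of the values shape it consumes; the agent name, image, and remote API server URL are hypothetical examples, not chart defaults:

```yaml
additionalAgents:
  maven:                                      # hypothetical pod template name
    podName: maven
    customJenkinsLabels:
      - maven
    image:
      repository: jenkins/jnlp-agent-maven    # assumed image, not a chart default
      tag: latest
additionalClouds:
  remote-cluster:                             # hypothetical cloud name
    additionalAgentsOverride: true            # skip the additionalAgents above for this cloud
    kubernetesURL: https://remote.example.com:6443   # placeholder API server URL
```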

View File

@ -0,0 +1,60 @@
{{- if .Values.controller.sidecars.configAutoReload.logging.configuration.override }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.fullname" . }}-auto-reload-config
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": {{ template "jenkins.name" . }}
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ $.Release.Service }}"
"app.kubernetes.io/instance": "{{ $.Release.Name }}"
"app.kubernetes.io/component": "{{ $.Values.controller.componentName }}"
data:
auto-reload-config.yaml: |-
version: 1
disable_existing_loggers: false
root:
level: {{ .Values.controller.sidecars.configAutoReload.logging.configuration.logLevel }}
handlers:
{{- if .Values.controller.sidecars.configAutoReload.logging.configuration.logToConsole}}
- console
{{- end }}
{{- if .Values.controller.sidecars.configAutoReload.logging.configuration.logToFile }}
- file
{{- end }}
handlers:
{{- if .Values.controller.sidecars.configAutoReload.logging.configuration.logToConsole}}
console:
class: logging.StreamHandler
level: {{ .Values.controller.sidecars.configAutoReload.logging.configuration.logLevel }}
formatter: {{ .Values.controller.sidecars.configAutoReload.logging.configuration.formatter }}
{{- end }}
{{- if .Values.controller.sidecars.configAutoReload.logging.configuration.logToFile }}
file:
class : logging.handlers.RotatingFileHandler
formatter: {{ .Values.controller.sidecars.configAutoReload.logging.configuration.formatter }}
filename: {{ .Values.controller.jenkinsHome }}/auto-reload-logs/file.log
maxBytes: {{ .Values.controller.sidecars.configAutoReload.logging.configuration.maxBytes }}
backupCount: {{ .Values.controller.sidecars.configAutoReload.logging.configuration.backupCount }}
{{- end }}
formatters:
JSON:
"()": logger.JsonFormatter
format: "%(levelname)s %(message)s"
rename_fields:
message: msg
levelname: level
LOGFMT:
"()": logger.LogfmtFormatter
keys:
- time
- level
- msg
mapping:
time: asctime
level: levelname
msg: message
{{- end }}
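
This ConfigMap is only rendered when the sidecar's log configuration override is enabled. A minimal sketch of the values that drive it; every key below is documented in the table earlier, and only the changed values are illustrative:

```yaml
controller:
  sidecars:
    configAutoReload:
      logging:
        configuration:
          override: true       # required for this ConfigMap to be created
          logLevel: DEBUG      # documented default is INFO
          formatter: LOGFMT    # JSON and LOGFMT are the formatters defined above
          logToConsole: true
          logToFile: false
```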

View File

@ -0,0 +1,18 @@
{{- if .Values.controller.initScripts -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.fullname" . }}-init-scripts
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
data:
{{- range $key, $val := .Values.controller.initScripts }}
init{{ $key }}.groovy: |-
{{ tpl $val $ | indent 4 }}
{{- end }}
{{- end }}
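
Each entry of `controller.initScripts` is rendered by the ConfigMap above as `init<key>.groovy`. A minimal sketch, with a placeholder script body:

```yaml
controller:
  initScripts:
    welcome: |
      // hypothetical Groovy init script
      println("Jenkins controller started")
```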

View File

@ -0,0 +1,92 @@
{{- $jenkinsHome := .Values.controller.jenkinsHome -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.fullname" . }}
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
data:
apply_config.sh: |-
set -e
{{- if .Values.controller.initializeOnce }}
if [ -f {{ .Values.controller.jenkinsHome }}/initialization-completed ]; then
echo "controller was previously initialized, refusing to re-initialize"
exit 0
fi
{{- end }}
echo "disable Setup Wizard"
# Prevent Setup Wizard when JCasC is enabled
echo $JENKINS_VERSION > {{ .Values.controller.jenkinsHome }}/jenkins.install.UpgradeWizard.state
echo $JENKINS_VERSION > {{ .Values.controller.jenkinsHome }}/jenkins.install.InstallUtil.lastExecVersion
{{- if .Values.controller.overwritePlugins }}
echo "remove all plugins from shared volume"
# remove all plugins from shared volume
rm -rf {{ .Values.controller.jenkinsHome }}/plugins/*
{{- end }}
{{- if .Values.controller.JCasC.overwriteConfiguration }}
echo "deleting all XML config files"
rm -f {{ .Values.controller.jenkinsHome }}/config.xml
rm -f {{ .Values.controller.jenkinsHome }}/*plugins*.xml
find {{ .Values.controller.jenkinsHome }} -maxdepth 1 -type f -iname '*configuration*.xml' -exec rm -f {} \;
{{- end }}
{{- if .Values.controller.installPlugins }}
echo "download plugins"
# Install missing plugins
cp /var/jenkins_config/plugins.txt {{ .Values.controller.jenkinsHome }};
rm -rf {{ .Values.controller.jenkinsRef }}/plugins/*.lock
version () { echo "$@" | awk -F. '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }
if [ -f "{{ .Values.controller.jenkinsWar }}" ] && [ -n "$(command -v jenkins-plugin-cli)" 2>/dev/null ] && [ $(version $(jenkins-plugin-cli --version)) -ge $(version "2.1.1") ]; then
jenkins-plugin-cli --verbose --war "{{ .Values.controller.jenkinsWar }}" --plugin-file "{{ .Values.controller.jenkinsHome }}/plugins.txt" --latest {{ .Values.controller.installLatestPlugins }}{{- if .Values.controller.installLatestSpecifiedPlugins }} --latest-specified{{- end }};
else
/usr/local/bin/install-plugins.sh `echo $(cat {{ .Values.controller.jenkinsHome }}/plugins.txt)`;
fi
echo "copy plugins to shared volume"
# Copy plugins to shared volume
yes n | cp -i {{ .Values.controller.jenkinsRef }}/plugins/* /var/jenkins_plugins/;
{{- end }}
{{- if not .Values.controller.sidecars.configAutoReload.enabled }}
echo "copy configuration as code files"
mkdir -p {{ .Values.controller.jenkinsHome }}/casc_configs;
rm -rf {{ .Values.controller.jenkinsHome }}/casc_configs/*
{{- if or .Values.controller.JCasC.defaultConfig .Values.controller.JCasC.configScripts }}
cp -v /var/jenkins_config/*.yaml {{ .Values.controller.jenkinsHome }}/casc_configs
{{- end }}
{{- end }}
echo "finished initialization"
{{- if .Values.controller.initializeOnce }}
touch {{ .Values.controller.jenkinsHome }}/initialization-completed
{{- end }}
{{- if not .Values.controller.sidecars.configAutoReload.enabled }}
# Only add config to this script if we aren't auto-reloading otherwise the pod will restart upon each config change:
{{- if .Values.controller.JCasC.defaultConfig }}
jcasc-default-config.yaml: |-
{{- include "jenkins.casc.defaults" . |nindent 4}}
{{- end }}
{{- range $key, $val := .Values.controller.JCasC.configScripts }}
{{ $key }}.yaml: |-
{{ tpl $val $| indent 4 }}
{{- end }}
{{- end }}
plugins.txt: |-
{{- if .Values.controller.installPlugins }}
{{- range $installPlugin := .Values.controller.installPlugins }}
{{- $installPlugin | nindent 4 }}
{{- end }}
{{- range $addlPlugin := .Values.controller.additionalPlugins }}
{{- /* duplicate plugin check */}}
{{- range $installPlugin := $.Values.controller.installPlugins }}
{{- if eq (splitList ":" $addlPlugin | first) (splitList ":" $installPlugin | first) }}
{{- $message := print "[PLUGIN CONFLICT] controller.additionalPlugins contains '" $addlPlugin "'" }}
{{- $message := print $message " but controller.installPlugins already contains '" $installPlugin "'." }}
{{- $message := print $message " Override controller.installPlugins to use '" $addlPlugin "' plugin." }}
{{- fail $message }}
{{- end }}
{{- end }}
{{- $addlPlugin | nindent 4 }}
{{- end }}
{{- end }}
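
`apply_config.sh` above consumes the rendered `plugins.txt`, and the duplicate check fails the whole render if an `additionalPlugins` entry names a plugin that `installPlugins` already pins. A hedged values sketch; the plugin IDs are real plugin names but the versions are placeholders, not chart defaults:

```yaml
controller:
  installPlugins:
    - kubernetes:0.0.0              # placeholder version, pin to a real release
    - configuration-as-code:0.0.0   # placeholder version
  additionalPlugins:
    - job-dsl:0.0.0                 # must not repeat an installPlugins plugin ID,
                                    # otherwise the template calls fail
  installLatestPlugins: false       # keep the pinned versions instead of resolving latest
  overwritePlugins: false           # true wipes plugins/* on the shared volume first
  initializeOnce: false             # true skips apply_config.sh after the first successful run
```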

View File

@ -0,0 +1,151 @@
{{- if .Values.checkDeprecation }}
{{- if .Values.master }}
{{ fail "`master` no longer exists. It has been renamed to `controller`" }}
{{- end }}
{{- if .Values.controller.imageTag }}
{{ fail "`controller.imageTag` no longer exists. Please use `controller.image.tag` instead" }}
{{- end }}
{{- if .Values.controller.slaveListenerPort }}
{{ fail "`controller.slaveListenerPort` no longer exists. It has been renamed to `controller.agentListenerPort`" }}
{{- end }}
{{- if .Values.controller.slaveHostPort }}
{{ fail "`controller.slaveHostPort` no longer exists. It has been renamed to `controller.agentListenerHostPort`" }}
{{- end }}
{{- if .Values.controller.slaveKubernetesNamespace }}
{{ fail "`controller.slaveKubernetesNamespace` no longer exists. It has been renamed to `agent.namespace`" }}
{{- end }}
{{- if .Values.controller.slaveDefaultsProviderTemplate }}
{{ fail "`controller.slaveDefaultsProviderTemplate` no longer exists. It has been renamed to `agent.defaultsProviderTemplate`" }}
{{- end }}
{{- if .Values.controller.useSecurity }}
{{ fail "`controller.useSecurity` no longer exists. It has been renamed to `controller.adminSecret`" }}
{{- end }}
{{- if .Values.controller.slaveJenkinsUrl }}
{{ fail "`controller.slaveJenkinsUrl` no longer exists. It has been renamed to `agent.jenkinsUrl`" }}
{{- end }}
{{- if .Values.controller.slaveJenkinsTunnel }}
{{ fail "`controller.slaveJenkinsTunnel` no longer exists. It has been renamed to `agent.jenkinsTunnel`" }}
{{- end }}
{{- if .Values.controller.slaveConnectTimeout }}
{{ fail "`controller.slaveConnectTimeout` no longer exists. It has been renamed to `agent.kubernetesConnectTimeout`" }}
{{- end }}
{{- if .Values.controller.slaveReadTimeout }}
{{ fail "`controller.slaveReadTimeout` no longer exists. It has been renamed to `agent.kubernetesReadTimeout`" }}
{{- end }}
{{- if .Values.controller.slaveListenerServiceType }}
{{ fail "`controller.slaveListenerServiceType` no longer exists. It has been renamed to `controller.agentListenerServiceType`" }}
{{- end }}
{{- if .Values.controller.slaveListenerLoadBalancerIP }}
{{ fail "`controller.slaveListenerLoadBalancerIP` no longer exists. It has been renamed to `controller.agentListenerLoadBalancerIP`" }}
{{- end }}
{{- if .Values.controller.slaveListenerServiceAnnotations }}
{{ fail "`controller.slaveListenerServiceAnnotations` no longer exists. It has been renamed to `controller.agentListenerServiceAnnotations`" }}
{{- end }}
{{- if .Values.agent.slaveConnectTimeout }}
{{ fail "`agent.slaveConnectTimeout` no longer exists. It has been renamed to `agent.connectTimeout`" }}
{{- end }}
{{- if .Values.NetworkPolicy }}
{{- if .Values.NetworkPolicy.Enabled }}
{{ fail "`NetworkPolicy.Enabled` no longer exists. It has been renamed to `networkPolicy.enabled`" }}
{{- end }}
{{- if .Values.NetworkPolicy.ApiVersion }}
{{ fail "`NetworkPolicy.ApiVersion` no longer exists. It has been renamed to `networkPolicy.apiVersion`" }}
{{- end }}
{{ fail "NetworkPolicy.* values have been renamed, please check the documentation" }}
{{- end }}
{{- if .Values.rbac.install }}
{{ fail "`rbac.install` no longer exists. It has been renamed to `rbac.create` and is enabled by default!" }}
{{- end }}
{{- if .Values.rbac.serviceAccountName }}
{{ fail "`rbac.serviceAccountName` no longer exists. It has been renamed to `serviceAccount.name`" }}
{{- end }}
{{- if .Values.rbac.serviceAccountAnnotations }}
{{ fail "`rbac.serviceAccountAnnotations` no longer exists. It has been renamed to `serviceAccount.annotations`" }}
{{- end }}
{{- if .Values.rbac.roleRef }}
{{ fail "`rbac.roleRef` no longer exists. RBAC roles are now generated, please check the documentation" }}
{{- end }}
{{- if .Values.rbac.roleKind }}
{{ fail "`rbac.roleKind` no longer exists. RBAC roles are now generated, please check the documentation" }}
{{- end }}
{{- if .Values.rbac.roleBindingKind }}
{{ fail "`rbac.roleBindingKind` no longer exists. RBAC roles are now generated, please check the documentation" }}
{{- end }}
{{- if .Values.controller.JCasC.pluginVersion }}
{{ fail "controller.JCasC.pluginVersion has been deprecated, please use controller.installPlugins instead" }}
{{- end }}
{{- if .Values.controller.deploymentLabels }}
{{ fail "`controller.deploymentLabels` no longer exists. It has been renamed to `controller.statefulSetLabels`" }}
{{- end }}
{{- if .Values.controller.deploymentAnnotations }}
{{ fail "`controller.deploymentAnnotations` no longer exists. It has been renamed to `controller.statefulSetAnnotations`" }}
{{- end }}
{{- if .Values.controller.rollingUpdate }}
{{ fail "`controller.rollingUpdate` no longer exists. It is no longer relevant, since a StatefulSet is used for the Jenkins controller" }}
{{- end }}
{{- if .Values.controller.tag }}
{{ fail "`controller.tag` no longer exists. It has been renamed to `controller.image.tag`" }}
{{- end }}
{{- if .Values.controller.tagLabel }}
{{ fail "`controller.tagLabel` no longer exists. It has been renamed to `controller.image.tagLabel`" }}
{{- end }}
{{- if .Values.controller.adminSecret }}
{{ fail "`controller.adminSecret` no longer exists. It has been renamed to `controller.admin.createSecret`" }}
{{- end }}
{{- if .Values.controller.adminUser }}
{{ fail "`controller.adminUser` no longer exists. It has been renamed to `controller.admin.username`" }}
{{- end }}
{{- if .Values.controller.adminPassword }}
{{ fail "`controller.adminPassword` no longer exists. It has been renamed to `controller.admin.password`" }}
{{- end }}
{{- if .Values.controller.sidecars.other }}
{{ fail "`controller.sidecars.other` no longer exists. It has been renamed to `controller.sidecars.additionalSidecarContainers`" }}
{{- end }}
{{- if .Values.agent.tag }}
{{ fail "`agent.tag` no longer exists. It has been renamed to `agent.image.tag`" }}
{{- end }}
{{- if .Values.backup }}
{{ fail "`controller.backup` no longer exists." }}
{{- end }}
{{- if .Values.helmtest.bats.tag }}
{{ fail "`helmtest.bats.tag` no longer exists. It has been renamed to `helmtest.bats.image.tag`" }}
{{- end }}
{{- end }}
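
Every branch above calls `fail`, so rendering aborts with the matching message as soon as a retired key is found (the whole block is skipped if `checkDeprecation` is disabled). As an illustration, a values file carrying one of the old names is rejected until it is rewritten to the new key:

```yaml
# old (render fails):
#   controller:
#     slaveListenerPort: 50000
# new:
controller:
  agentListenerPort: 50000
```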

View File

@ -0,0 +1,41 @@
{{- if not (contains "jenkins-home" (quote .Values.persistence.volumes)) }}
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) -}}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
{{- if .Values.persistence.annotations }}
annotations:
{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
name: {{ template "jenkins.fullname" . }}
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.persistence.labels }}
{{ toYaml .Values.persistence.labels | indent 4 }}
{{- end }}
spec:
{{- if .Values.persistence.dataSource }}
dataSource:
{{ toYaml .Values.persistence.dataSource | indent 4 }}
{{- end }}
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
{{- end }}
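
The claim above is only created when persistence is enabled, no `existingClaim` is supplied, and no volume named `jenkins-home` is passed in via `persistence.volumes`. A sketch of the values it consumes; the size and access mode shown are illustrative:

```yaml
persistence:
  enabled: true
  existingClaim: ""        # set to reuse a claim you created yourself
  storageClass: ""         # "" omits storageClassName (cluster default), "-" renders storageClassName: ""
  accessMode: ReadWriteOnce
  size: 8Gi
  annotations: {}
  labels: {}
  # dataSource:            # optional, copied verbatim into spec.dataSource
  #   name: jenkins-home-snapshot
  #   kind: VolumeSnapshot
  #   apiGroup: snapshot.storage.k8s.io
```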

View File

@ -0,0 +1,53 @@
{{- $root := . }}
{{- if .Values.controller.sidecars.configAutoReload.enabled }}
{{- range $key, $val := .Values.controller.JCasC.configScripts }}
{{- if $val }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.casc.configName" (list (printf "config-%s" $key) $ )}}
namespace: {{ template "jenkins.namespace" $root }}
labels:
"app.kubernetes.io/name": {{ template "jenkins.name" $root}}
{{- if $root.Values.renderHelmLabels }}
"helm.sh/chart": "{{ $root.Chart.Name }}-{{ $root.Chart.Version }}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ $.Release.Service }}"
"app.kubernetes.io/instance": "{{ $.Release.Name }}"
"app.kubernetes.io/component": "{{ $.Values.controller.componentName }}"
{{ template "jenkins.fullname" $root }}-jenkins-config: "true"
{{- if $root.Values.controller.JCasC.configMapAnnotations }}
annotations:
{{ toYaml $root.Values.controller.JCasC.configMapAnnotations | indent 4 }}
{{- end }}
data:
{{ $key }}.yaml: |-
{{ tpl $val $| indent 4 }}
{{- end }}
{{- end }}
{{- if .Values.controller.JCasC.defaultConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.casc.configName" (list "jcasc-config" $ )}}
namespace: {{ template "jenkins.namespace" $root }}
labels:
"app.kubernetes.io/name": {{ template "jenkins.name" $root}}
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ $root.Chart.Name }}-{{ $root.Chart.Version }}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ $.Release.Service }}"
"app.kubernetes.io/instance": "{{ $.Release.Name }}"
"app.kubernetes.io/component": "{{ $.Values.controller.componentName }}"
{{ template "jenkins.fullname" $root }}-jenkins-config: "true"
{{- if $root.Values.controller.JCasC.configMapAnnotations }}
annotations:
{{ toYaml $root.Values.controller.JCasC.configMapAnnotations | indent 4 }}
{{- end }}
data:
jcasc-default-config.yaml: |-
{{- include "jenkins.casc.defaults" . | nindent 4 }}
{{- end}}
{{- end }}
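
With the config-reload sidecar enabled, every `configScripts` entry gets its own ConfigMap carrying the `<fullname>-jenkins-config: "true"` label that the sidecar watches, so edits are picked up without restarting the controller. A sketch of the values shape; the JCasC body is an illustrative example, not a chart default:

```yaml
controller:
  sidecars:
    configAutoReload:
      enabled: true
  JCasC:
    configMapAnnotations: {}
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: "Configured as code by the jenkins Helm chart"
```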

View File

@ -0,0 +1,43 @@
{{- if .Values.controller.agentListenerEnabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "jenkins.fullname" . }}-agent
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.controller.agentListenerServiceAnnotations }}
annotations:
{{- toYaml .Values.controller.agentListenerServiceAnnotations | nindent 4 }}
{{- end }}
spec:
{{- if .Values.controller.agentListenerExternalTrafficPolicy }}
externalTrafficPolicy: {{.Values.controller.agentListenerExternalTrafficPolicy}}
{{- end }}
ports:
- port: {{ .Values.controller.agentListenerPort }}
targetPort: {{ .Values.controller.agentListenerPort }}
{{- if (and (eq .Values.controller.agentListenerServiceType "NodePort") (not (empty .Values.controller.agentListenerNodePort))) }}
nodePort: {{ .Values.controller.agentListenerNodePort }}
{{- end }}
name: agent-listener
selector:
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
type: {{ .Values.controller.agentListenerServiceType }}
{{if eq .Values.controller.agentListenerServiceType "LoadBalancer"}}
{{- if .Values.controller.agentListenerLoadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.controller.agentListenerLoadBalancerSourceRanges | indent 4 }}
{{- end }}
{{- end }}
{{- if and (eq .Values.controller.agentListenerServiceType "LoadBalancer") (.Values.controller.agentListenerLoadBalancerIP) }}
loadBalancerIP: {{ .Values.controller.agentListenerLoadBalancerIP }}
{{- end }}
{{- end }}
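
This Service is rendered only when the agent listener is enabled; its type, port and LoadBalancer restrictions all come straight from values. A sketch in which the IP and CIDR are placeholders:

```yaml
controller:
  agentListenerEnabled: true
  agentListenerPort: 50000
  agentListenerServiceType: ClusterIP     # NodePort and LoadBalancer are also handled above
  agentListenerServiceAnnotations: {}
  # agentListenerNodePort: 30500          # only consulted for type NodePort
  # agentListenerExternalTrafficPolicy: Local
  # agentListenerLoadBalancerIP: 203.0.113.10   # only consulted for type LoadBalancer
  # agentListenerLoadBalancerSourceRanges:
  #   - 10.0.0.0/8
```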

View File

@ -0,0 +1,16 @@
{{- if .Values.awsSecurityGroupPolicies.enabled -}}
{{- range .Values.awsSecurityGroupPolicies.policies -}}
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
name: {{ .name }}
namespace: {{ template "jenkins.namespace" $ }}
spec:
podSelector:
{{- toYaml .podSelector | nindent 6}}
securityGroups:
groupIds:
{{- toYaml .securityGroupIds | nindent 6}}
---
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,26 @@
{{- if and .Values.controller.prometheus.enabled .Values.controller.prometheus.alertingrules }}
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "jenkins.fullname" . }}
{{- if .Values.controller.prometheus.prometheusRuleNamespace }}
namespace: {{ .Values.controller.prometheus.prometheusRuleNamespace }}
{{- else }}
namespace: {{ template "jenkins.namespace" . }}
{{- end }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- range $key, $val := .Values.controller.prometheus.alertingRulesAdditionalLabels }}
{{ $key }}: {{ $val | quote }}
{{- end}}
spec:
groups:
{{ toYaml .Values.controller.prometheus.alertingrules | indent 2 }}
{{- end }}
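
The PrometheusRule is emitted only when both the Prometheus integration and `alertingrules` are set, and whatever sits under `alertingrules` is copied verbatim into `spec.groups`. A sketch with a single illustrative alert; the expression assumes a scrape job labelled `jenkins` and is not part of the chart:

```yaml
controller:
  prometheus:
    enabled: true
    # prometheusRuleNamespace: monitoring   # optional, falls back to the chart namespace
    alertingRulesAdditionalLabels: {}
    alertingrules:
      - name: jenkins
        rules:
          - alert: JenkinsDown
            expr: up{job="jenkins"} == 0
            for: 5m
            labels:
              severity: critical
```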

View File

@ -0,0 +1,24 @@
{{- if .Values.controller.backendconfig.enabled }}
apiVersion: {{ .Values.controller.backendconfig.apiVersion }}
kind: BackendConfig
metadata:
name: {{ .Values.controller.backendconfig.name }}
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.controller.backendconfig.labels }}
{{ toYaml .Values.controller.backendconfig.labels | indent 4 }}
{{- end }}
{{- if .Values.controller.backendconfig.annotations }}
annotations:
{{ toYaml .Values.controller.backendconfig.annotations | indent 4 }}
{{- end }}
spec:
{{ toYaml .Values.controller.backendconfig.spec | indent 2 }}
{{- end }}

View File

@ -0,0 +1,77 @@
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if .Values.controller.ingress.enabled }}
{{- if semverCompare ">=1.19-0" $kubeTargetVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" $kubeTargetVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: {{ .Values.controller.ingress.apiVersion }}
{{- end }}
kind: Ingress
metadata:
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.controller.ingress.labels }}
{{ toYaml .Values.controller.ingress.labels | indent 4 }}
{{- end }}
{{- if .Values.controller.ingress.annotations }}
annotations:
{{ toYaml .Values.controller.ingress.annotations | indent 4 }}
{{- end }}
name: {{ template "jenkins.fullname" . }}
spec:
{{- if .Values.controller.ingress.ingressClassName }}
ingressClassName: {{ .Values.controller.ingress.ingressClassName | quote }}
{{- end }}
rules:
- http:
paths:
{{- if empty (.Values.controller.ingress.paths) }}
- backend:
{{- if semverCompare ">=1.19-0" $kubeTargetVersion }}
service:
name: {{ template "jenkins.fullname" . }}
port:
number: {{ .Values.controller.servicePort }}
pathType: ImplementationSpecific
{{- else }}
serviceName: {{ template "jenkins.fullname" . }}
servicePort: {{ .Values.controller.servicePort }}
{{- end }}
{{- if .Values.controller.ingress.path }}
path: {{ .Values.controller.ingress.path }}
{{- end -}}
{{- else }}
{{ tpl (toYaml .Values.controller.ingress.paths | indent 6) . }}
{{- end -}}
{{- if .Values.controller.ingress.hostName }}
host: {{ tpl .Values.controller.ingress.hostName . | quote }}
{{- end }}
{{- if .Values.controller.ingress.resourceRootUrl }}
- http:
paths:
- backend:
{{- if semverCompare ">=1.19-0" $kubeTargetVersion }}
service:
name: {{ template "jenkins.fullname" . }}
port:
number: {{ .Values.controller.servicePort }}
pathType: ImplementationSpecific
{{- else }}
serviceName: {{ template "jenkins.fullname" . }}
servicePort: {{ .Values.controller.servicePort }}
{{- end }}
host: {{ tpl .Values.controller.ingress.resourceRootUrl . | quote }}
{{- end }}
{{- if .Values.controller.ingress.tls }}
tls:
{{ tpl (toYaml .Values.controller.ingress.tls ) . | indent 4 }}
{{- end -}}
{{- end }}
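
The template above chooses the networking API group from the cluster version and falls back to a single catch-all rule when `paths` is empty. A values sketch; the class, host and TLS secret names are placeholders:

```yaml
controller:
  ingress:
    enabled: true
    ingressClassName: nginx           # placeholder class
    hostName: jenkins.example.com     # placeholder host
    path: /                           # only used when paths: is empty
    annotations: {}
    labels: {}
    tls:
      - hosts:
          - jenkins.example.com
        secretName: jenkins-tls       # placeholder Secret name
```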

View File

@ -0,0 +1,76 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: {{ .Values.networkPolicy.apiVersion }}
metadata:
name: "{{ .Release.Name }}-{{ .Values.controller.componentName }}"
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
spec:
podSelector:
matchLabels:
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
ingress:
# Allow web access to the UI
- ports:
- port: {{ .Values.controller.targetPort }}
{{- if .Values.controller.agentListenerEnabled }}
# Allow inbound connections from agents
- from:
{{- if .Values.networkPolicy.internalAgents.allowed }}
- podSelector:
matchLabels:
"jenkins/{{ .Release.Name }}-{{ .Values.agent.componentName }}": "true"
{{- range $k,$v:= .Values.networkPolicy.internalAgents.podLabels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- if .Values.networkPolicy.internalAgents.namespaceLabels }}
namespaceSelector:
matchLabels:
{{- range $k,$v:= .Values.networkPolicy.internalAgents.namespaceLabels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{- end }}
{{- if or .Values.networkPolicy.externalAgents.ipCIDR .Values.networkPolicy.externalAgents.except }}
- ipBlock:
cidr: {{ required "ipCIDR is required if you wish to allow external agents to connect to Jenkins Controller." .Values.networkPolicy.externalAgents.ipCIDR }}
{{- if .Values.networkPolicy.externalAgents.except }}
except:
{{- range .Values.networkPolicy.externalAgents.except }}
- {{ . }}
{{- end }}
{{- end }}
{{- end }}
ports:
- port: {{ .Values.controller.agentListenerPort }}
{{- end }}
{{- if .Values.agent.enabled }}
---
kind: NetworkPolicy
apiVersion: {{ .Values.networkPolicy.apiVersion }}
metadata:
name: "{{ .Release.Name }}-{{ .Values.agent.componentName }}"
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
spec:
podSelector:
matchLabels:
# DefaultDeny
"jenkins/{{ .Release.Name }}-{{ .Values.agent.componentName }}": "true"
{{- end }}
{{- end }}
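
Two policies are rendered here: one that admits UI traffic and, optionally, agent traffic to the controller, and a bare selector policy that default-denies pods carrying the agent label. A sketch of the values feeding the ingress rules; the CIDR is a placeholder:

```yaml
networkPolicy:
  enabled: true
  apiVersion: networking.k8s.io/v1
  internalAgents:
    allowed: true
    podLabels: {}
    namespaceLabels: {}
  externalAgents:
    ipCIDR: 192.0.2.0/24    # placeholder; required whenever external agents must reach the listener port
    except: []
```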

View File

@ -0,0 +1,34 @@
{{- if .Values.controller.podDisruptionBudget.enabled }}
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if semverCompare ">=1.21-0" $kubeTargetVersion -}}
apiVersion: policy/v1
{{- else if semverCompare ">=1.5-0" $kubeTargetVersion -}}
apiVersion: policy/v1beta1
{{- else -}}
apiVersion: {{ .Values.controller.podDisruptionBudget.apiVersion }}
{{- end }}
kind: PodDisruptionBudget
metadata:
name: {{ template "jenkins.fullname" . }}-pdb
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.controller.podDisruptionBudget.labels -}}
{{ toYaml .Values.controller.podDisruptionBudget.labels | nindent 4 }}
{{- end }}
{{- if .Values.controller.podDisruptionBudget.annotations }}
annotations: {{ toYaml .Values.controller.podDisruptionBudget.annotations | nindent 4 }}
{{- end }}
spec:
maxUnavailable: {{ .Values.controller.podDisruptionBudget.maxUnavailable }}
selector:
matchLabels:
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- end }}
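
A sketch of the budget values; since the controller runs as a single-replica StatefulSet, `maxUnavailable: 0` effectively blocks voluntary evictions:

```yaml
controller:
  podDisruptionBudget:
    enabled: true
    maxUnavailable: 0
    labels: {}
    annotations: {}
    # apiVersion: policy/v1beta1   # only consulted when neither semver branch above matches
```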

View File

@ -0,0 +1,30 @@
{{- if .Values.controller.googlePodMonitor.enabled }}
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
name: {{ template "jenkins.fullname" . }}
{{- if .Values.controller.googlePodMonitor.serviceMonitorNamespace }}
namespace: {{ .Values.controller.googlePodMonitor.serviceMonitorNamespace }}
{{- else }}
namespace: {{ template "jenkins.namespace" . }}
{{- end }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
spec:
endpoints:
- interval: {{ .Values.controller.googlePodMonitor.scrapeInterval }}
port: http
path: {{ .Values.controller.jenkinsUriPrefix }}{{ .Values.controller.googlePodMonitor.scrapeEndpoint }}
selector:
matchLabels:
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- end }}

View File

@ -0,0 +1,34 @@
{{- if .Values.controller.route.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
metadata:
namespace: {{ template "jenkins.namespace" . }}
labels:
app: {{ template "jenkins.fullname" . }}
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
component: "{{ .Release.Name }}-{{ .Values.controller.componentName }}"
{{- if .Values.controller.route.labels }}
{{ toYaml .Values.controller.route.labels | indent 4 }}
{{- end }}
{{- if .Values.controller.route.annotations }}
annotations:
{{ toYaml .Values.controller.route.annotations | indent 4 }}
{{- end }}
name: {{ template "jenkins.fullname" . }}
spec:
host: {{ .Values.controller.route.path }}
port:
targetPort: http
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: {{ template "jenkins.fullname" . }}
weight: 100
wildcardPolicy: None
{{- end }}

View File

@ -0,0 +1,56 @@
{{- if .Values.controller.secondaryingress.enabled }}
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- $serviceName := include "jenkins.fullname" . -}}
{{- $servicePort := .Values.controller.servicePort -}}
{{- if semverCompare ">=1.19-0" $kubeTargetVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" $kubeTargetVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: {{ .Values.controller.secondaryingress.apiVersion }}
{{- end }}
kind: Ingress
metadata:
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.controller.secondaryingress.labels -}}
{{ toYaml .Values.controller.secondaryingress.labels | nindent 4 }}
{{- end }}
{{- if .Values.controller.secondaryingress.annotations }}
annotations: {{ toYaml .Values.controller.secondaryingress.annotations | nindent 4 }}
{{- end }}
name: {{ template "jenkins.fullname" . }}-secondary
spec:
{{- if .Values.controller.secondaryingress.ingressClassName }}
ingressClassName: {{ .Values.controller.secondaryingress.ingressClassName | quote }}
{{- end }}
rules:
- host: {{ .Values.controller.secondaryingress.hostName }}
http:
paths:
{{- range .Values.controller.secondaryingress.paths }}
- path: {{ . | quote }}
backend:
{{ if semverCompare ">=1.19-0" $kubeTargetVersion }}
service:
name: {{ $serviceName }}
port:
number: {{ $servicePort }}
pathType: ImplementationSpecific
{{ else }}
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{ end }}
{{- end}}
{{- if .Values.controller.secondaryingress.tls }}
tls:
{{ toYaml .Values.controller.secondaryingress.tls | indent 4 }}
{{- end -}}
{{- end }}

View File

@ -0,0 +1,45 @@
{{- if .Values.controller.prometheus.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "jenkins.fullname" . }}
{{- if .Values.controller.prometheus.serviceMonitorNamespace }}
namespace: {{ .Values.controller.prometheus.serviceMonitorNamespace }}
{{- else }}
namespace: {{ template "jenkins.namespace" . }}
{{- end }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- range $key, $val := .Values.controller.prometheus.serviceMonitorAdditionalLabels }}
{{ $key }}: {{ $val | quote }}
{{- end}}
spec:
endpoints:
- interval: {{ .Values.controller.prometheus.scrapeInterval }}
port: http
path: {{ .Values.controller.jenkinsUriPrefix }}{{ .Values.controller.prometheus.scrapeEndpoint }}
{{- with .Values.controller.prometheus.relabelings }}
relabelings:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.controller.prometheus.metricRelabelings }}
metricRelabelings:
{{- toYaml . | nindent 6 }}
{{- end }}
jobLabel: {{ template "jenkins.fullname" . }}
namespaceSelector:
matchNames:
- "{{ template "jenkins.namespace" $ }}"
selector:
matchLabels:
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- end }}

View File

@ -0,0 +1,427 @@
{{- if .Capabilities.APIVersions.Has "apps/v1" }}
apiVersion: apps/v1
{{- else }}
apiVersion: apps/v1beta1
{{- end }}
kind: StatefulSet
metadata:
name: {{ template "jenkins.fullname" . }}
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- range $key, $val := .Values.controller.statefulSetLabels }}
{{ $key }}: {{ $val | quote }}
{{- end}}
{{- if .Values.controller.statefulSetAnnotations }}
annotations:
{{ toYaml .Values.controller.statefulSetAnnotations | indent 4 }}
{{- end }}
spec:
serviceName: {{ template "jenkins.fullname" . }}
replicas: 1
selector:
matchLabels:
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
{{- if .Values.controller.updateStrategy }}
updateStrategy:
{{ toYaml .Values.controller.updateStrategy | indent 4 }}
{{- end }}
template:
metadata:
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- range $key, $val := .Values.controller.podLabels }}
{{ $key }}: {{ $val | quote }}
{{- end}}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
{{- if .Values.controller.initScripts }}
checksum/config-init-scripts: {{ include (print $.Template.BasePath "/config-init-scripts.yaml") . | sha256sum }}
{{- end }}
{{- if .Values.controller.podAnnotations }}
{{ tpl (toYaml .Values.controller.podAnnotations | indent 8) . }}
{{- end }}
spec:
{{- if .Values.controller.schedulerName }}
schedulerName: {{ .Values.controller.schedulerName }}
{{- end }}
{{- if .Values.controller.nodeSelector }}
nodeSelector:
{{ toYaml .Values.controller.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.controller.tolerations }}
tolerations:
{{ toYaml .Values.controller.tolerations | indent 8 }}
{{- end }}
{{- if .Values.controller.affinity }}
affinity:
{{ toYaml .Values.controller.affinity | indent 8 }}
{{- end }}
{{- if .Values.controller.topologySpreadConstraints }}
topologySpreadConstraints:
{{ toYaml .Values.controller.topologySpreadConstraints | indent 8 }}
{{- end }}
{{- if quote .Values.controller.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.controller.terminationGracePeriodSeconds }}
{{- end }}
{{- if .Values.controller.priorityClassName }}
priorityClassName: {{ .Values.controller.priorityClassName }}
{{- end }}
{{- if .Values.controller.shareProcessNamespace }}
shareProcessNamespace: true
{{- end }}
{{- if not .Values.controller.enableServiceLinks }}
enableServiceLinks: false
{{- end }}
{{- if .Values.controller.usePodSecurityContext }}
securityContext:
{{- if kindIs "map" .Values.controller.podSecurityContextOverride }}
{{- tpl (toYaml .Values.controller.podSecurityContextOverride | nindent 8) . -}}
{{- else }}
      {{/* The rest of this section should be replaced with the contents of this comment once the runAsUser, fsGroup, and securityContextCapabilities Helm chart values have been removed:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
*/}}
runAsUser: {{ default 0 .Values.controller.runAsUser }}
{{- if and (.Values.controller.runAsUser) (.Values.controller.fsGroup) }}
{{- if not (eq (int .Values.controller.runAsUser) 0) }}
fsGroup: {{ .Values.controller.fsGroup }}
runAsNonRoot: true
{{- end }}
{{- if .Values.controller.securityContextCapabilities }}
capabilities:
{{- toYaml .Values.controller.securityContextCapabilities | nindent 10 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
serviceAccountName: "{{ template "jenkins.serviceAccountName" . }}"
{{- if .Values.controller.hostNetworking }}
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
{{- end }}
{{- if .Values.controller.hostAliases }}
hostAliases:
{{- toYaml .Values.controller.hostAliases | nindent 8 }}
{{- end }}
initContainers:
{{- if .Values.controller.customInitContainers }}
{{ tpl (toYaml .Values.controller.customInitContainers) . | indent 8 }}
{{- end }}
{{- if .Values.controller.sidecars.configAutoReload.enabled }}
{{- include "jenkins.configReloadContainer" (list $ "config-reload-init" "init") | nindent 8 }}
{{- end}}
- name: "init"
image: "{{ .Values.controller.image.registry }}/{{ .Values.controller.image.repository }}:{{- include "controller.image.tag" . -}}"
imagePullPolicy: "{{ .Values.controller.image.pullPolicy }}"
{{- if .Values.controller.containerSecurityContext }}
securityContext: {{- toYaml .Values.controller.containerSecurityContext | nindent 12 }}
{{- end }}
command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
{{- if .Values.controller.initContainerEnvFrom }}
envFrom:
{{ (tpl (toYaml .Values.controller.initContainerEnvFrom) .) | indent 12 }}
{{- end }}
{{- if .Values.controller.initContainerEnv }}
env:
{{ (tpl (toYaml .Values.controller.initContainerEnv) .) | indent 12 }}
{{- end }}
resources:
{{- if .Values.controller.initContainerResources }}
{{ toYaml .Values.controller.initContainerResources | indent 12 }}
{{- else }}
{{ toYaml .Values.controller.resources | indent 12 }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.mounts }}
{{ toYaml .Values.persistence.mounts | indent 12 }}
{{- end }}
- mountPath: {{ .Values.controller.jenkinsHome }}
name: jenkins-home
{{- if .Values.persistence.subPath }}
subPath: {{ .Values.persistence.subPath }}
{{- end }}
- mountPath: /var/jenkins_config
name: jenkins-config
{{- if .Values.controller.installPlugins }}
{{- if .Values.controller.overwritePluginsFromImage }}
- mountPath: {{ .Values.controller.jenkinsRef }}/plugins
name: plugins
{{- end }}
- mountPath: /var/jenkins_plugins
name: plugin-dir
- mountPath: /tmp
name: tmp-volume
{{- end }}
{{- if or .Values.controller.initScripts .Values.controller.initConfigMap }}
- mountPath: {{ .Values.controller.jenkinsHome }}/init.groovy.d
name: init-scripts
{{- end }}
{{- if and .Values.controller.httpsKeyStore.enable (not .Values.controller.httpsKeyStore.disableSecretMount) }}
{{- $httpsJKSDirPath := printf "%s" .Values.controller.httpsKeyStore.path }}
- mountPath: {{ $httpsJKSDirPath }}
name: jenkins-https-keystore
{{- end }}
containers:
- name: jenkins
image: "{{ .Values.controller.image.registry }}/{{ .Values.controller.image.repository }}:{{- include "controller.image.tag" . -}}"
imagePullPolicy: "{{ .Values.controller.image.pullPolicy }}"
{{- if .Values.controller.containerSecurityContext }}
securityContext: {{- toYaml .Values.controller.containerSecurityContext | nindent 12 }}
{{- end }}
{{- if .Values.controller.overrideArgs }}
args: [
{{- range $overrideArg := .Values.controller.overrideArgs }}
"{{- tpl $overrideArg $ }}",
{{- end }}
]
{{- else if .Values.controller.httpsKeyStore.enable }}
{{- $httpsJKSFilePath := printf "%s/%s" .Values.controller.httpsKeyStore.path .Values.controller.httpsKeyStore.fileName }}
args: [ "--httpPort={{.Values.controller.httpsKeyStore.httpPort}}", "--httpsPort={{.Values.controller.targetPort}}", '--httpsKeyStore={{ $httpsJKSFilePath }}', "--httpsKeyStorePassword=$(JENKINS_HTTPS_KEYSTORE_PASSWORD)" ]
{{- else }}
args: [ "--httpPort={{.Values.controller.targetPort}}"]
{{- end }}
{{- if .Values.controller.lifecycle }}
lifecycle:
{{ toYaml .Values.controller.lifecycle | indent 12 }}
{{- end }}
{{- if .Values.controller.terminationMessagePath }}
terminationMessagePath: {{ .Values.controller.terminationMessagePath }}
{{- end }}
{{- if .Values.controller.terminationMessagePolicy }}
terminationMessagePolicy: {{ .Values.controller.terminationMessagePolicy }}
{{- end }}
{{- if .Values.controller.containerEnvFrom }}
envFrom:
{{ (tpl ( toYaml .Values.controller.containerEnvFrom) .) | indent 12 }}
{{- end }}
env:
{{- if .Values.controller.containerEnv }}
{{ (tpl ( toYaml .Values.controller.containerEnv) .) | indent 12 }}
{{- end }}
{{- if or .Values.controller.additionalSecrets .Values.controller.existingSecret .Values.controller.additionalExistingSecrets .Values.controller.admin.createSecret }}
- name: SECRETS
value: /run/secrets/additional
{{- end }}
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: JAVA_OPTS
value: >-
{{ if .Values.controller.sidecars.configAutoReload.enabled }} -Dcasc.reload.token=$(POD_NAME) {{ end }}{{ default "" .Values.controller.javaOpts }}
- name: JENKINS_OPTS
value: >-
{{ if .Values.controller.jenkinsUriPrefix }}--prefix={{ .Values.controller.jenkinsUriPrefix }} {{ end }} --webroot=/var/jenkins_cache/war {{ default "" .Values.controller.jenkinsOpts}}
- name: JENKINS_SLAVE_AGENT_PORT
value: "{{ .Values.controller.agentListenerPort }}"
{{- if .Values.controller.httpsKeyStore.enable }}
- name: JENKINS_HTTPS_KEYSTORE_PASSWORD
{{- if not .Values.controller.httpsKeyStore.disableSecretMount }}
valueFrom:
secretKeyRef:
name: {{ if .Values.controller.httpsKeyStore.jenkinsHttpsJksPasswordSecretName }} {{ .Values.controller.httpsKeyStore.jenkinsHttpsJksPasswordSecretName }} {{ else if .Values.controller.httpsKeyStore.jenkinsHttpsJksSecretName }} {{ .Values.controller.httpsKeyStore.jenkinsHttpsJksSecretName }} {{ else }} {{ template "jenkins.fullname" . }}-https-jks {{ end }}
key: "{{ .Values.controller.httpsKeyStore.jenkinsHttpsJksPasswordSecretKey }}"
{{- else }}
value: {{ .Values.controller.httpsKeyStore.password }}
{{- end }}
{{- end }}
- name: CASC_JENKINS_CONFIG
value: {{ .Values.controller.sidecars.configAutoReload.folder | default (printf "%s/casc_configs" (.Values.controller.jenkinsRef)) }}{{- if .Values.controller.JCasC.configUrls }},{{ join "," .Values.controller.JCasC.configUrls }}{{- end }}
ports:
{{- if .Values.controller.httpsKeyStore.enable }}
- containerPort: {{.Values.controller.httpsKeyStore.httpPort}}
{{- else }}
- containerPort: {{.Values.controller.targetPort}}
{{- end }}
name: http
- containerPort: {{ .Values.controller.agentListenerPort }}
name: agent-listener
{{- if .Values.controller.agentListenerHostPort }}
hostPort: {{ .Values.controller.agentListenerHostPort }}
{{- end }}
{{- if .Values.controller.jmxPort }}
- containerPort: {{ .Values.controller.jmxPort }}
name: jmx
{{- end }}
{{- range $index, $port := .Values.controller.extraPorts }}
- containerPort: {{ $port.port }}
name: {{ $port.name }}
{{- end }}
{{- if and .Values.controller.healthProbes .Values.controller.probes}}
{{- if semverCompare ">=1.16-0" .Capabilities.KubeVersion.GitVersion }}
startupProbe:
{{ tpl (toYaml .Values.controller.probes.startupProbe | indent 12) .}}
{{- end }}
livenessProbe:
{{ tpl (toYaml .Values.controller.probes.livenessProbe | indent 12) .}}
readinessProbe:
{{ tpl (toYaml .Values.controller.probes.readinessProbe | indent 12) .}}
{{- end }}
resources:
{{ toYaml .Values.controller.resources | indent 12 }}
volumeMounts:
{{- if .Values.persistence.mounts }}
{{ toYaml .Values.persistence.mounts | indent 12 }}
{{- end }}
{{- if and .Values.controller.httpsKeyStore.enable (not .Values.controller.httpsKeyStore.disableSecretMount) }}
{{- $httpsJKSDirPath := printf "%s" .Values.controller.httpsKeyStore.path }}
- mountPath: {{ $httpsJKSDirPath }}
name: jenkins-https-keystore
{{- end }}
- mountPath: {{ .Values.controller.jenkinsHome }}
name: jenkins-home
readOnly: false
{{- if .Values.persistence.subPath }}
subPath: {{ .Values.persistence.subPath }}
{{- end }}
- mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
{{- if .Values.controller.installPlugins }}
- mountPath: {{ .Values.controller.jenkinsRef }}/plugins/
name: plugin-dir
readOnly: false
{{- end }}
{{- if or .Values.controller.initScripts .Values.controller.initConfigMap }}
- mountPath: {{ .Values.controller.jenkinsHome }}/init.groovy.d
name: init-scripts
{{- end }}
{{- if .Values.controller.sidecars.configAutoReload.enabled }}
- name: sc-config-volume
mountPath: {{ .Values.controller.sidecars.configAutoReload.folder | default (printf "%s/casc_configs" (.Values.controller.jenkinsRef)) }}
{{- end }}
{{- if or .Values.controller.additionalSecrets .Values.controller.existingSecret .Values.controller.additionalExistingSecrets .Values.controller.admin.createSecret }}
- name: jenkins-secrets
mountPath: /run/secrets/additional
readOnly: true
{{- end }}
- name: jenkins-cache
mountPath: /var/jenkins_cache
- mountPath: /tmp
name: tmp-volume
{{- if .Values.controller.sidecars.configAutoReload.enabled }}
{{- include "jenkins.configReloadContainer" (list $ "config-reload" "sidecar") | nindent 8 }}
{{- end}}
{{- if .Values.controller.sidecars.additionalSidecarContainers}}
{{ tpl (toYaml .Values.controller.sidecars.additionalSidecarContainers | indent 8) .}}
{{- end }}
volumes:
{{- if .Values.persistence.volumes }}
{{ tpl (toYaml .Values.persistence.volumes | indent 6) . }}
{{- end }}
{{- if .Values.controller.sidecars.configAutoReload.logging.configuration.override }}
- name: auto-reload-config
configMap:
name: {{ template "jenkins.fullname" . }}-auto-reload-config
- name: auto-reload-config-logs
emptyDir: {}
{{- end }}
{{- if .Values.controller.installPlugins }}
{{- if .Values.controller.overwritePluginsFromImage }}
- name: plugins
emptyDir: {}
{{- end }}
{{- end }}
{{- if and .Values.controller.initScripts .Values.controller.initConfigMap }}
- name: init-scripts
projected:
sources:
- configMap:
name: {{ template "jenkins.fullname" . }}-init-scripts
- configMap:
name: {{ .Values.controller.initConfigMap }}
{{- else if .Values.controller.initConfigMap }}
- name: init-scripts
configMap:
name: {{ .Values.controller.initConfigMap }}
{{- else if .Values.controller.initScripts }}
- name: init-scripts
configMap:
name: {{ template "jenkins.fullname" . }}-init-scripts
{{- end }}
- name: jenkins-config
configMap:
name: {{ template "jenkins.fullname" . }}
{{- if .Values.controller.installPlugins }}
- name: plugin-dir
emptyDir: {}
{{- end }}
{{- if or .Values.controller.additionalSecrets .Values.controller.existingSecret .Values.controller.additionalExistingSecrets .Values.controller.admin.createSecret }}
- name: jenkins-secrets
projected:
sources:
{{- if .Values.controller.additionalSecrets }}
- secret:
name: {{ template "jenkins.fullname" . }}-additional-secrets
{{- end }}
{{- if .Values.controller.additionalExistingSecrets }}
{{- range $key, $value := .Values.controller.additionalExistingSecrets }}
- secret:
name: {{ tpl $value.name $ }}
items:
- key: {{ tpl $value.keyName $ }}
path: {{ tpl $value.name $ }}-{{ tpl $value.keyName $ }}
{{- end }}
{{- end }}
{{- if .Values.controller.admin.createSecret }}
- secret:
name: {{ .Values.controller.admin.existingSecret | default (include "jenkins.fullname" .) }}
items:
- key: {{ .Values.controller.admin.userKey | default "jenkins-admin-user" }}
path: chart-admin-username
- key: {{ .Values.controller.admin.passwordKey | default "jenkins-admin-password" }}
path: chart-admin-password
{{- end }}
{{- if .Values.controller.existingSecret }}
- secret:
name: {{ .Values.controller.existingSecret }}
{{- end }}
{{- end }}
- name: jenkins-cache
emptyDir: {}
{{- if not (contains "jenkins-home" (quote .Values.persistence.volumes)) }}
- name: jenkins-home
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default (include "jenkins.fullname" .) }}
{{- else }}
emptyDir: {}
{{- end -}}
{{- end }}
- name: sc-config-volume
emptyDir: {}
- name: tmp-volume
emptyDir: {}
{{- if and .Values.controller.httpsKeyStore.enable (not .Values.controller.httpsKeyStore.disableSecretMount) }}
- name: jenkins-https-keystore
secret:
secretName: {{ if .Values.controller.httpsKeyStore.jenkinsHttpsJksSecretName }} {{ .Values.controller.httpsKeyStore.jenkinsHttpsJksSecretName }} {{ else }} {{ template "jenkins.fullname" . }}-https-jks {{ end }}
items:
- key: {{ .Values.controller.httpsKeyStore.jenkinsHttpsJksSecretKey }}
path: {{ .Values.controller.httpsKeyStore.fileName }}
{{- end }}
{{- if .Values.controller.imagePullSecretName }}
imagePullSecrets:
- name: {{ .Values.controller.imagePullSecretName }}
{{- end -}}
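
One part of this StatefulSet worth calling out is the pod security context: it either renders `controller.podSecurityContextOverride` verbatim or falls back to the legacy `runAsUser`/`fsGroup` handling. A sketch of an override matching the target described in the template's own comment:

```yaml
controller:
  usePodSecurityContext: true
  podSecurityContextOverride:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
```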

View File

@ -0,0 +1,56 @@
apiVersion: v1
kind: Service
metadata:
name: {{template "jenkins.fullname" . }}
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.controller.serviceLabels }}
{{ toYaml .Values.controller.serviceLabels | indent 4 }}
{{- end }}
{{- if .Values.controller.serviceAnnotations }}
annotations:
{{ toYaml .Values.controller.serviceAnnotations | indent 4 }}
{{- end }}
spec:
{{- if .Values.controller.serviceExternalTrafficPolicy }}
externalTrafficPolicy: {{.Values.controller.serviceExternalTrafficPolicy}}
{{- end }}
{{- if (and (eq .Values.controller.serviceType "ClusterIP") (not (empty .Values.controller.clusterIP))) }}
clusterIP: {{.Values.controller.clusterIP}}
{{- end }}
ports:
- port: {{.Values.controller.servicePort}}
name: http
targetPort: {{ .Values.controller.targetPort }}
{{- if (and (eq .Values.controller.serviceType "NodePort") (not (empty .Values.controller.nodePort))) }}
nodePort: {{.Values.controller.nodePort}}
{{- end }}
{{- range $index, $port := .Values.controller.extraPorts }}
- port: {{ $port.port }}
name: {{ $port.name }}
{{- if $port.targetPort }}
targetPort: {{ $port.targetPort }}
{{- else }}
targetPort: {{ $port.port }}
{{- end -}}
{{- end }}
selector:
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
type: {{.Values.controller.serviceType}}
{{if eq .Values.controller.serviceType "LoadBalancer"}}
{{- if .Values.controller.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.controller.loadBalancerSourceRanges | indent 4 }}
{{- end }}
{{if .Values.controller.loadBalancerIP}}
loadBalancerIP: {{.Values.controller.loadBalancerIP}}
{{end}}
{{end}}

View File

@ -0,0 +1,149 @@
{{ if .Values.rbac.create }}
{{- $serviceName := include "jenkins.fullname" . -}}
# This role is used to allow Jenkins scheduling of agents via the Kubernetes plugin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $serviceName }}-schedule-agents
namespace: {{ template "jenkins.agent.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "events"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods", "pods/exec", "persistentvolumeclaims"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# We bind the role to the Jenkins service account. The role binding is created in the namespace
# where the agents are supposed to run.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $serviceName }}-schedule-agents
namespace: {{ template "jenkins.agent.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ $serviceName }}-schedule-agents
subjects:
- kind: ServiceAccount
name: {{ template "jenkins.serviceAccountName" .}}
namespace: {{ template "jenkins.namespace" . }}
---
{{- if .Values.rbac.readSecrets }}
# This is needed if you want to use https://jenkinsci.github.io/kubernetes-credentials-provider-plugin/
# as it needs permissions to get/watch/list Secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "jenkins.fullname" . }}-read-secrets
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $serviceName }}-read-secrets
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "jenkins.fullname" . }}-read-secrets
subjects:
- kind: ServiceAccount
name: {{ template "jenkins.serviceAccountName" . }}
namespace: {{ template "jenkins.namespace" . }}
---
{{- end}}
{{- if .Values.controller.sidecars.configAutoReload.enabled }}
# The sidecar container which is responsible for reloading configuration changes
# needs permissions to watch ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "jenkins.fullname" . }}-casc-reload
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $serviceName }}-watch-configmaps
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "jenkins.fullname" . }}-casc-reload
subjects:
- kind: ServiceAccount
name: {{ template "jenkins.serviceAccountName" . }}
namespace: {{ template "jenkins.namespace" . }}
{{- end}}
{{ end }}
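
The RBAC template creates up to three Role/RoleBinding pairs: agent scheduling in the agent namespace, Secret reads for the kubernetes-credentials-provider plugin, and ConfigMap watches for the config-reload sidecar. The switches, as read by the conditionals above:

```yaml
rbac:
  create: true         # agent-scheduling Role/RoleBinding
  readSecrets: false   # set true for the credentials-provider plugin's get/watch/list on Secrets
controller:
  sidecars:
    configAutoReload:
      enabled: true    # adds the ConfigMap watch Role/RoleBinding used by the reload sidecar
```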

View File

@ -0,0 +1,21 @@
{{- if .Values.controller.additionalSecrets -}}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ template "jenkins.fullname" . }}-additional-secrets
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
type: Opaque
data:
{{- range .Values.controller.additionalSecrets }}
{{ .name }}: {{ .value | b64enc }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,29 @@
{{- if .Values.controller.secretClaims -}}
{{- $r := .Release -}}
{{- $v := .Values -}}
{{- $chart := printf "%s-%s" .Chart.Name .Chart.Version -}}
{{- $namespace := include "jenkins.namespace" . -}}
{{- $serviceName := include "jenkins.fullname" . -}}
{{ range .Values.controller.secretClaims }}
---
kind: SecretClaim
apiVersion: vaultproject.io/v1
metadata:
name: {{ $serviceName }}-{{ .name | default .path | lower }}
namespace: {{ $namespace }}
labels:
"app.kubernetes.io/name": '{{ $serviceName }}'
{{- if $v.renderHelmLabels }}
"helm.sh/chart": "{{ $chart }}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ $r.Service }}"
"app.kubernetes.io/instance": "{{ $r.Name }}"
"app.kubernetes.io/component": "{{ $v.controller.componentName }}"
spec:
type: {{ .type | default "Opaque" }}
path: {{ .path }}
{{- if .renew }}
renew: {{ .renew }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,20 @@
{{- if and .Values.controller.httpsKeyStore.enable ( not .Values.controller.httpsKeyStore.jenkinsHttpsJksSecretName ) (not .Values.controller.httpsKeyStore.disableSecretMount) -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "jenkins.fullname" . }}-https-jks
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
type: Opaque
data:
jenkins-jks-file: |
{{ .Values.controller.httpsKeyStore.jenkinsKeyStoreBase64Encoded | indent 4 }}
https-jks-password: {{ .Values.controller.httpsKeyStore.password | b64enc }}
{{- end }}
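
This Secret is only generated when the keystore feature is on, no existing Secret name is supplied, and the secret mount is not disabled; the keystore itself must arrive base64-encoded in values. A sketch in which the keystore data, password, path and port are all placeholders:

```yaml
controller:
  httpsKeyStore:
    enable: true
    disableSecretMount: false
    jenkinsHttpsJksSecretName: ""        # set to reuse an existing Secret instead of this one
    jenkinsKeyStoreBase64Encoded: "<base64 .jks contents>"   # placeholder
    password: changeit                   # placeholder keystore password
    path: /var/jenkins_keystore          # where the StatefulSet mounts the keystore
    fileName: jenkins.jks
    httpPort: 8081                       # plain-HTTP port used once HTTPS terminates in Jenkins
```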

View File

@ -0,0 +1,20 @@
{{- if and (not .Values.controller.admin.existingSecret) (.Values.controller.admin.createSecret) -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "jenkins.fullname" . }}
namespace: {{ template "jenkins.namespace" . }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
type: Opaque
data:
jenkins-admin-password: {{ template "jenkins.password" . }}
jenkins-admin-user: {{ .Values.controller.admin.username | b64enc | quote }}
{{- end }}

View File

@ -0,0 +1,26 @@
{{ if .Values.serviceAccountAgent.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "jenkins.serviceAccountAgentName" . }}
namespace: {{ template "jenkins.agent.namespace" . }}
{{- if .Values.serviceAccountAgent.annotations }}
annotations:
{{ tpl (toYaml .Values.serviceAccountAgent.annotations) . | indent 4 }}
{{- end }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.serviceAccountAgent.extraLabels }}
{{ tpl (toYaml .Values.serviceAccountAgent.extraLabels) . | indent 4 }}
{{- end }}
{{- if .Values.serviceAccountAgent.imagePullSecretName }}
imagePullSecrets:
- name: {{ .Values.serviceAccountAgent.imagePullSecretName }}
{{- end -}}
{{ end }}

View File

@ -0,0 +1,26 @@
{{ if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "jenkins.serviceAccountName" . }}
namespace: {{ template "jenkins.namespace" . }}
{{- if .Values.serviceAccount.annotations }}
annotations:
{{ tpl (toYaml .Values.serviceAccount.annotations) . | indent 4 }}
{{- end }}
labels:
"app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
{{- if .Values.renderHelmLabels }}
"helm.sh/chart": "{{ template "jenkins.label" .}}"
{{- end }}
"app.kubernetes.io/managed-by": "{{ .Release.Service }}"
"app.kubernetes.io/instance": "{{ .Release.Name }}"
"app.kubernetes.io/component": "{{ .Values.controller.componentName }}"
{{- if .Values.serviceAccount.extraLabels }}
{{ tpl (toYaml .Values.serviceAccount.extraLabels) . | indent 4 }}
{{- end }}
{{- if .Values.serviceAccount.imagePullSecretName }}
imagePullSecrets:
- name: {{ .Values.serviceAccount.imagePullSecretName }}
{{- end -}}
{{ end }}
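Both ServiceAccount templates (controller and agent) accept arbitrary annotations, which is the usual hook for cloud IAM bindings; a sketch assuming an AWS EKS IRSA setup, with placeholder role ARNs:

```bash
# Sketch: annotate the controller and agent ServiceAccounts via chart values.
cat > sa-values.yaml <<'EOF'
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/jenkins-controller
serviceAccountAgent:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/jenkins-agent
EOF
helm upgrade --install jenkins jenkins/jenkins -n jenkins -f sa-values.yaml
```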

View File

@ -0,0 +1,49 @@
{{- if .Values.controller.testEnabled }}
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-ui-test-{{ randAlphaNum 5 | lower }}"
namespace: {{ template "jenkins.namespace" . }}
annotations:
"helm.sh/hook": test-success
spec:
{{- if .Values.controller.nodeSelector }}
nodeSelector:
{{ toYaml .Values.controller.nodeSelector | indent 4 }}
{{- end }}
{{- if .Values.controller.tolerations }}
tolerations:
{{ toYaml .Values.controller.tolerations | indent 4 }}
{{- end }}
initContainers:
- name: "test-framework"
image: "{{ .Values.helmtest.bats.image.registry }}/{{ .Values.helmtest.bats.image.repository }}:{{ .Values.helmtest.bats.image.tag }}"
command:
- "bash"
- "-c"
args:
- |
# copy bats to tools dir
set -ex
cp -R /opt/bats /tools/bats/
volumeMounts:
- mountPath: /tools
name: tools
containers:
- name: {{ .Release.Name }}-ui-test
image: "{{ .Values.controller.image.registry }}/{{ .Values.controller.image.repository }}:{{- include "controller.image.tag" . -}}"
command: ["/tools/bats/bin/bats", "-t", "/tests/run.sh"]
volumeMounts:
- mountPath: /tests
name: tests
readOnly: true
- mountPath: /tools
name: tools
volumes:
- name: tests
configMap:
name: {{ template "jenkins.fullname" . }}-tests
- name: tools
emptyDir: {}
restartPolicy: Never
{{- end }}

View File

@ -0,0 +1,14 @@
{{- if .Values.controller.testEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.fullname" . }}-tests
namespace: {{ template "jenkins.namespace" . }}
annotations:
"helm.sh/hook": test
data:
run.sh: |-
@test "Testing Jenkins UI is accessible" {
curl --retry 48 --retry-delay 10 {{ template "jenkins.fullname" . }}:{{ .Values.controller.servicePort }}{{ default "" .Values.controller.jenkinsUriPrefix }}/login
}
{{- end }}
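The test Pod and this ConfigMap are wired to Helm's test hooks, so when `controller.testEnabled` is set they can be exercised directly; a sketch assuming a release named `jenkins` in the `jenkins` namespace:

```bash
# Run the chart's smoke test (the bats-driven UI check defined above) and print its logs.
helm test jenkins -n jenkins --logs
```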

File diff suppressed because it is too large Load Diff

View File

@ -2,7 +2,6 @@ annotations:
  catalog.cattle.io/auto-install: linkerd-crds
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: Linkerd Control Plane
-  catalog.cattle.io/featured: "5"
  catalog.cattle.io/kube-version: '>=1.22.0-0'
  catalog.cattle.io/release-name: linkerd-control-plane
apiVersion: v2

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
OWNERS
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,6 @@
dependencies:
- name: partials
repository: file://../partials
version: 0.1.0
digest: sha256:8e42f9c9d4a2dc883f17f94d6044c97518ced19ad0922f47b8760e47135369ba
generated: "2021-12-06T11:42:50.784240359-05:00"

View File

@ -0,0 +1,29 @@
annotations:
catalog.cattle.io/auto-install: linkerd-crds
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Linkerd Control Plane
catalog.cattle.io/featured: "5"
catalog.cattle.io/kube-version: '>=1.22.0-0'
catalog.cattle.io/release-name: linkerd-control-plane
apiVersion: v2
appVersion: edge-24.9.2
dependencies:
- name: partials
repository: file://./charts/partials
version: 0.1.0
description: 'Linkerd gives you observability, reliability, and security for your
microservices — with no code change required. '
home: https://linkerd.io
icon: file://assets/icons/linkerd-control-plane.png
keywords:
- service-mesh
kubeVersion: '>=1.22.0-0'
maintainers:
- email: cncf-linkerd-dev@lists.cncf.io
name: Linkerd authors
url: https://linkerd.io/
name: linkerd-control-plane
sources:
- https://github.com/linkerd/linkerd2/
type: application
version: 2024.9.2

View File

@ -0,0 +1,321 @@
# linkerd-control-plane
Linkerd gives you observability, reliability, and security
for your microservices — with no code change required.
![Version: 2024.9.2](https://img.shields.io/badge/Version-2024.9.2-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![AppVersion: edge-XX.X.X](https://img.shields.io/badge/AppVersion-edge--XX.X.X-informational?style=flat-square)
**Homepage:** <https://linkerd.io>
## Quickstart and documentation
You can run Linkerd on any Kubernetes cluster in a matter of seconds. See the
[Linkerd Getting Started Guide][getting-started] for how.
For more comprehensive documentation, start with the [Linkerd
docs][linkerd-docs].
## Prerequisite: linkerd-crds chart
Before installing this chart, please install the `linkerd-crds` chart, which
creates all the CRDs that the components from the current chart require.
## Prerequisite: identity certificates
The identity component of Linkerd requires setting up a trust anchor
certificate, and an issuer certificate with its key. These need to be provided
to Helm by the user (unlike when using the `linkerd install` CLI which can
generate these automatically). You can provide your own, or follow [these
instructions](https://linkerd.io/2/tasks/generate-certificates/) to generate new
ones.
Alternatively, both trust anchor and identity issuer certificates may be
derived from in-cluster resources. Existing CA (trust anchor) certificates
**must** live in a `ConfigMap` resource named `linkerd-identity-trust-roots`.
Issuer certificates **must** live in a `Secret` named
`linkerd-identity-issuer`. Both resources should exist in the control-plane's
install namespace. In order to use an existing CA, Linkerd needs to be
installed with `identity.externalCA=true`. To use an existing issuer
certificate, Linkerd should be installed with
`identity.issuer.scheme=kubernetes.io/tls`.
A more comprehensive description is in the [automatic certificate rotation
guide](https://linkerd.io/2.12/tasks/automatically-rotating-control-plane-tls-credentials/#a-note-on-third-party-cert-management-solutions).
Note that the provided certificates must be ECDSA certificates.
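As a concrete starting point, the linked instructions amount to generating an ECDSA trust anchor and issuer pair; a sketch using the smallstep `step` CLI (any tool that produces ECDSA certificates works):

```bash
# Trust anchor (root CA) and issuer certificate, both ECDSA, as required above.
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key
```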
## Adding Linkerd's Helm repository
Included here for completeness' sake; the repository should already have been added when
`linkerd-base` was installed.
```bash
# To add the repo for Linkerd edge releases:
helm repo add linkerd https://helm.linkerd.io/edge
```
## Installing the chart
You must provide the certificates and keys described in the preceding section,
and the same expiration date you used to generate the Issuer certificate.
```bash
helm install linkerd-control-plane -n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
linkerd/linkerd-control-plane
```
Note that you must install this chart in the same namespace where you installed
the `linkerd-base` chart.
## Setting High-Availability
Besides the default `values.yaml` file, the chart provides a `values-ha.yaml`
file that overrides some default values so as to set things up for a
high-availability scenario, analogous to the `--ha` option in `linkerd install`.
Values such as a higher number of replicas, higher memory/CPU limits, and
affinities are specified in that file.
You can get ahold of `values-ha.yaml` by fetching the chart files:
```bash
helm fetch --untar linkerd/linkerd-control-plane
```
Then use the `-f` flag to provide the override file, for example:
```bash
helm install linkerd-control-plane -n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
-f linkerd2/values-ha.yaml
linkerd/linkerd-control-plane
```
## Get involved
* Check out Linkerd's source code at [GitHub][linkerd2].
* Join Linkerd's [user mailing list][linkerd-users], [developer mailing
list][linkerd-dev], and [announcements mailing list][linkerd-announce].
* Follow [@linkerd][twitter] on Twitter.
* Join the [Linkerd Slack][slack].
[getting-started]: https://linkerd.io/2/getting-started/
[linkerd2]: https://github.com/linkerd/linkerd2
[linkerd-announce]: https://lists.cncf.io/g/cncf-linkerd-announce
[linkerd-dev]: https://lists.cncf.io/g/cncf-linkerd-dev
[linkerd-docs]: https://linkerd.io/2/overview/
[linkerd-users]: https://lists.cncf.io/g/cncf-linkerd-users
[slack]: http://slack.linkerd.io
[twitter]: https://twitter.com/linkerd
## Extensions for Linkerd
The current chart installs the core Linkerd components, which grant you
reliability and security features. Other functionality is available through
extensions. Check the corresponding docs for each one of the following
extensions:
* Observability:
[Linkerd-viz](https://github.com/linkerd/linkerd2/blob/main/viz/charts/linkerd-viz/README.md)
* Multicluster:
[Linkerd-multicluster](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/README.md)
* Tracing:
[Linkerd-jaeger](https://github.com/linkerd/linkerd2/blob/main/jaeger/charts/linkerd-jaeger/README.md)
## Requirements
Kubernetes: `>=1.22.0-0`
| Repository | Name | Version |
|------------|------|---------|
| file://../partials | partials | 0.1.0 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| clusterDomain | string | `"cluster.local"` | Kubernetes DNS Domain name to use |
| clusterNetworks | string | `"10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8"` | The cluster networks for which service discovery is performed. This should include the pod and service networks, but need not include the node network. By default, all IPv4 private networks and all accepted IPv6 ULAs are specified so that resolution works in typical Kubernetes environments. |
| cniEnabled | bool | `false` | enabling this omits the NET_ADMIN capability in the PSP and the proxy-init container when injecting the proxy; requires the linkerd-cni plugin to already be installed |
| commonLabels | object | `{}` | Labels to apply to all resources |
| controlPlaneTracing | bool | `false` | enables control plane tracing |
| controlPlaneTracingNamespace | string | `"linkerd-jaeger"` | namespace to send control plane traces to |
| controller.podDisruptionBudget | object | `{"maxUnavailable":1}` | sets pod disruption budget parameter for all deployments |
| controller.podDisruptionBudget.maxUnavailable | int | `1` | Maximum number of pods that can be unavailable during disruption |
| controllerGID | int | `-1` | Optional customisation of the group ID for the control plane components (the group ID will be omitted if lower than 0) |
| controllerImage | string | `"cr.l5d.io/linkerd/controller"` | Docker image for the destination and identity components |
| controllerImageVersion | string | `""` | Optionally allow a specific container image Tag (or SHA) to be specified for the controllerImage. |
| controllerLogFormat | string | `"plain"` | Log format for the control plane components |
| controllerLogLevel | string | `"info"` | Log level for the control plane components |
| controllerReplicas | int | `1` | Number of replicas for each control plane pod |
| controllerUID | int | `2103` | User ID for the control plane components |
| debugContainer.image.name | string | `"cr.l5d.io/linkerd/debug"` | Docker image for the debug container |
| debugContainer.image.pullPolicy | string | imagePullPolicy | Pull policy for the debug container image |
| debugContainer.image.version | string | linkerdVersion | Tag for the debug container image |
| deploymentStrategy | object | `{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"}}` | default kubernetes deployment strategy |
| destinationController.livenessProbe.timeoutSeconds | int | `1` | |
| destinationController.meshedHttp2ClientProtobuf.keep_alive.interval.seconds | int | `10` | |
| destinationController.meshedHttp2ClientProtobuf.keep_alive.timeout.seconds | int | `3` | |
| destinationController.meshedHttp2ClientProtobuf.keep_alive.while_idle | bool | `true` | |
| destinationController.readinessProbe.timeoutSeconds | int | `1` | |
| disableHeartBeat | bool | `false` | Set to true to not start the heartbeat cronjob |
| disableIPv6 | bool | `true` | disables routing IPv6 traffic in addition to IPv4 traffic through the proxy (IPv6 routing only available as of proxy-init v2.3.0 and linkerd-cni v1.4.0) |
| enableEndpointSlices | bool | `true` | enables the use of EndpointSlice informers for the destination service; enableEndpointSlices should be set to true only if EndpointSlice K8s feature gate is on |
| enableH2Upgrade | bool | `true` | Allow proxies to perform transparent HTTP/2 upgrading |
| enablePSP | bool | `false` | Add a PSP resource and bind it to the control plane ServiceAccounts. Note PSP has been deprecated since k8s v1.21 |
| enablePodAntiAffinity | bool | `false` | enables pod anti affinity creation on deployments for high availability |
| enablePodDisruptionBudget | bool | `false` | enables the creation of pod disruption budgets for control plane components |
| enablePprof | bool | `false` | enables the use of pprof endpoints on control plane component's admin servers |
| identity.externalCA | bool | `false` | If the linkerd-identity-trust-roots ConfigMap has already been created |
| identity.issuer.clockSkewAllowance | string | `"20s"` | Amount of time to allow for clock skew within a Linkerd cluster |
| identity.issuer.issuanceLifetime | string | `"24h0m0s"` | Amount of time for which the Identity issuer should certify identity |
| identity.issuer.scheme | string | `"linkerd.io/tls"` | |
| identity.issuer.tls | object | `{"crtPEM":"","keyPEM":""}` | Which scheme is used for the identity issuer secret format |
| identity.issuer.tls.crtPEM | string | `""` | Issuer certificate (ECDSA). It must be provided during install. |
| identity.issuer.tls.keyPEM | string | `""` | Key for the issuer certificate (ECDSA). It must be provided during install |
| identity.kubeAPI.clientBurst | int | `200` | Burst value over clientQPS |
| identity.kubeAPI.clientQPS | int | `100` | Maximum QPS sent to the kube-apiserver before throttling. See [token bucket rate limiter implementation](https://github.com/kubernetes/client-go/blob/v12.0.0/util/flowcontrol/throttle.go) |
| identity.livenessProbe.timeoutSeconds | int | `1` | |
| identity.readinessProbe.timeoutSeconds | int | `1` | |
| identity.serviceAccountTokenProjection | bool | `true` | Use [Service Account token Volume projection](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) for pod validation instead of the default token |
| identityTrustAnchorsPEM | string | `""` | Trust root certificate (ECDSA). It must be provided during install. |
| identityTrustDomain | string | clusterDomain | Trust domain used for identity |
| imagePullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| imagePullSecrets | list | `[]` | For Private docker registries, authentication is needed. Registry secrets are applied to the respective service accounts |
| kubeAPI.clientBurst | int | `200` | Burst value over clientQPS |
| kubeAPI.clientQPS | int | `100` | Maximum QPS sent to the kube-apiserver before throttling. See [token bucket rate limiter implementation](https://github.com/kubernetes/client-go/blob/v12.0.0/util/flowcontrol/throttle.go) |
| linkerdVersion | string | `"linkerdVersionValue"` | control plane version. See Proxy section for proxy version |
| networkValidator.connectAddr | string | `""` | Address to which the network-validator will attempt to connect. This should be an IP that the cluster is expected to be able to reach but a port it should not, e.g., a public IP for public clusters and a private IP for air-gapped clusters with a port like 20001. If empty, defaults to 1.1.1.1:20001 and [fd00::1]:20001 for IPv4 and IPv6 respectively. |
| networkValidator.enableSecurityContext | bool | `true` | Include a securityContext in the network-validator pod spec |
| networkValidator.listenAddr | string | `""` | Address on which the network-validator listens for requests from itself. If empty, defaults to 0.0.0.0:4140 and [::]:4140 for IPv4 and IPv6 respectively. |
| networkValidator.logFormat | string | plain | Log format (`plain` or `json`) for network-validator |
| networkValidator.logLevel | string | debug | Log level for the network-validator |
| networkValidator.timeout | string | `"10s"` | Timeout before network-validator fails to validate the pod's network connectivity |
| nodeSelector | object | `{"kubernetes.io/os":"linux"}` | NodeSelector section, See the [K8S documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) for more information |
| podAnnotations | object | `{}` | Additional annotations to add to all pods |
| podLabels | object | `{}` | Additional labels to add to all pods |
| podMonitor.controller.enabled | bool | `true` | Enables the creation of PodMonitor for the control-plane |
| podMonitor.controller.namespaceSelector | string | `"matchNames:\n - {{ .Release.Namespace }}\n - linkerd-viz\n - linkerd-jaeger\n"` | Selector to select which namespaces the Endpoints objects are discovered from |
| podMonitor.enabled | bool | `false` | Enables the creation of Prometheus Operator [PodMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitor) |
| podMonitor.labels | object | `{}` | Labels to apply to all pod Monitors |
| podMonitor.proxy.enabled | bool | `true` | Enables the creation of PodMonitor for the data-plane |
| podMonitor.scrapeInterval | string | `"10s"` | Interval at which metrics should be scraped |
| podMonitor.scrapeTimeout | string | `"10s"` | Timeout after which the scrape is ended |
| podMonitor.serviceMirror.enabled | bool | `true` | Enables the creation of PodMonitor for the Service Mirror component |
| policyController.image.name | string | `"cr.l5d.io/linkerd/policy-controller"` | Docker image for the policy controller |
| policyController.image.pullPolicy | string | imagePullPolicy | Pull policy for the policy controller container image |
| policyController.image.version | string | linkerdVersion | Tag for the policy controller container image |
| policyController.livenessProbe.timeoutSeconds | int | `1` | |
| policyController.logLevel | string | `"info"` | Log level for the policy controller |
| policyController.probeNetworks | list | `["0.0.0.0/0","::/0"]` | The networks from which probes are performed. By default, all networks are allowed so that all probes are authorized. |
| policyController.readinessProbe.timeoutSeconds | int | `1` | |
| policyController.resources | object | `{"cpu":{"limit":"","request":""},"ephemeral-storage":{"limit":"","request":""},"memory":{"limit":"","request":""}}` | policy controller resource requests & limits |
| policyController.resources.cpu.limit | string | `""` | Maximum amount of CPU units that the policy controller can use |
| policyController.resources.cpu.request | string | `""` | Amount of CPU units that the policy controller requests |
| policyController.resources.ephemeral-storage.limit | string | `""` | Maximum amount of ephemeral storage that the policy controller can use |
| policyController.resources.ephemeral-storage.request | string | `""` | Amount of ephemeral storage that the policy controller requests |
| policyController.resources.memory.limit | string | `""` | Maximum amount of memory that the policy controller can use |
| policyController.resources.memory.request | string | `""` | Amount of memory that the policy controller requests |
| policyValidator.caBundle | string | `""` | Bundle of CA certificates for proxy injector. If not provided nor injected with cert-manager, then Helm will use the certificate generated for `policyValidator.crtPEM`. If `policyValidator.externalSecret` is set to true, this value, injectCaFrom, or injectCaFromSecret must be set, as no certificate will be generated. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector) for more information. |
| policyValidator.crtPEM | string | `""` | Certificate for the policy validator. If not provided and not using an external secret then Helm will generate one. |
| policyValidator.externalSecret | bool | `false` | Do not create a secret resource for the policyValidator webhook. If this is set to `true`, the value `policyValidator.caBundle` must be set or the ca bundle must injected with cert-manager ca injector using `policyValidator.injectCaFrom` or `policyValidator.injectCaFromSecret` (see below). |
| policyValidator.injectCaFrom | string | `""` | Inject the CA bundle from a cert-manager Certificate. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-certificate-resource) for more information. |
| policyValidator.injectCaFromSecret | string | `""` | Inject the CA bundle from a Secret. If set, the `cert-manager.io/inject-ca-from-secret` annotation will be added to the webhook. The Secret must have the CA Bundle stored in the `ca.crt` key and have the `cert-manager.io/allow-direct-injection` annotation set to `true`. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-secret-resource) for more information. |
| policyValidator.keyPEM | string | `""` | Certificate key for the policy validator. If not provided and not using an external secret then Helm will generate one. |
| policyValidator.namespaceSelector | object | `{"matchExpressions":[{"key":"config.linkerd.io/admission-webhooks","operator":"NotIn","values":["disabled"]}]}` | Namespace selector used by admission webhook |
| priorityClassName | string | `""` | Kubernetes priorityClassName for the Linkerd Pods |
| profileValidator.caBundle | string | `""` | Bundle of CA certificates for proxy injector. If not provided nor injected with cert-manager, then Helm will use the certificate generated for `profileValidator.crtPEM`. If `profileValidator.externalSecret` is set to true, this value, injectCaFrom, or injectCaFromSecret must be set, as no certificate will be generated. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector) for more information. |
| profileValidator.crtPEM | string | `""` | Certificate for the service profile validator. If not provided and not using an external secret then Helm will generate one. |
| profileValidator.externalSecret | bool | `false` | Do not create a secret resource for the profileValidator webhook. If this is set to `true`, the value `proxyInjector.caBundle` must be set or the ca bundle must injected with cert-manager ca injector using `proxyInjector.injectCaFrom` or `proxyInjector.injectCaFromSecret` (see below). |
| profileValidator.injectCaFrom | string | `""` | Inject the CA bundle from a cert-manager Certificate. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-certificate-resource) for more information. |
| profileValidator.injectCaFromSecret | string | `""` | Inject the CA bundle from a Secret. If set, the `cert-manager.io/inject-ca-from-secret` annotation will be added to the webhook. The Secret must have the CA Bundle stored in the `ca.crt` key and have the `cert-manager.io/allow-direct-injection` annotation set to `true`. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-secret-resource) for more information. |
| profileValidator.keyPEM | string | `""` | Certificate key for the service profile validator. If not provided and not using an external secret then Helm will generate one. |
| profileValidator.namespaceSelector | object | `{"matchExpressions":[{"key":"config.linkerd.io/admission-webhooks","operator":"NotIn","values":["disabled"]}]}` | Namespace selector used by admission webhook |
| prometheusUrl | string | `""` | url of external prometheus instance (used for the heartbeat) |
| proxy.await | bool | `true` | If set, the application container will not start until the proxy is ready |
| proxy.control.streams.idleTimeout | string | `"5m"` | The timeout between consecutive updates from the control plane. |
| proxy.control.streams.initialTimeout | string | `"3s"` | The timeout for the first update from the control plane. |
| proxy.control.streams.lifetime | string | `"1h"` | The maximum duration for a response stream (i.e. before it will be reinitialized). |
| proxy.cores | int | `0` | The `cpu.limit` and `cores` should be kept in sync. The value of `cores` must be an integer and should typically be set by rounding up from the limit. E.g. if cpu.limit is '1500m', cores should be 2. |
| proxy.defaultInboundPolicy | string | "all-unauthenticated" | The default allow policy to use when no `Server` selects a pod. One of: "all-authenticated", "all-unauthenticated", "cluster-authenticated", "cluster-unauthenticated", "deny", "audit" |
| proxy.disableInboundProtocolDetectTimeout | bool | `false` | When set to true, disables the protocol detection timeout on the inbound side of the proxy by setting it to a very high value |
| proxy.disableOutboundProtocolDetectTimeout | bool | `false` | When set to true, disables the protocol detection timeout on the outbound side of the proxy by setting it to a very high value |
| proxy.enableExternalProfiles | bool | `false` | Enable service profiles for non-Kubernetes services |
| proxy.enableShutdownEndpoint | bool | `false` | Enables the proxy's /shutdown admin endpoint |
| proxy.gid | int | `-1` | Optional customisation of the group id under which the proxy runs (the group ID will be omitted if lower than 0) |
| proxy.image.name | string | `"cr.l5d.io/linkerd/proxy"` | Docker image for the proxy |
| proxy.image.pullPolicy | string | imagePullPolicy | Pull policy for the proxy container image |
| proxy.image.version | string | linkerdVersion | Tag for the proxy container image |
| proxy.inbound.server.http2.keepAliveInterval | string | `"10s"` | The interval at which PINGs are issued to remote HTTP/2 clients. |
| proxy.inbound.server.http2.keepAliveTimeout | string | `"3s"` | The timeout within which keep-alive PINGs must be acknowledged on inbound HTTP/2 connections. |
| proxy.inboundConnectTimeout | string | `"100ms"` | Maximum time allowed for the proxy to establish an inbound TCP connection |
| proxy.inboundDiscoveryCacheUnusedTimeout | string | `"90s"` | Maximum time allowed before an unused inbound discovery result is evicted from the cache |
| proxy.livenessProbe | object | `{"initialDelaySeconds":10,"timeoutSeconds":1}` | LivenessProbe timeout and delay configuration |
| proxy.logFormat | string | `"plain"` | Log format (`plain` or `json`) for the proxy |
| proxy.logHTTPHeaders | `off` or `insecure` | `"off"` | If set to `off`, will prevent the proxy from logging HTTP headers. If set to `insecure`, HTTP headers may be logged verbatim. Note that setting this to `insecure` is not alone sufficient to log HTTP headers; the proxy logLevel must also be set to debug. |
| proxy.logLevel | string | `"warn,linkerd=info,hickory=error"` | Log level for the proxy |
| proxy.nativeSidecar | bool | `false` | Enable KEP-753 native sidecars. This is an experimental feature. It requires Kubernetes >= 1.29. If enabled, .proxy.waitBeforeExitSeconds should not be used. |
| proxy.opaquePorts | string | `"25,587,3306,4444,5432,6379,9300,11211"` | Default set of opaque ports - SMTP (25,587) server-first - MYSQL (3306) server-first - Galera (4444) server-first - PostgreSQL (5432) server-first - Redis (6379) server-first - ElasticSearch (9300) server-first - Memcached (11211) clients do not issue any preamble, which breaks detection |
| proxy.outbound.server.http2.keepAliveInterval | string | `"10s"` | The interval at which PINGs are issued to local application HTTP/2 clients. |
| proxy.outbound.server.http2.keepAliveTimeout | string | `"3s"` | The timeout within which keep-alive PINGs must be acknowledged on outbound HTTP/2 connections. |
| proxy.outboundConnectTimeout | string | `"1000ms"` | Maximum time allowed for the proxy to establish an outbound TCP connection |
| proxy.outboundDiscoveryCacheUnusedTimeout | string | `"5s"` | Maximum time allowed before an unused outbound discovery result is evicted from the cache |
| proxy.ports.admin | int | `4191` | Admin port for the proxy container |
| proxy.ports.control | int | `4190` | Control port for the proxy container |
| proxy.ports.inbound | int | `4143` | Inbound port for the proxy container |
| proxy.ports.outbound | int | `4140` | Outbound port for the proxy container |
| proxy.readinessProbe | object | `{"initialDelaySeconds":2,"timeoutSeconds":1}` | ReadinessProbe timeout and delay configuration |
| proxy.requireIdentityOnInboundPorts | string | `""` | |
| proxy.resources.cpu.limit | string | `""` | Maximum amount of CPU units that the proxy can use |
| proxy.resources.cpu.request | string | `""` | Amount of CPU units that the proxy requests |
| proxy.resources.ephemeral-storage.limit | string | `""` | Maximum amount of ephemeral storage that the proxy can use |
| proxy.resources.ephemeral-storage.request | string | `""` | Amount of ephemeral storage that the proxy requests |
| proxy.resources.memory.limit | string | `""` | Maximum amount of memory that the proxy can use |
| proxy.resources.memory.request | string | `""` | Amount of memory that the proxy requests |
| proxy.shutdownGracePeriod | string | `""` | Grace period for graceful proxy shutdowns. If this timeout elapses before all open connections have completed, the proxy will terminate forcefully, closing any remaining connections. |
| proxy.startupProbe.failureThreshold | int | `120` | |
| proxy.startupProbe.initialDelaySeconds | int | `0` | |
| proxy.startupProbe.periodSeconds | int | `1` | |
| proxy.uid | int | `2102` | User id under which the proxy runs |
| proxy.waitBeforeExitSeconds | int | `0` | If set the injected proxy sidecars in the data plane will stay alive for at least the given period before receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. See [Lifecycle hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks) for more info on container lifecycle hooks. |
| proxyInit.closeWaitTimeoutSecs | int | `0` | |
| proxyInit.ignoreInboundPorts | string | `"4567,4568"` | Default set of inbound ports to skip via iptables - Galera (4567,4568) |
| proxyInit.ignoreOutboundPorts | string | `"4567,4568"` | Default set of outbound ports to skip via iptables - Galera (4567,4568) |
| proxyInit.image.name | string | `"cr.l5d.io/linkerd/proxy-init"` | Docker image for the proxy-init container |
| proxyInit.image.pullPolicy | string | imagePullPolicy | Pull policy for the proxy-init container image |
| proxyInit.image.version | string | `"v2.4.1"` | Tag for the proxy-init container image |
| proxyInit.iptablesMode | string | `"legacy"` | Variant of iptables that will be used to configure routing. Currently, proxy-init can be run either in 'nft' or in 'legacy' mode. The mode will control which utility binary will be called. The host must support whichever mode will be used |
| proxyInit.kubeAPIServerPorts | string | `"443,6443"` | Default set of ports to skip via iptables for control plane components so they can communicate with the Kubernetes API Server |
| proxyInit.logFormat | string | plain | Log format (`plain` or `json`) for the proxy-init |
| proxyInit.logLevel | string | info | Log level for the proxy-init |
| proxyInit.privileged | bool | false | Privileged mode allows the container processes to inherit all security capabilities and bypass any security limitations enforced by the kubelet. When used with 'runAsRoot: true', the container will behave exactly as if it was running as root on the host. May escape cgroup limits and see other processes and devices on the host. |
| proxyInit.runAsGroup | int | `65534` | This value is used only if runAsRoot is false; otherwise runAsGroup will be 0 |
| proxyInit.runAsRoot | bool | `false` | Allow overriding the runAsNonRoot behaviour (<https://github.com/linkerd/linkerd2/issues/7308>) |
| proxyInit.runAsUser | int | `65534` | This value is used only if runAsRoot is false; otherwise runAsUser will be 0 |
| proxyInit.skipSubnets | string | `""` | Comma-separated list of subnets in valid CIDR format that should be skipped by the proxy |
| proxyInit.xtMountPath.mountPath | string | `"/run"` | |
| proxyInit.xtMountPath.name | string | `"linkerd-proxy-init-xtables-lock"` | |
| proxyInjector.caBundle | string | `""` | Bundle of CA certificates for proxy injector. If not provided nor injected with cert-manager, then Helm will use the certificate generated for `proxyInjector.crtPEM`. If `proxyInjector.externalSecret` is set to true, this value, injectCaFrom, or injectCaFromSecret must be set, as no certificate will be generated. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector) for more information. |
| proxyInjector.crtPEM | string | `""` | Certificate for the proxy injector. If not provided and not using an external secret then Helm will generate one. |
| proxyInjector.externalSecret | bool | `false` | Do not create a secret resource for the proxyInjector webhook. If this is set to `true`, the value `proxyInjector.caBundle` must be set or the ca bundle must injected with cert-manager ca injector using `proxyInjector.injectCaFrom` or `proxyInjector.injectCaFromSecret` (see below). |
| proxyInjector.injectCaFrom | string | `""` | Inject the CA bundle from a cert-manager Certificate. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-certificate-resource) for more information. |
| proxyInjector.injectCaFromSecret | string | `""` | Inject the CA bundle from a Secret. If set, the `cert-manager.io/inject-ca-from-secret` annotation will be added to the webhook. The Secret must have the CA Bundle stored in the `ca.crt` key and have the `cert-manager.io/allow-direct-injection` annotation set to `true`. See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-secret-resource) for more information. |
| proxyInjector.keyPEM | string | `""` | Certificate key for the proxy injector. If not provided and not using an external secret then Helm will generate one. |
| proxyInjector.livenessProbe.timeoutSeconds | int | `1` | |
| proxyInjector.namespaceSelector | object | `{"matchExpressions":[{"key":"config.linkerd.io/admission-webhooks","operator":"NotIn","values":["disabled"]},{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kube-system","cert-manager"]}]}` | Namespace selector used by admission webhook. |
| proxyInjector.objectSelector | object | `{"matchExpressions":[{"key":"linkerd.io/control-plane-component","operator":"DoesNotExist"},{"key":"linkerd.io/cni-resource","operator":"DoesNotExist"}]}` | Object selector used by admission webhook. |
| proxyInjector.readinessProbe.timeoutSeconds | int | `1` | |
| proxyInjector.timeoutSeconds | int | `10` | Timeout in seconds before the API Server cancels a request to the proxy injector. If timeout is exceeded, the webhook failurePolicy is used. |
| revisionHistoryLimit | int | `10` | Specifies the number of old ReplicaSets to retain to allow rollback. |
| runtimeClassName | string | `""` | Runtime Class Name for all the pods |
| spValidator | object | `{"livenessProbe":{"timeoutSeconds":1},"readinessProbe":{"timeoutSeconds":1}}` | SP validator configuration |
| webhookFailurePolicy | string | `"Ignore"` | Failure policy for the proxy injector |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.12.0](https://github.com/norwoodj/helm-docs/releases/v1.12.0)
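Any of the values above can be layered onto the install command from earlier in this README; a sketch with two illustrative overrides (the certificate files are the ones generated in the prerequisites section):

```bash
# Illustrative overrides only; see the table above for the full list of keys.
helm upgrade --install linkerd-control-plane -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  --set controllerReplicas=3 \
  --set controllerLogLevel=debug \
  linkerd/linkerd-control-plane
```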

View File

@ -0,0 +1,133 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionBadge" . }}
{{ template "chart.typeBadge" . }}
{{ template "chart.appVersionBadge" . }}
{{ template "chart.homepageLine" . }}
## Quickstart and documentation
You can run Linkerd on any Kubernetes cluster in a matter of seconds. See the
[Linkerd Getting Started Guide][getting-started] for how.
For more comprehensive documentation, start with the [Linkerd
docs][linkerd-docs].
## Prerequisite: linkerd-crds chart
Before installing this chart, please install the `linkerd-crds` chart, which
creates all the CRDs that the components from the current chart require.
## Prerequisite: identity certificates
The identity component of Linkerd requires setting up a trust anchor
certificate, and an issuer certificate with its key. These need to be provided
to Helm by the user (unlike when using the `linkerd install` CLI which can
generate these automatically). You can provide your own, or follow [these
instructions](https://linkerd.io/2/tasks/generate-certificates/) to generate new
ones.
Alternatively, both trust anchor and identity issuer certificates may be
derived from in-cluster resources. Existing CA (trust anchor) certificates
**must** live in a `ConfigMap` resource named `linkerd-identity-trust-roots`.
Issuer certificates **must** live in a `Secret` named
`linkerd-identity-issuer`. Both resources should exist in the control-plane's
install namespace. In order to use an existing CA, Linkerd needs to be
installed with `identity.externalCA=true`. To use an existing issuer
certificate, Linkerd should be installed with
`identity.issuer.scheme=kubernetes.io/tls`.
A more comprehensive description is in the [automatic certificate rotation
guide](https://linkerd.io/2.12/tasks/automatically-rotating-control-plane-tls-credentials/#a-note-on-third-party-cert-management-solutions).
Note that the provided certificates must be ECDSA certificates.
## Adding Linkerd's Helm repository
Included here for completeness' sake; the repository should already have been added when
`linkerd-base` was installed.
```bash
# To add the repo for Linkerd edge releases:
helm repo add linkerd https://helm.linkerd.io/edge
```
## Installing the chart
You must provide the certificates and keys described in the preceding section,
and the same expiration date you used to generate the Issuer certificate.
```bash
helm install linkerd-control-plane -n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
linkerd/linkerd-control-plane
```
Note that you must install this chart in the same namespace where you installed
the `linkerd-base` chart.
## Setting High-Availability
Besides the default `values.yaml` file, the chart provides a `values-ha.yaml`
file that overrides some default values so as to set things up for a
high-availability scenario, analogous to the `--ha` option in `linkerd install`.
Values such as a higher number of replicas, higher memory/CPU limits, and
affinities are specified in that file.
You can get ahold of `values-ha.yaml` by fetching the chart files:
```bash
helm fetch --untar linkerd/linkerd-control-plane
```
Then use the `-f` flag to provide the override file, for example:
```bash
helm install linkerd-control-plane -n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
-f linkerd2/values-ha.yaml
linkerd/linkerd-control-plane
```
## Get involved
* Check out Linkerd's source code at [GitHub][linkerd2].
* Join Linkerd's [user mailing list][linkerd-users], [developer mailing
list][linkerd-dev], and [announcements mailing list][linkerd-announce].
* Follow [@linkerd][twitter] on Twitter.
* Join the [Linkerd Slack][slack].
[getting-started]: https://linkerd.io/2/getting-started/
[linkerd2]: https://github.com/linkerd/linkerd2
[linkerd-announce]: https://lists.cncf.io/g/cncf-linkerd-announce
[linkerd-dev]: https://lists.cncf.io/g/cncf-linkerd-dev
[linkerd-docs]: https://linkerd.io/2/overview/
[linkerd-users]: https://lists.cncf.io/g/cncf-linkerd-users
[slack]: http://slack.linkerd.io
[twitter]: https://twitter.com/linkerd
## Extensions for Linkerd
The current chart installs the core Linkerd components, which grant you
reliability and security features. Other functionality is available through
extensions. Check the corresponding docs for each one of the following
extensions:
* Observability:
[Linkerd-viz](https://github.com/linkerd/linkerd2/blob/main/viz/charts/linkerd-viz/README.md)
* Multicluster:
[Linkerd-multicluster](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/README.md)
* Tracing:
[Linkerd-jaeger](https://github.com/linkerd/linkerd2/blob/main/jaeger/charts/linkerd-jaeger/README.md)
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
{{ template "helm-docs.versionFooter" . }}

View File

@ -0,0 +1,14 @@
# Linkerd 2 Chart
Linkerd is an ultra light, ultra simple, ultra powerful service mesh. Linkerd
adds security, observability, and reliability to Kubernetes, without the
complexity.
This particular Helm chart only installs the control plane core. You will also need to install the
linkerd-crds chart. This chart should be automatically installed along with any other dependencies.
If it is not installed as a dependency, install it first.
To gain access to the observability features, please install the linkerd-viz chart.
Other extensions are available (multicluster, jaeger) under the linkerd Helm repo.
Full documentation available at: https://linkerd.io/2/overview/
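A sketch of the install order this note describes, assuming the certificates from the chart README have already been generated:

```bash
# Order matters: CRDs first, then the control-plane core, then optional extensions.
helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace
helm install linkerd-control-plane linkerd/linkerd-control-plane -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key
helm install linkerd-viz linkerd/linkerd-viz -n linkerd-viz --create-namespace
```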

View File

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,5 @@
apiVersion: v1
description: 'A Helm chart containing Linkerd partial templates, depended by the ''linkerd''
and ''patch'' charts. '
name: partials
version: 0.1.0

View File

@ -0,0 +1,9 @@
# partials
A Helm chart containing Linkerd partial templates,
depended by the 'linkerd' and 'patch' charts.
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square)
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.12.0](https://github.com/norwoodj/helm-docs/releases/v1.12.0)

View File

@ -0,0 +1,14 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionBadge" . }}
{{ template "chart.typeBadge" . }}
{{ template "chart.appVersionBadge" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
{{ template "helm-docs.versionFooter" . }}

View File

@ -0,0 +1,38 @@
{{ define "linkerd.pod-affinity" -}}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: {{ default "linkerd.io/control-plane-component" .label }}
operator: In
values:
- {{ .component }}
topologyKey: topology.kubernetes.io/zone
weight: 100
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: {{ default "linkerd.io/control-plane-component" .label }}
operator: In
values:
- {{ .component }}
topologyKey: kubernetes.io/hostname
{{- end }}
{{ define "linkerd.node-affinity" -}}
nodeAffinity:
{{- toYaml .Values.nodeAffinity | trim | nindent 2 }}
{{- end }}
{{ define "linkerd.affinity" -}}
{{- if or .Values.enablePodAntiAffinity .Values.nodeAffinity -}}
affinity:
{{- end }}
{{- if .Values.enablePodAntiAffinity -}}
{{- include "linkerd.pod-affinity" . | nindent 2 }}
{{- end }}
{{- if .Values.nodeAffinity -}}
{{- include "linkerd.node-affinity" . | nindent 2 }}
{{- end }}
{{- end }}
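These helpers render only when `enablePodAntiAffinity` or `nodeAffinity` is set in the chart values; a sketch that turns on anti-affinity together with multiple replicas, roughly what the `values-ha.yaml` profile does:

```bash
# Sketch: spread the control-plane replicas across zones/nodes via the partial above.
helm upgrade --install linkerd-control-plane linkerd/linkerd-control-plane -n linkerd \
  --reuse-values \
  --set enablePodAntiAffinity=true \
  --set controllerReplicas=3
```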

View File

@ -0,0 +1,16 @@
{{- define "partials.proxy.capabilities" -}}
capabilities:
{{- if .Values.proxy.capabilities.add }}
add:
{{- toYaml .Values.proxy.capabilities.add | trim | nindent 4 }}
{{- end }}
{{- if .Values.proxy.capabilities.drop }}
drop:
{{- toYaml .Values.proxy.capabilities.drop | trim | nindent 4 }}
{{- end }}
{{- end -}}
{{- define "partials.proxy-init.capabilities.drop" -}}
drop:
{{ toYaml .Values.proxyInit.capabilities.drop | trim }}
{{- end -}}

View File

@ -0,0 +1,15 @@
{{- define "partials.debug" -}}
image: {{.Values.debugContainer.image.name}}:{{.Values.debugContainer.image.version | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.debugContainer.image.pullPolicy | default .Values.imagePullPolicy}}
name: linkerd-debug
terminationMessagePolicy: FallbackToLogsOnError
# some environments require probes, so we provide some infallible ones
livenessProbe:
exec:
command:
- "true"
readinessProbe:
exec:
command:
- "true"
{{- end -}}
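This partial is only injected when a workload asks for the debug sidecar; a sketch using the standard `config.linkerd.io/enable-debug-sidecar` annotation on a pod template (the `emojivoto`/`web` names are placeholders):

```bash
# Sketch: request the debug sidecar for an already-meshed workload, then wait for the rollout.
kubectl -n emojivoto patch deploy web --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/enable-debug-sidecar":"true"}}}}}'
kubectl -n emojivoto rollout status deploy web
```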

View File

@ -0,0 +1,14 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Splits a comma-separated list into a list of string values.
For example "11,22,55,44" will become "11","22","55","44"
*/}}
{{- define "partials.splitStringList" -}}
{{- if gt (len (toString .)) 0 -}}
{{- $ports := toString . | splitList "," -}}
{{- $last := sub (len $ports) 1 -}}
{{- range $i,$port := $ports -}}
"{{$port}}"{{ternary "," "" (ne $i $last)}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,17 @@
{{- define "partials.annotations.created-by" -}}
linkerd.io/created-by: {{ .Values.cliVersion | default (printf "linkerd/helm %s" ( (.Values.image).version | default .Values.linkerdVersion)) }}
{{- end -}}
{{- define "partials.proxy.annotations" -}}
linkerd.io/proxy-version: {{.Values.proxy.image.version | default .Values.linkerdVersion}}
cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
linkerd.io/trust-root-sha256: {{ .Values.identityTrustAnchorsPEM | sha256sum }}
{{- end -}}
{{/*
To add labels to the control-plane components, instead update at individual component manifests as
adding here would also update `spec.selector.matchLabels` which are immutable and would fail upgrades.
*/}}
{{- define "partials.proxy.labels" -}}
linkerd.io/proxy-{{.workloadKind}}: {{.component}}
{{- end -}}

View File

@ -0,0 +1,45 @@
{{- define "partials.network-validator" -}}
name: linkerd-network-validator
image: {{.Values.proxy.image.name}}:{{.Values.proxy.image.version | default .Values.linkerdVersion }}
imagePullPolicy: {{.Values.proxy.image.pullPolicy | default .Values.imagePullPolicy}}
{{ include "partials.resources" .Values.proxy.resources }}
{{- if or .Values.networkValidator.enableSecurityContext }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
{{- end }}
command:
- /usr/lib/linkerd/linkerd2-network-validator
args:
- --log-format
- {{ .Values.networkValidator.logFormat }}
- --log-level
- {{ .Values.networkValidator.logLevel }}
- --connect-addr
{{- if .Values.networkValidator.connectAddr }}
- {{ .Values.networkValidator.connectAddr | quote }}
{{- else if .Values.disableIPv6}}
- "1.1.1.1:20001"
{{- else }}
- "[fd00::1]:20001"
{{- end }}
- --listen-addr
{{- if .Values.networkValidator.listenAddr }}
- {{ .Values.networkValidator.listenAddr | quote }}
{{- else if .Values.disableIPv6}}
- "0.0.0.0:4140"
{{- else }}
- "[::]:4140"
{{- end }}
- --timeout
- {{ .Values.networkValidator.timeout }}
{{- end -}}
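The connect and listen addresses above fall back to public and wildcard endpoints, so air-gapped clusters typically override them; a sketch with a placeholder in-cluster target address:

```bash
# Sketch: point the network validator at a reachable private IP with a port nothing listens on.
helm upgrade --install linkerd-control-plane linkerd/linkerd-control-plane -n linkerd \
  --reuse-values \
  --set networkValidator.connectAddr="10.0.0.10:20001" \
  --set networkValidator.listenAddr="0.0.0.0:4140"
```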

View File

@ -0,0 +1,4 @@
{{- define "linkerd.node-selector" -}}
nodeSelector:
{{- toYaml .Values.nodeSelector | trim | nindent 2 }}
{{- end -}}

View File

@ -0,0 +1,18 @@
{{- define "partials.proxy.config.annotations" -}}
{{- with .cpu }}
{{- with .request -}}
config.linkerd.io/proxy-cpu-request: {{. | quote}}
{{end}}
{{- with .limit -}}
config.linkerd.io/proxy-cpu-limit: {{. | quote}}
{{- end}}
{{- end}}
{{- with .memory }}
{{- with .request }}
config.linkerd.io/proxy-memory-request: {{. | quote}}
{{end}}
{{- with .limit -}}
config.linkerd.io/proxy-memory-limit: {{. | quote}}
{{- end}}
{{- end }}
{{- end }}

View File

@ -0,0 +1,98 @@
{{- define "partials.proxy-init" -}}
args:
{{- if (.Values.proxyInit.iptablesMode | default "legacy" | eq "nft") }}
- --firewall-bin-path
- "iptables-nft"
- --firewall-save-bin-path
- "iptables-nft-save"
{{- else if not (eq .Values.proxyInit.iptablesMode "legacy") }}
{{ fail (printf "Unsupported value \"%s\" for proxyInit.iptablesMode\nValid values: [\"nft\", \"legacy\"]" .Values.proxyInit.iptablesMode) }}
{{end -}}
{{- if .Values.disableIPv6 }}
- --ipv6=false
{{- end }}
- --incoming-proxy-port
- {{.Values.proxy.ports.inbound | quote}}
- --outgoing-proxy-port
- {{.Values.proxy.ports.outbound | quote}}
- --proxy-uid
- {{.Values.proxy.uid | quote}}
{{- if ge (int .Values.proxy.gid) 0 }}
- --proxy-gid
- {{.Values.proxy.gid | quote}}
{{- end }}
- --inbound-ports-to-ignore
- "{{.Values.proxy.ports.control}},{{.Values.proxy.ports.admin}}{{ternary (printf ",%s" (.Values.proxyInit.ignoreInboundPorts | toString)) "" (not (empty .Values.proxyInit.ignoreInboundPorts)) }}"
{{- if .Values.proxyInit.ignoreOutboundPorts }}
- --outbound-ports-to-ignore
- {{.Values.proxyInit.ignoreOutboundPorts | quote}}
{{- end }}
{{- if .Values.proxyInit.closeWaitTimeoutSecs }}
- --timeout-close-wait-secs
- {{ .Values.proxyInit.closeWaitTimeoutSecs | quote}}
{{- end }}
{{- if .Values.proxyInit.logFormat }}
- --log-format
- {{ .Values.proxyInit.logFormat }}
{{- end }}
{{- if .Values.proxyInit.logLevel }}
- --log-level
- {{ .Values.proxyInit.logLevel }}
{{- end }}
{{- if .Values.proxyInit.skipSubnets }}
- --subnets-to-ignore
- {{ .Values.proxyInit.skipSubnets | quote }}
{{- end }}
image: {{.Values.proxyInit.image.name}}:{{.Values.proxyInit.image.version}}
imagePullPolicy: {{.Values.proxyInit.image.pullPolicy | default .Values.imagePullPolicy}}
name: linkerd-init
{{ include "partials.resources" .Values.proxy.resources }}
securityContext:
{{- if or .Values.proxyInit.closeWaitTimeoutSecs .Values.proxyInit.privileged }}
allowPrivilegeEscalation: true
{{- else }}
allowPrivilegeEscalation: false
{{- end }}
capabilities:
add:
- NET_ADMIN
- NET_RAW
{{- if .Values.proxyInit.capabilities -}}
{{- if .Values.proxyInit.capabilities.add }}
{{- toYaml .Values.proxyInit.capabilities.add | trim | nindent 4 }}
{{- end }}
{{- if .Values.proxyInit.capabilities.drop -}}
{{- include "partials.proxy-init.capabilities.drop" . | nindent 4 -}}
{{- end }}
{{- end }}
{{- if or .Values.proxyInit.closeWaitTimeoutSecs .Values.proxyInit.privileged }}
privileged: true
{{- else }}
privileged: false
{{- end }}
{{- if .Values.proxyInit.runAsRoot }}
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
{{- else }}
runAsNonRoot: true
runAsUser: {{ .Values.proxyInit.runAsUser | int | eq 0 | ternary 65534 .Values.proxyInit.runAsUser }}
runAsGroup: {{ .Values.proxyInit.runAsGroup | int | eq 0 | ternary 65534 .Values.proxyInit.runAsGroup }}
{{- end }}
readOnlyRootFilesystem: true
seccompProfile:
type: RuntimeDefault
terminationMessagePolicy: FallbackToLogsOnError
{{- if or (not .Values.cniEnabled) .Values.proxyInit.saMountPath }}
volumeMounts:
{{- end -}}
{{- if not .Values.cniEnabled }}
- mountPath: {{.Values.proxyInit.xtMountPath.mountPath}}
name: {{.Values.proxyInit.xtMountPath.name}}
{{- end -}}
{{- if .Values.proxyInit.saMountPath }}
- mountPath: {{.Values.proxyInit.saMountPath.mountPath}}
name: {{.Values.proxyInit.saMountPath.name}}
readOnly: {{.Values.proxyInit.saMountPath.readOnly}}
{{- end -}}
{{- end -}}
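The template above rejects any `proxyInit.iptablesMode` other than `nft` or `legacy` at render time; a sketch of switching to the nft variant for nodes that only ship nftables:

```bash
# Sketch: have proxy-init call the iptables-nft binaries instead of the legacy ones.
helm upgrade --install linkerd-control-plane linkerd/linkerd-control-plane -n linkerd \
  --reuse-values \
  --set proxyInit.iptablesMode=nft
```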

View File

@ -0,0 +1,267 @@
{{ define "partials.proxy" -}}
{{ if and .Values.proxy.nativeSidecar .Values.proxy.waitBeforeExitSeconds }}
{{ fail "proxy.nativeSidecar and waitBeforeExitSeconds cannot be used simultaneously" }}
{{- end }}
{{- if not (has .Values.proxy.logHTTPHeaders (list "insecure" "off" "")) }}
{{- fail "logHTTPHeaders must be one of: insecure | off" }}
{{- end }}
{{- $trustDomain := (.Values.identityTrustDomain | default .Values.clusterDomain) -}}
env:
- name: _pod_name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: _pod_ns
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: _pod_nodeName
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- if .Values.proxy.cores }}
- name: LINKERD2_PROXY_CORES
value: {{.Values.proxy.cores | quote}}
{{- end }}
{{ if .Values.proxy.requireIdentityOnInboundPorts -}}
- name: LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_IDENTITY
value: {{.Values.proxy.requireIdentityOnInboundPorts | quote}}
{{ end -}}
{{ if .Values.proxy.requireTLSOnInboundPorts -}}
- name: LINKERD2_PROXY_INBOUND_PORTS_REQUIRE_TLS
value: {{.Values.proxy.requireTLSOnInboundPorts | quote}}
{{ end -}}
- name: LINKERD2_PROXY_SHUTDOWN_ENDPOINT_ENABLED
value: {{.Values.proxy.enableShutdownEndpoint | quote}}
- name: LINKERD2_PROXY_LOG
value: "{{.Values.proxy.logLevel}}{{ if not (eq .Values.proxy.logHTTPHeaders "insecure") }},[{headers}]=off,[{request}]=off{{ end }}"
- name: LINKERD2_PROXY_LOG_FORMAT
value: {{.Values.proxy.logFormat | quote}}
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
value: {{ternary "localhost.:8086" (printf "linkerd-dst-headless.%s.svc.%s.:8086" .Release.Namespace .Values.clusterDomain) (eq (toString .Values.proxy.component) "linkerd-destination")}}
- name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
value: {{.Values.clusterNetworks | quote}}
- name: LINKERD2_PROXY_POLICY_SVC_ADDR
value: {{ternary "localhost.:8090" (printf "linkerd-policy.%s.svc.%s.:8090" .Release.Namespace .Values.clusterDomain) (eq (toString .Values.proxy.component) "linkerd-destination")}}
- name: LINKERD2_PROXY_POLICY_WORKLOAD
value: |
{"ns":"$(_pod_ns)", "pod":"$(_pod_name)"}
- name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
value: {{.Values.proxy.defaultInboundPolicy}}
- name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
value: {{.Values.clusterNetworks | quote}}
- name: LINKERD2_PROXY_CONTROL_STREAM_INITIAL_TIMEOUT
value: {{((.Values.proxy.control).streams).initialTimeout | default "" | quote}}
- name: LINKERD2_PROXY_CONTROL_STREAM_IDLE_TIMEOUT
value: {{((.Values.proxy.control).streams).idleTimeout | default "" | quote}}
- name: LINKERD2_PROXY_CONTROL_STREAM_LIFETIME
value: {{((.Values.proxy.control).streams).lifetime | default "" | quote}}
{{ if .Values.proxy.inboundConnectTimeout -}}
- name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
value: {{.Values.proxy.inboundConnectTimeout | quote}}
{{ end -}}
{{ if .Values.proxy.outboundConnectTimeout -}}
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
value: {{.Values.proxy.outboundConnectTimeout | quote}}
{{ end -}}
{{ if .Values.proxy.outboundDiscoveryCacheUnusedTimeout -}}
- name: LINKERD2_PROXY_OUTBOUND_DISCOVERY_IDLE_TIMEOUT
value: {{.Values.proxy.outboundDiscoveryCacheUnusedTimeout | quote}}
{{ end -}}
{{ if .Values.proxy.inboundDiscoveryCacheUnusedTimeout -}}
- name: LINKERD2_PROXY_INBOUND_DISCOVERY_IDLE_TIMEOUT
value: {{.Values.proxy.inboundDiscoveryCacheUnusedTimeout | quote}}
{{ end -}}
{{ if .Values.proxy.disableOutboundProtocolDetectTimeout -}}
- name: LINKERD2_PROXY_OUTBOUND_DETECT_TIMEOUT
value: "365d"
{{ end -}}
{{ if .Values.proxy.disableInboundProtocolDetectTimeout -}}
- name: LINKERD2_PROXY_INBOUND_DETECT_TIMEOUT
value: "365d"
{{ end -}}
- name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
value: "{{ if .Values.disableIPv6 }}0.0.0.0{{ else }}[::]{{ end }}:{{.Values.proxy.ports.control}}"
- name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
value: "{{ if .Values.disableIPv6 }}0.0.0.0{{ else }}[::]{{ end }}:{{.Values.proxy.ports.admin}}"
{{- /* Deprecated, superseded by LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS since proxy's v2.228.0 (deployed since edge-24.4.5) */}}
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
value: "127.0.0.1:{{.Values.proxy.ports.outbound}}"
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS
value: "127.0.0.1:{{.Values.proxy.ports.outbound}}{{ if not .Values.disableIPv6}},[::1]:{{.Values.proxy.ports.outbound}}{{ end }}"
- name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
value: "{{ if .Values.disableIPv6 }}0.0.0.0{{ else }}[::]{{ end }}:{{.Values.proxy.ports.inbound}}"
- name: LINKERD2_PROXY_INBOUND_IPS
valueFrom:
fieldRef:
fieldPath: status.podIPs
- name: LINKERD2_PROXY_INBOUND_PORTS
value: {{ .Values.proxy.podInboundPorts | quote }}
{{ if .Values.proxy.isGateway -}}
- name: LINKERD2_PROXY_INBOUND_GATEWAY_SUFFIXES
value: {{printf "svc.%s." .Values.clusterDomain}}
{{ end -}}
{{ if .Values.proxy.isIngress -}}
- name: LINKERD2_PROXY_INGRESS_MODE
value: "true"
{{ end -}}
- name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
{{- $internalDomain := printf "svc.%s." .Values.clusterDomain }}
value: {{ternary "." $internalDomain .Values.proxy.enableExternalProfiles}}
- name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
value: 10000ms
{{- /* Configure inbound and outbound parameters, e.g. for HTTP/2 servers. */}}
{{ range $proxyK, $proxyV := (dict "inbound" .Values.proxy.inbound "outbound" .Values.proxy.outbound) -}}
{{ range $scopeK, $scopeV := $proxyV -}}
{{ range $protoK, $protoV := $scopeV -}}
{{ range $paramK, $paramV := $protoV -}}
- name: LINKERD2_PROXY_{{snakecase $proxyK | upper}}_{{snakecase $scopeK | upper}}_{{snakecase $protoK | upper}}_{{snakecase $paramK | upper}}
value: {{ quote $paramV }}
{{ end -}}
{{ end -}}
{{ end -}}
{{ end -}}
{{ if .Values.proxy.opaquePorts -}}
- name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
value: {{.Values.proxy.opaquePorts | quote}}
{{ end -}}
- name: LINKERD2_PROXY_DESTINATION_CONTEXT
value: |
{"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)", "pod":"$(_pod_name)"}
- name: _pod_sa
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: _l5d_ns
value: {{.Release.Namespace}}
- name: _l5d_trustdomain
value: {{$trustDomain}}
- name: LINKERD2_PROXY_IDENTITY_DIR
value: /var/run/linkerd/identity/end-entity
- name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
{{- /*
Pods in the `linkerd` namespace are not injected by the proxy injector and instead obtain
the trust anchor bundle from the `linkerd-identity-trust-roots` configmap. This should not
be used in other contexts.
*/}}
{{- if .Values.proxy.loadTrustBundleFromConfigMap }}
valueFrom:
configMapKeyRef:
name: linkerd-identity-trust-roots
key: ca-bundle.crt
{{ else }}
value: |
{{- required "Please provide the identity trust anchors" .Values.identityTrustAnchorsPEM | trim | nindent 4 }}
{{ end -}}
- name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
{{- if .Values.identity.serviceAccountTokenProjection }}
value: /var/run/secrets/tokens/linkerd-identity-token
{{ else }}
value: /var/run/secrets/kubernetes.io/serviceaccount/token
{{ end -}}
- name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
value: {{ternary "localhost.:8080" (printf "linkerd-identity-headless.%s.svc.%s.:8080" .Release.Namespace .Values.clusterDomain) (eq (toString .Values.proxy.component) "linkerd-identity")}}
- name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.{{.Release.Namespace}}.{{$trustDomain}}
- name: LINKERD2_PROXY_IDENTITY_SVC_NAME
value: linkerd-identity.{{.Release.Namespace}}.serviceaccount.identity.{{.Release.Namespace}}.{{$trustDomain}}
- name: LINKERD2_PROXY_DESTINATION_SVC_NAME
value: linkerd-destination.{{.Release.Namespace}}.serviceaccount.identity.{{.Release.Namespace}}.{{$trustDomain}}
- name: LINKERD2_PROXY_POLICY_SVC_NAME
value: linkerd-destination.{{.Release.Namespace}}.serviceaccount.identity.{{.Release.Namespace}}.{{$trustDomain}}
{{ if .Values.proxy.accessLog -}}
- name: LINKERD2_PROXY_ACCESS_LOG
value: {{.Values.proxy.accessLog | quote}}
{{ end -}}
{{ if .Values.proxy.shutdownGracePeriod -}}
- name: LINKERD2_PROXY_SHUTDOWN_GRACE_PERIOD
value: {{.Values.proxy.shutdownGracePeriod | quote}}
{{ end -}}
{{ if .Values.proxy.additionalEnv -}}
{{ toYaml .Values.proxy.additionalEnv }}
{{ end -}}
{{ if .Values.proxy.experimentalEnv -}}
{{ toYaml .Values.proxy.experimentalEnv }}
{{ end -}}
image: {{.Values.proxy.image.name}}:{{.Values.proxy.image.version | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.proxy.image.pullPolicy | default .Values.imagePullPolicy}}
livenessProbe:
httpGet:
path: /live
port: {{.Values.proxy.ports.admin}}
initialDelaySeconds: {{.Values.proxy.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{.Values.proxy.livenessProbe.timeoutSeconds }}
name: linkerd-proxy
ports:
- containerPort: {{.Values.proxy.ports.inbound}}
name: linkerd-proxy
- containerPort: {{.Values.proxy.ports.admin}}
name: linkerd-admin
readinessProbe:
httpGet:
path: /ready
port: {{.Values.proxy.ports.admin}}
initialDelaySeconds: {{.Values.proxy.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{.Values.proxy.readinessProbe.timeoutSeconds }}
{{- if and .Values.proxy.nativeSidecar .Values.proxy.await }}
startupProbe:
httpGet:
path: /ready
port: {{.Values.proxy.ports.admin}}
initialDelaySeconds: {{.Values.proxy.startupProbe.initialDelaySeconds}}
periodSeconds: {{.Values.proxy.startupProbe.periodSeconds}}
failureThreshold: {{.Values.proxy.startupProbe.failureThreshold}}
{{- end }}
{{- if .Values.proxy.resources }}
{{ include "partials.resources" .Values.proxy.resources }}
{{- end }}
securityContext:
allowPrivilegeEscalation: false
{{- if .Values.proxy.capabilities -}}
{{- include "partials.proxy.capabilities" . | nindent 2 -}}
{{- end }}
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: {{.Values.proxy.uid}}
{{- if ge (int .Values.proxy.gid) 0 }}
runAsGroup: {{.Values.proxy.gid}}
{{- end }}
seccompProfile:
type: RuntimeDefault
terminationMessagePolicy: FallbackToLogsOnError
{{- if and (not .Values.proxy.nativeSidecar) (or .Values.proxy.await .Values.proxy.waitBeforeExitSeconds) }}
lifecycle:
{{- if .Values.proxy.await }}
postStart:
exec:
command:
- /usr/lib/linkerd/linkerd-await
- --timeout=2m
- --port={{.Values.proxy.ports.admin}}
{{- end }}
{{- if .Values.proxy.waitBeforeExitSeconds }}
preStop:
exec:
command:
- /bin/sleep
- {{.Values.proxy.waitBeforeExitSeconds | quote}}
{{- end }}
{{- end }}
volumeMounts:
- mountPath: /var/run/linkerd/identity/end-entity
name: linkerd-identity-end-entity
{{- if .Values.identity.serviceAccountTokenProjection }}
- mountPath: /var/run/secrets/tokens
name: linkerd-identity-token
{{- end }}
{{- if .Values.proxy.saMountPath }}
- mountPath: {{.Values.proxy.saMountPath.mountPath}}
name: {{.Values.proxy.saMountPath.name}}
readOnly: {{.Values.proxy.saMountPath.readOnly}}
{{- end -}}
{{- if .Values.proxy.nativeSidecar }}
restartPolicy: Always
{{- end -}}
{{- end }}
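
A note on the nested `range` over `.Values.proxy.inbound` and `.Values.proxy.outbound` earlier in this partial: it turns arbitrary nested value keys into `LINKERD2_PROXY_<DIRECTION>_<SCOPE>_<PROTOCOL>_<PARAM>` environment variables (snake-cased, then upper-cased). A minimal sketch of the values shape that loop consumes; the specific keys below are illustrative, not an authoritative list:

```yaml
proxy:
  inbound:
    server:
      http2:
        # rendered by the loop above as LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL: "10s"
        keepAliveInterval: 10s
        # rendered as LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT: "3s"
        keepAliveTimeout: 3s
  outbound:
    server:
      http2:
        # rendered as LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL: "10s"
        keepAliveInterval: 10s
```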


@ -0,0 +1,6 @@
{{- define "partials.image-pull-secrets"}}
{{- if . }}
imagePullSecrets:
{{ toYaml . | indent 2 }}
{{- end }}
{{- end -}}
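
This helper is included on each control-plane ServiceAccount with `.Values.imagePullSecrets`, so the value is the usual Kubernetes list-of-names shape. A sketch with a hypothetical secret name:

```yaml
imagePullSecrets:
- name: private-registry-creds   # example name; any docker-registry Secret in the release namespace
```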


@ -0,0 +1,28 @@
{{- define "partials.resources" -}}
{{- $ephemeralStorage := index . "ephemeral-storage" -}}
resources:
{{- if or (.cpu).limit (.memory).limit ($ephemeralStorage).limit }}
limits:
{{- with (.cpu).limit }}
cpu: {{. | quote}}
{{- end }}
{{- with (.memory).limit }}
memory: {{. | quote}}
{{- end }}
{{- with ($ephemeralStorage).limit }}
ephemeral-storage: {{. | quote}}
{{- end }}
{{- end }}
{{- if or (.cpu).request (.memory).request ($ephemeralStorage).request }}
requests:
{{- with (.cpu).request }}
cpu: {{. | quote}}
{{- end }}
{{- with (.memory).request }}
memory: {{. | quote}}
{{- end }}
{{- with ($ephemeralStorage).request }}
ephemeral-storage: {{. | quote}}
{{- end }}
{{- end }}
{{- end }}
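
`partials.resources` is always passed one of the per-component resource maps (for example `.Values.policyController.resources` or the proxy's `resources`), so the keys mirror the lookups above. A sketch with placeholder numbers, not recommended settings:

```yaml
policyController:
  resources:
    cpu:
      request: 100m
      limit: "1"
    memory:
      request: 20Mi
      limit: 250Mi
    ephemeral-storage:
      request: 100Mi
      limit: 1Gi
```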


@ -0,0 +1,4 @@
{{- define "linkerd.tolerations" -}}
tolerations:
{{ toYaml .Values.tolerations | trim | indent 2 }}
{{- end -}}
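
`linkerd.tolerations` simply re-indents `.Values.tolerations`, so the value must already be a list of standard Kubernetes tolerations, for example (the taint key is an example only):

```yaml
tolerations:
- key: node-role.kubernetes.io/control-plane   # example taint
  operator: Exists
  effect: NoSchedule
```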


@ -0,0 +1,5 @@
{{ define "partials.linkerd.trace" -}}
{{ if .Values.controlPlaneTracing -}}
- -trace-collector=collector.{{.Values.controlPlaneTracingNamespace}}.svc.{{.Values.clusterDomain}}:55678
{{ end -}}
{{- end }}
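
The trace flag is only emitted when `controlPlaneTracing` is enabled, and it points at a collector in `controlPlaneTracingNamespace`. A sketch; the namespace value is an example, typically wherever the tracing extension runs:

```yaml
controlPlaneTracing: true
controlPlaneTracingNamespace: linkerd-jaeger   # example; must contain a Service named "collector" listening on 55678
```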


@ -0,0 +1,19 @@
{{- define "linkerd.webhook.validation" -}}
{{- if and (.injectCaFrom) (.injectCaFromSecret) -}}
{{- fail "injectCaFrom and injectCaFromSecret cannot both be set" -}}
{{- end -}}
{{- if and (or (.injectCaFrom) (.injectCaFromSecret)) (.caBundle) -}}
{{- fail "injectCaFrom or injectCaFromSecret cannot be set if providing a caBundle" -}}
{{- end -}}
{{- if and (.externalSecret) (empty .caBundle) (empty .injectCaFrom) (empty .injectCaFromSecret) -}}
{{- fail "if externalSecret is set, then caBundle, injectCaFrom, or injectCaFromSecret must be set" -}}
{{- end }}
{{- if and (or .injectCaFrom .injectCaFromSecret .caBundle) (not .externalSecret) -}}
{{- fail "if caBundle, injectCaFrom, or injectCaFromSecret is set, then externalSecret must be set" -}}
{{- end -}}
{{- end -}}
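
These checks mean the webhook TLS values can only be combined in a few ways: everything unset (Helm generates a self-signed certificate), or `externalSecret: true` plus exactly one source for the CA bundle. A sketch of the cert-manager variant for the proxy injector; the Certificate reference is illustrative:

```yaml
proxyInjector:
  externalSecret: true
  # namespace/name of a cert-manager Certificate; example value only
  injectCaFrom: cert-manager/linkerd-proxy-injector
  # caBundle and injectCaFromSecret must stay unset in this combination
```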


@ -0,0 +1,20 @@
{{ define "partials.proxy.volumes.identity" -}}
emptyDir:
medium: Memory
name: linkerd-identity-end-entity
{{- end -}}
{{ define "partials.proxyInit.volumes.xtables" -}}
emptyDir: {}
name: {{ .Values.proxyInit.xtMountPath.name }}
{{- end -}}
{{- define "partials.proxy.volumes.service-account-token" -}}
name: linkerd-identity-token
projected:
sources:
- serviceAccountToken:
path: linkerd-identity-token
expirationSeconds: 86400 {{- /* # 24 hours */}}
audience: identity.l5d.io
{{- end -}}


@ -0,0 +1,19 @@
questions:
- variable: identityTrustAnchorsPEM
label: "Trust root certificate (ECDSA)"
description: "Root certificate used to support mTLS connections between meshed pods"
required: true
type: multiline
group: Identity
- variable: identity.issuer.tls.crtPEM
label: "Issuer certificate (ECDSA)"
description: "Intermediate certificate, rooted on identityTrustAnchorsPEM, used to sign the Linkerd proxies' CSR"
required: true
type: multiline
group: Identity
- variable: identity.issuer.tls.keyPEM
label: "Key for the issuer certificate (ECDSA)"
description: "Private key for the certificate entered on crtPEM"
required: true
type: multiline
group: Identity


@ -0,0 +1,19 @@
The Linkerd control plane was successfully installed 🎉

To help you manage your Linkerd service mesh you can install the Linkerd CLI by running:

  curl -sL https://run.linkerd.io/install | sh

Alternatively, you can download the CLI directly via the Linkerd releases page:

  https://github.com/linkerd/linkerd2/releases/

To make sure everything works as expected, run the following:

  linkerd check

The viz extension can be installed by running:

  helm install linkerd-viz linkerd/linkerd-viz

Looking for more? Visit https://linkerd.io/2/getting-started/
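
A minimal sketch of the certificate values this chart requires at install time (the same three fields surfaced in the questions above); the PEM bodies are placeholders:

```yaml
identityTrustAnchorsPEM: |
  -----BEGIN CERTIFICATE-----
  (trust anchor / root certificate, placeholder)
  -----END CERTIFICATE-----
identity:
  issuer:
    tls:
      crtPEM: |
        -----BEGIN CERTIFICATE-----
        (issuer certificate rooted in the trust anchor, placeholder)
        -----END CERTIFICATE-----
      keyPEM: |
        -----BEGIN EC PRIVATE KEY-----
        (issuer private key, placeholder)
        -----END EC PRIVATE KEY-----
```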


@ -0,0 +1,16 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
name: ext-namespace-metadata-linkerd-config
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
resourceNames: ["linkerd-config"]


@ -0,0 +1,39 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: linkerd-config
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: controller
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
data:
linkerd-crds-chart-version: linkerd-crds-1.0.0-edge
values: |
{{- $values := deepCopy .Values }}
{{- /*
WARNING! All sensitive or private data such as TLS keys must be removed
here to avoid it being publicly readable.
*/ -}}
{{- if kindIs "map" $values.identity.issuer.tls -}}
{{- $_ := unset $values.identity.issuer.tls "keyPEM"}}
{{- end -}}
{{- if kindIs "map" $values.profileValidator -}}
{{- $_ := unset $values.profileValidator "keyPEM"}}
{{- end -}}
{{- if kindIs "map" $values.proxyInjector -}}
{{- $_ := unset $values.proxyInjector "keyPEM"}}
{{- end -}}
{{- if kindIs "map" $values.policyValidator -}}
{{- $_ := unset $values.policyValidator "keyPEM"}}
{{- end -}}
{{- if (empty $values.identityTrustDomain) -}}
{{- $_ := set $values "identityTrustDomain" $values.clusterDomain}}
{{- end -}}
{{- $_ := unset $values "partials"}}
{{- $_ := unset $values "configs"}}
{{- $_ := unset $values "stage"}}
{{- toYaml $values | trim | nindent 4 }}


@ -0,0 +1,327 @@
---
###
### Destination Controller Service
###
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: linkerd-{{.Release.Namespace}}-destination
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["list", "get", "watch"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["list", "get", "watch"]
- apiGroups: [""]
resources: ["pods", "endpoints", "services", "nodes"]
verbs: ["list", "get", "watch"]
- apiGroups: ["linkerd.io"]
resources: ["serviceprofiles"]
verbs: ["list", "get", "watch"]
- apiGroups: ["workload.linkerd.io"]
resources: ["externalworkloads"]
verbs: ["list", "get", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "get", "update", "patch"]
{{- if .Values.enableEndpointSlices }}
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
{{- end }}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: linkerd-{{.Release.Namespace}}-destination
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: linkerd-{{.Release.Namespace}}-destination
subjects:
- kind: ServiceAccount
name: linkerd-destination
namespace: {{.Release.Namespace}}
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: linkerd-destination
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- include "partials.image-pull-secrets" .Values.imagePullSecrets }}
---
{{- $host := printf "linkerd-sp-validator.%s.svc" .Release.Namespace }}
{{- $ca := genSelfSignedCert $host (list) (list $host) 365 }}
{{- if (not .Values.profileValidator.externalSecret) }}
kind: Secret
apiVersion: v1
metadata:
name: linkerd-sp-validator-k8s-tls
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
type: kubernetes.io/tls
data:
tls.crt: {{ ternary (b64enc (trim $ca.Cert)) (b64enc (trim .Values.profileValidator.crtPEM)) (empty .Values.profileValidator.crtPEM) }}
tls.key: {{ ternary (b64enc (trim $ca.Key)) (b64enc (trim .Values.profileValidator.keyPEM)) (empty .Values.profileValidator.keyPEM) }}
---
{{- end }}
{{- include "linkerd.webhook.validation" .Values.profileValidator }}
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: linkerd-sp-validator-webhook-config
{{- if or (.Values.profileValidator.injectCaFrom) (.Values.profileValidator.injectCaFromSecret) }}
annotations:
{{- if .Values.profileValidator.injectCaFrom }}
cert-manager.io/inject-ca-from: {{ .Values.profileValidator.injectCaFrom }}
{{- end }}
{{- if .Values.profileValidator.injectCaFromSecret }}
cert-manager.io/inject-ca-from-secret: {{ .Values.profileValidator.injectCaFromSecret }}
{{- end }}
{{- end }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
webhooks:
- name: linkerd-sp-validator.linkerd.io
namespaceSelector:
{{- toYaml .Values.profileValidator.namespaceSelector | trim | nindent 4 }}
clientConfig:
service:
name: linkerd-sp-validator
namespace: {{ .Release.Namespace }}
path: "/"
{{- if and (empty .Values.profileValidator.injectCaFrom) (empty .Values.profileValidator.injectCaFromSecret) }}
caBundle: {{ ternary (b64enc (trim $ca.Cert)) (b64enc (trim .Values.profileValidator.caBundle)) (empty .Values.profileValidator.caBundle) }}
{{- end }}
failurePolicy: {{.Values.webhookFailurePolicy}}
admissionReviewVersions: ["v1", "v1beta1"]
rules:
- operations: ["CREATE", "UPDATE"]
apiGroups: ["linkerd.io"]
apiVersions: ["v1alpha1", "v1alpha2"]
resources: ["serviceprofiles"]
sideEffects: None
---
{{- $host := printf "linkerd-policy-validator.%s.svc" .Release.Namespace }}
{{- $ca := genSelfSignedCert $host (list) (list $host) 365 }}
{{- if (not .Values.policyValidator.externalSecret) }}
kind: Secret
apiVersion: v1
metadata:
name: linkerd-policy-validator-k8s-tls
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
type: kubernetes.io/tls
data:
tls.crt: {{ ternary (b64enc (trim $ca.Cert)) (b64enc (trim .Values.policyValidator.crtPEM)) (empty .Values.policyValidator.crtPEM) }}
tls.key: {{ ternary (b64enc (trim $ca.Key)) (b64enc (trim .Values.policyValidator.keyPEM)) (empty .Values.policyValidator.keyPEM) }}
---
{{- end }}
{{- include "linkerd.webhook.validation" .Values.policyValidator }}
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: linkerd-policy-validator-webhook-config
{{- if or (.Values.policyValidator.injectCaFrom) (.Values.policyValidator.injectCaFromSecret) }}
annotations:
{{- if .Values.policyValidator.injectCaFrom }}
cert-manager.io/inject-ca-from: {{ .Values.policyValidator.injectCaFrom }}
{{- end }}
{{- if .Values.policyValidator.injectCaFromSecret }}
cert-manager.io/inject-ca-from-secret: {{ .Values.policyValidator.injectCaFromSecret }}
{{- end }}
{{- end }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
webhooks:
- name: linkerd-policy-validator.linkerd.io
namespaceSelector:
{{- toYaml .Values.policyValidator.namespaceSelector | trim | nindent 4 }}
clientConfig:
service:
name: linkerd-policy-validator
namespace: {{ .Release.Namespace }}
path: "/"
{{- if and (empty .Values.policyValidator.injectCaFrom) (empty .Values.policyValidator.injectCaFromSecret) }}
caBundle: {{ ternary (b64enc (trim $ca.Cert)) (b64enc (trim .Values.policyValidator.caBundle)) (empty .Values.policyValidator.caBundle) }}
{{- end }}
failurePolicy: {{.Values.webhookFailurePolicy}}
admissionReviewVersions: ["v1", "v1beta1"]
rules:
- operations: ["CREATE", "UPDATE"]
apiGroups: ["policy.linkerd.io"]
apiVersions: ["*"]
resources:
- authorizationpolicies
- httproutes
- networkauthentications
- meshtlsauthentications
- serverauthorizations
- servers
- operations: ["CREATE", "UPDATE"]
apiGroups: ["gateway.networking.k8s.io"]
apiVersions: ["*"]
resources:
- httproutes
- grpcroutes
sideEffects: None
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: linkerd-policy
labels:
app.kubernetes.io/part-of: Linkerd
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- deployments
verbs:
- get
- apiGroups:
- policy.linkerd.io
resources:
- authorizationpolicies
- httproutes
- meshtlsauthentications
- networkauthentications
- servers
- serverauthorizations
verbs:
- get
- list
- watch
- apiGroups:
- gateway.networking.k8s.io
resources:
- httproutes
- grpcroutes
verbs:
- get
- list
- watch
- apiGroups:
- policy.linkerd.io
resources:
- httproutes/status
verbs:
- patch
- apiGroups:
- gateway.networking.k8s.io
resources:
- httproutes/status
- grpcroutes/status
verbs:
- patch
- apiGroups:
- workload.linkerd.io
resources:
- externalworkloads
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: linkerd-destination-policy
labels:
app.kubernetes.io/part-of: Linkerd
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: linkerd-policy
subjects:
- kind: ServiceAccount
name: linkerd-destination
namespace: {{.Release.Namespace}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: remote-discovery
namespace: {{.Release.Namespace}}
labels:
app.kubernetes.io/part-of: Linkerd
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: linkerd-destination-remote-discovery
namespace: {{.Release.Namespace}}
labels:
app.kubernetes.io/part-of: Linkerd
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: remote-discovery
subjects:
- kind: ServiceAccount
name: linkerd-destination
namespace: {{.Release.Namespace}}


@ -0,0 +1,435 @@
---
###
### Destination Controller Service
###
kind: Service
apiVersion: v1
metadata:
name: linkerd-dst
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
type: ClusterIP
selector:
linkerd.io/control-plane-component: destination
ports:
- name: grpc
port: 8086
targetPort: 8086
---
kind: Service
apiVersion: v1
metadata:
name: linkerd-dst-headless
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
clusterIP: None
selector:
linkerd.io/control-plane-component: destination
ports:
- name: grpc
port: 8086
targetPort: 8086
---
kind: Service
apiVersion: v1
metadata:
name: linkerd-sp-validator
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
type: ClusterIP
selector:
linkerd.io/control-plane-component: destination
ports:
- name: sp-validator
port: 443
targetPort: sp-validator
---
kind: Service
apiVersion: v1
metadata:
name: linkerd-policy
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
clusterIP: None
selector:
linkerd.io/control-plane-component: destination
ports:
- name: grpc
port: 8090
targetPort: 8090
---
kind: Service
apiVersion: v1
metadata:
name: linkerd-policy-validator
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
type: ClusterIP
selector:
linkerd.io/control-plane-component: destination
ports:
- name: policy-https
port: 443
targetPort: policy-https
{{- if .Values.enablePodDisruptionBudget }}
---
kind: PodDisruptionBudget
apiVersion: policy/v1
metadata:
name: linkerd-dst
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
maxUnavailable: {{ .Values.controller.podDisruptionBudget.maxUnavailable }}
selector:
matchLabels:
linkerd.io/control-plane-component: destination
{{- end }}
---
{{- $tree := deepCopy . }}
{{ $_ := set $tree.Values.proxy "workloadKind" "deployment" -}}
{{ $_ := set $tree.Values.proxy "component" "linkerd-destination" -}}
{{ $_ := set $tree.Values.proxy "waitBeforeExitSeconds" 0 -}}
{{- if not (empty .Values.destinationProxyResources) }}
{{- $c := dig "cores" .Values.proxy.cores .Values.destinationProxyResources }}
{{- $_ := set $tree.Values.proxy "cores" $c }}
{{- $r := merge .Values.destinationProxyResources .Values.proxy.resources }}
{{- $_ := set $tree.Values.proxy "resources" $r }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
{{ include "partials.annotations.created-by" . }}
labels:
app.kubernetes.io/name: destination
app.kubernetes.io/part-of: Linkerd
app.kubernetes.io/version: {{.Values.linkerdVersion}}
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
name: linkerd-destination
namespace: {{ .Release.Namespace }}
spec:
replicas: {{.Values.controllerReplicas}}
revisionHistoryLimit: {{.Values.revisionHistoryLimit}}
selector:
matchLabels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- include "partials.proxy.labels" $tree.Values.proxy | nindent 6}}
{{- if .Values.deploymentStrategy }}
strategy:
{{- with .Values.deploymentStrategy }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- end }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/destination-rbac.yaml") . | sha256sum }}
{{ include "partials.annotations.created-by" . }}
{{- include "partials.proxy.annotations" . | nindent 8}}
{{- with .Values.podAnnotations }}{{ toYaml . | trim | nindent 8 }}{{- end }}
config.linkerd.io/default-inbound-policy: "all-unauthenticated"
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: {{.Release.Namespace}}
linkerd.io/workload-ns: {{.Release.Namespace}}
{{- include "partials.proxy.labels" $tree.Values.proxy | nindent 8}}
{{- with .Values.podLabels }}{{ toYaml . | trim | nindent 8 }}{{- end }}
spec:
{{- with .Values.runtimeClassName }}
runtimeClassName: {{ . | quote }}
{{- end }}
{{- if .Values.tolerations -}}
{{- include "linkerd.tolerations" . | nindent 6 }}
{{- end -}}
{{- include "linkerd.node-selector" . | nindent 6 }}
{{- $_ := set $tree "component" "destination" -}}
{{- include "linkerd.affinity" $tree | nindent 6 }}
containers:
{{- $_ := set $tree.Values.proxy "await" $tree.Values.proxy.await }}
{{- $_ := set $tree.Values.proxy "loadTrustBundleFromConfigMap" true }}
{{- $_ := set $tree.Values.proxy "podInboundPorts" "8086,8090,8443,9443,9990,9996,9997" }}
{{- $_ := set $tree.Values.proxy "outboundDiscoveryCacheUnusedTimeout" "5s" }}
{{- $_ := set $tree.Values.proxy "inboundDiscoveryCacheUnusedTimeout" "90s" }}
{{- /*
The pod needs to accept webhook traffic, and we can't rely on that originating in the
cluster network.
*/}}
{{- $_ := set $tree.Values.proxy "defaultInboundPolicy" "all-unauthenticated" }}
{{- $_ := set $tree.Values.proxy "capabilities" (dict "drop" (list "ALL")) }}
{{- if not $tree.Values.proxy.nativeSidecar }}
- {{- include "partials.proxy" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{- end }}
- args:
- destination
- -addr=:8086
- -controller-namespace={{.Release.Namespace}}
- -enable-h2-upgrade={{.Values.enableH2Upgrade}}
- -log-level={{.Values.controllerLogLevel}}
- -log-format={{.Values.controllerLogFormat}}
- -enable-endpoint-slices={{.Values.enableEndpointSlices}}
- -cluster-domain={{.Values.clusterDomain}}
- -identity-trust-domain={{.Values.identityTrustDomain | default .Values.clusterDomain}}
- -default-opaque-ports={{.Values.proxy.opaquePorts}}
- -enable-ipv6={{not .Values.disableIPv6}}
- -enable-pprof={{.Values.enablePprof | default false}}
{{- if (.Values.destinationController).meshedHttp2ClientProtobuf }}
- --meshed-http2-client-params={{ toJson .Values.destinationController.meshedHttp2ClientProtobuf }}
{{- end }}
{{- range (.Values.destinationController).additionalArgs }}
- {{ . }}
{{- end }}
{{- range (.Values.destinationController).experimentalArgs }}
- {{ . }}
{{- end }}
{{- if or (.Values.destinationController).additionalEnv (.Values.destinationController).experimentalEnv }}
env:
{{- with (.Values.destinationController).additionalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
{{- with (.Values.destinationController).experimentalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
{{- end }}
{{- include "partials.linkerd.trace" . | nindent 8 -}}
image: {{.Values.controllerImage}}:{{.Values.controllerImageVersion | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.imagePullPolicy}}
livenessProbe:
httpGet:
path: /ping
port: 9996
initialDelaySeconds: 10
{{- with (.Values.destinationController.livenessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
name: destination
ports:
- containerPort: 8086
name: grpc
- containerPort: 9996
name: admin-http
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: 9996
{{- with (.Values.destinationController.readinessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- if .Values.destinationResources -}}
{{- include "partials.resources" .Values.destinationResources | nindent 8 }}
{{- end }}
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: {{.Values.controllerUID}}
{{- if ge (int .Values.controllerGID) 0 }}
runAsGroup: {{.Values.controllerGID}}
{{- end }}
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
- args:
- sp-validator
- -log-level={{.Values.controllerLogLevel}}
- -log-format={{.Values.controllerLogFormat}}
- -enable-pprof={{.Values.enablePprof | default false}}
{{- if or (.Values.spValidator).additionalEnv (.Values.spValidator).experimentalEnv }}
env:
{{- with (.Values.spValidator).additionalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
{{- with (.Values.spValidator).experimentalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
{{- end }}
image: {{.Values.controllerImage}}:{{.Values.controllerImageVersion | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.imagePullPolicy}}
livenessProbe:
httpGet:
path: /ping
port: 9997
initialDelaySeconds: 10
{{- with ((.Values.spValidator).livenessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
name: sp-validator
ports:
- containerPort: 8443
name: sp-validator
- containerPort: 9997
name: admin-http
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: 9997
{{- with ((.Values.spValidator).readinessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- if .Values.spValidatorResources -}}
{{- include "partials.resources" .Values.spValidatorResources | nindent 8 }}
{{- end }}
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: {{.Values.controllerUID}}
{{- if ge (int .Values.controllerGID) 0 }}
runAsGroup: {{.Values.controllerGID}}
{{- end }}
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /var/run/linkerd/tls
name: sp-tls
readOnly: true
- args:
- --admin-addr={{ if .Values.disableIPv6 }}0.0.0.0{{ else }}[::]{{ end }}:9990
- --control-plane-namespace={{.Release.Namespace}}
- --grpc-addr={{ if .Values.disableIPv6 }}0.0.0.0{{ else }}[::]{{ end }}:8090
- --server-addr={{ if .Values.disableIPv6 }}0.0.0.0{{ else }}[::]{{ end }}:9443
- --server-tls-key=/var/run/linkerd/tls/tls.key
- --server-tls-certs=/var/run/linkerd/tls/tls.crt
- --cluster-networks={{.Values.clusterNetworks}}
- --identity-domain={{.Values.identityTrustDomain | default .Values.clusterDomain}}
- --cluster-domain={{.Values.clusterDomain}}
- --default-policy={{.Values.proxy.defaultInboundPolicy}}
- --log-level={{.Values.policyController.logLevel | default "linkerd=info,warn"}}
- --log-format={{.Values.controllerLogFormat}}
- --default-opaque-ports={{.Values.proxy.opaquePorts}}
{{- if .Values.policyController.probeNetworks }}
- --probe-networks={{.Values.policyController.probeNetworks | join ","}}
{{- end}}
{{- range .Values.policyController.additionalArgs }}
- {{ . }}
{{- end }}
{{- range .Values.policyController.experimentalArgs }}
- {{ . }}
{{- end }}
image: {{.Values.policyController.image.name}}:{{.Values.policyController.image.version | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.policyController.image.pullPolicy | default .Values.imagePullPolicy}}
livenessProbe:
httpGet:
path: /live
port: admin-http
{{- with (.Values.policyController.livenessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
name: policy
ports:
- containerPort: 8090
name: grpc
- containerPort: 9990
name: admin-http
- containerPort: 9443
name: policy-https
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: admin-http
initialDelaySeconds: 10
{{- with (.Values.policyController.readinessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- if .Values.policyController.resources }}
{{- include "partials.resources" .Values.policyController.resources | nindent 8 }}
{{- end }}
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: {{.Values.controllerUID}}
{{- if ge (int .Values.controllerGID) 0 }}
runAsGroup: {{.Values.controllerGID}}
{{- end }}
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /var/run/linkerd/tls
name: policy-tls
readOnly: true
initContainers:
{{ if .Values.cniEnabled -}}
- {{- include "partials.network-validator" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ else -}}
{{- /*
The destination controller needs to connect to the Kubernetes API before the proxy is able
to proxy requests, so we always skip these connections.
*/}}
{{- $_ := set $tree.Values.proxyInit "ignoreOutboundPorts" .Values.proxyInit.kubeAPIServerPorts -}}
- {{- include "partials.proxy-init" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{- if $tree.Values.proxy.nativeSidecar }}
{{- $_ := set $tree.Values.proxy "startupProbeInitialDelaySeconds" 35 }}
{{- $_ := set $tree.Values.proxy "startupProbePeriodSeconds" 5 }}
{{- $_ := set $tree.Values.proxy "startupProbeFailureThreshold" 20 }}
- {{- include "partials.proxy" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{- if .Values.priorityClassName -}}
priorityClassName: {{ .Values.priorityClassName }}
{{ end -}}
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: linkerd-destination
volumes:
- name: sp-tls
secret:
secretName: linkerd-sp-validator-k8s-tls
- name: policy-tls
secret:
secretName: linkerd-policy-validator-k8s-tls
{{ if not .Values.cniEnabled -}}
- {{- include "partials.proxyInit.volumes.xtables" . | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{if .Values.identity.serviceAccountTokenProjection -}}
- {{- include "partials.proxy.volumes.service-account-token" . | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
- {{- include "partials.proxy.volumes.identity" . | indent 8 | trimPrefix (repeat 7 " ") }}


@ -0,0 +1,78 @@
{{ if not .Values.disableHeartBeat -}}
---
###
### Heartbeat RBAC
###
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: linkerd-heartbeat
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
resourceNames: ["linkerd-config"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: linkerd-heartbeat
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
roleRef:
kind: Role
name: linkerd-heartbeat
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: linkerd-heartbeat
namespace: {{.Release.Namespace}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: linkerd-heartbeat
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["list"]
- apiGroups: ["linkerd.io"]
resources: ["serviceprofiles"]
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: linkerd-heartbeat
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
roleRef:
kind: ClusterRole
name: linkerd-heartbeat
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: linkerd-heartbeat
namespace: {{.Release.Namespace}}
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: linkerd-heartbeat
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: heartbeat
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- include "partials.image-pull-secrets" .Values.imagePullSecrets }}
{{- end }}


@ -0,0 +1,94 @@
{{ if not .Values.disableHeartBeat -}}
---
###
### Heartbeat
###
apiVersion: batch/v1
kind: CronJob
metadata:
name: linkerd-heartbeat
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: heartbeat
app.kubernetes.io/part-of: Linkerd
app.kubernetes.io/version: {{.Values.linkerdVersion}}
linkerd.io/control-plane-component: heartbeat
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
concurrencyPolicy: Replace
{{ if .Values.heartbeatSchedule -}}
schedule: "{{.Values.heartbeatSchedule}}"
{{ else -}}
schedule: "{{ dateInZone "04 15 * * *" (now | mustDateModify "+10m") "UTC"}}"
{{ end -}}
successfulJobsHistoryLimit: 0
jobTemplate:
spec:
template:
metadata:
labels:
linkerd.io/control-plane-component: heartbeat
linkerd.io/workload-ns: {{.Release.Namespace}}
{{- with .Values.podLabels }}{{ toYaml . | trim | nindent 12 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
{{- with .Values.podAnnotations }}{{ toYaml . | trim | nindent 12 }}{{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end -}}
{{- with .Values.runtimeClassName }}
runtimeClassName: {{ . | quote }}
{{- end }}
{{- if .Values.tolerations -}}
{{- include "linkerd.tolerations" . | nindent 10 }}
{{- end -}}
{{- include "linkerd.node-selector" . | nindent 10 }}
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: linkerd-heartbeat
restartPolicy: Never
containers:
- name: heartbeat
image: {{.Values.controllerImage}}:{{.Values.controllerImageVersion | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.imagePullPolicy}}
env:
- name: LINKERD_DISABLED
value: "the heartbeat controller does not use the proxy"
{{- with (.Values.heartbeat).additionalEnv }}
{{- toYaml . | nindent 12 -}}
{{- end }}
{{- with (.Values.heartbeat).experimentalEnv }}
{{- toYaml . | nindent 12 -}}
{{- end }}
args:
- "heartbeat"
- "-controller-namespace={{.Release.Namespace}}"
- "-log-level={{.Values.controllerLogLevel}}"
- "-log-format={{.Values.controllerLogFormat}}"
{{- if .Values.prometheusUrl }}
- "-prometheus-url={{.Values.prometheusUrl}}"
{{- else }}
- "-prometheus-url=http://prometheus.linkerd-viz.svc.{{.Values.clusterDomain}}:9090"
{{- end }}
{{- if .Values.heartbeatResources -}}
{{- include "partials.resources" .Values.heartbeatResources | nindent 12 }}
{{- end }}
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: {{.Values.controllerUID}}
{{- if ge (int .Values.controllerGID) 0 }}
runAsGroup: {{.Values.controllerGID}}
{{- end }}
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
{{- end }}
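
When `heartbeatSchedule` is unset, the cron expression is computed from the render time plus ten minutes using the Go time layout `"04 15 * * *"` (minute, then hour, in UTC). For example, a release rendered at 13:25 UTC would produce:

```yaml
schedule: "35 13 * * *"   # 13:25 UTC + 10m, formatted as "minute hour * * *"
```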


@ -0,0 +1,49 @@
---
###
### Identity Controller Service RBAC
###
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: linkerd-{{.Release.Namespace}}-identity
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups: ["authentication.k8s.io"]
resources: ["tokenreviews"]
verbs: ["create"]
# TODO(ver) Restrict this to the Linkerd namespace. See
# https://github.com/linkerd/linkerd2/issues/9367
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: linkerd-{{.Release.Namespace}}-identity
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: linkerd-{{.Release.Namespace}}-identity
subjects:
- kind: ServiceAccount
name: linkerd-identity
namespace: {{.Release.Namespace}}
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: linkerd-identity
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- include "partials.image-pull-secrets" .Values.imagePullSecrets }}


@ -0,0 +1,272 @@
{{if .Values.identity -}}
---
###
### Identity Controller Service
###
{{ if and (.Values.identity.issuer) (eq .Values.identity.issuer.scheme "linkerd.io/tls") -}}
kind: Secret
apiVersion: v1
metadata:
name: linkerd-identity-issuer
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
data:
crt.pem: {{b64enc (required "Please provide the identity issuer certificate" .Values.identity.issuer.tls.crtPEM | trim)}}
  key.pem: {{b64enc (required "Please provide the identity issuer private key" .Values.identity.issuer.tls.keyPEM | trim)}}
---
{{- end}}
{{ if not (.Values.identity.externalCA) -}}
kind: ConfigMap
apiVersion: v1
metadata:
name: linkerd-identity-trust-roots
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
data:
ca-bundle.crt: |-{{.Values.identityTrustAnchorsPEM | trim | nindent 4}}
---
{{- end}}
kind: Service
apiVersion: v1
metadata:
name: linkerd-identity
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
type: ClusterIP
selector:
linkerd.io/control-plane-component: identity
ports:
- name: grpc
port: 8080
targetPort: 8080
---
kind: Service
apiVersion: v1
metadata:
name: linkerd-identity-headless
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
clusterIP: None
selector:
linkerd.io/control-plane-component: identity
ports:
- name: grpc
port: 8080
targetPort: 8080
---
{{- if .Values.enablePodDisruptionBudget }}
kind: PodDisruptionBudget
apiVersion: policy/v1
metadata:
name: linkerd-identity
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
maxUnavailable: {{ .Values.controller.podDisruptionBudget.maxUnavailable }}
selector:
matchLabels:
linkerd.io/control-plane-component: identity
---
{{- end }}
{{- $tree := deepCopy . }}
{{ $_ := set $tree.Values.proxy "workloadKind" "deployment" -}}
{{ $_ := set $tree.Values.proxy "component" "linkerd-identity" -}}
{{ $_ := set $tree.Values.proxy "waitBeforeExitSeconds" 0 -}}
{{- if not (empty .Values.identityProxyResources) }}
{{- $c := dig "cores" .Values.proxy.cores .Values.identityProxyResources }}
{{- $_ := set $tree.Values.proxy "cores" $c }}
{{- $r := merge .Values.identityProxyResources .Values.proxy.resources }}
{{- $_ := set $tree.Values.proxy "resources" $r }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
{{ include "partials.annotations.created-by" . }}
labels:
app.kubernetes.io/name: identity
app.kubernetes.io/part-of: Linkerd
app.kubernetes.io/version: {{.Values.linkerdVersion}}
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
name: linkerd-identity
namespace: {{ .Release.Namespace }}
spec:
replicas: {{.Values.controllerReplicas}}
revisionHistoryLimit: {{.Values.revisionHistoryLimit}}
selector:
matchLabels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- include "partials.proxy.labels" $tree.Values.proxy | nindent 6}}
{{- if .Values.deploymentStrategy }}
strategy:
{{- with .Values.deploymentStrategy }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- end }}
template:
metadata:
annotations:
{{ include "partials.annotations.created-by" . }}
{{- include "partials.proxy.annotations" . | nindent 8}}
{{- with .Values.podAnnotations }}{{ toYaml . | trim | nindent 8 }}{{- end }}
config.linkerd.io/default-inbound-policy: "all-unauthenticated"
labels:
linkerd.io/control-plane-component: identity
linkerd.io/control-plane-ns: {{.Release.Namespace}}
linkerd.io/workload-ns: {{.Release.Namespace}}
{{- include "partials.proxy.labels" $tree.Values.proxy | nindent 8}}
{{- with .Values.podLabels }}{{ toYaml . | trim | nindent 8 }}{{- end }}
spec:
{{- with .Values.runtimeClassName }}
runtimeClassName: {{ . | quote }}
{{- end }}
{{- if .Values.tolerations -}}
{{- include "linkerd.tolerations" . | nindent 6 }}
{{- end -}}
{{- include "linkerd.node-selector" . | nindent 6 }}
{{- $_ := set $tree "component" "identity" -}}
{{- include "linkerd.affinity" $tree | nindent 6 }}
containers:
- args:
- identity
- -log-level={{.Values.controllerLogLevel}}
- -log-format={{.Values.controllerLogFormat}}
- -controller-namespace={{.Release.Namespace}}
- -identity-trust-domain={{.Values.identityTrustDomain | default .Values.clusterDomain}}
- -identity-issuance-lifetime={{.Values.identity.issuer.issuanceLifetime}}
- -identity-clock-skew-allowance={{.Values.identity.issuer.clockSkewAllowance}}
- -identity-scheme={{.Values.identity.issuer.scheme}}
- -enable-pprof={{.Values.enablePprof | default false}}
- -kube-apiclient-qps={{.Values.identity.kubeAPI.clientQPS}}
- -kube-apiclient-burst={{.Values.identity.kubeAPI.clientBurst}}
{{- include "partials.linkerd.trace" . | nindent 8 -}}
env:
- name: LINKERD_DISABLED
value: "linkerd-await cannot block the identity controller"
{{- with (.Values.identity).additionalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
{{- with (.Values.identity).experimentalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
image: {{.Values.controllerImage}}:{{.Values.controllerImageVersion | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.imagePullPolicy}}
livenessProbe:
httpGet:
path: /ping
port: 9990
initialDelaySeconds: 10
{{- with (.Values.identity.livenessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
name: identity
ports:
- containerPort: 8080
name: grpc
- containerPort: 9990
name: admin-http
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: 9990
{{- with (.Values.identity.readinessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- if .Values.identityResources -}}
{{- include "partials.resources" .Values.identityResources | nindent 8 }}
{{- end }}
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: {{.Values.controllerUID}}
{{- if ge (int .Values.controllerGID) 0 }}
runAsGroup: {{.Values.controllerGID}}
{{- end }}
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /var/run/linkerd/identity/issuer
name: identity-issuer
- mountPath: /var/run/linkerd/identity/trust-roots/
name: trust-roots
{{- $_ := set $tree.Values.proxy "await" false }}
{{- $_ := set $tree.Values.proxy "loadTrustBundleFromConfigMap" true }}
{{- $_ := set $tree.Values.proxy "podInboundPorts" "8080,9990" }}
{{- $_ := set $tree.Values.proxy "nativeSidecar" false }}
{{- /*
The identity controller cannot discover policies, so we configure it with defaults that
enforce TLS on the identity service.
*/}}
{{- $_ := set $tree.Values.proxy "defaultInboundPolicy" "all-unauthenticated" }}
{{- $_ := set $tree.Values.proxy "requireTLSOnInboundPorts" "8080" }}
{{- $_ := set $tree.Values.proxy "capabilities" (dict "drop" (list "ALL")) }}
{{- $_ := set $tree.Values.proxy "outboundDiscoveryCacheUnusedTimeout" "5s" }}
{{- $_ := set $tree.Values.proxy "inboundDiscoveryCacheUnusedTimeout" "90s" }}
- {{- include "partials.proxy" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
initContainers:
{{ if .Values.cniEnabled -}}
- {{- include "partials.network-validator" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ else -}}
{{- /*
The identity controller needs to connect to the Kubernetes API before the proxy is able to
proxy requests, so we always skip these connections. The identity controller makes no other
      outbound connections (so it's not important to persist any other skip ports here).
*/}}
{{- $_ := set $tree.Values.proxyInit "ignoreOutboundPorts" .Values.proxyInit.kubeAPIServerPorts -}}
- {{- include "partials.proxy-init" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{- if .Values.priorityClassName -}}
priorityClassName: {{ .Values.priorityClassName }}
{{ end -}}
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: linkerd-identity
volumes:
- name: identity-issuer
secret:
secretName: linkerd-identity-issuer
- configMap:
name: linkerd-identity-trust-roots
name: trust-roots
{{ if not .Values.cniEnabled -}}
- {{- include "partials.proxyInit.volumes.xtables" . | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{if .Values.identity.serviceAccountTokenProjection -}}
- {{- include "partials.proxy.volumes.service-account-token" . | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
- {{- include "partials.proxy.volumes.identity" . | indent 8 | trimPrefix (repeat 7 " ") }}
{{end -}}


@ -0,0 +1,18 @@
{{- if eq .Release.Service "CLI" -}}
---
###
### Linkerd Namespace
###
kind: Namespace
apiVersion: v1
metadata:
name: {{ .Release.Namespace }}
annotations:
linkerd.io/inject: disabled
labels:
linkerd.io/is-control-plane: "true"
config.linkerd.io/admission-webhooks: disabled
linkerd.io/control-plane-ns: {{.Release.Namespace}}
    {{- /* linkerd-init requires extended capabilities and so requires privileged mode */}}
pod-security.kubernetes.io/enforce: {{ ternary "restricted" "privileged" .Values.cniEnabled }}
{{ end -}}


@ -0,0 +1,128 @@
{{- $podMonitor := .Values.podMonitor -}}
{{- if and $podMonitor.enabled $podMonitor.controller.enabled }}
---
###
### Prometheus Operator PodMonitor for Linkerd control-plane
###
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: "linkerd-controller"
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-ns: {{ .Release.Namespace }}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- with .Values.podMonitor.labels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
namespaceSelector: {{ tpl .Values.podMonitor.controller.namespaceSelector . | nindent 4 }}
selector:
matchLabels: {}
podMetricsEndpoints:
- interval: {{ $podMonitor.scrapeInterval }}
scrapeTimeout: {{ $podMonitor.scrapeTimeout }}
relabelings:
- sourceLabels:
- __meta_kubernetes_pod_container_port_name
action: keep
regex: admin-http
- sourceLabels:
- __meta_kubernetes_pod_container_name
action: replace
targetLabel: component
{{- end }}
{{- if and $podMonitor.enabled $podMonitor.serviceMirror.enabled }}
---
###
### Prometheus Operator PodMonitor for Linkerd Service Mirror (multi-cluster)
###
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: "linkerd-service-mirror"
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-ns: {{ .Release.Namespace }}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- with .Values.podMonitor.labels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
namespaceSelector:
any: true
selector:
matchLabels: {}
podMetricsEndpoints:
- interval: {{ $podMonitor.scrapeInterval }}
scrapeTimeout: {{ $podMonitor.scrapeTimeout }}
relabelings:
- sourceLabels:
- __meta_kubernetes_pod_label_linkerd_io_control_plane_component
- __meta_kubernetes_pod_container_port_name
action: keep
regex: linkerd-service-mirror;admin-http$
- sourceLabels:
- __meta_kubernetes_pod_container_name
action: replace
targetLabel: component
{{- end }}
{{- if and $podMonitor.enabled $podMonitor.proxy.enabled }}
---
###
### Prometheus Operator PodMonitor Linkerd data-plane
###
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: "linkerd-proxy"
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-ns: {{ .Release.Namespace }}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- with .Values.podMonitor.labels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
namespaceSelector:
any: true
selector:
matchLabels: {}
podMetricsEndpoints:
- interval: {{ $podMonitor.scrapeInterval }}
scrapeTimeout: {{ $podMonitor.scrapeTimeout }}
relabelings:
- sourceLabels:
- __meta_kubernetes_pod_container_name
- __meta_kubernetes_pod_container_port_name
- __meta_kubernetes_pod_label_linkerd_io_control_plane_ns
action: keep
regex: ^linkerd-proxy;linkerd-admin;{{ .Release.Namespace }}$
- sourceLabels: [ __meta_kubernetes_namespace ]
action: replace
targetLabel: namespace
- sourceLabels: [ __meta_kubernetes_pod_name ]
action: replace
targetLabel: pod
- sourceLabels: [ __meta_kubernetes_pod_label_linkerd_io_proxy_job ]
action: replace
targetLabel: k8s_job
- action: labeldrop
regex: __meta_kubernetes_pod_label_linkerd_io_proxy_job
- action: labelmap
regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
- action: labeldrop
regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
- action: labelmap
regex: __meta_kubernetes_pod_label_linkerd_io_(.+)
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
replacement: __tmp_pod_label_$1
- action: labelmap
regex: __tmp_pod_label_linkerd_io_(.+)
replacement: __tmp_pod_label_$1
- action: labeldrop
regex: __tmp_pod_label_linkerd_io_(.+)
- action: labelmap
regex: __tmp_pod_label_(.+)
{{- end }}
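
All three PodMonitors are gated on `podMonitor.enabled` plus their own toggle, and they share the scrape interval and timeout. A hedged sketch of the values these templates read; the defaults shipped with the chart may differ:

```yaml
podMonitor:
  enabled: true
  scrapeInterval: 10s
  scrapeTimeout: 10s
  labels: {}                  # extra labels copied onto each PodMonitor
  controller:
    enabled: true
    namespaceSelector: |      # rendered through tpl, so release fields may be referenced
      matchNames:
        - {{ .Release.Namespace }}
  serviceMirror:
    enabled: true
  proxy:
    enabled: true
```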


@ -0,0 +1,120 @@
---
###
### Proxy Injector RBAC
###
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: linkerd-{{.Release.Namespace}}-proxy-injector
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: [""]
resources: ["namespaces", "replicationcontrollers"]
verbs: ["list", "get", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "watch"]
- apiGroups: ["extensions", "apps"]
resources: ["deployments", "replicasets", "daemonsets", "statefulsets"]
verbs: ["list", "get", "watch"]
- apiGroups: ["extensions", "batch"]
resources: ["cronjobs", "jobs"]
verbs: ["list", "get", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: linkerd-{{.Release.Namespace}}-proxy-injector
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
subjects:
- kind: ServiceAccount
name: linkerd-proxy-injector
namespace: {{.Release.Namespace}}
apiGroup: ""
roleRef:
kind: ClusterRole
name: linkerd-{{.Release.Namespace}}-proxy-injector
apiGroup: rbac.authorization.k8s.io
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: linkerd-proxy-injector
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- include "partials.image-pull-secrets" .Values.imagePullSecrets }}
---
{{- $host := printf "linkerd-proxy-injector.%s.svc" .Release.Namespace }}
{{- $ca := genSelfSignedCert $host (list) (list $host) 365 }}
{{- if (not .Values.proxyInjector.externalSecret) }}
kind: Secret
apiVersion: v1
metadata:
name: linkerd-proxy-injector-k8s-tls
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
type: kubernetes.io/tls
data:
tls.crt: {{ ternary (b64enc (trim $ca.Cert)) (b64enc (trim .Values.proxyInjector.crtPEM)) (empty .Values.proxyInjector.crtPEM) }}
tls.key: {{ ternary (b64enc (trim $ca.Key)) (b64enc (trim .Values.proxyInjector.keyPEM)) (empty .Values.proxyInjector.keyPEM) }}
---
{{- end }}
{{- include "linkerd.webhook.validation" .Values.proxyInjector }}
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: linkerd-proxy-injector-webhook-config
{{- if or (.Values.proxyInjector.injectCaFrom) (.Values.proxyInjector.injectCaFromSecret) }}
annotations:
{{- if .Values.proxyInjector.injectCaFrom }}
cert-manager.io/inject-ca-from: {{ .Values.proxyInjector.injectCaFrom }}
{{- end }}
{{- if .Values.proxyInjector.injectCaFromSecret }}
cert-manager.io/inject-ca-from-secret: {{ .Values.proxyInjector.injectCaFromSecret }}
{{- end }}
{{- end }}
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
webhooks:
- name: linkerd-proxy-injector.linkerd.io
namespaceSelector:
{{- toYaml .Values.proxyInjector.namespaceSelector | trim | nindent 4 }}
objectSelector:
{{- toYaml .Values.proxyInjector.objectSelector | trim | nindent 4 }}
clientConfig:
service:
name: linkerd-proxy-injector
namespace: {{ .Release.Namespace }}
path: "/"
{{- if and (empty .Values.proxyInjector.injectCaFrom) (empty .Values.proxyInjector.injectCaFromSecret) }}
caBundle: {{ ternary (b64enc (trim $ca.Cert)) (b64enc (trim .Values.proxyInjector.caBundle)) (empty .Values.proxyInjector.caBundle) }}
{{- end }}
failurePolicy: {{.Values.webhookFailurePolicy}}
admissionReviewVersions: ["v1", "v1beta1"]
rules:
- operations: [ "CREATE" ]
apiGroups: [""]
apiVersions: ["v1"]
resources: ["pods", "services"]
scope: "Namespaced"
sideEffects: None
timeoutSeconds: {{ .Values.proxyInjector.timeoutSeconds | default 10 }}

View File

@ -0,0 +1,222 @@
---
###
### Proxy Injector
###
{{- $tree := deepCopy . }}
{{ $_ := set $tree.Values.proxy "workloadKind" "deployment" -}}
{{ $_ := set $tree.Values.proxy "component" "linkerd-proxy-injector" -}}
{{ $_ := set $tree.Values.proxy "waitBeforeExitSeconds" 0 -}}
{{- if not (empty .Values.proxyInjectorProxyResources) }}
{{- $c := dig "cores" .Values.proxy.cores .Values.proxyInjectorProxyResources }}
{{- $_ := set $tree.Values.proxy "cores" $c }}
{{- $r := merge .Values.proxyInjectorProxyResources .Values.proxy.resources }}
{{- $_ := set $tree.Values.proxy "resources" $r }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
{{ include "partials.annotations.created-by" . }}
labels:
app.kubernetes.io/name: proxy-injector
app.kubernetes.io/part-of: Linkerd
app.kubernetes.io/version: {{.Values.linkerdVersion}}
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
name: linkerd-proxy-injector
namespace: {{ .Release.Namespace }}
spec:
replicas: {{.Values.controllerReplicas}}
revisionHistoryLimit: {{.Values.revisionHistoryLimit}}
selector:
matchLabels:
linkerd.io/control-plane-component: proxy-injector
{{- if .Values.deploymentStrategy }}
strategy:
{{- with .Values.deploymentStrategy }}{{ toYaml . | trim | nindent 4 }}{{- end }}
{{- end }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/proxy-injector-rbac.yaml") . | sha256sum }}
{{ include "partials.annotations.created-by" . }}
{{- include "partials.proxy.annotations" . | nindent 8}}
{{- with .Values.podAnnotations }}{{ toYaml . | trim | nindent 8 }}{{- end }}
config.linkerd.io/opaque-ports: "8443"
config.linkerd.io/default-inbound-policy: "all-unauthenticated"
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
linkerd.io/workload-ns: {{.Release.Namespace}}
{{- include "partials.proxy.labels" $tree.Values.proxy | nindent 8}}
{{- with .Values.podLabels }}{{ toYaml . | trim | nindent 8 }}{{- end }}
spec:
{{- with .Values.runtimeClassName }}
runtimeClassName: {{ . | quote }}
{{- end }}
{{- if .Values.tolerations -}}
{{- include "linkerd.tolerations" . | nindent 6 }}
{{- end -}}
{{- include "linkerd.node-selector" . | nindent 6 }}
{{- $_ := set $tree "component" "proxy-injector" -}}
{{- include "linkerd.affinity" $tree | nindent 6 }}
containers:
{{- $_ := set $tree.Values.proxy "await" $tree.Values.proxy.await }}
{{- $_ := set $tree.Values.proxy "loadTrustBundleFromConfigMap" true }}
{{- $_ := set $tree.Values.proxy "podInboundPorts" "8443,9995" }}
{{- /*
The pod needs to accept webhook traffic, and we can't rely on that originating in the
cluster network.
*/}}
{{- $_ := set $tree.Values.proxy "defaultInboundPolicy" "all-unauthenticated" }}
{{- $_ := set $tree.Values.proxy "capabilities" (dict "drop" (list "ALL")) }}
{{- $_ := set $tree.Values.proxy "outboundDiscoveryCacheUnusedTimeout" "5s" }}
{{- $_ := set $tree.Values.proxy "inboundDiscoveryCacheUnusedTimeout" "90s" }}
{{- if not $tree.Values.proxy.nativeSidecar }}
- {{- include "partials.proxy" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{- end }}
- args:
- proxy-injector
- -log-level={{.Values.controllerLogLevel}}
- -log-format={{.Values.controllerLogFormat}}
- -linkerd-namespace={{.Release.Namespace}}
- -enable-pprof={{.Values.enablePprof | default false}}
{{- if or (.Values.proxyInjector).additionalEnv (.Values.proxyInjector).experimentalEnv }}
env:
{{- with (.Values.proxyInjector).additionalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
{{- with (.Values.proxyInjector).experimentalEnv }}
{{- toYaml . | nindent 8 -}}
{{- end }}
{{- end }}
image: {{.Values.controllerImage}}:{{.Values.controllerImageVersion | default .Values.linkerdVersion}}
imagePullPolicy: {{.Values.imagePullPolicy}}
livenessProbe:
httpGet:
path: /ping
port: 9995
initialDelaySeconds: 10
{{- with (.Values.proxyInjector.livenessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
name: proxy-injector
ports:
- containerPort: 8443
name: proxy-injector
- containerPort: 9995
name: admin-http
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: 9995
{{- with (.Values.proxyInjector.readinessProbe).timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- if .Values.proxyInjectorResources -}}
{{- include "partials.resources" .Values.proxyInjectorResources | nindent 8 }}
{{- end }}
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: {{.Values.controllerUID}}
{{- if ge (int .Values.controllerGID) 0 }}
runAsGroup: {{.Values.controllerGID}}
{{- end }}
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /var/run/linkerd/config
name: config
- mountPath: /var/run/linkerd/identity/trust-roots
name: trust-roots
- mountPath: /var/run/linkerd/tls
name: tls
readOnly: true
initContainers:
{{ if .Values.cniEnabled -}}
- {{- include "partials.network-validator" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ else -}}
{{- /*
The controller needs to connect to the Kubernetes API. There's no reason
to put the proxy in the way of that.
*/}}
{{- $_ := set $tree.Values.proxyInit "ignoreOutboundPorts" .Values.proxyInit.kubeAPIServerPorts -}}
- {{- include "partials.proxy-init" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{- if $tree.Values.proxy.nativeSidecar }}
{{- $_ := set $tree.Values.proxy "startupProbeInitialDelaySeconds" 35 }}
{{- $_ := set $tree.Values.proxy "startupProbePeriodSeconds" 5 }}
{{- $_ := set $tree.Values.proxy "startupProbeFailureThreshold" 20 }}
- {{- include "partials.proxy" $tree | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{- if .Values.priorityClassName -}}
priorityClassName: {{ .Values.priorityClassName }}
{{ end -}}
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: linkerd-proxy-injector
volumes:
- configMap:
name: linkerd-config
name: config
- configMap:
name: linkerd-identity-trust-roots
name: trust-roots
- name: tls
secret:
secretName: linkerd-proxy-injector-k8s-tls
{{ if not .Values.cniEnabled -}}
- {{- include "partials.proxyInit.volumes.xtables" . | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
{{if .Values.identity.serviceAccountTokenProjection -}}
- {{- include "partials.proxy.volumes.service-account-token" . | indent 8 | trimPrefix (repeat 7 " ") }}
{{ end -}}
- {{- include "partials.proxy.volumes.identity" . | indent 8 | trimPrefix (repeat 7 " ") }}
---
kind: Service
apiVersion: v1
metadata:
name: linkerd-proxy-injector
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
config.linkerd.io/opaque-ports: "443"
spec:
type: ClusterIP
selector:
linkerd.io/control-plane-component: proxy-injector
ports:
- name: proxy-injector
port: 443
targetPort: proxy-injector
{{- if .Values.enablePodDisruptionBudget }}
---
kind: PodDisruptionBudget
apiVersion: policy/v1
metadata:
name: linkerd-proxy-injector
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-component: proxy-injector
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
annotations:
{{ include "partials.annotations.created-by" . }}
spec:
maxUnavailable: {{ .Values.controller.podDisruptionBudget.maxUnavailable }}
selector:
matchLabels:
linkerd.io/control-plane-component: proxy-injector
{{- end }}

View File

@ -0,0 +1,119 @@
{{ if .Values.enablePSP -}}
---
###
### Control Plane PSP
###
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: linkerd-{{.Release.Namespace}}-control-plane
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: "runtime/default"
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
spec:
{{- if or .Values.proxyInit.closeWaitTimeoutSecs .Values.proxyInit.runAsRoot }}
allowPrivilegeEscalation: true
{{- else }}
allowPrivilegeEscalation: false
{{- end }}
readOnlyRootFilesystem: true
{{- if empty .Values.cniEnabled }}
allowedCapabilities:
- NET_ADMIN
- NET_RAW
{{- end}}
requiredDropCapabilities:
- ALL
hostNetwork: false
hostIPC: false
hostPID: false
seLinux:
rule: RunAsAny
runAsUser:
{{- if .Values.cniEnabled }}
rule: MustRunAsNonRoot
{{- else }}
rule: RunAsAny
{{- end }}
runAsGroup:
{{- if .Values.cniEnabled }}
rule: MustRunAs
ranges:
- min: 1000
max: 999999
{{- else }}
rule: RunAsAny
{{- end }}
supplementalGroups:
rule: MustRunAs
ranges:
{{- if .Values.cniEnabled }}
- min: 10001
max: 65535
{{- else }}
- min: 1
max: 65535
{{- end }}
fsGroup:
rule: MustRunAs
ranges:
{{- if .Values.cniEnabled }}
- min: 10001
max: 65535
{{- else }}
- min: 1
max: 65535
{{- end }}
volumes:
- configMap
- emptyDir
- secret
- projected
- downwardAPI
- persistentVolumeClaim
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: linkerd-psp
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
rules:
- apiGroups: ['policy', 'extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- linkerd-{{.Release.Namespace}}-control-plane
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: linkerd-psp
namespace: {{ .Release.Namespace }}
labels:
linkerd.io/control-plane-ns: {{.Release.Namespace}}
{{- with .Values.commonLabels }}{{ toYaml . | trim | nindent 4 }}{{- end }}
roleRef:
kind: Role
name: linkerd-psp
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: linkerd-destination
namespace: {{.Release.Namespace}}
{{ if not .Values.disableHeartBeat -}}
- kind: ServiceAccount
name: linkerd-heartbeat
namespace: {{.Release.Namespace}}
{{ end -}}
- kind: ServiceAccount
name: linkerd-identity
namespace: {{.Release.Namespace}}
- kind: ServiceAccount
name: linkerd-proxy-injector
namespace: {{.Release.Namespace}}
{{ end -}}

View File

@ -0,0 +1,63 @@
# This values.yaml file contains the values needed to enable HA mode.
# Usage:
# helm install -f values-ha.yaml
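# For example (release name, namespace, and chart reference are illustrative):
#   helm install linkerd-control-plane -n linkerd \
#     -f values-ha.yaml linkerd/linkerd-control-plane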
# -- Create PodDisruptionBudget resources for each control plane workload
enablePodDisruptionBudget: true
controller:
# -- sets pod disruption budget parameter for all deployments
podDisruptionBudget:
# -- Maximum number of pods that can be unavailable during disruption
maxUnavailable: 1
# -- Specify a deployment strategy for each control plane workload
deploymentStrategy:
rollingUpdate:
maxUnavailable: 1
maxSurge: 25%
# -- add PodAntiAffinity to each control plane workload
enablePodAntiAffinity: true
# nodeAffinity:
# proxy configuration
proxy:
resources:
cpu:
request: 100m
memory:
limit: 250Mi
request: 20Mi
# controller configuration
controllerReplicas: 3
controllerResources: &controller_resources
cpu: &controller_resources_cpu
limit: ""
request: 100m
memory:
limit: 250Mi
request: 50Mi
destinationResources: *controller_resources
# identity configuration
identityResources:
cpu: *controller_resources_cpu
memory:
limit: 250Mi
request: 10Mi
# heartbeat configuration
heartbeatResources: *controller_resources
# proxy injector configuration
proxyInjectorResources: *controller_resources
webhookFailurePolicy: Fail
# service profile validator configuration
spValidatorResources: *controller_resources
# flag for linkerd check
highAvailability: true

View File

@ -0,0 +1,664 @@
# Default values for linkerd.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- Kubernetes DNS Domain name to use
clusterDomain: cluster.local
# -- The cluster networks for which service discovery is performed. This should
# include the pod and service networks, but need not include the node network.
#
# By default, all IPv4 private networks and all accepted IPv6 ULAs are
# specified so that resolution works in typical Kubernetes environments.
clusterNetworks: "10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8"
# -- Docker image pull policy
imagePullPolicy: IfNotPresent
# -- Specifies the number of old ReplicaSets to retain to allow rollback.
revisionHistoryLimit: 10
# -- Log level for the control plane components
controllerLogLevel: info
# -- Log format for the control plane components
controllerLogFormat: plain
# -- enables control plane tracing
controlPlaneTracing: false
# -- namespace to send control plane traces to
controlPlaneTracingNamespace: linkerd-jaeger
# -- control plane version. See Proxy section for proxy version
linkerdVersion: edge-24.9.2
# -- default kubernetes deployment strategy
deploymentStrategy:
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
# -- enables the use of EndpointSlice informers for the destination service;
# enableEndpointSlices should be set to true only if EndpointSlice K8s feature
# gate is on
enableEndpointSlices: true
# -- enables pod anti affinity creation on deployments for high availability
enablePodAntiAffinity: false
# -- enables the use of pprof endpoints on control plane component's admin
# servers
enablePprof: false
# -- enables the creation of pod disruption budgets for control plane components
enablePodDisruptionBudget: false
# -- disables routing IPv6 traffic in addition to IPv4 traffic through the
# proxy (IPv6 routing only available as of proxy-init v2.3.0 and linkerd-cni
# v1.4.0)
disableIPv6: true
controller:
# -- sets pod disruption budget parameter for all deployments
podDisruptionBudget:
# -- Maximum number of pods that can be unavailable during disruption
maxUnavailable: 1
# -- enabling this omits the NET_ADMIN capability in the PSP
# and the proxy-init container when injecting the proxy;
# requires the linkerd-cni plugin to already be installed
cniEnabled: false
# -- Trust root certificate (ECDSA). It must be provided during install.
identityTrustAnchorsPEM: |
# -- Trust domain used for identity
# @default -- clusterDomain
identityTrustDomain: ""
kubeAPI: &kubeapi
# -- Maximum QPS sent to the kube-apiserver before throttling.
# See [token bucket rate limiter
# implementation](https://github.com/kubernetes/client-go/blob/v12.0.0/util/flowcontrol/throttle.go)
clientQPS: 100
# -- Burst value over clientQPS
clientBurst: 200
# -- Additional annotations to add to all pods
podAnnotations: {}
# -- Additional labels to add to all pods
podLabels: {}
# -- Labels to apply to all resources
commonLabels: {}
# -- Kubernetes priorityClassName for the Linkerd Pods
priorityClassName: ""
# -- Runtime Class Name for all the pods
runtimeClassName: ""
# policy controller configuration
policyController:
image:
# -- Docker image for the policy controller
name: cr.l5d.io/linkerd/policy-controller
# -- Pull policy for the policy controller container image
# @default -- imagePullPolicy
pullPolicy: ""
# -- Tag for the policy controller container image
# @default -- linkerdVersion
version: ""
# -- Log level for the policy controller
logLevel: info
# -- The networks from which probes are performed.
#
# By default, all networks are allowed so that all probes are authorized.
probeNetworks:
- 0.0.0.0/0
- "::/0"
# -- policy controller resource requests & limits
resources:
cpu:
# -- Maximum amount of CPU units that the policy controller can use
limit: ""
# -- Amount of CPU units that the policy controller requests
request: ""
memory:
# -- Maximum amount of memory that the policy controller can use
limit: ""
# -- Amount of memory that the policy controller requests
request: ""
ephemeral-storage:
# -- Maximum amount of ephemeral storage that the policy controller can use
limit: ""
# -- Amount of ephemeral storage that the policy controller requests
request: ""
livenessProbe:
timeoutSeconds: 1
readinessProbe:
timeoutSeconds: 1
# proxy configuration
proxy:
# -- Enable service profiles for non-Kubernetes services
enableExternalProfiles: false
# -- Maximum time allowed for the proxy to establish an outbound TCP
# connection
outboundConnectTimeout: 1000ms
# -- Maximum time allowed for the proxy to establish an inbound TCP
# connection
inboundConnectTimeout: 100ms
# -- Maximum time allowed before an unused outbound discovery result
# is evicted from the cache
outboundDiscoveryCacheUnusedTimeout: "5s"
# -- Maximum time allowed before an unused inbound discovery result
# is evicted from the cache
inboundDiscoveryCacheUnusedTimeout: "90s"
# -- When set to true, disables the protocol detection timeout on the
# outbound side of the proxy by setting it to a very high value
disableOutboundProtocolDetectTimeout: false
# -- When set to true, disables the protocol detection timeout on the inbound
# side of the proxy by setting it to a very high value
disableInboundProtocolDetectTimeout: false
image:
# -- Docker image for the proxy
name: cr.l5d.io/linkerd/proxy
# -- Pull policy for the proxy container image
# @default -- imagePullPolicy
pullPolicy: ""
# -- Tag for the proxy container image
# @default -- linkerdVersion
version: ""
# -- Enables the proxy's /shutdown admin endpoint
enableShutdownEndpoint: false
# -- Log level for the proxy
logLevel: warn,linkerd=info,hickory=error
# -- Log format (`plain` or `json`) for the proxy
logFormat: plain
# -- (`off` or `insecure`) If set to `off`, will prevent the proxy from
# logging HTTP headers. If set to `insecure`, HTTP headers may be logged
# verbatim. Note that setting this to `insecure` is not alone sufficient to
# log HTTP headers; the proxy logLevel must also be set to debug.
logHTTPHeaders: "off"
ports:
# -- Admin port for the proxy container
admin: 4191
# -- Control port for the proxy container
control: 4190
# -- Inbound port for the proxy container
inbound: 4143
# -- Outbound port for the proxy container
outbound: 4140
# -- The `cpu.limit` and `cores` should be kept in sync. The value of `cores`
# must be an integer and should typically be set by rounding up from the
# limit. E.g. if cpu.limit is '1500m', cores should be 2.
cores: 0
resources:
cpu:
# -- Maximum amount of CPU units that the proxy can use
limit: ""
# -- Amount of CPU units that the proxy requests
request: ""
memory:
# -- Maximum amount of memory that the proxy can use
limit: ""
# -- Amount of memory that the proxy requests
request: ""
ephemeral-storage:
# -- Maximum amount of ephemeral storage that the proxy can use
limit: ""
# -- Amount of ephemeral storage that the proxy requests
request: ""
# -- User id under which the proxy runs
uid: 2102
# -- (int) Optional customisation of the group id under which the proxy runs (the group ID will be omitted if lower than 0)
gid: -1
# -- If set the injected proxy sidecars in the data plane will stay alive for
# at least the given period before receiving the SIGTERM signal from
# Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
# See [Lifecycle
# hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks)
# for more info on container lifecycle hooks.
waitBeforeExitSeconds: 0
# -- If set, the application container will not start until the proxy is
# ready
await: true
requireIdentityOnInboundPorts: ""
# -- Default set of opaque ports
# - SMTP (25,587) server-first
# - MYSQL (3306) server-first
# - Galera (4444) server-first
# - PostgreSQL (5432) server-first
# - Redis (6379) server-first
# - ElasticSearch (9300) server-first
# - Memcached (11211) clients do not issue any preamble, which breaks detection
opaquePorts: "25,587,3306,4444,5432,6379,9300,11211"
# -- Grace period for graceful proxy shutdowns. If this timeout elapses before all open connections have completed, the proxy will terminate forcefully, closing any remaining connections.
shutdownGracePeriod: ""
# -- The default allow policy to use when no `Server` selects a pod. One of: "all-authenticated",
# "all-unauthenticated", "cluster-authenticated", "cluster-unauthenticated", "deny", "audit"
# @default -- "all-unauthenticated"
defaultInboundPolicy: "all-unauthenticated"
# -- Enable KEP-753 native sidecars
# This is an experimental feature. It requires Kubernetes >= 1.29.
# If enabled, .proxy.waitBeforeExitSeconds should not be used.
nativeSidecar: false
# -- LivenessProbe timeout and delay configuration
livenessProbe:
initialDelaySeconds: 10
timeoutSeconds: 1
# -- ReadinessProbe timeout and delay configuration
readinessProbe:
initialDelaySeconds: 2
timeoutSeconds: 1
# -- Native sidecar proxy startup probe parameters.
startupProbe:
initialDelaySeconds: 0
periodSeconds: 1
failureThreshold: 120
# Configures general properties of the proxy's control plane clients.
control:
# Configures limits on API response streams.
streams:
# -- The timeout for the first update from the control plane.
initialTimeout: "3s"
# -- The timeout between consecutive updates from the control plane.
idleTimeout: "5m"
# -- The maximum duration for a response stream (i.e. before it will be
# reinitialized).
lifetime: "1h"
inbound:
server:
http2:
# -- The interval at which PINGs are issued to remote HTTP/2 clients.
keepAliveInterval: "10s"
# -- The timeout within which keep-alive PINGs must be acknowledged on inbound HTTP/2 connections.
keepAliveTimeout: "3s"
outbound:
server:
http2:
# -- The interval at which PINGs are issued to local application HTTP/2 clients.
keepAliveInterval: "10s"
# -- The timeout within which keep-alive PINGs must be acknowledged on outbound HTTP/2 connections.
keepAliveTimeout: "3s"
# proxy-init configuration
proxyInit:
# -- Variant of iptables that will be used to configure routing. Currently,
# proxy-init can be run either in 'nft' or in 'legacy' mode. The mode will
# control which utility binary will be called. The host must support
# whichever mode will be used
iptablesMode: "legacy"
# -- Default set of inbound ports to skip via iptables
# - Galera (4567,4568)
ignoreInboundPorts: "4567,4568"
# -- Default set of outbound ports to skip via iptables
# - Galera (4567,4568)
ignoreOutboundPorts: "4567,4568"
# -- Default set of ports to skip via iptables for control plane
# components so they can communicate with the Kubernetes API Server
kubeAPIServerPorts: "443,6443"
# -- Comma-separated list of subnets in valid CIDR format that should be skipped by the proxy
skipSubnets: ""
# -- Log level for the proxy-init
# @default -- info
logLevel: ""
# -- Log format (`plain` or `json`) for the proxy-init
# @default -- plain
logFormat: ""
image:
# -- Docker image for the proxy-init container
name: cr.l5d.io/linkerd/proxy-init
# -- Pull policy for the proxy-init container image
# @default -- imagePullPolicy
pullPolicy: ""
# -- Tag for the proxy-init container image
version: v2.4.1
closeWaitTimeoutSecs: 0
# -- Privileged mode allows the container processes to inherit all security
# capabilities and bypass any security limitations enforced by the kubelet.
# When used with 'runAsRoot: true', the container will behave exactly as if
# it was running as root on the host. May escape cgroup limits and see other
# processes and devices on the host.
# @default -- false
privileged: false
# -- Allow overriding the runAsNonRoot behaviour (<https://github.com/linkerd/linkerd2/issues/7308>)
runAsRoot: false
# -- This value is used only if runAsRoot is false; otherwise runAsUser will be 0
runAsUser: 65534
# -- This value is used only if runAsRoot is false; otherwise runAsGroup will be 0
runAsGroup: 65534
xtMountPath:
mountPath: /run
name: linkerd-proxy-init-xtables-lock
# network validator configuration
# This runs on a host that uses iptables to reroute network traffic. The validator
# ensures that iptables is correctly routing requests before we start linkerd.
networkValidator:
# -- Log level for the network-validator
# @default -- debug
logLevel: debug
# -- Log format (`plain` or `json`) for network-validator
# @default -- plain
logFormat: plain
# -- Address to which the network-validator will attempt to connect. This should be an IP
# that the cluster is expected to be able to reach but a port it should not, e.g., a public IP
# for public clusters and a private IP for air-gapped clusters with a port like 20001.
# If empty, defaults to 1.1.1.1:20001 and [fd00::1]:20001 for IPv4 and IPv6 respectively.
connectAddr: ""
# -- Address to which network-validator listens to requests from itself.
# If empty, defaults to 0.0.0.0:4140 and [::]:4140 for IPv4 and IPv6 respectively.
listenAddr: ""
# -- Timeout before network-validator fails to validate the pod's network connectivity
timeout: "10s"
# -- Include a securityContext in the network-validator pod spec
enableSecurityContext: true
# -- For Private docker registries, authentication is needed.
# Registry secrets are applied to the respective service accounts
imagePullSecrets: []
# - name: my-private-docker-registry-login-secret
# -- Allow proxies to perform transparent HTTP/2 upgrading
enableH2Upgrade: true
# -- Add a PSP resource and bind it to the control plane ServiceAccounts. Note
# PSP has been deprecated since k8s v1.21
enablePSP: false
# -- Failure policy for the proxy injector
webhookFailurePolicy: Ignore
# controllerImage -- Docker image for the destination and identity components
controllerImage: cr.l5d.io/linkerd/controller
# -- Optionally allow a specific container image Tag (or SHA) to be specified for the controllerImage.
controllerImageVersion: ""
# -- Number of replicas for each control plane pod
controllerReplicas: 1
# -- User ID for the control plane components
controllerUID: 2103
# -- (int) Optional customisation of the group ID for the control plane components (the group ID will be omitted if lower than 0)
controllerGID: -1
# destination configuration
# set resources for the sp-validator and its linkerd proxy respectively
# see proxy.resources for details.
# destinationResources -- CPU, Memory and Ephemeral Storage resources required by destination (see `proxy.resources` for sub-fields)
#destinationResources:
# destinationProxyResources -- CPU, Memory and Ephemeral Storage resources required by proxy injected into destination pod (see `proxy.resources` for sub-fields)
#destinationProxyResources:
destinationController:
meshedHttp2ClientProtobuf:
keep_alive:
interval:
seconds: 10
timeout:
seconds: 3
while_idle: true
livenessProbe:
timeoutSeconds: 1
readinessProbe:
timeoutSeconds: 1
# debug configuration
debugContainer:
image:
# -- Docker image for the debug container
name: cr.l5d.io/linkerd/debug
# -- Pull policy for the debug container image
# @default -- imagePullPolicy
pullPolicy: ""
# -- Tag for the debug container image
# @default -- linkerdVersion
version: ""
identity:
# -- If the linkerd-identity-trust-roots ConfigMap has already been created
externalCA: false
# -- Use [Service Account token Volume projection](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) for pod validation instead of the default token
serviceAccountTokenProjection: true
issuer:
scheme: linkerd.io/tls
# -- Amount of time to allow for clock skew within a Linkerd cluster
clockSkewAllowance: 20s
# -- Amount of time for which the Identity issuer should certify identity
issuanceLifetime: 24h0m0s
# -- Which scheme is used for the identity issuer secret format
tls:
# -- Issuer certificate (ECDSA). It must be provided during install.
crtPEM: |
# -- Key for the issuer certificate (ECDSA). It must be provided during
# install
keyPEM: |
kubeAPI: *kubeapi
livenessProbe:
timeoutSeconds: 1
readinessProbe:
timeoutSeconds: 1
# -|- CPU, Memory and Ephemeral Storage resources required by the identity controller (see `proxy.resources` for sub-fields)
#identityResources:
# -|- CPU, Memory and Ephemeral Storage resources required by proxy injected into identity pod (see `proxy.resources` for sub-fields)
#identityProxyResources:
# heartbeat configuration
# disableHeartBeat -- Set to true to not start the heartbeat cronjob
disableHeartBeat: false
# -- Config for the heartbeat cronjob
# heartbeatSchedule: "0 0 * * *"
# proxy injector configuration
proxyInjector:
# -- Timeout in seconds before the API Server cancels a request to the proxy
# injector. If the timeout is exceeded, the webhookFailurePolicy is used.
timeoutSeconds: 10
# -- Do not create a secret resource for the proxyInjector webhook.
# If this is set to `true`, the value `proxyInjector.caBundle` must be set
# or the CA bundle must be injected with the cert-manager CA injector using
# `proxyInjector.injectCaFrom` or `proxyInjector.injectCaFromSecret` (see below).
externalSecret: false
# -- Namespace selector used by admission webhook.
namespaceSelector:
matchExpressions:
- key: config.linkerd.io/admission-webhooks
operator: NotIn
values:
- disabled
- key: kubernetes.io/metadata.name
operator: NotIn
values:
- kube-system
- cert-manager
# -- Object selector used by admission webhook.
objectSelector:
matchExpressions:
- key: linkerd.io/control-plane-component
operator: DoesNotExist
- key: linkerd.io/cni-resource
operator: DoesNotExist
# -- Certificate for the proxy injector. If not provided and not using an external secret
# then Helm will generate one.
crtPEM: |
# -- Certificate key for the proxy injector. If not provided and not using an external secret
# then Helm will generate one.
keyPEM: |
# -- Bundle of CA certificates for proxy injector.
# If not provided nor injected with cert-manager,
# then Helm will use the certificate generated for `proxyInjector.crtPEM`.
# If `proxyInjector.externalSecret` is set to true, this value, injectCaFrom, or
# injectCaFromSecret must be set, as no certificate will be generated.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector) for more information.
caBundle: |
# -- Inject the CA bundle from a cert-manager Certificate.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-certificate-resource)
# for more information.
injectCaFrom: ""
# -- Inject the CA bundle from a Secret.
# If set, the `cert-manager.io/inject-ca-from-secret` annotation will be added to the webhook.
# The Secret must have the CA Bundle stored in the `ca.crt` key and have
# the `cert-manager.io/allow-direct-injection` annotation set to `true`.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-secret-resource)
# for more information.
injectCaFromSecret: ""
livenessProbe:
timeoutSeconds: 1
readinessProbe:
timeoutSeconds: 1
# -|- CPU, Memory and Ephemeral Storage resources required by the proxy injector (see
#`proxy.resources` for sub-fields)
#proxyInjectorResources:
#-|- CPU, Memory and Ephemeral Storage resources required by proxy injected into the proxy injector
#pod (see `proxy.resources` for sub-fields)
#proxyInjectorProxyResources:
# service profile validator configuration
profileValidator:
# -- Do not create a secret resource for the profileValidator webhook.
# If this is set to `true`, the value `profileValidator.caBundle` must be set
# or the CA bundle must be injected with the cert-manager CA injector using
# `profileValidator.injectCaFrom` or `profileValidator.injectCaFromSecret` (see below).
externalSecret: false
# -- Namespace selector used by admission webhook
namespaceSelector:
matchExpressions:
- key: config.linkerd.io/admission-webhooks
operator: NotIn
values:
- disabled
# -- Certificate for the service profile validator. If not provided and not using an external secret
# then Helm will generate one.
crtPEM: |
# -- Certificate key for the service profile validator. If not provided and not using an external secret
# then Helm will generate one.
keyPEM: |
# -- Bundle of CA certificates for proxy injector.
# If not provided nor injected with cert-manager,
# then Helm will use the certificate generated for `profileValidator.crtPEM`.
# If `profileValidator.externalSecret` is set to true, this value, injectCaFrom, or
# injectCaFromSecret must be set, as no certificate will be generated.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector) for more information.
caBundle: |
# -- Inject the CA bundle from a cert-manager Certificate.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-certificate-resource)
# for more information.
injectCaFrom: ""
# -- Inject the CA bundle from a Secret.
# If set, the `cert-manager.io/inject-ca-from-secret` annotation will be added to the webhook.
# The Secret must have the CA Bundle stored in the `ca.crt` key and have
# the `cert-manager.io/allow-direct-injection` annotation set to `true`.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-secret-resource)
# for more information.
injectCaFromSecret: ""
# policy validator configuration
policyValidator:
# -- Do not create a secret resource for the policyValidator webhook.
# If this is set to `true`, the value `policyValidator.caBundle` must be set
# or the CA bundle must be injected with the cert-manager CA injector using
# `policyValidator.injectCaFrom` or `policyValidator.injectCaFromSecret` (see below).
externalSecret: false
# -- Namespace selector used by admission webhook
namespaceSelector:
matchExpressions:
- key: config.linkerd.io/admission-webhooks
operator: NotIn
values:
- disabled
# -- Certificate for the policy validator. If not provided and not using an external secret
# then Helm will generate one.
crtPEM: |
# -- Certificate key for the policy validator. If not provided and not using an external secret
# then Helm will generate one.
keyPEM: |
# -- Bundle of CA certificates for proxy injector.
# If not provided nor injected with cert-manager,
# then Helm will use the certificate generated for `policyValidator.crtPEM`.
# If `policyValidator.externalSecret` is set to true, this value, injectCaFrom, or
# injectCaFromSecret must be set, as no certificate will be generated.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector) for more information.
caBundle: |
# -- Inject the CA bundle from a cert-manager Certificate.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-certificate-resource)
# for more information.
injectCaFrom: ""
# -- Inject the CA bundle from a Secret.
# If set, the `cert-manager.io/inject-ca-from-secret` annotation will be added to the webhook.
# The Secret must have the CA Bundle stored in the `ca.crt` key and have
# the `cert-manager.io/allow-direct-injection` annotation set to `true`.
# See the cert-manager [CA Injector Docs](https://cert-manager.io/docs/concepts/ca-injector/#injecting-ca-data-from-a-secret-resource)
# for more information.
injectCaFromSecret: ""
# -- NodeSelector section, See the [K8S
# documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector)
# for more information
nodeSelector:
kubernetes.io/os: linux
# -- SP validator configuration
spValidator:
livenessProbe:
timeoutSeconds: 1
readinessProbe:
timeoutSeconds: 1
# -|- CPU, Memory and Ephemeral Storage resources required by the SP validator (see
#`proxy.resources` for sub-fields)
#spValidatorResources:
# -|- Tolerations section, See the
# [K8S documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
# for more information
#tolerations:
# -|- NodeAffinity section, See the
# [K8S documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)
# for more information
#nodeAffinity:
# -- url of external prometheus instance (used for the heartbeat)
prometheusUrl: ""
# Prometheus Operator PodMonitor configuration
podMonitor:
# -- Enables the creation of Prometheus Operator [PodMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitor)
enabled: false
# -- Interval at which metrics should be scraped
scrapeInterval: 10s
# -- Timeout after which the scrape is ended
scrapeTimeout: 10s
# -- Labels to apply to all PodMonitors
labels: {}
controller:
# -- Enables the creation of PodMonitor for the control-plane
enabled: true
# -- Selector to select which namespaces the Endpoints objects are discovered from
namespaceSelector: |
matchNames:
- {{ .Release.Namespace }}
- linkerd-viz
- linkerd-jaeger
serviceMirror:
# -- Enables the creation of PodMonitor for the Service Mirror component
enabled: true
proxy:
# -- Enables the creation of PodMonitor for the data-plane
enabled: true

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
OWNERS
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,6 @@
dependencies:
- name: partials
repository: file://../partials
version: 0.1.0
digest: sha256:8e42f9c9d4a2dc883f17f94d6044c97518ced19ad0922f47b8760e47135369ba
generated: "2021-08-17T10:42:52.610449255-05:00"

View File

@ -0,0 +1,26 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Linkerd CRDs
catalog.cattle.io/kube-version: '>=1.22.0-0'
catalog.cattle.io/release-name: linkerd-crds
apiVersion: v2
dependencies:
- name: partials
repository: file://./charts/partials
version: 0.1.0
description: 'Linkerd gives you observability, reliability, and security for your
microservices — with no code change required. '
home: https://linkerd.io
icon: file://assets/icons/linkerd-crds.png
keywords:
- service-mesh
kubeVersion: '>=1.22.0-0'
maintainers:
- email: cncf-linkerd-dev@lists.cncf.io
name: Linkerd authors
url: https://linkerd.io/
name: linkerd-crds
sources:
- https://github.com/linkerd/linkerd2/
type: application
version: 2024.9.2

View File

@ -0,0 +1,71 @@
# linkerd-crds
Linkerd gives you observability, reliability, and security
for your microservices — with no code change required.
![Version: 2024.9.2](https://img.shields.io/badge/Version-2024.9.2-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
**Homepage:** <https://linkerd.io>
## Quickstart and documentation
You can run Linkerd on any Kubernetes cluster in a matter of seconds. See the
[Linkerd Getting Started Guide][getting-started] for how.
For more comprehensive documentation, start with the [Linkerd
docs][linkerd-docs].
## Adding Linkerd's Helm repository
```bash
# To add the repo for Linkerd edge releases:
helm repo add linkerd https://helm.linkerd.io/edge
```
## Installing the linkerd-crds chart
This installs the `linkerd-crds` chart, which only persists the CRDs that
Linkerd requires.
After installing this chart, you then need to install the
`linkerd-control-plane` chart in the same namespace, which provides all the
core Linkerd control plane components; an illustrative follow-up install is
sketched below.
```bash
helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds
```
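A minimal sketch of that follow-up step is shown below. It assumes the mTLS
trust anchor and issuer credentials described in the `linkerd-control-plane`
values (`identityTrustAnchorsPEM`, `identity.issuer.tls.crtPEM`,
`identity.issuer.tls.keyPEM`); the file names `ca.crt`, `issuer.crt`, and
`issuer.key` are placeholders.
```bash
# Sketch only: ca.crt, issuer.crt, and issuer.key are placeholder file names
# for the trust anchor and issuer certificate/key pair.
helm install linkerd-control-plane -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  linkerd/linkerd-control-plane
```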
## Get involved
* Check out Linkerd's source code at [GitHub][linkerd2].
* Join Linkerd's [user mailing list][linkerd-users], [developer mailing
list][linkerd-dev], and [announcements mailing list][linkerd-announce].
* Follow [@linkerd][twitter] on Twitter.
* Join the [Linkerd Slack][slack].
[getting-started]: https://linkerd.io/2/getting-started/
[linkerd2]: https://github.com/linkerd/linkerd2
[linkerd-announce]: https://lists.cncf.io/g/cncf-linkerd-announce
[linkerd-dev]: https://lists.cncf.io/g/cncf-linkerd-dev
[linkerd-docs]: https://linkerd.io/2/overview/
[linkerd-users]: https://lists.cncf.io/g/cncf-linkerd-users
[slack]: http://slack.linkerd.io
[twitter]: https://twitter.com/linkerd
## Requirements
Kubernetes: `>=1.22.0-0`
| Repository | Name | Version |
|------------|------|---------|
| file://../partials | partials | 0.1.0 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| enableHttpRoutes | bool | `true` | |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.12.0](https://github.com/norwoodj/helm-docs/releases/v1.12.0)

View File

@ -0,0 +1,59 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionBadge" . }}
{{ template "chart.typeBadge" . }}
{{ template "chart.appVersionBadge" . }}
{{ template "chart.homepageLine" . }}
## Quickstart and documentation
You can run Linkerd on any Kubernetes cluster in a matter of seconds. See the
[Linkerd Getting Started Guide][getting-started] for how.
For more comprehensive documentation, start with the [Linkerd
docs][linkerd-docs].
## Adding Linkerd's Helm repository
```bash
# To add the repo for Linkerd edge releases:
helm repo add linkerd https://helm.linkerd.io/edge
```
## Installing the linkerd-crds chart
This installs the `linkerd-crds` chart, which only persists the CRDs that
Linkerd requires.
After installing this chart, you then need to install the
`linkerd-control-plane` chart in the same namespace, which provides all the
core Linkerd control plane components.
```bash
helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds
```
## Get involved
* Check out Linkerd's source code at [GitHub][linkerd2].
* Join Linkerd's [user mailing list][linkerd-users], [developer mailing
list][linkerd-dev], and [announcements mailing list][linkerd-announce].
* Follow [@linkerd][twitter] on Twitter.
* Join the [Linkerd Slack][slack].
[getting-started]: https://linkerd.io/2/getting-started/
[linkerd2]: https://github.com/linkerd/linkerd2
[linkerd-announce]: https://lists.cncf.io/g/cncf-linkerd-announce
[linkerd-dev]: https://lists.cncf.io/g/cncf-linkerd-dev
[linkerd-docs]: https://linkerd.io/2/overview/
[linkerd-users]: https://lists.cncf.io/g/cncf-linkerd-users
[slack]: http://slack.linkerd.io
[twitter]: https://twitter.com/linkerd
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
{{ template "helm-docs.versionFooter" . }}

View File

@ -0,0 +1,9 @@
# Linkerd 2 CRDs Chart
Linkerd is an ultra light, ultra simple, ultra powerful service mesh. Linkerd
adds security, observability, and reliability to Kubernetes, without the
complexity.
This particular Helm chart only installs Linkerd CRDs.
Full documentation available at: https://linkerd.io/2/overview/

View File

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,5 @@
apiVersion: v1
description: 'A Helm chart containing Linkerd partial templates, depended on by the ''linkerd''
and ''patch'' charts. '
name: partials
version: 0.1.0

View File

@ -0,0 +1,9 @@
# partials
A Helm chart containing Linkerd partial templates,
depended on by the 'linkerd' and 'patch' charts.
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square)
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.12.0](https://github.com/norwoodj/helm-docs/releases/v1.12.0)

View File

@ -0,0 +1,14 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionBadge" . }}
{{ template "chart.typeBadge" . }}
{{ template "chart.appVersionBadge" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
{{ template "helm-docs.versionFooter" . }}

View File

@ -0,0 +1,38 @@
{{ define "linkerd.pod-affinity" -}}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: {{ default "linkerd.io/control-plane-component" .label }}
operator: In
values:
- {{ .component }}
topologyKey: topology.kubernetes.io/zone
weight: 100
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: {{ default "linkerd.io/control-plane-component" .label }}
operator: In
values:
- {{ .component }}
topologyKey: kubernetes.io/hostname
{{- end }}
{{ define "linkerd.node-affinity" -}}
nodeAffinity:
{{- toYaml .Values.nodeAffinity | trim | nindent 2 }}
{{- end }}
{{ define "linkerd.affinity" -}}
{{- if or .Values.enablePodAntiAffinity .Values.nodeAffinity -}}
affinity:
{{- end }}
{{- if .Values.enablePodAntiAffinity -}}
{{- include "linkerd.pod-affinity" . | nindent 2 }}
{{- end }}
{{- if .Values.nodeAffinity -}}
{{- include "linkerd.node-affinity" . | nindent 2 }}
{{- end }}
{{- end }}

Some files were not shown because too many files have changed in this diff Show More