Added chart versions:

  cockroach-labs/cockroachdb:
    - 13.0.3
  jfrog/artifactory-ha:
    - 107.90.6
  jfrog/artifactory-jcr:
    - 107.90.6
  linkerd/linkerd-control-plane:
    - 2024.8.2
  linkerd/linkerd-crds:
    - 2024.8.2
  nats/nats:
    - 1.2.2
pull/1059/head
github-actions[bot] 2024-08-06 00:50:59 +00:00
parent c7104f8d2d
commit 458568208d
473 changed files with 65367 additions and 3 deletions

BIN
assets/nats/nats-1.2.2.tgz Normal file

View File

@@ -0,0 +1,14 @@
# Contributing
Contributions are welcome!
For every change, please increment the `version` contained in
[Chart.yaml](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml).
The `version` roughly follows the [SEMVER](https://semver.org/) versioning
pattern. For changes which do not affect backwards compatibility, the PATCH or
MINOR version must be incremented, e.g. `4.1.3` -> `4.1.4`. For changes which
affect the backwards compatibility of the chart, the major version must be
incremented, e.g. `4.1.3` -> `5.0.0`. Examples of changes which affect backwards
compatibility include any major version releases of CockroachDB, as well as any
breaking changes to the CockroachDB chart templates.
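For illustration, a backwards-compatible fix would bump only the patch component of the `version` field in `Chart.yaml` (excerpt; the numbers are placeholders):
```yaml
# Chart.yaml (excerpt)
version: 4.1.4   # was 4.1.3 before this change
```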

View File

@@ -0,0 +1,18 @@
annotations:
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: CockroachDB
  catalog.cattle.io/kube-version: '>=1.8-0'
  catalog.cattle.io/release-name: cockroachdb
apiVersion: v1
appVersion: 24.1.3
description: CockroachDB is a scalable, survivable, strongly-consistent SQL database.
home: https://www.cockroachlabs.com
icon: file://assets/icons/cockroachdb.png
kubeVersion: '>=1.8-0'
maintainers:
- email: helm-charts@cockroachlabs.com
  name: cockroachlabs
name: cockroachdb
sources:
- https://github.com/cockroachdb/cockroach
version: 13.0.3

View File

@@ -0,0 +1,584 @@
<!--- Generated file, DO NOT EDIT. Source: build/templates/README.md --->
# CockroachDB Helm Chart
[CockroachDB](https://github.com/cockroachdb/cockroach) - the open source, cloud-native distributed SQL database.
## Documentation
Below is a brief overview of operating the CockroachDB Helm Chart and some specific implementation details. For additional information on deploying CockroachDB, please see:
> <https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html>
Note that the documentation requires Helm 3.0 or higher.
## Prerequisites Details
* Kubernetes 1.8
* PV support on the underlying infrastructure (only if using `storage.persistentVolume`). [Docker for Windows hostpath provisioner is not supported](https://github.com/cockroachdb/docs/issues/3184).
* If you want to secure your cluster to use TLS certificates for all network communication, [Helm must be installed with RBAC privileges](https://helm.sh/docs/topics/rbac/) or else you will get an "attempt to grant extra privileges" error.
## StatefulSet Details
* <http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/>
## StatefulSet Caveats
* <http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/#limitations>
## Chart Details
This chart will do the following:
* Set up a dynamically scalable CockroachDB cluster using a Kubernetes StatefulSet.
## Add the CockroachDB Repository
```shell
helm repo add cockroachdb https://charts.cockroachdb.com/
```
## Installing the Chart
To install the chart with the release name `my-release`:
```shell
helm install my-release cockroachdb/cockroachdb
```
Note that for a production cluster, you will likely want to override the following parameters in [`values.yaml`](values.yaml) with your own values.
- `statefulset.resources.requests.memory` and `statefulset.resources.limits.memory` allocate memory resources to CockroachDB pods in your cluster.
- `conf.cache` and `conf.max-sql-memory` are memory limits that we recommend setting to 1/4 of the above resource allocation. When running CockroachDB, you must set these limits explicitly to avoid running out of memory.
- `storage.persistentVolume.size` defaults to `100Gi` of disk space per pod, which you may increase or decrease for your use case.
- `storage.persistentVolume.storageClass` uses the default storage class for your environment. We strongly recommend that you specify a storage class which uses an SSD.
- `tls.enabled` must be set to `yes`/`true` to deploy in secure mode.
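For example, a minimal production override file might look like this (a sketch with illustrative values; size them for your own workload and storage classes):
```yaml
# my-values.yaml -- illustrative production overrides
statefulset:
  resources:
    requests:
      memory: 8Gi
    limits:
      memory: 8Gi
conf:
  cache: "2GiB"          # ~1/4 of the memory allocation above
  max-sql-memory: "2GiB" # ~1/4 of the memory allocation above
storage:
  persistentVolume:
    size: 500Gi
    storageClass: ssd    # assumes an SSD-backed StorageClass named "ssd" exists
tls:
  enabled: true
```
You would then install with `helm install my-release -f my-values.yaml cockroachdb/cockroachdb`.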
For more information on overriding the `values.yaml` parameters, please see:
> <https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#step-2-start-cockroachdb>
Confirm that all pods are `Running` and that the init job has completed:
```shell
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
my-release-cockroachdb-0 1/1 Running 0 1m
my-release-cockroachdb-1 1/1 Running 0 1m
my-release-cockroachdb-2 1/1 Running 0 1m
my-release-cockroachdb-init-k6jcr 0/1 Completed 0 1m
```
Confirm that persistent volumes are created and claimed for each pod:
```shell
kubectl get pv
```
```
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-64878ebf-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 51s
pvc-64945b4f-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 51s
pvc-649d920d-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 51s
```
### Running in secure mode
In order to set up a secure CockroachDB cluster, set `tls.enabled` to `yes`/`true`.
There are three ways to configure a secure cluster with this chart, depending on how the certificates are issued:
* Self-signer (default)
* Cert-manager
* Manual
#### Self-signer
This is the default behaviour and requires no configuration, unless you want to set custom certificate durations.
If you are running in this mode, self-signed certificates for the nodes and the root client are created by the self-signer utility and stored in Kubernetes secrets.
You can list the generated secrets:
```shell
kubectl get secrets
```
```shell
crdb-cockroachdb-ca-secret Opaque 2 23s
crdb-cockroachdb-client-secret kubernetes.io/tls 3 22s
crdb-cockroachdb-node-secret kubernetes.io/tls 3 23s
```
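If you do want custom durations, they can be passed at install time using the self-signer parameters listed in the Configuration table below, for example (illustrative values):
```shell
helm install my-release cockroachdb/cockroachdb \
  --set tls.enabled=true \
  --set tls.certs.selfSigner.caCertDuration=43824h \
  --set tls.certs.selfSigner.nodeCertDuration=8760h \
  --set tls.certs.selfSigner.clientCertDuration=672h
```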
#### Manual
If you wish to supply the certificates to the nodes yourself, set `tls.certs.provided` to `yes`/`true`. You may want to use this if you want to use a different certificate authority from the one being used by Kubernetes or if your Kubernetes cluster doesn't fully support certificate-signing requests. To use this, first set up your certificates and load them into your Kubernetes cluster as Secrets using the commands below:
```shell
$ mkdir certs
$ mkdir my-safe-directory
$ cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
$ cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key
$ kubectl create secret generic cockroachdb-root --from-file=certs
secret/cockroachdb-root created
$ cockroach cert create-node --certs-dir=certs --ca-key=my-safe-directory/ca.key localhost 127.0.0.1 my-release-cockroachdb-public my-release-cockroachdb-public.my-namespace my-release-cockroachdb-public.my-namespace.svc.cluster.local *.my-release-cockroachdb *.my-release-cockroachdb.my-namespace *.my-release-cockroachdb.my-namespace.svc.cluster.local
$ kubectl create secret generic cockroachdb-node --from-file=certs
secret/cockroachdb-node created
```
> Note: The subject alternative names are based on a release called `my-release` in the `my-namespace` namespace. Make sure they match the services created with the release during `helm install`.
If your certificates are stored in TLS secrets, such as secrets generated by cert-manager, each secret will contain files named:
* `ca.crt`
* `tls.crt`
* `tls.key`
CockroachDB, however, expects the files to be named as follows:
* `ca.crt`
* `node.crt`
* `node.key`
* `client.root.crt`
* `client.root.key`
By enabling `tls.certs.tlsSecret`, the TLS secrets are projected onto the expected filenames when they are mounted into the CockroachDB pods.
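Putting this together, an install against pre-created secrets might look like the following sketch (the secret names assume the example commands above; add `--set tls.certs.tlsSecret=true` only if your secrets use the cert-manager style TLS layout):
```shell
helm install my-release cockroachdb/cockroachdb \
  --set tls.enabled=true \
  --set tls.certs.provided=true \
  --set tls.certs.clientRootSecret=cockroachdb-root \
  --set tls.certs.nodeSecret=cockroachdb-node
```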
#### Cert-manager
If you wish to supply certificates with [cert-manager][3], set
* `tls.certs.certManager` to `yes`/`true`
* `tls.certs.certManagerIssuer` to an IssuerRef (as it appears in certificate resources) pointing to a ClusterIssuer or Issuer you have set up in the cluster
Example issuer:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cockroachdb-ca
  namespace: cockroachdb
data:
  tls.crt: [BASE64 Encoded ca.crt]
  tls.key: [BASE64 Encoded ca.key]
type: kubernetes.io/tls
---
apiVersion: cert-manager.io/v1alpha3
kind: Issuer
metadata:
  name: cockroachdb-cert-issuer
  namespace: cockroachdb
spec:
  ca:
    secretName: cockroachdb-ca
```
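With an issuer like the one above in place, the corresponding chart values might look like this sketch (the self-signer must be disabled explicitly, since it is enabled by default and cannot be combined with cert-manager):
```yaml
tls:
  enabled: true
  certs:
    selfSigner:
      enabled: false
    certManager: true
    certManagerIssuer:
      group: cert-manager.io
      kind: Issuer
      name: cockroachdb-cert-issuer
```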
## Upgrading the cluster
### Chart version 3.0.0 and after
Launch a temporary interactive pod and start the built-in SQL client:
```shell
kubectl run cockroachdb --rm -it \
--image=cockroachdb/cockroach \
--restart=Never \
-- sql --insecure --host=my-release-cockroachdb-public
```
> If you are running in secure mode, you will have to provide a client certificate to the cluster in order to authenticate, so the above command will not work. See [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) for an example of how to set up an interactive SQL shell against a secure cluster or [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/example-app-secure.yaml) for an example application connecting to a secure cluster.
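For a secure cluster, one option is to deploy the secure client pod from the manifest linked above and run the SQL shell from inside it, for example (a sketch; the pod name and certificate directory come from that example manifest and may need adjusting to match the secrets created by this chart):
```shell
kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
kubectl exec -it cockroachdb-client-secure \
  -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
```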
Set `cluster.preserve_downgrade_option`, where `$current_version` is the CockroachDB version currently running (e.g., `19.2`):
```sql
> SET CLUSTER SETTING cluster.preserve_downgrade_option = '$current_version';
```
Exit the shell and delete the temporary pod:
```sql
> \q
```
Kick off the upgrade process by changing the new Docker image, where `$new_version` is the CockroachDB version to which you are upgrading:
```shell
helm upgrade my-release cockroachdb/cockroachdb \
--set image.tag=$new_version \
--reuse-values
```
Kubernetes will carry out a safe [rolling upgrade](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) of your CockroachDB nodes one-by-one. Monitor the cluster's pods until all have been successfully restarted:
```shell
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
my-release-cockroachdb-0 1/1 Running 0 2m
my-release-cockroachdb-1 1/1 Running 0 3m
my-release-cockroachdb-2 1/1 Running 0 3m
my-release-cockroachdb-3 0/1 ContainerCreating 0 25s
my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s
```
```shell
kubectl get pods \
-o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```
```
my-release-cockroachdb-0 cockroachdb/cockroach:v24.1.3
my-release-cockroachdb-1 cockroachdb/cockroach:v24.1.3
my-release-cockroachdb-2 cockroachdb/cockroach:v24.1.3
my-release-cockroachdb-3 cockroachdb/cockroach:v24.1.3
```
Resume normal operations. Once you are comfortable that the stability and performance of the cluster is what you'd expect post-upgrade, finalize the upgrade:
```shell
kubectl run cockroachdb --rm -it \
--image=cockroachdb/cockroach \
--restart=Never \
-- sql --insecure --host=my-release-cockroachdb-public
```
```sql
> RESET CLUSTER SETTING cluster.preserve_downgrade_option;
> \q
```
### Chart versions prior to 3.0.0
Due to a change in the label format in version 3.0.0 of this chart, upgrading requires that you delete the StatefulSet. Luckily there is a way to do it without actually deleting all the resources managed by the StatefulSet. Use the workaround below to upgrade from chart versions prior to 3.0.0:
Get the new labels from the specs rendered by Helm:
```shell
helm template -f deploy.vals.yml cockroachdb/cockroachdb -x templates/statefulset.yaml \
| yq r - spec.template.metadata.labels
```
```
app.kubernetes.io/name: cockroachdb
app.kubernetes.io/instance: my-release
app.kubernetes.io/component: cockroachdb
```
Place the new labels on all pods of the StatefulSet (change `my-release-cockroachdb-0` to the name of each pod):
```shell
kubectl label pods my-release-cockroachdb-0 \
app.kubernetes.io/name=cockroachdb \
app.kubernetes.io/instance=my-release \
app.kubernetes.io/component=cockroachdb
```
Delete the StatefulSet without deleting pods:
```shell
kubectl delete statefulset my-release-cockroachdb --cascade=false
```
Verify that no pod is deleted and then upgrade as normal. A new StatefulSet will be created, taking over the management of the existing pods and upgrading them if needed.
### See also
For more information about upgrading a cluster to the latest major release of CockroachDB, see [Upgrade to CockroachDB](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version.html).
Note that there are sometimes backward-incompatible changes to SQL features between major CockroachDB releases. For details, see the [Upgrade Policy](https://www.cockroachlabs.com/docs/cockroachcloud/upgrade-policy).
## Configuration
The following table lists the configurable parameters of the CockroachDB chart and their default values.
For details see the [`values.yaml`](values.yaml) file.
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `clusterDomain` | Cluster's default DNS domain | `cluster.local` |
| `conf.attrs` | CockroachDB node attributes | `[]` |
| `conf.cache` | Size of CockroachDB's in-memory cache | `25%` |
| `conf.cluster-name` | Name of CockroachDB cluster | `""` |
| `conf.disable-cluster-name-verification` | Disable CockroachDB cluster name verification | `no` |
| `conf.join` | List of already-existing CockroachDB instances | `[]` |
| `conf.max-disk-temp-storage` | Max storage capacity for temp data | `0` |
| `conf.max-offset` | Max allowed clock offset for CockroachDB cluster | `500ms` |
| `conf.max-sql-memory` | Max memory to use while processing SQL queries | `25%` |
| `conf.locality` | Locality attribute for this deployment | `""` |
| `conf.single-node` | Disable CockroachDB clustering (standalone mode) | `no` |
| `conf.sql-audit-dir` | Directory for SQL audit log | `""` |
| `conf.port` | CockroachDB primary serving port in Pods | `26257` |
| `conf.http-port` | CockroachDB HTTP port in Pods | `8080` |
| `conf.path` | CockroachDB data directory mount path | `cockroach-data` |
| `conf.store.enabled` | Enable store configuration for CockroachDB | `false` |
| `conf.store.type` | CockroachDB storage type | `""` |
| `conf.store.size` | CockroachDB storage size | `""` |
| `conf.store.attrs` | CockroachDB storage attributes | `""` |
| `image.repository` | Container image name | `cockroachdb/cockroach` |
| `image.tag` | Container image tag | `v24.1.3` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `image.credentials` | `registry`, `user` and `pass` credentials to pull private image | `{}` |
| `statefulset.replicas` | StatefulSet replicas number | `3` |
| `statefulset.updateStrategy` | Update strategy for StatefulSet Pods | `{"type": "RollingUpdate"}` |
| `statefulset.podManagementPolicy` | `OrderedReady`/`Parallel` Pods creation/deletion order | `Parallel` |
| `statefulset.budget.maxUnavailable` | k8s PodDisruptionBudget parameter | `1` |
| `statefulset.args` | Extra command-line arguments | `[]` |
| `statefulset.env` | Extra env vars | `[]` |
| `statefulset.secretMounts` | Additional Secrets to mount at cluster members | `[]` |
| `statefulset.labels` | Additional labels of StatefulSet and its Pods | `{"app.kubernetes.io/component": "cockroachdb"}` |
| `statefulset.annotations` | Additional annotations of StatefulSet Pods | `{}` |
| `statefulset.nodeAffinity` | [Node affinity rules][2] of StatefulSet Pods | `{}` |
| `statefulset.podAffinity` | [Inter-Pod affinity rules][1] of StatefulSet Pods | `{}` |
| `statefulset.podAntiAffinity` | [Anti-affinity rules][1] of StatefulSet Pods | auto |
| `statefulset.podAntiAffinity.topologyKey` | The topologyKey for auto [anti-affinity rules][1] | `kubernetes.io/hostname` |
| `statefulset.podAntiAffinity.type` | Type of auto [anti-affinity rules][1] | `soft` |
| `statefulset.podAntiAffinity.weight` | Weight for `soft` auto [anti-affinity rules][1] | `100` |
| `statefulset.nodeSelector` | Node labels for StatefulSet Pods assignment | `{}` |
| `statefulset.priorityClassName` | [PriorityClassName][4] for StatefulSet Pods | `""` |
| `statefulset.tolerations` | Node taints to tolerate by StatefulSet Pods | `[]` |
| `statefulset.topologySpreadConstraints` | [Topology Spread Constraints rules][5] of StatefulSet Pods | auto |
| `statefulset.topologySpreadConstraints.maxSkew` | Degree to which Pods may be unevenly distributed | `1` |
| `statefulset.topologySpreadConstraints.topologyKey` | The key of node labels | `topology.kubernetes.io/zone` |
| `statefulset.topologySpreadConstraints.whenUnsatisfiable` | `ScheduleAnyway`/`DoNotSchedule` for unsatisfiable constraints | `ScheduleAnyway` |
| `statefulset.resources` | Resource requests and limits for StatefulSet Pods | `{}` |
| `statefulset.customLivenessProbe` | Custom Liveness probe | `{}` |
| `statefulset.customReadinessProbe` | Custom Readiness probe | `{}` |
| `service.ports.grpc.external.port` | CockroachDB primary serving port in Services | `26257` |
| `service.ports.grpc.external.name` | CockroachDB primary serving port name in Services | `grpc` |
| `service.ports.grpc.internal.port` | CockroachDB inter-communication port in Services | `26257` |
| `service.ports.grpc.internal.name` | CockroachDB inter-communication port name in Services | `grpc-internal` |
| `service.ports.http.port` | CockroachDB HTTP port in Services | `8080` |
| `service.ports.http.name` | CockroachDB HTTP port name in Services | `http` |
| `service.public.type` | Public Service type | `ClusterIP` |
| `service.public.labels` | Additional labels of public Service | `{"app.kubernetes.io/component": "cockroachdb"}` |
| `service.public.annotations` | Additional annotations of public Service | `{}` |
| `service.discovery.labels` | Additional labels of discovery Service | `{"app.kubernetes.io/component": "cockroachdb"}` |
| `service.discovery.annotations` | Additional annotations of discovery Service | `{}` |
| `ingress.enabled` | Enable ingress resource for CockroachDB | `false` |
| `ingress.labels` | Additional labels of Ingress | `{}` |
| `ingress.annotations` | Additional annotations of Ingress | `{}` |
| `ingress.paths` | Paths for the default host | `[/]` |
| `ingress.hosts` | CockroachDB Ingress hostnames | `[]` |
| `ingress.tls[0].hosts` | CockroachDB Ingress tls hostnames | `nil` |
| `ingress.tls[0].secretName` | CockroachDB Ingress tls secret name | `nil` |
| `prometheus.enabled` | Enable automatic monitoring of all instances when Prometheus is running | `true` |
| `serviceMonitor.enabled` | Create [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#servicemonitor) Resource for scraping metrics using [PrometheusOperator](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#prometheus-operator) | `false` |
| `serviceMonitor.labels` | Additional labels of ServiceMonitor | `{}` |
| `serviceMonitor.annotations` | Additional annotations of ServiceMonitor | `{}` |
| `serviceMonitor.interval` | ServiceMonitor scrape metrics interval | `10s` |
| `serviceMonitor.scrapeTimeout` | ServiceMonitor scrape timeout | `nil` |
| `serviceMonitor.tlsConfig` | Additional TLS configuration of ServiceMonitor | `{}` |
| `serviceMonitor.namespaced` | Limit ServiceMonitor to current namespace | `false` |
| `storage.hostPath` | Absolute path on host to store data | `""` |
| `storage.persistentVolume.enabled` | Whether to use PersistentVolume to store data | `yes` |
| `storage.persistentVolume.size` | PersistentVolume size | `100Gi` |
| `storage.persistentVolume.storageClass` | PersistentVolume class | `""` |
| `storage.persistentVolume.labels` | Additional labels of PersistentVolumeClaim | `{}` |
| `storage.persistentVolume.annotations` | Additional annotations of PersistentVolumeClaim | `{}` |
| `init.labels` | Additional labels of init Job and its Pod | `{"app.kubernetes.io/component": "init"}` |
| `init.jobAnnotations` | Additional annotations of the init Job itself | `{}` |
| `init.annotations` | Additional annotations of the Pod of init Job | `{}` |
| `init.affinity` | [Affinity rules][2] of init Job Pod | `{}` |
| `init.nodeSelector` | Node labels for init Job Pod assignment | `{}` |
| `init.tolerations` | Node taints to tolerate by init Job Pod | `[]` |
| `init.resources` | Resource requests and limits for the `cluster-init` container | `{}` |
| `tls.enabled` | Whether to run securely using TLS certificates | `no` |
| `tls.serviceAccount.create` | Whether to create a new RBAC service account | `yes` |
| `tls.serviceAccount.name` | Name of RBAC service account to use | `""` |
| `tls.copyCerts.image` | Image used in copy certs init container | `busybox` |
| `tls.copyCerts.resources` | Resource requests and limits for the `copy-certs` container | `{}` |
| `tls.certs.provided` | Bring your own certs scenario, i.e., certificates are provided | `no` |
| `tls.certs.clientRootSecret` | If certs are provided, secret name for client root cert | `cockroachdb-root` |
| `tls.certs.nodeSecret` | If certs are provided, secret name for node cert | `cockroachdb-node` |
| `tls.certs.tlsSecret` | Own certs are stored in TLS secret | `no` |
| `tls.certs.selfSigner.enabled` | Whether cockroachdb should generate its own self-signed certs | `true` |
| `tls.certs.selfSigner.caProvided` | Bring your own CA scenario. This CA will be used to generate node and client cert | `false` |
| `tls.certs.selfSigner.caSecret` | If CA is provided, secret name for CA cert | `""` |
| `tls.certs.selfSigner.minimumCertDuration` | Minimum duration for all certificates; every certificate's duration is validated against this value | `624h` |
| `tls.certs.selfSigner.caCertDuration` | Duration of the CA cert, in hours | `43824h` |
| `tls.certs.selfSigner.caCertExpiryWindow` | Window before actual expiry within which the CA cert should be rotated | `648h` |
| `tls.certs.selfSigner.clientCertDuration` | Duration of the client cert, in hours | `672h` |
| `tls.certs.selfSigner.clientCertExpiryWindow` | Window before actual expiry within which the client cert should be rotated | `48h` |
| `tls.certs.selfSigner.nodeCertDuration` | Duration of the node cert, in hours | `8760h` |
| `tls.certs.selfSigner.nodeCertExpiryWindow` | Window before actual expiry within which node certs should be rotated | `168h` |
| `tls.certs.selfSigner.rotateCerts` | Whether to rotate the certs generated by the self-signer | `true` |
| `tls.certs.selfSigner.readinessWait` | Wait time for each CockroachDB replica to become ready after it reaches the running state. Only considered when `rotateCerts` is `true` | `30s` |
| `tls.certs.selfSigner.podUpdateTimeout` | Wait time for each CockroachDB replica to reach the running state. Only considered when `rotateCerts` is `true` | `2m` |
| `tls.certs.certManager` | Provision certificates with cert-manager | `false` |
| `tls.certs.certManagerIssuer.group` | IssuerRef group to use when generating certificates | `cert-manager.io` |
| `tls.certs.certManagerIssuer.kind` | IssuerRef kind to use when generating certificates | `Issuer` |
| `tls.certs.certManagerIssuer.name` | IssuerRef name to use when generating certificates | `cockroachdb` |
| `tls.certs.certManagerIssuer.clientCertDuration` | Duration of client cert in hours | `672h` |
| `tls.certs.certManagerIssuer.clientCertExpiryWindow` | Window before actual expiry within which the client cert should be rotated | `48h` |
| `tls.certs.certManagerIssuer.nodeCertDuration` | Duration of node cert in hours | `8760h` |
| `tls.certs.certManagerIssuer.nodeCertExpiryWindow` | Window before actual expiry within which node certs should be rotated | `168h` |
| `tls.selfSigner.image.repository` | Image to use for self-signing TLS certificates | `cockroachlabs-helm-charts/cockroach-self-signer-cert`|
| `tls.selfSigner.image.tag` | Image tag to use for self-signing TLS certificates | `0.1` |
| `tls.selfSigner.image.pullPolicy` | Self-signing TLS certificates container pull policy | `IfNotPresent` |
| `tls.selfSigner.image.credentials` | `registry`, `user` and `pass` credentials to pull private image | `{}` |
| `networkPolicy.enabled` | Enable NetworkPolicy for CockroachDB's Pods | `no` |
| `networkPolicy.ingress.grpc` | Whitelist resources to access gRPC port of CockroachDB's Pods | `[]` |
| `networkPolicy.ingress.http` | Whitelist resources to access HTTP port of CockroachDB's Pods | `[]` |
Override the default parameters using the `--set key=value[,key=value]` argument to `helm install`.
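For example (illustrative values):
```shell
helm install my-release cockroachdb/cockroachdb \
  --set statefulset.replicas=5 \
  --set prometheus.enabled=false
```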
Alternatively, a YAML file that specifies custom values for the parameters can be provided while installing the chart. For example:
```shell
helm install my-release -f my-values.yaml cockroachdb/cockroachdb
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Deep dive
### Connecting to the CockroachDB cluster
Once you've created the cluster, you can start talking to it by connecting to its `-public` Service. CockroachDB is PostgreSQL wire protocol compatible, so there's a [wide variety of supported clients](https://www.cockroachlabs.com/docs/install-client-drivers.html). As an example, we'll open up a SQL shell using CockroachDB's built-in shell and play around with it a bit, like this (likely needing to replace `my-release-cockroachdb-public` with the name of the `-public` Service that was created with your installed chart):
```shell
kubectl run cockroach-client --rm -it \
--image=cockroachdb/cockroach \
--restart=Never \
-- sql --insecure --host my-release-cockroachdb-public
```
```
Waiting for pod default/cockroach-client to be running, status is Pending,
pod ready: false
If you don't see a command prompt, try pressing enter.
root@my-release-cockroachdb-public:26257> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| pg_catalog |
| system |
+--------------------+
(3 rows)
root@my-release-cockroachdb-public:26257> CREATE DATABASE bank;
CREATE DATABASE
root@my-release-cockroachdb-public:26257> CREATE TABLE bank.accounts (id INT
PRIMARY KEY, balance DECIMAL);
CREATE TABLE
root@my-release-cockroachdb-public:26257> INSERT INTO bank.accounts VALUES
(1234, 10000.50);
INSERT 1
root@my-release-cockroachdb-public:26257> SELECT * FROM bank.accounts;
+------+---------+
| id | balance |
+------+---------+
| 1234 | 10000.5 |
+------+---------+
(1 row)
root@my-release-cockroachdb-public:26257> \q
Waiting for pod default/cockroach-client to terminate, status is Running
pod "cockroach-client" deleted
```
> If you are running in secure mode, you will have to provide a client certificate to the cluster in order to authenticate, so the above command will not work. See [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) for an example of how to set up an interactive SQL shell against a secure cluster or [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/example-app-secure.yaml) for an example application connecting to a secure cluster.
### Cluster health
Because our pod spec includes regular health checks of the CockroachDB processes, simply running `kubectl get pods` and looking at the `STATUS` column is sufficient to determine the health of each instance in the cluster.
If you want more detailed information about the cluster, the best place to look is the Admin UI.
### Accessing the Admin UI
If you want to see information about how the cluster is doing, you can try pulling up the CockroachDB Admin UI by port-forwarding from your local machine to one of the pods (replacing `my-release-cockroachdb-0` with the name of one of your pods):
```shell
kubectl port-forward my-release-cockroachdb-0 8080
```
You should then be able to access the Admin UI by visiting <http://localhost:8080/> in your web browser.
### Failover
If any CockroachDB member fails, it is restarted or recreated automatically by the Kubernetes infrastructure, and will re-join the cluster automatically when it comes back up. You can test this scenario by killing any of the CockroachDB pods:
```shell
kubectl delete pod my-release-cockroachdb-1
```
```shell
kubectl get pods -l "app.kubernetes.io/instance=my-release,app.kubernetes.io/component=cockroachdb"
```
```
NAME READY STATUS RESTARTS AGE
my-release-cockroachdb-0 1/1 Running 0 5m
my-release-cockroachdb-2 1/1 Running 0 5m
```
After a while:
```shell
kubectl get pods -l "app.kubernetes.io/instance=my-release,app.kubernetes.io/component=cockroachdb"
```
```
NAME READY STATUS RESTARTS AGE
my-release-cockroachdb-0 1/1 Running 0 5m
my-release-cockroachdb-1 1/1 Running 0 20s
my-release-cockroachdb-2 1/1 Running 0 5m
```
You can check the state of re-joining from the new pod's logs:
```shell
kubectl logs my-release-cockroachdb-1
```
```
[...]
I161028 19:32:09.754026 1 server/node.go:586 [n1] node connected via gossip and
verified as part of cluster {"35ecbc27-3f67-4e7d-9b8f-27c31aae17d6"}
[...]
cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257
build: beta-20161027-55-gd2d3c7f @ 2016/10/28 19:27:25 (go1.7.3)
admin: http://0.0.0.0:8080
sql:
postgresql://root@my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257?sslmode=disable
logs: cockroach-data/logs
store[0]: path=cockroach-data
status: restarted pre-existing node
clusterID: {35ecbc27-3f67-4e7d-9b8f-27c31aae17d6}
nodeID: 2
[...]
```
### NetworkPolicy
To enable NetworkPolicy for CockroachDB, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `yes`/`true`.
For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the `DefaultDeny` Namespace annotation. Note: this will enforce policy for _all_ pods in the Namespace:
```shell
kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
```
For more precise policy, set `networkPolicy.ingress.grpc` and `networkPolicy.ingress.http` rules. This will only allow pods that match the provided rules to connect to CockroachDB.
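For example, the rules could be expressed in `values.yaml` like this (a sketch; the entries are standard NetworkPolicy `from` selectors, and the labels shown are placeholders for your own client and monitoring workloads):
```yaml
networkPolicy:
  enabled: true
  ingress:
    grpc:
      - podSelector:
          matchLabels:
            app.kubernetes.io/name: my-sql-client  # placeholder label for your SQL clients
    http:
      - namespaceSelector:
          matchLabels:
            name: monitoring                       # placeholder label for your monitoring namespace
```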
### Scaling
Scaling should be managed via the `helm upgrade` command. After resizing your cluster on your cloud environment (e.g., GKE or EKS), run the following command to add a pod. This assumes you scaled from 3 to 4 nodes:
```shell
helm upgrade \
my-release \
cockroachdb/cockroachdb \
--set statefulset.replicas=4 \
--reuse-values
```
Note that if you are running in secure mode (`tls.enabled` is `yes`/`true`) and increase the size of your cluster, you will also have to approve the CSR (certificate-signing request) of each new node (using `kubectl get csr` and `kubectl certificate approve`).
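A sketch of that approval flow (the CSR name below is illustrative; use the names reported by `kubectl get csr`):
```shell
kubectl get csr
kubectl certificate approve default.node.my-release-cockroachdb-3
```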
[1]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity
[2]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
[3]: https://cert-manager.io/
[4]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
[5]: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/

View File

@@ -0,0 +1,9 @@
# CockroachDB Chart
CockroachDB is a Distributed SQL database that runs natively in Kubernetes. It gives you resilient, horizontal scale across multiple clouds with always-on availability and data partitioned by location.
CockroachDB scales horizontally without reconfiguration or need for a massive architectural overhaul. Simply add a new node to the cluster and CockroachDB takes care of the underlying complexity.
- Scale by simply adding new nodes to a CockroachDB cluster
- Automate balancing and distribution of ranges, not shards
- Optimize server utilization evenly across all nodes

View File

@@ -0,0 +1,50 @@
CockroachDB can be accessed via port {{ .Values.service.ports.grpc.external.port }} at the
following DNS name from within your cluster:
{{ template "cockroachdb.fullname" . }}-public.{{ .Release.Namespace }}.svc.cluster.local
Because CockroachDB supports the PostgreSQL wire protocol, you can connect to
the cluster using any available PostgreSQL client.
{{- if not .Values.tls.enabled }}
For example, you can open up a SQL shell to the cluster by running:
kubectl run -it --rm cockroach-client \
--image=cockroachdb/cockroach \
--restart=Never \
{{- if .Values.networkPolicy.enabled }}
--labels="{{ template "cockroachdb.fullname" . }}-client=true" \
{{- end }}
--command -- \
./cockroach sql --insecure --host={{ template "cockroachdb.fullname" . }}-public.{{ .Release.Namespace }}
From there, you can interact with the SQL shell as you would any other SQL
shell, confident that any data you write will be safe and available even if
parts of your cluster fail.
{{- else }}
Note that because the cluster is running in secure mode, any client application
that you attempt to connect will either need to have a valid client certificate
or a valid username and password.
{{- end }}
{{- if and (.Values.networkPolicy.enabled) (not (empty .Values.networkPolicy.ingress.grpc)) }}
Note: Since NetworkPolicy is enabled, the only Pods allowed to connect to this
CockroachDB cluster are:
1. Having the label: "{{ template "cockroachdb.fullname" . }}-client=true"
2. Matching the following rules: {{- toYaml .Values.networkPolicy.ingress.grpc | nindent 0 }}
{{- end }}
Finally, to open up the CockroachDB admin UI, you can port-forward from your
local machine into one of the instances in the cluster:
kubectl port-forward -n {{ .Release.Namespace }} {{ template "cockroachdb.fullname" . }}-0 {{ index .Values.conf `http-port` | int64 }}
Then you can access the admin UI at http{{ if .Values.tls.enabled }}s{{ end }}://localhost:{{ index .Values.conf `http-port` | int64 }}/ in your web browser.
For more information on using CockroachDB, please see the project's docs at:
https://www.cockroachlabs.com/docs/

View File

@@ -0,0 +1,291 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "cockroachdb.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 56 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "cockroachdb.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 56 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 56 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 56 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified app name for cluster scope resource.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name with release namespace appended at the end.
*/}}
{{- define "cockroachdb.clusterfullname" -}}
{{- if .Values.fullnameOverride -}}
{{- printf "%s-%s" .Values.fullnameOverride .Release.Namespace | trunc 56 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- printf "%s-%s" .Release.Name .Release.Namespace | trunc 56 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s-%s" .Release.Name $name .Release.Namespace | trunc 56 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "cockroachdb.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 56 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the ServiceAccount to use.
*/}}
{{- define "cockroachdb.serviceAccount.name" -}}
{{- if .Values.statefulset.serviceAccount.create -}}
{{- default (include "cockroachdb.fullname" .) .Values.statefulset.serviceAccount.name -}}
{{- else -}}
{{- default "default" .Values.statefulset.serviceAccount.name -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for NetworkPolicy.
*/}}
{{- define "cockroachdb.networkPolicy.apiVersion" -}}
{{- if semverCompare ">=1.4-0, <=1.7-0" .Capabilities.KubeVersion.Version -}}
{{- print "extensions/v1beta1" -}}
{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.Version -}}
{{- print "networking.k8s.io/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for StatefulSets
*/}}
{{- define "cockroachdb.statefulset.apiVersion" -}}
{{- if semverCompare "<1.12-0" .Capabilities.KubeVersion.Version -}}
{{- print "apps/v1beta1" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return CockroachDB store expression
*/}}
{{- define "cockroachdb.conf.store" -}}
{{- $isInMemory := eq (.Values.conf.store.type | toString) "mem" -}}
{{- $persistentSize := empty .Values.conf.store.size | ternary .Values.storage.persistentVolume.size .Values.conf.store.size -}}
{{- $store := dict -}}
{{- $_ := set $store "type" ($isInMemory | ternary "type=mem" "") -}}
{{- $_ := set $store "path" ($isInMemory | ternary "" (print "path=" .Values.conf.path)) -}}
{{- $_ := set $store "size" (print "size=" ($isInMemory | ternary .Values.conf.store.size $persistentSize)) -}}
{{- $_ := set $store "attrs" (empty .Values.conf.store.attrs | ternary "" (print "attrs=" .Values.conf.store.attrs)) -}}
{{ compact (values $store) | join "," }}
{{- end -}}
{{/*
Define the default values for the certificate selfSigner inputs
*/}}
{{- define "selfcerts.fullname" -}}
{{- printf "%s-%s" (include "cockroachdb.fullname" .) "self-signer" | trunc 56 | trimSuffix "-" -}}
{{- end -}}
{{- define "rotatecerts.fullname" -}}
{{- printf "%s-%s" (include "cockroachdb.fullname" .) "rotate-self-signer" | trunc 56 | trimSuffix "-" -}}
{{- end -}}
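{{/*
selfcerts.minimumCertDuration returns tls.certs.selfSigner.minimumCertDuration when it is set explicitly;
otherwise it falls back to the smaller of (clientCertDuration - clientCertExpiryWindow) and
(nodeCertDuration - nodeCertExpiryWindow). With the chart defaults that is min(672h - 48h, 8760h - 168h) = 624h.
*/}}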
{{- define "selfcerts.minimumCertDuration" -}}
{{- if .Values.tls.certs.selfSigner.minimumCertDuration -}}
{{- print (.Values.tls.certs.selfSigner.minimumCertDuration | trimSuffix "h") -}}
{{- else }}
{{- $minCertDuration := min (sub (.Values.tls.certs.selfSigner.clientCertDuration | trimSuffix "h" ) (.Values.tls.certs.selfSigner.clientCertExpiryWindow | trimSuffix "h")) (sub (.Values.tls.certs.selfSigner.nodeCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.nodeCertExpiryWindow | trimSuffix "h")) -}}
{{- print $minCertDuration -}}
{{- end }}
{{- end -}}
{{/*
Define the cron schedules for the certificate rotation jobs, converting durations in hours into a valid cron string.
We assume that each month has 31 days, so over a year the job may run a few days early. A cron schedule cannot
span more than a year, so we schedule the run as close as possible to the expiry window; as a result, the cron
may fire earlier than the expiry window.
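For example, with the default caCertDuration of 43824h and caCertExpiryWindow of 648h, the remaining 43176h
works out to hours=0, days=2 and months=*/11, i.e. the CA rotation schedule "0 0 2 */11 *".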
*/}}
{{- define "selfcerts.caRotateSchedule" -}}
{{- $tempHours := sub (.Values.tls.certs.selfSigner.caCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.caCertExpiryWindow | trimSuffix "h") -}}
{{- $days := "*" -}}
{{- $months := "*" -}}
{{- $hours := mod $tempHours 24 -}}
{{- if not (eq $hours $tempHours) -}}
{{- $tempDays := div $tempHours 24 -}}
{{- $days = mod $tempDays 31 -}}
{{- if not (eq $days $tempDays) -}}
{{- $days = add $days 1 -}}
{{- $tempMonths := div $tempDays 31 -}}
{{- $months = mod $tempMonths 12 -}}
{{- if not (eq $months $tempMonths) -}}
{{- $months = add $months 1 -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if ne (toString $months) "*" -}}
{{- $months = printf "*/%s" (toString $months) -}}
{{- else -}}
{{- if ne (toString $days) "*" -}}
{{- $days = printf "*/%s" (toString $days) -}}
{{- else -}}
{{- if ne $hours 0 -}}
{{- $hours = printf "*/%s" (toString $hours) -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- printf "0 %s %s %s *" (toString $hours) (toString $days) (toString $months) -}}
{{- end -}}
{{- define "selfcerts.clientRotateSchedule" -}}
{{- $tempHours := int64 (include "selfcerts.minimumCertDuration" .) -}}
{{- $days := "*" -}}
{{- $months := "*" -}}
{{- $hours := mod $tempHours 24 -}}
{{- if not (eq $hours $tempHours) -}}
{{- $tempDays := div $tempHours 24 -}}
{{- $days = mod $tempDays 31 -}}
{{- if not (eq $days $tempDays) -}}
{{- $days = add $days 1 -}}
{{- $tempMonths := div $tempDays 31 -}}
{{- $months = mod $tempMonths 12 -}}
{{- if not (eq $months $tempMonths) -}}
{{- $months = add $months 1 -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if ne (toString $months) "*" -}}
{{- $months = printf "*/%s" (toString $months) -}}
{{- else -}}
{{- if ne (toString $days) "*" -}}
{{- $days = printf "*/%s" (toString $days) -}}
{{- else -}}
{{- if ne $hours 0 -}}
{{- $hours = printf "*/%s" (toString $hours) -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- printf "0 %s %s %s *" (toString $hours) (toString $days) (toString $months) -}}
{{- end -}}
{{/*
Define the appropriate validations for the certificate selfSigner inputs
*/}}
{{/*
Validate that if caProvided is true, then the caSecret must not be empty and secret must be present in the namespace.
*/}}
{{- define "cockroachdb.tls.certs.selfSigner.caProvidedValidation" -}}
{{- if .Values.tls.certs.selfSigner.caProvided -}}
{{- if eq "" .Values.tls.certs.selfSigner.caSecret -}}
{{ fail "CA secret can't be empty if caProvided is set to true" }}
{{- else -}}
{{- if not (lookup "v1" "Secret" .Release.Namespace .Values.tls.certs.selfSigner.caSecret) }}
{{ fail "CA secret is not present in the release namespace" }}
{{- end }}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Validate that caCertDuration and caCertExpiryWindow are not empty, and that neither caCertExpiryWindow nor
(caCertDuration - caCertExpiryWindow) is less than minimumCertDuration.
*/}}
{{- define "cockroachdb.tls.certs.selfSigner.caCertValidation" -}}
{{- if not .Values.tls.certs.selfSigner.caProvided -}}
{{- if or (not .Values.tls.certs.selfSigner.caCertDuration) (not .Values.tls.certs.selfSigner.caCertExpiryWindow) }}
{{ fail "CA cert duration or CA cert expiry window can not be empty" }}
{{- else }}
{{- if gt (int64 (include "selfcerts.minimumCertDuration" .)) (int64 (.Values.tls.certs.selfSigner.caCertExpiryWindow | trimSuffix "h")) -}}
{{ fail "CA cert expiration window should not be less than minimum Cert duration" }}
{{- end -}}
{{- if gt (int64 (include "selfcerts.minimumCertDuration" .)) (sub (.Values.tls.certs.selfSigner.caCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.caCertExpiryWindow | trimSuffix "h")) -}}
{{ fail "CA cert Duration minus CA cert expiration window should not be less than minimum Cert duration" }}
{{- end -}}
{{- end -}}
{{- end }}
{{- end -}}
{{/*
Validate that clientCertDuration and clientCertExpiryWindow are not empty and that (clientCertDuration - clientCertExpiryWindow) is not less than minimumCertDuration.
*/}}
{{- define "cockroachdb.tls.certs.selfSigner.clientCertValidation" -}}
{{- if or (not .Values.tls.certs.selfSigner.clientCertDuration) (not .Values.tls.certs.selfSigner.clientCertExpiryWindow) }}
{{ fail "Client cert duration can not be empty" }}
{{- else }}
{{- if lt (sub (.Values.tls.certs.selfSigner.clientCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.clientCertExpiryWindow | trimSuffix "h")) (int64 (include "selfcerts.minimumCertDuration" .)) }}
{{ fail "Client cert duration minus client cert expiry window should not be less than minimum Cert duration" }}
{{- end }}
{{- end }}
{{- end -}}
{{/*
Validate that nodeCertDuration and nodeCertExpiryWindow are not empty and that (nodeCertDuration - nodeCertExpiryWindow) is not less than minimumCertDuration.
*/}}
{{- define "cockroachdb.tls.certs.selfSigner.nodeCertValidation" -}}
{{- if or (not .Values.tls.certs.selfSigner.nodeCertDuration) (not .Values.tls.certs.selfSigner.nodeCertExpiryWindow) }}
{{ fail "Node cert duration can not be empty" }}
{{- else }}
{{- if lt (sub (.Values.tls.certs.selfSigner.nodeCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.nodeCertExpiryWindow | trimSuffix "h")) (int64 (include "selfcerts.minimumCertDuration" .))}}
{{ fail "Node cert duration minus node cert expiry window should not be less than minimum Cert duration" }}
{{- end }}
{{- end }}
{{- end -}}
{{/*
Validate that if user enabled tls, then either self-signed certificates or certificate manager is enabled
*/}}
{{- define "cockroachdb.tlsValidation" -}}
{{- if .Values.tls.enabled -}}
{{- if and .Values.tls.certs.selfSigner.enabled .Values.tls.certs.certManager -}}
{{ fail "Can not enable the self signed certificates and certificate manager at the same time" }}
{{- end -}}
{{- if and (not .Values.tls.certs.selfSigner.enabled) (not .Values.tls.certs.certManager) -}}
{{- if not .Values.tls.certs.provided -}}
{{ fail "You have to enable either self signed certificates or certificate manager, if you have enabled tls" }}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "cockroachdb.tls.certs.selfSigner.validation" -}}
{{ include "cockroachdb.tls.certs.selfSigner.caProvidedValidation" . }}
{{ include "cockroachdb.tls.certs.selfSigner.caCertValidation" . }}
{{ include "cockroachdb.tls.certs.selfSigner.clientCertValidation" . }}
{{ include "cockroachdb.tls.certs.selfSigner.nodeCertValidation" . }}
{{- end -}}
{{- define "cockroachdb.securityContext.versionValidation" }}
{{- /* Allow using `securityContext` for custom images. */}}
{{- if ne "cockroachdb/cockroach" .Values.image.repository -}}
{{ print true }}
{{- else -}}
{{- if semverCompare ">=22.1.2" .Values.image.tag -}}
{{ print true }}
{{- else -}}
{{- if semverCompare ">=21.2.13, <22.1.0" .Values.image.tag -}}
{{ print true }}
{{- else -}}
{{ print false }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,21 @@
{{- if .Values.iap.enabled }}
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: {{ template "cockroachdb.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    helm.sh/chart: {{ template "cockroachdb.chart" . }}
    app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    {{- with .Values.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: {{ template "cockroachdb.fullname" . }}.iap
  timeoutSec: 120
{{- end }}

View File

@@ -0,0 +1,31 @@
{{- if and .Values.tls.enabled .Values.tls.certs.certManager }}
{{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }}
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ template "cockroachdb.fullname" . }}-ca-cert
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
isCA: true
secretName: {{ .Values.tls.certs.caSecret }}
privateKey:
algorithm: ECDSA
size: 256
commonName: root
subject:
organizations:
- Cockroach
issuerRef:
name: {{ .Values.tls.certs.certManagerIssuer.name }}
kind: {{ .Values.tls.certs.certManagerIssuer.kind }}
group: {{ .Values.tls.certs.certManagerIssuer.group }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,40 @@
{{- if and .Values.tls.enabled .Values.tls.certs.certManager }}
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ template "cockroachdb.fullname" . }}-root-client
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
duration: {{ .Values.tls.certs.certManagerIssuer.clientCertDuration }}
renewBefore: {{ .Values.tls.certs.certManagerIssuer.clientCertExpiryWindow }}
usages:
- digital signature
- key encipherment
- client auth
privateKey:
algorithm: RSA
size: 2048
commonName: root
subject:
organizations:
- Cockroach
secretName: {{ .Values.tls.certs.clientRootSecret }}
issuerRef:
{{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }}
name: {{ template "cockroachdb.fullname" . }}-ca-issuer
kind: Issuer
group: cert-manager.io
{{- else }}
name: {{ .Values.tls.certs.certManagerIssuer.name }}
kind: {{ .Values.tls.certs.certManagerIssuer.kind }}
group: {{ .Values.tls.certs.certManagerIssuer.group }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,20 @@
{{- if and .Values.tls.enabled .Values.tls.certs.certManager }}
{{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }}
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: {{ template "cockroachdb.fullname" . }}-ca-issuer
  namespace: {{ .Release.Namespace | quote }}
  labels:
    helm.sh/chart: {{ template "cockroachdb.chart" . }}
    app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    {{- with .Values.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  ca:
    secretName: {{ .Values.tls.certs.caSecret }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,50 @@
{{- if and .Values.tls.enabled .Values.tls.certs.certManager }}
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ template "cockroachdb.fullname" . }}-node
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
duration: {{ .Values.tls.certs.certManagerIssuer.nodeCertDuration }}
renewBefore: {{ .Values.tls.certs.certManagerIssuer.nodeCertExpiryWindow }}
usages:
- digital signature
- key encipherment
- server auth
- client auth
privateKey:
algorithm: RSA
size: 2048
commonName: node
subject:
organizations:
- Cockroach
dnsNames:
- "localhost"
- "127.0.0.1"
- {{ printf "%s-public" (include "cockroachdb.fullname" .) | quote }}
- {{ printf "%s-public.%s" (include "cockroachdb.fullname" .) .Release.Namespace | quote }}
- {{ printf "%s-public.%s.svc.%s" (include "cockroachdb.fullname" .) .Release.Namespace .Values.clusterDomain | quote }}
- {{ printf "*.%s" (include "cockroachdb.fullname" .) | quote }}
- {{ printf "*.%s.%s" (include "cockroachdb.fullname" .) .Release.Namespace | quote }}
- {{ printf "*.%s.%s.svc.%s" (include "cockroachdb.fullname" .) .Release.Namespace .Values.clusterDomain | quote }}
secretName: {{ .Values.tls.certs.nodeSecret }}
issuerRef:
{{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }}
name: {{ template "cockroachdb.fullname" . }}-ca-issuer
kind: Issuer
group: cert-manager.io
{{- else }}
name: {{ .Values.tls.certs.certManagerIssuer.name }}
kind: {{ .Values.tls.certs.certManagerIssuer.kind }}
group: {{ .Values.tls.certs.certManagerIssuer.group }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,19 @@
{{- if and .Values.tls.enabled (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "cockroachdb.clusterfullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    helm.sh/chart: {{ template "cockroachdb.chart" . }}
    app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    {{- with .Values.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests"]
  verbs: ["create", "get", "watch"]
{{- end }}

View File

@@ -0,0 +1,23 @@
{{- if and .Values.tls.enabled (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "cockroachdb.clusterfullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    helm.sh/chart: {{ template "cockroachdb.chart" . }}
    app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    {{- with .Values.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ template "cockroachdb.clusterfullname" . }}
subjects:
- kind: ServiceAccount
  name: {{ template "cockroachdb.serviceAccount.name" . }}
  namespace: {{ .Release.Namespace | quote }}
{{- end }}

View File

@@ -0,0 +1,46 @@
{{- if and .Values.tls.enabled (and .Values.tls.certs.selfSigner.enabled (not .Values.tls.certs.selfSigner.caProvided)) }}
{{- if .Values.tls.certs.selfSigner.rotateCerts }}
{{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
apiVersion: batch/v1
{{- else }}
apiVersion: batch/v1beta1
{{- end }}
kind: CronJob
metadata:
name: {{ template "rotatecerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
spec:
schedule: {{ template "selfcerts.caRotateSchedule" . }}
jobTemplate:
spec:
backoffLimit: 1
template:
spec:
restartPolicy: Never
containers:
- name: cert-rotate-job
image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}"
imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}"
args:
- rotate
- --ca
- --ca-duration={{ .Values.tls.certs.selfSigner.caCertDuration }}
- --ca-expiry={{ .Values.tls.certs.selfSigner.caCertExpiryWindow }}
- --ca-cron={{ template "selfcerts.caRotateSchedule" . }}
- --readiness-wait={{ .Values.tls.certs.selfSigner.readinessWait }}
- --pod-update-timeout={{ .Values.tls.certs.selfSigner.podUpdateTimeout }}
env:
- name: STATEFULSET_NAME
value: {{ template "cockroachdb.fullname" . }}
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: CLUSTER_DOMAIN
value: {{ .Values.clusterDomain}}
serviceAccountName: {{ template "rotatecerts.fullname" . }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,53 @@
{{- if and .Values.tls.certs.selfSigner.enabled .Values.tls.certs.selfSigner.rotateCerts }}
{{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
apiVersion: batch/v1
{{- else }}
apiVersion: batch/v1beta1
{{- end }}
kind: CronJob
metadata:
name: {{ template "rotatecerts.fullname" . }}-client
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
spec:
schedule: {{ template "selfcerts.clientRotateSchedule" . }}
jobTemplate:
spec:
backoffLimit: 1
template:
spec:
restartPolicy: Never
containers:
- name: cert-rotate-job
image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}"
imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}"
args:
- rotate
{{- if .Values.tls.certs.selfSigner.caProvided }}
- --ca-secret={{ .Values.tls.certs.selfSigner.caSecret }}
{{- else }}
- --ca-duration={{ .Values.tls.certs.selfSigner.caCertDuration }}
- --ca-expiry={{ .Values.tls.certs.selfSigner.caCertExpiryWindow }}
{{- end }}
- --client
- --client-duration={{ .Values.tls.certs.selfSigner.clientCertDuration }}
- --client-expiry={{ .Values.tls.certs.selfSigner.clientCertExpiryWindow }}
- --node
- --node-duration={{ .Values.tls.certs.selfSigner.nodeCertDuration }}
- --node-expiry={{ .Values.tls.certs.selfSigner.nodeCertExpiryWindow }}
- --node-client-cron={{ template "selfcerts.clientRotateSchedule" . }}
- --readiness-wait={{ .Values.tls.certs.selfSigner.readinessWait }}
- --pod-update-timeout={{ .Values.tls.certs.selfSigner.podUpdateTimeout }}
env:
- name: STATEFULSET_NAME
value: {{ template "cockroachdb.fullname" . }}
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: CLUSTER_DOMAIN
value: {{ .Values.clusterDomain}}
serviceAccountName: {{ template "rotatecerts.fullname" . }}
{{- end}}


@ -0,0 +1,90 @@
{{- if .Values.ingress.enabled -}}
{{- $paths := .Values.ingress.paths -}}
{{- $ports := .Values.service.ports -}}
{{- $fullName := include "cockroachdb.fullname" . -}}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if $.Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
{{- if or .Values.ingress.annotations .Values.iap.enabled }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- if .Values.iap.enabled }}
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.allow-http: "false"
{{- end }}
{{- end }}
name: {{ $fullName }}-ingress
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ $.Release.Name | quote }}
app.kubernetes.io/managed-by: {{ $.Release.Service | quote }}
{{- if .Values.ingress.labels }}
{{- toYaml .Values.ingress.labels | nindent 4 }}
{{- end }}
spec:
rules:
{{- if .Values.ingress.hosts }}
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
{{- range $path := $paths }}
- path: {{ $path | quote }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
{{- if $.Values.iap.enabled }}
pathType: ImplementationSpecific
{{- else }}
pathType: Prefix
{{- end }}
{{- end }}
backend:
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
service:
name: {{ $fullName }}-public
port:
name: {{ $ports.http.name | quote }}
{{- else }}
serviceName: {{ $fullName }}-public
servicePort: {{ $ports.http.name | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- else }}
- http:
paths:
{{- range $path := $paths }}
- path: {{ $path | quote }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
{{- if $.Values.iap.enabled }}
pathType: ImplementationSpecific
{{- else }}
pathType: Prefix
{{- end }}
{{- end }}
backend:
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
service:
name: {{ $fullName }}-public
port:
name: {{ $ports.http.name | quote }}
{{- else }}
serviceName: {{ $fullName }}-public
servicePort: {{ $ports.http.name | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- toYaml .Values.ingress.tls | nindent 4 }}
{{- end }}
{{- end }}
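As an illustration, the values below exercise the main branches of this template (host-based rule, custom annotation, TLS block). The host, controller class, and Secret name are placeholders:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx        # placeholder controller class
  labels: {}
  paths: ["/"]
  hosts:
    - cockroachdb.example.com                 # placeholder host
  tls:
    - hosts: [cockroachdb.example.com]
      secretName: cockroachdb-ingress-tls     # placeholder Secret
```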


@ -0,0 +1,83 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "selfcerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "4"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
spec:
template:
metadata:
name: {{ template "selfcerts.fullname" . }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.tls.selfSigner.annotations }}
annotations: {{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- if and .Values.tls.certs.selfSigner.securityContext.enabled }}
securityContext:
seccompProfile:
type: "RuntimeDefault"
runAsGroup: 1000
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
{{- end }}
restartPolicy: Never
{{- if or .Values.tls.selfSigner.nodeAffinity }}
affinity:
{{- with .Values.tls.selfSigner.nodeAffinity }}
nodeAffinity: {{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
{{- with .Values.tls.selfSigner.nodeSelector }}
nodeSelector: {{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tls.selfSigner.tolerations }}
tolerations: {{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: cert-generate-job
image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}"
imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}"
args:
- generate
{{- if .Values.tls.certs.selfSigner.caProvided }}
- --ca-secret={{ .Values.tls.certs.selfSigner.caSecret }}
{{- else }}
- --ca-duration={{ .Values.tls.certs.selfSigner.caCertDuration }}
- --ca-expiry={{ .Values.tls.certs.selfSigner.caCertExpiryWindow }}
{{- end }}
- --client-duration={{ .Values.tls.certs.selfSigner.clientCertDuration }}
- --client-expiry={{ .Values.tls.certs.selfSigner.clientCertExpiryWindow }}
- --node-duration={{ .Values.tls.certs.selfSigner.nodeCertDuration }}
- --node-expiry={{ .Values.tls.certs.selfSigner.nodeCertExpiryWindow }}
env:
- name: STATEFULSET_NAME
value: {{ template "cockroachdb.fullname" . }}
- name: NAMESPACE
value: {{ .Release.Namespace | quote }}
- name: CLUSTER_DOMAIN
value: {{ .Values.clusterDomain}}
{{- if and .Values.tls.certs.selfSigner.securityContext.enabled }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
{{- end }}
serviceAccountName: {{ template "selfcerts.fullname" . }}
{{- end}}
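If you already have a CA, this generate Job can be pointed at it instead of creating one. A sketch of the relevant values, where `my-ca-secret` is a placeholder for a Secret that must already exist in the release namespace:

```yaml
tls:
  enabled: true
  certs:
    selfSigner:
      enabled: true
      caProvided: true
      caSecret: my-ca-secret   # placeholder; must already exist
```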


@ -0,0 +1,55 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "selfcerts.fullname" . }}-cleaner
namespace: {{ .Release.Namespace | quote }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-delete
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
spec:
backoffLimit: 1
template:
metadata:
name: {{ template "selfcerts.fullname" . }}-cleaner
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
spec:
{{- if and .Values.tls.certs.selfSigner.securityContext.enabled }}
securityContext:
seccompProfile:
type: "RuntimeDefault"
runAsGroup: 1000
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
{{- end }}
restartPolicy: Never
containers:
- name: cleaner
image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}"
imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}"
args:
- cleanup
- --namespace={{ .Release.Namespace }}
env:
- name: STATEFULSET_NAME
value: {{ template "cockroachdb.fullname" . }}
{{- if and .Values.tls.certs.selfSigner.securityContext.enabled }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
{{- end }}
serviceAccountName: {{ template "rotatecerts.fullname" . }}
{{- end}}


@ -0,0 +1,296 @@
{{ $isClusterInitEnabled := and (eq (len .Values.conf.join) 0) (not (index .Values.conf `single-node`)) }}
{{ $isDatabaseProvisioningEnabled := .Values.init.provisioning.enabled }}
{{- if or $isClusterInitEnabled $isDatabaseProvisioningEnabled }}
{{ template "cockroachdb.tlsValidation" . }}
kind: Job
apiVersion: batch/v1
metadata:
name: {{ template "cockroachdb.fullname" . }}-init
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.init.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation
{{- with .Values.init.jobAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.init.labels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.init.annotations }}
annotations: {{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- if eq (include "cockroachdb.securityContext.versionValidation" .) "true" }}
{{- if and .Values.init.securityContext.enabled }}
securityContext:
seccompProfile:
type: "RuntimeDefault"
runAsGroup: 1000
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
{{- end }}
{{- end }}
restartPolicy: OnFailure
terminationGracePeriodSeconds: 300
{{- if or .Values.image.credentials (and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager)) }}
imagePullSecrets:
{{- if .Values.image.credentials }}
- name: {{ template "cockroachdb.fullname" . }}.db.registry
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }}
- name: {{ template "cockroachdb.fullname" . }}.self-signed-certs.registry
{{- end }}
{{- end }}
serviceAccountName: {{ template "cockroachdb.serviceAccount.name" . }}
{{- if .Values.tls.enabled }}
initContainers:
- name: copy-certs
image: {{ .Values.tls.copyCerts.image | quote }}
imagePullPolicy: {{ .Values.tls.selfSigner.image.pullPolicy | quote }}
command:
- /bin/sh
- -c
- "cp -f /certs/* /cockroach-certs/; chmod 0400 /cockroach-certs/*.key"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if and .Values.init.securityContext.enabled }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
{{- end }}
volumeMounts:
- name: client-certs
mountPath: /cockroach-certs/
- name: certs-secret
mountPath: /certs/
{{- with .Values.tls.copyCerts.resources }}
resources: {{- toYaml . | nindent 12 }}
{{- end }}
{{- end }}
{{- with .Values.init.affinity }}
affinity: {{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.init.nodeSelector }}
nodeSelector: {{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.init.tolerations }}
tolerations: {{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: cluster-init
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
          # Run the command in a `while true` loop because this Job is bound
# to come up before the CockroachDB Pods (due to the time needed to
# get PersistentVolumes attached to Nodes), and sleeping 5 seconds
# between attempts is much better than letting the Pod fail when
# the init command does and waiting out Kubernetes' non-configurable
# exponential back-off for Pod restarts.
# Command completes either when cluster initialization succeeds,
# or when cluster has been initialized already.
command:
- /bin/bash
- -c
- >-
{{- if $isClusterInitEnabled }}
initCluster() {
while true; do
local output=$(
set -x;
/cockroach/cockroach init \
{{- if .Values.tls.enabled }}
--certs-dir=/cockroach-certs/ \
{{- else }}
--insecure \
{{- end }}
{{- with index .Values.conf "cluster-name" }}
--cluster-name={{.}} \
{{- end }}
--host={{ template "cockroachdb.fullname" . }}-0.{{ template "cockroachdb.fullname" . -}}
:{{ .Values.service.ports.grpc.internal.port | int64 }} \
2>&1);
local exitCode="$?";
echo $output;
if [[ "$output" =~ .*"Cluster successfully initialized".* || "$output" =~ .*"cluster has already been initialized".* ]]; then
break;
fi
echo "Cluster is not ready to be initialized, retrying in 5 seconds"
sleep 5;
done
}
initCluster;
{{- end }}
{{- if $isDatabaseProvisioningEnabled }}
provisionCluster() {
while true; do
/cockroach/cockroach sql \
{{- if .Values.tls.enabled }}
--certs-dir=/cockroach-certs/ \
{{- else }}
--insecure \
{{- end }}
--host={{ template "cockroachdb.fullname" . }}-0.{{ template "cockroachdb.fullname" . -}}
:{{ .Values.service.ports.grpc.internal.port | int64 }} \
--execute="
{{- range $clusterSetting, $clusterSettingValue := .Values.init.provisioning.clusterSettings }}
SET CLUSTER SETTING {{ $clusterSetting }} = '${{ $clusterSetting | replace "." "_" }}_CLUSTER_SETTING';
{{- end }}
{{- range $user := .Values.init.provisioning.users }}
CREATE USER IF NOT EXISTS {{ $user.name }} WITH
{{- if $user.password }}
PASSWORD '${{ $user.name }}_PASSWORD'
{{- else }}
PASSWORD null
{{- end }}
{{ join " " $user.options }}
;
{{- end }}
{{- range $database := .Values.init.provisioning.databases }}
CREATE DATABASE IF NOT EXISTS {{ $database.name }}
{{- if $database.options }}
{{ join " " $database.options }}
{{- end }}
;
{{- range $owner := $database.owners }}
GRANT ALL ON DATABASE {{ $database.name }} TO {{ $owner }};
{{- end }}
{{- range $owner := $database.owners_with_grant_option }}
GRANT ALL ON DATABASE {{ $database.name }} TO {{ $owner }} WITH GRANT OPTION;
{{- end }}
{{- if $database.backup }}
CREATE SCHEDULE IF NOT EXISTS {{ $database.name }}_scheduled_backup
FOR BACKUP DATABASE {{ $database.name }} INTO '{{ $database.backup.into }}'
{{- if $database.backup.options }}
WITH {{ join "," $database.backup.options }}
{{- end }}
RECURRING '{{ $database.backup.recurring }}'
{{- if $database.backup.fullBackup }}
FULL BACKUP '{{ $database.backup.fullBackup }}'
{{- else }}
FULL BACKUP ALWAYS
{{- end }}
{{- if and $database.backup.schedule $database.backup.schedule.options }}
WITH SCHEDULE OPTIONS {{ join "," $database.backup.schedule.options }}
{{- end }}
;
{{- end }}
{{- end }}
"
&>/dev/null;
local exitCode="$?";
if [[ "$exitCode" -eq "0" ]]
then break;
fi
sleep 5;
done
echo "Provisioning completed successfully";
}
provisionCluster;
{{- end }}
env:
{{- $secretName := printf "%s-init" (include "cockroachdb.fullname" .) }}
{{- range $user := .Values.init.provisioning.users }}
{{- if $user.password }}
- name: {{ $user.name }}_PASSWORD
valueFrom:
secretKeyRef:
name: {{ $secretName }}
key: {{ $user.name }}-password
{{- end }}
{{- end }}
{{- range $clusterSetting, $clusterSettingValue := .Values.init.provisioning.clusterSettings }}
{{- if $clusterSettingValue }}
- name: {{ $clusterSetting | replace "." "_" }}_CLUSTER_SETTING
valueFrom:
secretKeyRef:
name: {{ $secretName }}
key: {{ $clusterSetting | replace "." "-" }}-cluster-setting
{{- end }}
{{- end }}
{{- if .Values.tls.enabled }}
volumeMounts:
- name: client-certs
mountPath: /cockroach-certs/
{{- end }}
{{- with .Values.init.resources }}
resources: {{- toYaml . | nindent 12 }}
{{- end }}
{{- if and .Values.init.securityContext.enabled }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
{{- end }}
{{- if .Values.tls.enabled }}
volumes:
- name: client-certs
emptyDir: {}
{{- if or .Values.tls.certs.provided .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }}
- name: certs-secret
{{- if or .Values.tls.certs.tlsSecret .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }}
projected:
sources:
- secret:
{{- if .Values.tls.certs.selfSigner.enabled }}
name: {{ template "cockroachdb.fullname" . }}-client-secret
{{ else }}
name: {{ .Values.tls.certs.clientRootSecret }}
{{ end -}}
items:
- key: ca.crt
path: ca.crt
mode: 0400
- key: tls.crt
path: client.root.crt
mode: 0400
- key: tls.key
path: client.root.key
mode: 0400
{{- else }}
secret:
secretName: {{ .Values.tls.certs.clientRootSecret }}
defaultMode: 0400
{{- end }}
{{- end }}
{{- end }}
{{- end }}
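To see how the SQL above gets assembled, here is an illustrative provisioning block; the user, database, bucket, and organization names are placeholders, and the password lands in the `<fullname>-init` Secret rendered later in this chart:

```yaml
init:
  provisioning:
    enabled: true
    clusterSettings:
      cluster.organization: "example-org"      # placeholder
    users:
      - name: app_user                         # placeholder
        password: changeme                     # placeholder
        options: [CREATEDB]
    databases:
      - name: app_db                           # placeholder
        owners: [app_user]
        backup:
          into: "s3://example-bucket/backups?AUTH=implicit"   # placeholder
          recurring: "@daily"
          fullBackup: "@weekly"
```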


@ -0,0 +1,59 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: {{ template "cockroachdb.networkPolicy.apiVersion" . }}
metadata:
name: {{ template "cockroachdb.serviceAccount.name" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 6 }}
{{- end }}
ingress:
- ports:
- port: grpc
{{- with .Values.networkPolicy.ingress.grpc }}
from:
# Allow connections via custom rules.
{{- toYaml . | nindent 8 }}
      # Allow client connections from Pods carrying the designated client label.
- podSelector:
matchLabels:
{{ template "cockroachdb.fullname" . }}-client: "true"
# Allow other CockroachDBs to connect to form a cluster.
- podSelector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 14 }}
{{- end }}
{{- if gt (.Values.statefulset.replicas | int64) 1 }}
# Allow init Job to connect to bootstrap a cluster.
- podSelector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.init.labels }}
{{- toYaml . | nindent 14 }}
{{- end }}
{{- end }}
{{- end }}
# Allow connections to admin UI and for Prometheus.
- ports:
- port: http
{{- with .Values.networkPolicy.ingress.http }}
from: {{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
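When this NetworkPolicy is enabled, ad-hoc client Pods need the `<fullname>-client: "true"` label to reach the gRPC port. A hypothetical sketch, assuming the release is named `my-release` (so the fullname is typically `my-release-cockroachdb`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crdb-client                            # placeholder
  labels:
    my-release-cockroachdb-client: "true"      # grants access to the grpc port
spec:
  containers:
    - name: client
      image: cockroachdb/cockroach:v24.1.3
      command: ["sleep", "infinity"]
```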


@ -0,0 +1,26 @@
kind: PodDisruptionBudget
{{- if or (.Capabilities.APIVersions.Has "policy/v1") (semverCompare ">=1.21-0" .Capabilities.KubeVersion.Version) }}
apiVersion: policy/v1
{{- else }}
apiVersion: policy/v1beta1
{{- end }}
metadata:
name: {{ template "cockroachdb.fullname" . }}-budget
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 6 }}
{{- end }}
maxUnavailable: {{ .Values.statefulset.budget.maxUnavailable | int64 }}


@ -0,0 +1,27 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "rotatecerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "update", "delete"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get"]
resourceNames:
- {{ template "cockroachdb.fullname" . }}
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get"]
{{- end }}


@ -0,0 +1,33 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "selfcerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "2"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "update", "delete"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get"]
resourceNames:
- {{ template "cockroachdb.fullname" . }}
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get"]
{{- end }}


@ -0,0 +1,23 @@
{{- if .Values.tls.enabled }}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "cockroachdb.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
rules:
- apiGroups: [""]
resources: ["secrets"]
{{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }}
verbs: ["get"]
{{- else }}
verbs: ["create", "get"]
{{- end }}
{{- end }}


@ -0,0 +1,23 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "rotatecerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "rotatecerts.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "rotatecerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}


@ -0,0 +1,29 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "selfcerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "3"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "selfcerts.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "selfcerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}


@ -0,0 +1,23 @@
{{- if .Values.tls.enabled }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "cockroachdb.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "cockroachdb.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "cockroachdb.serviceAccount.name" . }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}


@ -0,0 +1,25 @@
{{- if .Values.iap.enabled }}
kind: Secret
apiVersion: v1
metadata:
name: {{ template "cockroachdb.fullname" . }}.iap
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: Opaque
data:
{{- if eq "" .Values.iap.clientId }}
{{ fail "iap.clientID can't be empty if iap.enabled is set to true" }}
{{- end }}
client_id: {{ .Values.iap.clientId | b64enc }}
{{- if eq "" .Values.iap.clientSecret }}
{{ fail "iap.clientSecret can't be empty if iap.enabled is set to true" }}
{{- end }}
client_secret: {{ .Values.iap.clientSecret | b64enc }}
{{- end }}
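The values backing this Secret look like the snippet below. Since the client ID and secret come from your OAuth configuration, you would normally pass them on the command line rather than committing them; both values here are placeholders:

```yaml
iap:
  enabled: true
  clientId: "1234567890-abc.apps.googleusercontent.com"   # placeholder
  clientSecret: "GOCSPX-placeholder"                       # placeholder
```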


@ -0,0 +1,19 @@
{{- if .Values.conf.log.enabled }}
kind: Secret
apiVersion: v1
metadata:
name: {{ template "cockroachdb.fullname" . }}-log-config
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: Opaque
stringData:
log-config.yaml: |
{{- toYaml .Values.conf.log.config | nindent 4 }}
{{- end }}


@ -0,0 +1,23 @@
{{- range $name, $cred := dict "db" (.Values.image.credentials) "init-certs" (.Values.tls.selfSigner.image.credentials) }}
{{- if not (empty $cred) }}
{{- if or (and (eq $name "init-certs") $.Values.tls.enabled) (ne $name "init-certs") }}
---
kind: Secret
apiVersion: v1
metadata:
name: {{ template "cockroachdb.fullname" $ }}.{{ $name }}.registry
namespace: {{ $.Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" $ }}
app.kubernetes.io/name: {{ template "cockroachdb.name" $ }}
app.kubernetes.io/instance: {{ $.Release.Name | quote }}
app.kubernetes.io/managed-by: {{ $.Release.Service | quote }}
{{- with $.Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ printf `{"auths":{%s:{"auth":"%s"}}}` ($cred.registry | quote) (printf "%s:%s" $cred.username $cred.password | b64enc) | b64enc | quote }}
{{- end }}
{{- end }}
{{- end }}
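The credentials consumed by this template come from `image.credentials` (and, for the self-signer image, `tls.selfSigner.image.credentials`). An illustrative snippet with placeholder account details, matching the commented example in the chart's values file:

```yaml
image:
  credentials:
    registry: docker.io      # placeholder registry
    username: john_doe       # placeholder
    password: changeme       # placeholder
```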


@ -0,0 +1,20 @@
{{- if .Values.init.provisioning.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "cockroachdb.fullname" . }}-init
namespace: {{ .Release.Namespace | quote }}
type: Opaque
stringData:
{{- range $user := .Values.init.provisioning.users }}
{{- if $user.password }}
{{ $user.name }}-password: {{ $user.password | quote }}
{{- end }}
{{- end }}
{{- range $clusterSetting, $clusterSettingValue := .Values.init.provisioning.clusterSettings }}
{{ $clusterSetting | replace "." "-" }}-cluster-setting: {{ $clusterSettingValue | quote }}
{{- end }}
{{- end }}


@ -0,0 +1,64 @@
# This service only exists to create DNS entries for each pod in
# the StatefulSet such that they can resolve each other's IP addresses.
# It does not create a load-balanced ClusterIP and should not be used directly
# by clients in most circumstances.
kind: Service
apiVersion: v1
metadata:
name: {{ template "cockroachdb.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.service.discovery.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
annotations:
# Use this annotation in addition to the actual field below because the
# annotation will stop being respected soon, but the field is broken in
# some versions of Kubernetes:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
# Enable automatic monitoring of all instances when Prometheus is running
# in the cluster.
{{- if .Values.prometheus.enabled }}
prometheus.io/scrape: "true"
prometheus.io/path: _status/vars
prometheus.io/port: {{ .Values.service.ports.http.port | quote }}
{{- end }}
{{- with .Values.service.discovery.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
clusterIP: None
# We want all Pods in the StatefulSet to have their addresses published for
# the sake of the other CockroachDB Pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
ports:
{{- $ports := .Values.service.ports }}
# The main port, served by gRPC, serves Postgres-flavor SQL, inter-node
# traffic and the CLI.
- name: {{ $ports.grpc.external.name | quote }}
port: {{ $ports.grpc.external.port | int64 }}
targetPort: grpc
{{- if ne ($ports.grpc.internal.port | int64) ($ports.grpc.external.port | int64) }}
- name: {{ $ports.grpc.internal.name | quote }}
port: {{ $ports.grpc.internal.port | int64 }}
targetPort: grpc
{{- end }}
# The secondary port serves the UI as well as health and debug endpoints.
- name: {{ $ports.http.name | quote }}
port: {{ $ports.http.port | int64 }}
targetPort: http
selector:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}


@ -0,0 +1,55 @@
# This Service is meant to be used by clients of the database.
# It exposes a ClusterIP that will automatically load balance connections
# to the different database Pods.
kind: Service
apiVersion: v1
metadata:
name: {{ template "cockroachdb.fullname" . }}-public
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.service.public.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if or .Values.service.public.annotations .Values.tls.enabled .Values.iap.enabled }}
annotations:
{{- with .Values.service.public.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.tls.enabled }}
service.alpha.kubernetes.io/app-protocols: '{"http":"HTTPS"}'
{{- end }}
{{- if .Values.iap.enabled }}
beta.cloud.google.com/backend-config: '{"default": "{{ template "cockroachdb.fullname" . }}"}'
{{- end }}
{{- end }}
spec:
type: {{ .Values.service.public.type | quote }}
ports:
{{- $ports := .Values.service.ports }}
# The main port, served by gRPC, serves Postgres-flavor SQL, inter-node
# traffic and the CLI.
- name: {{ $ports.grpc.external.name | quote }}
port: {{ $ports.grpc.external.port | int64 }}
targetPort: grpc
{{- if ne ($ports.grpc.internal.port | int64) ($ports.grpc.external.port | int64) }}
- name: {{ $ports.grpc.internal.name | quote }}
port: {{ $ports.grpc.internal.port | int64 }}
targetPort: grpc
{{- end }}
# The secondary port serves the UI as well as health and debug endpoints.
- name: {{ $ports.http.name | quote }}
port: {{ $ports.http.port | int64 }}
targetPort: http
selector:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}


@ -0,0 +1,54 @@
{{- $serviceMonitor := .Values.serviceMonitor -}}
{{- $ports := .Values.service.ports -}}
{{- if $serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "cockroachdb.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- if $serviceMonitor.labels }}
{{- toYaml $serviceMonitor.labels | nindent 4 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if $serviceMonitor.annotations }}
annotations:
{{- toYaml $serviceMonitor.annotations | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.service.discovery.labels }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 6 }}
{{- end }}
namespaceSelector:
{{- if $serviceMonitor.namespaced }}
matchNames:
- {{ .Release.Namespace }}
{{- else }}
any: true
{{- end }}
endpoints:
- port: {{ $ports.http.name | quote }}
path: /_status/vars
{{- if $serviceMonitor.interval }}
interval: {{ $serviceMonitor.interval }}
{{- end }}
{{- if $serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ $serviceMonitor.scrapeTimeout }}
{{- end }}
{{- if .Values.serviceMonitor.tlsConfig }}
tlsConfig: {{ toYaml .Values.serviceMonitor.tlsConfig | nindent 6 }}
{{- end }}
{{- end }}
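A minimal sketch for wiring this up to a Prometheus Operator installation; the `release: prometheus` label is an assumption about how your Prometheus instance selects ServiceMonitors, and the intervals are illustrative:

```yaml
serviceMonitor:
  enabled: true
  labels:
    release: prometheus   # assumption: matches your Prometheus serviceMonitorSelector
  annotations: {}
  interval: 30s
  scrapeTimeout: 10s
  namespaced: true
```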


@ -0,0 +1,22 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
{{ template "cockroachdb.tls.certs.selfSigner.validation" . }}
kind: ServiceAccount
apiVersion: v1
metadata:
name: {{ template "rotatecerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.tls.certs.selfSigner.svcAccountAnnotations }}
annotations:
{{- with .Values.tls.certs.selfSigner.svcAccountAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,25 @@
{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }}
{{ template "cockroachdb.tls.certs.selfSigner.validation" . }}
kind: ServiceAccount
apiVersion: v1
metadata:
name: {{ template "selfcerts.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed
{{- with .Values.tls.certs.selfSigner.svcAccountAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,21 @@
{{- if .Values.statefulset.serviceAccount.create }}
kind: ServiceAccount
apiVersion: v1
metadata:
name: {{ template "cockroachdb.serviceAccount.name" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.statefulset.serviceAccount.annotations }}
annotations:
{{- with .Values.statefulset.serviceAccount.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,402 @@
kind: StatefulSet
apiVersion: {{ template "cockroachdb.statefulset.apiVersion" . }}
metadata:
name: {{ template "cockroachdb.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
helm.sh/chart: {{ template "cockroachdb.chart" . }}
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
serviceName: {{ template "cockroachdb.fullname" . }}
replicas: {{ .Values.statefulset.replicas | int64 }}
updateStrategy: {{- toYaml .Values.statefulset.updateStrategy | nindent 4 }}
podManagementPolicy: {{ .Values.statefulset.podManagementPolicy | quote }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 6 }}
{{- end }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.statefulset.annotations }}
annotations: {{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- if or .Values.image.credentials (and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager)) }}
imagePullSecrets:
{{- if .Values.image.credentials }}
- name: {{ template "cockroachdb.fullname" . }}.db.registry
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }}
- name: {{ template "cockroachdb.fullname" . }}.self-signed-certs.registry
{{- end }}
{{- end }}
serviceAccountName: {{ template "cockroachdb.serviceAccount.name" . }}
{{- if .Values.tls.enabled }}
initContainers:
- name: copy-certs
image: {{ .Values.tls.copyCerts.image | quote }}
imagePullPolicy: {{ .Values.tls.selfSigner.image.pullPolicy | quote }}
command:
- /bin/sh
- -c
- "cp -f /certs/* /cockroach-certs/; chmod 0400 /cockroach-certs/*.key"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if .Values.statefulset.securityContext.enabled }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
{{- end }}
volumeMounts:
- name: certs
mountPath: /cockroach-certs/
- name: certs-secret
mountPath: /certs/
{{- with .Values.tls.copyCerts.resources }}
resources: {{- toYaml . | nindent 12 }}
{{- end }}
{{- end }}
{{- if or .Values.statefulset.nodeAffinity .Values.statefulset.podAffinity .Values.statefulset.podAntiAffinity }}
affinity:
{{- with .Values.statefulset.nodeAffinity }}
nodeAffinity: {{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.statefulset.podAffinity }}
podAffinity: {{- toYaml . | nindent 10 }}
{{- end }}
{{- if .Values.statefulset.podAntiAffinity }}
podAntiAffinity:
{{- if .Values.statefulset.podAntiAffinity.type }}
{{- if eq .Values.statefulset.podAntiAffinity.type "hard" }}
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: {{ .Values.statefulset.podAntiAffinity.topologyKey }}
labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 18 }}
{{- end }}
{{- else if eq .Values.statefulset.podAntiAffinity.type "soft" }}
preferredDuringSchedulingIgnoredDuringExecution:
- weight: {{ .Values.statefulset.podAntiAffinity.weight | int64 }}
podAffinityTerm:
topologyKey: {{ .Values.statefulset.podAntiAffinity.topologyKey }}
labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 20 }}
{{- end }}
{{- end }}
{{- else }}
{{- toYaml .Values.statefulset.podAntiAffinity | nindent 10 }}
{{- end }}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.16-0" .Capabilities.KubeVersion.Version }}
topologySpreadConstraints:
- labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.statefulset.topologySpreadConstraints }}
maxSkew: {{ .maxSkew }}
topologyKey: {{ .topologyKey }}
whenUnsatisfiable: {{ .whenUnsatisfiable }}
{{- end }}
{{- end }}
{{- with .Values.statefulset.nodeSelector }}
nodeSelector: {{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.statefulset.priorityClassName }}
priorityClassName: {{ .Values.statefulset.priorityClassName }}
{{- end }}
{{- with .Values.statefulset.tolerations }}
tolerations: {{- toYaml . | nindent 8 }}
{{- end }}
# No pre-stop hook is required, a SIGTERM plus some time is all that's
# needed for graceful shutdown of a node.
terminationGracePeriodSeconds: 300
containers:
- name: db
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
args:
- shell
- -ecx
# The use of qualified `hostname -f` is crucial:
# Other nodes aren't able to look up the unqualified hostname.
#
# `--join` CLI flag is hardcoded to exactly 3 Pods, because:
# 1. Having `--join` value depending on `statefulset.replicas`
# will trigger undesired restart of existing Pods when
# StatefulSet is scaled up/down. We want to scale without
# restarting existing Pods.
# 2. At least one Pod in `--join` is enough to successfully
          #    join the CockroachDB cluster and gossip with all other existing
# Pods, even if there are 3 or more Pods.
# 3. It's harmless for `--join` to have 3 Pods even for 1-Pod
# clusters, while it gives us opportunity to scale up even if
# some Pods of existing cluster are down (for whatever reason).
# See details explained here:
# https://github.com/helm/charts/pull/18993#issuecomment-558795102
- >-
exec /cockroach/cockroach
{{- if index .Values.conf `single-node` }}
start-single-node
{{- else }}
start --join=
{{- if .Values.conf.join }}
{{- join `,` .Values.conf.join -}}
{{- else }}
{{- range $i, $_ := until 3 -}}
{{- if gt $i 0 -}},{{- end -}}
${STATEFULSET_NAME}-{{ $i }}.${STATEFULSET_FQDN}:{{ $.Values.service.ports.grpc.internal.port | int64 -}}
{{- end -}}
{{- end }}
{{- with index .Values.conf `cluster-name` }}
--cluster-name={{ . }}
{{- if index $.Values.conf `disable-cluster-name-verification` }}
--disable-cluster-name-verification
{{- end }}
{{- end }}
{{- end }}
--advertise-host=$(hostname).${STATEFULSET_FQDN}
{{- if .Values.tls.enabled }}
--certs-dir=/cockroach/cockroach-certs/
{{- else }}
--insecure
{{- end }}
{{- with .Values.conf.attrs }}
--attrs={{ join `:` . }}
{{- end }}
--http-port={{ index .Values.conf `http-port` | int64 }}
--port={{ .Values.conf.port | int64 }}
--cache={{ .Values.conf.cache }}
{{- with index .Values.conf `max-disk-temp-storage` }}
--max-disk-temp-storage={{ . }}
{{- end }}
{{- with index .Values.conf `max-offset` }}
--max-offset={{ . }}
{{- end }}
--max-sql-memory={{ index .Values.conf `max-sql-memory` }}
{{- with .Values.conf.locality }}
--locality={{ . }}
{{- end }}
{{- with index .Values.conf `sql-audit-dir` }}
--sql-audit-dir={{ . }}
{{- end }}
{{- if .Values.conf.store.enabled }}
--store={{ template "cockroachdb.conf.store" . }}
{{- end }}
{{- if .Values.conf.log.enabled }}
--log-config-file=/cockroach/log-config/log-config.yaml
{{- else }}
--logtostderr={{ .Values.conf.logtostderr }}
{{- end }}
{{- range .Values.statefulset.args }}
{{ . }}
{{- end }}
env:
- name: STATEFULSET_NAME
value: {{ template "cockroachdb.fullname" . }}
- name: STATEFULSET_FQDN
value: {{ template "cockroachdb.fullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}
- name: COCKROACH_CHANNEL
value: kubernetes-helm
{{- with .Values.statefulset.env }}
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: grpc
containerPort: {{ .Values.conf.port | int64 }}
protocol: TCP
- name: http
containerPort: {{ index .Values.conf `http-port` | int64 }}
protocol: TCP
volumeMounts:
- name: datadir
mountPath: /cockroach/{{ .Values.conf.path }}/
{{- if .Values.tls.enabled }}
- name: certs
mountPath: /cockroach/cockroach-certs/
{{- if .Values.tls.certs.provided }}
- name: certs-secret
mountPath: /cockroach/certs/
{{- end }}
{{- end }}
{{- range .Values.statefulset.secretMounts }}
- name: {{ printf "secret-%s" . | quote }}
mountPath: {{ printf "/etc/cockroach/secrets/%s" . | quote }}
readOnly: true
{{- end }}
{{- if .Values.conf.log.enabled }}
- name: log-config
mountPath: /cockroach/log-config
readOnly: true
{{- end }}
livenessProbe:
{{- if .Values.statefulset.customLivenessProbe }}
{{ toYaml .Values.statefulset.customLivenessProbe | nindent 12 }}
{{- else }}
httpGet:
path: /health
port: http
{{- if .Values.tls.enabled }}
scheme: HTTPS
{{- end }}
initialDelaySeconds: 30
periodSeconds: 5
{{- end }}
readinessProbe:
{{- if .Values.statefulset.customReadinessProbe }}
{{ toYaml .Values.statefulset.customReadinessProbe | nindent 12 }}
{{- else }}
httpGet:
path: /health?ready=1
port: http
{{- if .Values.tls.enabled }}
scheme: HTTPS
{{- end }}
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 2
{{- end }}
{{- if eq (include "cockroachdb.securityContext.versionValidation" .) "true" }}
{{- if .Values.statefulset.securityContext.enabled }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
{{- end }}
{{- end }}
{{- with .Values.statefulset.resources }}
resources: {{- toYaml . | nindent 12 }}
{{- end }}
volumes:
- name: datadir
{{- if .Values.storage.persistentVolume.enabled }}
persistentVolumeClaim:
claimName: datadir
{{- else if .Values.storage.hostPath }}
hostPath:
path: {{ .Values.storage.hostPath | quote }}
{{- else }}
emptyDir: {}
{{- end }}
{{- if .Values.tls.enabled }}
- name: certs
emptyDir: {}
{{- if or .Values.tls.certs.provided .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }}
- name: certs-secret
{{- if or .Values.tls.certs.tlsSecret .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }}
projected:
sources:
- secret:
{{- if .Values.tls.certs.selfSigner.enabled }}
name: {{ template "cockroachdb.fullname" . }}-node-secret
{{ else }}
name: {{ .Values.tls.certs.nodeSecret }}
{{ end -}}
items:
- key: ca.crt
path: ca.crt
mode: 256
- key: tls.crt
path: node.crt
mode: 256
- key: tls.key
path: node.key
mode: 256
{{- else }}
secret:
secretName: {{ .Values.tls.certs.nodeSecret }}
defaultMode: 256
{{- end }}
{{- end }}
{{- end }}
{{- range .Values.statefulset.secretMounts }}
- name: {{ printf "secret-%s" . | quote }}
secret:
secretName: {{ . | quote }}
{{- end }}
{{- if .Values.conf.log.enabled }}
- name: log-config
secret:
secretName: {{ template "cockroachdb.fullname" . }}-log-config
{{- end }}
{{- if eq (include "cockroachdb.securityContext.versionValidation" .) "true" }}
{{- if and .Values.securityContext.enabled }}
securityContext:
seccompProfile:
type: "RuntimeDefault"
fsGroup: 1000
runAsGroup: 1000
runAsUser: 1000
runAsNonRoot: true
{{- end }}
{{- end }}
{{- if .Values.storage.persistentVolume.enabled }}
volumeClaimTemplates:
- metadata:
name: datadir
labels:
app.kubernetes.io/name: {{ template "cockroachdb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- with .Values.storage.persistentVolume.labels }}
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.labels }}
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.storage.persistentVolume.annotations }}
annotations: {{- toYaml . | nindent 10 }}
{{- end }}
spec:
accessModes: ["ReadWriteOnce"]
{{- if .Values.storage.persistentVolume.storageClass }}
{{- if (eq "-" .Values.storage.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.storage.persistentVolume.storageClass | quote}}
{{- end }}
{{- end }}
resources:
requests:
storage: {{ .Values.storage.persistentVolume.size | quote }}
{{- end }}
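For a default three-replica install named `my-release`, the `--join` portion of the start command above renders roughly as follows. This is a sketch only; it assumes the `default` namespace, the default `cluster.local` domain, and the default gRPC port of 26257:

```shell
exec /cockroach/cockroach start \
  --join=my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257,\
my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257,\
my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 \
  --advertise-host=$(hostname).my-release-cockroachdb.default.svc.cluster.local \
  ...
```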


@ -0,0 +1,65 @@
kind: Pod
apiVersion: v1
metadata:
name: {{ template "cockroachdb.fullname" . }}-test
namespace: {{ .Release.Namespace | quote }}
{{- if .Values.networkPolicy.enabled }}
labels:
{{ template "cockroachdb.fullname" . }}-client: "true"
{{- end }}
annotations:
helm.sh/hook: test-success
spec:
restartPolicy: Never
{{- if .Values.image.credentials }}
imagePullSecrets:
- name: {{ template "cockroachdb.fullname" . }}.db.registry
{{- end }}
{{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }}
volumes:
- name: client-certs
{{- if or .Values.tls.certs.tlsSecret .Values.tls.certs.certManager }}
projected:
sources:
- secret:
name: {{ .Values.tls.certs.clientRootSecret }}
items:
- key: ca.crt
path: ca.crt
mode: 0400
- key: tls.crt
path: client.root.crt
mode: 0400
- key: tls.key
path: client.root.key
mode: 0400
{{- else }}
secret:
secretName: {{ .Values.tls.certs.clientRootSecret }}
defaultMode: 0400
{{- end }}
{{- end }}
containers:
- name: client-test
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
{{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }}
volumeMounts:
- name: client-certs
mountPath: /cockroach-certs
{{- end }}
command:
- /cockroach/cockroach
- sql
{{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }}
- --certs-dir
- /cockroach-certs
{{- else }}
- --insecure
{{- end}}
- --host
- {{ template "cockroachdb.fullname" . }}-public.{{ .Release.Namespace }}
- --port
- {{ .Values.service.ports.grpc.external.port | quote }}
- -e
- SHOW DATABASES;
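Once the release is up, this Pod is exercised through Helm's test hook; for a release named `my-release` (placeholder) that is simply:

```shell
helm test my-release
```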


@ -0,0 +1,97 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"tls": {
"type": "object",
"properties": {
"certs": {
"type": "object",
"properties": {
"selfSigner": {
"type": "object",
"required": ["enabled", "caProvided"],
"properties": {
"enabled": {
"type": "boolean"
},
"caProvided": {
"type": "boolean"
}
},
"if": {
"properties": {
"enabled": {
"const": true
}
}
},
"then": {
"if": {
"properties": {
"caProvided": {
"const": false
}
}
},
"then": {
"properties": {
"caCertDuration" : {
"type": "string",
"pattern": "^[0-9]*h$"
},
"caCertExpiryWindow": {
"type": "string",
"pattern": "^[0-9]*h$"
}
}
},
"properties": {
"clientCertDuration": {
"type": "string",
"pattern": "^[0-9]*h$"
},
"clientCertExpiryWindow": {
"type": "string",
"pattern": "^[0-9]*h$"
},
"nodeCertDuration": {
"type": "string",
"pattern": "^[0-9]*h$"
},
"nodeCertExpiryWindow": {
"type": "string",
"pattern": "^[0-9]*h$"
},
"rotateCerts": {
"type": "boolean"
}
}
}
}
}
},
"selfSigner": {
"type": "object",
"properties": {
"image": {
"type": "object",
"required": ["repository", "tag", "pullPolicy"],
"properties": {
"repository": {
"type": "string"
},
"tag": {
"type": "string"
},
"pullPolicy": {
"type": "string",
"pattern": "^(Always|Never|IfNotPresent)$"
}
}
}
}
}
}
}
}
}
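In practice this schema means the self-signer cert durations must be whole hours when the self-signer is enabled; for example (illustrative values):

```yaml
tls:
  certs:
    selfSigner:
      clientCertDuration: 672h    # accepted: matches ^[0-9]*h$
      # clientCertDuration: 28d   # rejected by the schema pattern
```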


@ -0,0 +1,586 @@
# Generated file, DO NOT EDIT. Source: build/templates/values.yaml
# Overrides the chart name used in the "app.kubernetes.io/name" label placed on every resource this chart creates.
nameOverride: ""
# Overrides the resource names created by this chart, which are otherwise generated from the release and chart names.
fullnameOverride: ""
image:
repository: cockroachdb/cockroach
tag: v24.1.3
pullPolicy: IfNotPresent
credentials: {}
# registry: docker.io
# username: john_doe
# password: changeme
# Additional labels to apply to all Kubernetes resources created by this chart.
labels: {}
# app.kubernetes.io/part-of: my-app
# Cluster's default DNS domain.
# You should override it if you're using a different one,
# otherwise CockroachDB node discovery won't work.
clusterDomain: cluster.local
conf:
# An ordered list of CockroachDB node attributes.
# Attributes are arbitrary strings specifying machine capabilities.
# Machine capabilities might include specialized hardware or number of cores
# (e.g. "gpu", "x16c").
attrs: []
# - x16c
# - gpu
# Total size in bytes for caches, shared evenly if there are multiple
# storage devices. Size suffixes are supported (e.g. `1GB` and `1GiB`).
# A percentage of physical memory can also be specified (e.g. `.25`).
cache: 25%
# Sets a name to verify the identity of a cluster.
# The value must match between all nodes specified via `conf.join`.
# This can be used as an additional verification when either the node or
# cluster, or both, have not yet been initialized and do not yet know their
# cluster ID.
# To introduce a cluster name into an already-initialized cluster, pair this
# option with `conf.disable-cluster-name-verification: yes`.
cluster-name: ""
# Tell the server to ignore `conf.cluster-name` mismatches.
# This is meant for use when opting an existing cluster into starting to use
# cluster name verification, or when changing the cluster name.
# The cluster should be restarted once with `conf.cluster-name` and
# `conf.disable-cluster-name-verification: yes` combined, and once all nodes
# have been updated to know the new cluster name, the cluster can be restarted
# again with `conf.disable-cluster-name-verification: no`.
# This option has no effect if `conf.cluster-name` is not specified.
disable-cluster-name-verification: false
  # The addresses used to connect CockroachDB nodes to an existing cluster.
  # If you are deploying a second CockroachDB instance that should join a first
  # one, list the addresses of the existing instance below.
# Each item in the array should be a FQDN (and port if needed) resolvable by
# new Pods.
join: []
# New logging configuration.
log:
enabled: false
# https://www.cockroachlabs.com/docs/v21.1/configure-logs
config: {}
# file-defaults:
# dir: /custom/dir/path/
# fluent-defaults:
# format: json-fluent
# sinks:
# stderr:
# channels: [DEV]
  # Log messages at or above this threshold go to STDERR. Ignored when "log" is enabled.
logtostderr: INFO
# Maximum storage capacity available to store temporary disk-based data for
  # SQL queries that exceed the memory budget (e.g. joins and sorts are
  # sometimes able to spill intermediate results to disk).
# Accepts numbers interpreted as bytes, size suffixes (e.g. `32GB` and
# `32GiB`) or a percentage of disk size (e.g. `10%`).
# The location of the temporary files is within the first store dir.
# If expressed as a percentage, `max-disk-temp-storage` is interpreted
# relative to the size of the storage device on which the first store is
# placed. The temp space usage is never counted towards any store usage
# (although it does share the device with the first store) so, when
# configuring this, make sure that the size of this temp storage plus the size
# of the first store don't exceed the capacity of the storage device.
# If the first store is an in-memory one (i.e. `type=mem`), then this
# temporary "disk" data is also kept in-memory.
# A percentage value is interpreted as a percentage of the available internal
# memory.
# max-disk-temp-storage: 0GB
# Maximum allowed clock offset for the cluster. If observed clock offsets
# exceed this limit, servers will crash to minimize the likelihood of
# reading inconsistent data. Increasing this value will increase the time
# to recovery of failures as well as the frequency of uncertainty-based
# read restarts.
# Note, that this value must be the same on all nodes in the cluster.
# In order to change it, all nodes in the cluster must be stopped
# simultaneously and restarted with the new value.
# max-offset: 500ms
# Maximum memory capacity available to store temporary data for SQL clients,
# including prepared queries and intermediate data rows during query
# execution. Accepts numbers interpreted as bytes, size suffixes
# (e.g. `1GB` and `1GiB`) or a percentage of physical memory (e.g. `.25`).
max-sql-memory: 25%
# An ordered, comma-separated list of key-value pairs that describe the
# topography of the machine. Topography might include country, datacenter
# or rack designations. Data is automatically replicated to maximize
# diversities of each tier. The order of tiers is used to determine
# the priority of the diversity, so the more inclusive localities like
# country should come before less inclusive localities like datacenter.
# The tiers and order must be the same on all nodes. Including more tiers
# is better than including fewer. For example:
# locality: country=us,region=us-west,datacenter=us-west-1b,rack=12
# locality: country=ca,region=ca-east,datacenter=ca-east-2,rack=4
# locality: planet=earth,province=manitoba,colo=secondary,power=3
locality: ""
# Run CockroachDB instances in standalone mode with replication disabled
# (replication factor = 1).
  # Enabling this option causes the following values to be ignored:
# - `conf.cluster-name`
# - `conf.disable-cluster-name-verification`
# - `conf.join`
#
  # WARNING: Enabling this option turns each deployed Pod into a STANDALONE
  # CockroachDB instance, so the StatefulSet does NOT FORM A CLUSTER.
# Don't use this option for production deployments unless you clearly
# understand what you're doing.
# Usually, this option is intended to be used in conjunction with
# `statefulset.replicas: 1` for temporary one-time deployments (like
# running E2E tests, for example).
single-node: false
# If non-empty, create a SQL audit log in the specified directory.
sql-audit-dir: ""
# CockroachDB's port for inter-node communication and client connections.
port: 26257
# CockroachDB's port to listen to HTTP requests.
http-port: 8080
# CockroachDB's data mount path.
path: cockroach-data
# CockroachDB's storage configuration https://www.cockroachlabs.com/docs/v21.1/cockroach-start.html#storage
# Uses --store flag
store:
enabled: false
# Should be empty or 'mem'
type:
# Required for type=mem. If type and size are empty, storage.persistentVolume.size is used.
size:
# Arbitrary strings, separated by colons, specifying disk type or capability
attrs:
statefulset:
replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: Parallel
budget:
maxUnavailable: 1
# List of additional command-line arguments you want to pass to the
# `cockroach start` command.
args: []
# - --disable-cluster-name-verification
# List of extra environment variables to pass into container
env: []
# - name: COCKROACH_ENGINE_MAX_SYNC_DURATION
# value: "24h"
# List of Secret names in the same Namespace as the CockroachDB cluster,
# which shall be mounted into `/etc/cockroach/secrets/` for every cluster
# member.
secretMounts: []
# Additional labels to apply to this StatefulSet and all its Pods.
labels:
app.kubernetes.io/component: cockroachdb
# Additional annotations to apply to the Pods of this StatefulSet.
annotations: {}
# Affinity rules for scheduling Pods of this StatefulSet on Nodes.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
nodeAffinity: {}
# Inter-Pod Affinity rules for scheduling Pods of this StatefulSet.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity
podAffinity: {}
# Anti-affinity rules for scheduling Pods of this StatefulSet.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity
# You may either toggle the options below for the default anti-affinity rules,
# or specify the whole set of anti-affinity rules yourself instead.
podAntiAffinity:
# The topologyKey to be used.
# Can be used to spread across different nodes, AZs, regions etc.
topologyKey: kubernetes.io/hostname
# Type of anti-affinity rules: either `soft`, `hard` or empty value (which
# disables anti-affinity rules).
type: soft
# Weight for `soft` anti-affinity rules.
# Does not apply for other anti-affinity types.
weight: 100
# Node selection constraints for scheduling Pods of this StatefulSet.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
# PriorityClassName given to Pods of this StatefulSet
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# Taints to be tolerated by Pods of this StatefulSet.
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints:
maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
# Uncomment the following resource definitions or pass them from the
# command line to control the CPU and memory resources allocated
# by Pods of this StatefulSet.
resources: {}
# limits:
# cpu: 100m
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 512Mi
# Custom Liveness probe
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request
customLivenessProbe: {}
# httpGet:
# path: /health
# port: http
# scheme: HTTPS
# initialDelaySeconds: 30
# periodSeconds: 5
# Custom Readiness probe
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
customReadinessProbe: {}
# httpGet:
# path: /health
# port: http
# scheme: HTTPS
# initialDelaySeconds: 30
# periodSeconds: 5
securityContext:
enabled: true
serviceAccount:
# Specifies whether this ServiceAccount should be created.
create: true
# The name of this ServiceAccount to use.
# If not set and `create` is `true`, then a ServiceAccount name is auto-generated.
# If not set and `create` is `false`, then the default ServiceAccount is used.
name: ""
# Additional serviceAccount annotations (e.g. for attaching AWS IAM roles to pods)
annotations: {}
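# For example (illustrative only; the role ARN is a placeholder), attaching an
# AWS IAM role via IRSA might look like:
# annotations:
#   eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/cockroachdb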
service:
ports:
# You can set different external and internal gRPC ports and their names.
grpc:
external:
port: 26257
name: grpc
# If the port number is different from `external.port`, then it will be
# named `internal.name` in the Service.
internal:
port: 26257
# If using Istio set it to `cockroach`.
name: grpc-internal
http:
port: 8080
name: http
# This Service is meant to be used by clients of the database.
# It exposes a ClusterIP that will automatically load balance connections
# to the different database Pods.
public:
type: ClusterIP
# Additional labels to apply to this Service.
labels:
app.kubernetes.io/component: cockroachdb
# Additional annotations to apply to this Service.
annotations: {}
# This service only exists to create DNS entries for each pod in
# the StatefulSet such that they can resolve each other's IP addresses.
# It does not create a load-balanced ClusterIP and should not be used directly
# by clients in most circumstances.
discovery:
# Additional labels to apply to this Service.
labels:
app.kubernetes.io/component: cockroachdb
# Additional annotations to apply to this Service.
annotations: {}
# CockroachDB's Ingress for the web UI.
ingress:
enabled: false
labels: {}
annotations: {}
# kubernetes.io/ingress.class: nginx
# cert-manager.io/cluster-issuer: letsencrypt
paths: [/]
hosts: []
# - cockroachlabs.com
tls: []
# - hosts: [cockroachlabs.com]
# secretName: cockroachlabs-tls
prometheus:
enabled: true
securityContext:
enabled: true
# CockroachDB's Prometheus operator ServiceMonitor support
serviceMonitor:
enabled: false
labels: {}
annotations: {}
interval: 10s
# scrapeTimeout: 10s
# Limits the ServiceMonitor to the current namespace if set to `true`.
namespaced: false
# tlsConfig: TLS configuration to use when scraping the endpoint.
# Of type: https://github.com/coreos/prometheus-operator/blob/main/Documentation/api.md#tlsconfig
tlsConfig: {}
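# For example (illustrative only), to skip certificate verification when scraping:
# tlsConfig:
#   insecureSkipVerify: true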
# CockroachDB's data persistence.
# If neither `persistentVolume` nor `hostPath` is used, then data will be
# persisted in ad-hoc `emptyDir`.
storage:
# Absolute path on host to store CockroachDB's data.
# If not specified, then `emptyDir` will be used instead.
# If specified but `persistentVolume.enabled` is `true`, then it has no effect.
hostPath: ""
# If `enabled` is `true` then a PersistentVolumeClaim will be created and
# used to store CockroachDB's data, otherwise `hostPath` is used.
persistentVolume:
enabled: true
size: 100Gi
# If defined, then `storageClassName: <storageClass>`.
# If set to "-", then `storageClassName: ""`, which disables dynamic
# provisioning.
# If undefined or empty (default), then no `storageClassName` spec is set,
# so the default provisioner will be chosen (gp2 on AWS, standard on
# GKE, AWS & OpenStack).
storageClass: ""
# Additional labels to apply to the created PersistentVolumeClaims.
labels: {}
# Additional annotations to apply to the created PersistentVolumeClaims.
annotations: {}
# Kubernetes Job which initializes multi-node CockroachDB cluster.
# It's not created if `statefulset.replicas` is `1`.
init:
# Additional labels to apply to this Job and its Pod.
labels:
app.kubernetes.io/component: init
# Additional annotations to apply to this Job.
jobAnnotations: {}
# Additional annotations to apply to the Pod of this Job.
annotations: {}
# Affinity rules for scheduling the Pod of this Job.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
affinity: {}
# Node selection constraints for scheduling the Pod of this Job.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
# Taints to be tolerated by the Pod of this Job.
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# The init Pod runs at cluster creation to initialize CockroachDB. It finishes
# quickly and doesn't continue to consume resources in the Kubernetes
# cluster. Normally, you should leave this section commented out, but if your
# Kubernetes cluster uses Resource Quotas and requires all pods to specify
# resource requests or limits, you can set those here.
resources: {}
# requests:
# cpu: "10m"
# memory: "128Mi"
# limits:
# cpu: "10m"
# memory: "128Mi"
securityContext:
enabled: true
provisioning:
enabled: false
# https://www.cockroachlabs.com/docs/stable/cluster-settings.html
clusterSettings:
# cluster.organization: "'FooCorp - Local Testing'"
# enterprise.license: "'xxxxx'"
users: []
# - name:
# password:
# # https://www.cockroachlabs.com/docs/stable/create-user.html#parameters
# options: [LOGIN]
databases: []
# - name:
# # https://www.cockroachlabs.com/docs/stable/create-database.html#parameters
# options: [encoding='utf-8']
# owners: []
# # https://www.cockroachlabs.com/docs/stable/grant.html#parameters
# owners_with_grant_option: []
# # Backup schedules are not idempotent for now and will fail on the next run
# # https://github.com/cockroachdb/cockroach/issues/57892
# backup:
# into: s3://
# # Enterprise-only option (revision_history)
# # https://www.cockroachlabs.com/docs/stable/create-schedule-for-backup.html#backup-options
# options: [revision_history]
# recurring: '@always'
# # Enterprise-only feature. Remove this value to use `FULL BACKUP ALWAYS`
# fullBackup: '@daily'
# schedule:
# # https://www.cockroachlabs.com/docs/stable/create-schedule-for-backup.html#schedule-options
# options: [first_run = 'now']
# Whether to run securely using TLS certificates.
tls:
enabled: true
copyCerts:
image: busybox
certs:
# Bring your own certs scenario. If provided, tls.init section will be ignored.
provided: false
# Secret name for the client root cert.
clientRootSecret: cockroachdb-root
# Secret name for node cert.
nodeSecret: cockroachdb-node
# Secret name for CA cert
caSecret: cockroach-ca
# Enable if the secret is a dedicated TLS secret.
# TLS secrets are created by cert-manager, for example.
tlsSecret: false
# Enable if you want CockroachDB to create its own certificates
selfSigner:
# If set, CockroachDB will generate its own certificates
enabled: true
# Run selfSigner as non-root
securityContext:
enabled: true
# If set, the user should provide the CA certificate to sign other certificates.
caProvided: false
# Name of the secret holding the CA certs. If caProvided is set, this cannot be empty.
caSecret: ""
# Minimum certificate duration for all the certificates; every cert's duration is validated against this value.
minimumCertDuration: 624h
# Duration of CA certificates in hours
caCertDuration: 43800h
# Expiry window of CA certificates: a window before actual expiry in which CA certs should be rotated.
caCertExpiryWindow: 648h
# Duration of client certificates in hours
clientCertDuration: 672h
# Expiry window of client certificates: a window before actual expiry in which client certs should be rotated.
clientCertExpiryWindow: 48h
# Duration of node certificates in hours
nodeCertDuration: 8760h
# Expiry window of node certificates: a window before actual expiry in which node certs should be rotated.
nodeCertExpiryWindow: 168h
# If set, the cockroachdb cert selfSigner will rotate the certificates before expiry.
rotateCerts: true
# Wait time for each cockroachdb replica to become ready once it is in the running state. Only considered when rotateCerts is set to true.
readinessWait: 30s
# Wait time for each cockroachdb replica to reach the running state. Only considered when rotateCerts is set to true.
podUpdateTimeout: 2m
# ServiceAccount annotations for selfSigner jobs (e.g. for attaching AWS IAM roles to pods)
svcAccountAnnotations: {}
# Use cert-manager to issue certificates for mTLS.
certManager: false
# Specify an Issuer or a ClusterIssuer to use when issuing
# node and client certificates. The values correspond to the
# issuerRef specified in the certificate.
certManagerIssuer:
group: cert-manager.io
kind: Issuer
name: cockroachdb
# Set to false when you are providing your own CA issuer
isSelfSignedIssuer: true
# Duration of Client certificates in hours
clientCertDuration: 672h
# Expiry window of client certificates: a window before actual expiry in which client certs should be rotated.
clientCertExpiryWindow: 48h
# Duration of node certificates in hours
nodeCertDuration: 8760h
# Expiry window of node certificates: a window before actual expiry in which node certs should be rotated.
nodeCertExpiryWindow: 168h
selfSigner:
# Additional annotations to apply to the Pod of this Job.
annotations: {}
# Affinity rules for scheduling the Pod of this Job.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
affinity: {}
# Node selection constraints for scheduling the Pod of this Job.
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
# Taints to be tolerated by the Pod of this Job.
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Image placeholder for the selfSigner utility. This will be changed once the CI workflows for the image are in place.
image:
repository: cockroachlabs-helm-charts/cockroach-self-signer-cert
tag: "1.5"
pullPolicy: IfNotPresent
credentials: {}
registry: gcr.io
# username: john_doe
# password: changeme
networkPolicy:
enabled: false
ingress:
# List of sources which should be able to access the CockroachDB Pods via
# gRPC port. Items in this list are combined using a logical OR operation.
# Rules for allowing inter-communication are applied automatically.
# If empty, then connections from any Pod are allowed.
grpc: []
# - podSelector:
# matchLabels:
# app.kubernetes.io/name: my-app-django
# app.kubernetes.io/instance: my-app
# List of sources which should be able to access the CockroachDB Pods via
# HTTP port. Items in this list are combined using a logical OR operation.
# If empty, then connections from any Pod are allowed.
http: []
# - namespaceSelector:
# matchLabels:
# project: my-project
# To put the admin interface behind Identity-Aware Proxy (IAP) on Google Cloud Platform,
# make sure to set ingress.paths: ['/*']
iap:
enabled: false
# Create Google Cloud OAuth credentials and set client id and secret
# clientId:
# clientSecret:

View File

@ -0,0 +1,24 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
OWNERS
tests/

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,6 @@
dependencies:
- name: postgresql
repository: https://charts.jfrog.io/
version: 10.3.18
digest: sha256:404ce007353baaf92a6c5f24b249d5b336c232e5fd2c29f8a0e4d0095a09fd53
generated: "2022-03-08T08:54:51.805126+05:30"

View File

@ -0,0 +1,30 @@
annotations:
artifactoryServiceVersion: 7.90.7
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: JFrog Artifactory HA
catalog.cattle.io/kube-version: '>= 1.19.0-0'
catalog.cattle.io/release-name: artifactory-ha
apiVersion: v2
appVersion: 7.90.6
dependencies:
- condition: postgresql.enabled
name: postgresql
repository: file://./charts/postgresql
version: 10.3.18
description: Universal Repository Manager supporting all major packaging formats,
build tools and CI servers.
home: https://www.jfrog.com/artifactory/
icon: file://assets/icons/artifactory-ha.png
keywords:
- artifactory
- jfrog
- devops
kubeVersion: '>= 1.19.0-0'
maintainers:
- email: installers@jfrog.com
name: Chart Maintainers at JFrog
name: artifactory-ha
sources:
- https://github.com/jfrog/charts
type: application
version: 107.90.6

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,69 @@
# JFrog Artifactory High Availability Helm Chart
**IMPORTANT!** Our Helm Chart docs have moved to our main documentation site. Below you will find the basic instructions for installing, uninstalling, and deleting Artifactory. For all other information, refer to [Installing Artifactory - Helm HA Installation](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-HelmHAInstallation).
**Note:** From Artifactory 7.17.4 and above, the Helm HA chart can be installed so that each node you install can run all tasks in the cluster.
## Prerequisites Details
* Kubernetes 1.19+
* Artifactory HA license
## Chart Details
This chart will do the following:
* Deploy an Artifactory highly available cluster with 1 primary node and 2 member nodes.
* Deploy a PostgreSQL database. **NOTE:** For production-grade installations it is recommended to use an external PostgreSQL.
* Deploy an Nginx server
## Installing the Chart
### Add JFrog Helm repository
Before installing JFrog helm charts, you need to add the [JFrog helm repository](https://charts.jfrog.io) to your helm client
```bash
helm repo add jfrog https://charts.jfrog.io
```
Next, create a unique Master Key (Artifactory requires a unique master key) and pass it to the chart during installation; a hedged sketch is shown after the repository update below.
Then, update the repository:
```bash
helm repo update
```
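A minimal sketch of generating a master key and passing it at install time, assuming the chart's `artifactory.masterKey` value (verify the value name against your chart version):
```bash
# Generate a random 32-byte hex string to use as the Artifactory master key
export MASTER_KEY=$(openssl rand -hex 32)
echo ${MASTER_KEY}

# Pass the key to the chart at install time (assumes the artifactory.masterKey value)
helm upgrade --install artifactory-ha jfrog/artifactory-ha \
  --set artifactory.masterKey=${MASTER_KEY} \
  --namespace artifactory-ha --create-namespace
```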
### Install Chart
To install the chart with the release name `artifactory`:
```bash
helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha --create-namespace
```
### Apply Sizing configurations to the Chart
To apply the chart with the recommended sizing configurations:
For small configurations:
```bash
helm upgrade --install artifactory-ha jfrog/artifactory-ha -f sizing/artifactory-small-extra-config.yaml -f sizing/artifactory-small.yaml --namespace artifactory-ha --create-namespace
```
## Uninstalling Artifactory
Uninstall is supported only on Helm v3 and above.
Uninstall Artifactory using the following command.
```bash
helm uninstall artifactory-ha && sleep 90 && kubectl delete pvc -l app=artifactory-ha
```
## Deleting Artifactory
**IMPORTANT:** Deleting Artifactory will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion. You do not need to uninstall Artifactory before deleting it.
To delete Artifactory use the following command.
```bash
helm delete artifactory-ha --namespace artifactory-ha
```

View File

@ -0,0 +1,16 @@
# JFrog Artifactory High Availability Helm Chart
Universal Repository Manager supporting all major packaging formats, build tools and CI servers.
## Chart Details
This chart will do the following:
* Deploy an Artifactory highly available cluster with 1 primary node and 2 member nodes.
* Deploy a PostgreSQL database
* Deploy an Nginx server (optional)
## Useful links
Blog: [Herd Trust Into Your Rancher Labs Multi-Cloud Strategy with Artifactory](https://jfrog.com/blog/herd-trust-into-your-rancher-labs-multi-cloud-strategy-with-artifactory/)
## Activate Your Artifactory Instance
Don't have a license? Please send an email to [rancher-jfrog-licenses@jfrog.com](mailto:rancher-jfrog-licenses@jfrog.com) to get it.

View File

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,6 @@
dependencies:
- name: common
repository: https://charts.bitnami.com/bitnami
version: 1.4.2
digest: sha256:dce0349883107e3ff103f4f17d3af4ad1ea3c7993551b1c28865867d3e53d37c
generated: "2021-03-30T09:13:28.360322819Z"

View File

@ -0,0 +1,29 @@
annotations:
category: Database
apiVersion: v2
appVersion: 11.11.0
dependencies:
- name: common
repository: https://charts.bitnami.com/bitnami
version: 1.x.x
description: Chart for PostgreSQL, an object-relational database management system
(ORDBMS) with an emphasis on extensibility and on standards-compliance.
home: https://github.com/bitnami/charts/tree/master/bitnami/postgresql
icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-220x234.png
keywords:
- postgresql
- postgres
- database
- sql
- replication
- cluster
maintainers:
- email: containers@bitnami.com
name: Bitnami
- email: cedric@desaintmartin.fr
name: desaintmartin
name: postgresql
sources:
- https://github.com/bitnami/bitnami-docker-postgresql
- https://www.postgresql.org/
version: 10.3.18

View File

@ -0,0 +1,770 @@
# PostgreSQL
[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance.
For HA, please see [this repo](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha)
## TL;DR
```console
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/postgresql
```
## Introduction
This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
- Kubernetes 1.12+
- Helm 3.1.0
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install my-release bitnami/postgresql
```
The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart, except the PVCs, and deletes the release.
To delete the PVCs associated with `my-release`:
```console
$ kubectl delete pvc -l release=my-release
```
> **Note**: Deleting the PVCs will delete the PostgreSQL data as well. Please be cautious before doing it.
## Parameters
The following table lists the configurable parameters of the PostgreSQL chart and their default values.
| Parameter | Description | Default |
|-----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------|
| `global.imageRegistry` | Global Docker Image registry | `nil` |
| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` |
| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` |
| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` |
| `global.postgresql.postgresqlPassword` | PostgreSQL admin password (overrides `postgresqlPassword`) | `nil` |
| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` |
| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `global.storageClass` | Global storage class for dynamic provisioning | `nil` |
| `image.registry` | PostgreSQL Image registry | `docker.io` |
| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` |
| `image.tag` | PostgreSQL Image tag | `{TAG_NAME}` |
| `image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `image.debug` | Specify if debug values should be set | `false` |
| `nameOverride` | String to partially override common.names.fullname template with a string (will prepend the release name) | `nil` |
| `fullnameOverride` | String to fully override common.names.fullname template with a string | `nil` |
| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` |
| `volumePermissions.image.tag` | Init container volume-permissions image tag | `"10"` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` |
| `volumePermissions.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` |
| `volumePermissions.securityContext.runAsUser` | User ID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | `0` |
| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` |
| `ldap.enabled` | Enable LDAP support | `false` |
| `ldap.existingSecret` | Name of existing secret to use for LDAP passwords | `nil` |
| `ldap.url` | LDAP URL beginning in the form `ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]]` | `nil` |
| `ldap.server` | IP address or name of the LDAP server. | `nil` |
| `ldap.port` | Port number on the LDAP server to connect to | `nil` |
| `ldap.scheme` | Set to `ldaps` to use LDAPS. | `nil` |
| `ldap.tls` | Set to `1` to use TLS encryption | `nil` |
| `ldap.prefix` | String to prepend to the user name when forming the DN to bind | `nil` |
| `ldap.suffix` | String to append to the user name when forming the DN to bind | `nil` |
| `ldap.search_attr` | Attribute to match against the user name in the search | `nil` |
| `ldap.search_filter` | The search filter to use when doing search+bind authentication | `nil` |
| `ldap.baseDN` | Root DN to begin the search for the user in | `nil` |
| `ldap.bindDN` | DN of user to bind to LDAP | `nil` |
| `ldap.bind_password` | Password for the user to bind to LDAP | `nil` |
| `replication.enabled` | Enable replication | `false` |
| `replication.user` | Replication user | `repl_user` |
| `replication.password` | Replication user password | `repl_password` |
| `replication.readReplicas` | Number of read replicas | `1` |
| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` |
| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.readReplicas`. | `0` |
| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` |
| `existingSecret` | Name of existing secret to use for PostgreSQL passwords. The secret has to contain the keys `postgresql-password` which is the password for `postgresqlUsername` when it is different from `postgres`, `postgresql-postgres-password` which will override `postgresqlPassword`, `postgresql-replication-password` which will override `replication.password` and `postgresql-ldap-password` which will be used to authenticate on LDAP. The value is evaluated as a template. | `nil` |
| `postgresqlPostgresPassword` | PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`, in which case `postgres` is the admin username). | _random 10 character alphanumeric string_ |
| `postgresqlUsername` | PostgreSQL user (creates a non-admin user when `postgresqlUsername` is not `postgres`) | `postgres` |
| `postgresqlPassword` | PostgreSQL user password | _random 10 character alphanumeric string_ |
| `postgresqlDatabase` | PostgreSQL database | `nil` |
| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) |
| `extraEnv` | Any extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `[]` |
| `extraEnvVarsCM` | Name of a Config Map containing extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `nil` |
| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` |
| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` |
| `postgresqlConfiguration` | Runtime Config Parameters | `nil` |
| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` |
| `pgHbaConfiguration` | Content of pg_hba.conf | `nil (do not create pg_hba.conf)` |
| `postgresqlSharedPreloadLibraries` | Shared preload libraries (comma-separated list) | `pgaudit` |
| `postgresqlMaxConnections` | Maximum total connections | `nil` |
| `postgresqlPostgresConnectionLimit` | Maximum total connections for the postgres user | `nil` |
| `postgresqlDbUserConnectionLimit` | Maximum total connections for the non-admin user | `nil` |
| `postgresqlTcpKeepalivesInterval` | TCP keepalives interval | `nil` |
| `postgresqlTcpKeepalivesIdle` | TCP keepalives idle | `nil` |
| `postgresqlTcpKeepalivesCount` | TCP keepalives count | `nil` |
| `postgresqlStatementTimeout` | Statement timeout | `nil` |
| `postgresqlPghbaRemoveFilters` | Comma-separated list of patterns to remove from the pg_hba.conf file | `nil` |
| `customStartupProbe` | Override default startup probe | `nil` |
| `customLivenessProbe` | Override default liveness probe | `nil` |
| `customReadinessProbe` | Override default readiness probe | `nil` |
| `audit.logHostname` | Add client hostnames to the log file | `false` |
| `audit.logConnections` | Add client log-in operations to the log file | `false` |
| `audit.logDisconnections` | Add client log-outs operations to the log file | `false` |
| `audit.pgAuditLog` | Add operations to log using the pgAudit extension | `nil` |
| `audit.clientMinMessages` | Message log level to share with the user | `nil` |
| `audit.logLinePrefix` | Template string for the log line prefix | `nil` |
| `audit.logTimezone` | Timezone for the log timestamps | `nil` |
| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` |
| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` |
| `initdbScripts` | Dictionary of initdb scripts | `nil` |
| `initdbUser` | PostgreSQL user to execute the .sql and sql.gz scripts | `nil` |
| `initdbPassword` | Password for the user specified in `initdbUser` | `nil` |
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` |
| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). The value is evaluated as a template. | `nil` |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.port` | PostgreSQL port | `5432` |
| `service.nodePort` | Kubernetes Service nodePort | `nil` |
| `service.annotations` | Annotations for PostgreSQL service | `{}` (evaluated as a template) |
| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` |
| `service.loadBalancerSourceRanges` | Address that are allowed when svc is LoadBalancer | `[]` (evaluated as a template) |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `shmVolume.enabled` | Enable emptyDir volume for /dev/shm for primary and read replica(s) Pod(s) | `true` |
| `shmVolume.chmod.enabled` | Run `chmod 777` on /dev/shm at init (ignored if `volumePermissions.enabled` is `false`) | `true` |
| `persistence.enabled` | Enable persistence using PVC | `true` |
| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` |
| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` |
| `persistence.subPath` | Subdirectory of the volume to mount at | `""` |
| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` |
| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` |
| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` |
| `persistence.annotations` | Annotations for the PVC | `{}` |
| `persistence.selector` | Selector to match an existing Persistent Volume (this value is evaluated as a template) | `{}` |
| `commonAnnotations` | Annotations to be added to all deployed resources (rendered as a template) | `{}` |
| `primary.podAffinityPreset` | PostgreSQL primary pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `primary.podAntiAffinityPreset` | PostgreSQL primary pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `primary.nodeAffinityPreset.type` | PostgreSQL primary node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `primary.nodeAffinityPreset.key` | PostgreSQL primary node label key to match. Ignored if `primary.affinity` is set. | `""` |
| `primary.nodeAffinityPreset.values` | PostgreSQL primary node label values to match. Ignored if `primary.affinity` is set. | `[]` |
| `primary.affinity` | Affinity for PostgreSQL primary pods assignment | `{}` (evaluated as a template) |
| `primary.nodeSelector` | Node labels for PostgreSQL primary pods assignment | `{}` (evaluated as a template) |
| `primary.tolerations` | Tolerations for PostgreSQL primary pods assignment | `[]` (evaluated as a template) |
| `primary.anotations` | Map of annotations to add to the statefulset (postgresql primary) | `{}` |
| `primary.labels` | Map of labels to add to the statefulset (postgresql primary) | `{}` |
| `primary.podAnnotations` | Map of annotations to add to the pods (postgresql primary) | `{}` |
| `primary.podLabels` | Map of labels to add to the pods (postgresql primary) | `{}` |
| `primary.priorityClassName` | Priority Class to use for each pod (postgresql primary) | `nil` |
| `primary.extraInitContainers` | Additional init containers to add to the pods (postgresql primary) | `[]` |
| `primary.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql primary) | `[]` |
| `primary.extraVolumes` | Additional volumes to add to the pods (postgresql primary) | `[]` |
| `primary.sidecars` | Add additional containers to the pod | `[]` |
| `primary.service.type` | Allows using a different service type for primary | `nil` |
| `primary.service.nodePort` | Allows using a different nodePort for primary | `nil` |
| `primary.service.clusterIP` | Allows using a different clusterIP for primary | `nil` |
| `primaryAsStandBy.enabled` | Whether to enable current cluster's primary as standby server of another cluster or not. | `false` |
| `primaryAsStandBy.primaryHost` | The Host of replication primary in the other cluster. | `nil` |
| `primaryAsStandBy.primaryPort ` | The Port of replication primary in the other cluster. | `nil` |
| `readReplicas.podAffinityPreset` | PostgreSQL read only pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `readReplicas.podAntiAffinityPreset` | PostgreSQL read only pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `readReplicas.nodeAffinityPreset.type` | PostgreSQL read only node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `readReplicas.nodeAffinityPreset.key` | PostgreSQL read only node label key to match. Ignored if `primary.affinity` is set. | `""` |
| `readReplicas.nodeAffinityPreset.values` | PostgreSQL read only node label values to match. Ignored if `primary.affinity` is set. | `[]` |
| `readReplicas.affinity` | Affinity for PostgreSQL read only pods assignment | `{}` (evaluated as a template) |
| `readReplicas.nodeSelector` | Node labels for PostgreSQL read only pods assignment | `{}` (evaluated as a template) |
| `readReplicas.anotations` | Map of annotations to add to the statefulsets (postgresql readReplicas) | `{}` |
| `readReplicas.resources` | CPU/Memory resource requests/limits override for readReplicas. Will fall back to `values.resources` if not defined. | `{}` |
| `readReplicas.labels` | Map of labels to add to the statefulsets (postgresql readReplicas) | `{}` |
| `readReplicas.podAnnotations` | Map of annotations to add to the pods (postgresql readReplicas) | `{}` |
| `readReplicas.podLabels` | Map of labels to add to the pods (postgresql readReplicas) | `{}` |
| `readReplicas.priorityClassName` | Priority Class to use for each pod (postgresql readReplicas) | `nil` |
| `readReplicas.extraInitContainers` | Additional init containers to add to the pods (postgresql readReplicas) | `[]` |
| `readReplicas.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql readReplicas) | `[]` |
| `readReplicas.extraVolumes` | Additional volumes to add to the pods (postgresql readReplicas) | `[]` |
| `readReplicas.sidecars` | Add additional containers to the pod | `[]` |
| `readReplicas.service.type` | Allows using a different service type for readReplicas | `nil` |
| `readReplicas.service.nodePort` | Allows using a different nodePort for readReplicas | `nil` |
| `readReplicas.service.clusterIP` | Allows using a different clusterIP for readReplicas | `nil` |
| `readReplicas.persistence.enabled` | Whether to enable persistence for the read replicas | `true` |
| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` |
| `securityContext.*` | Other pod security context to be included as-is in the pod spec | `{}` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the pod | `1001` |
| `containerSecurityContext.*` | Other container security context to be included as-is in the container spec | `{}` |
| `containerSecurityContext.enabled` | Enable container security context | `true` |
| `containerSecurityContext.runAsUser` | User ID for the container | `1001` |
| `serviceAccount.enabled` | Enable service account (Note: Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` |
| `serviceAccount.name` | Name of existing service account | `nil` |
| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which ingress traffic could be allowed | `{}` |
| `startupProbe.enabled` | Enable startupProbe | `false` |
| `startupProbe.initialDelaySeconds` | Delay before startup probe is initiated | 30 |
| `startupProbe.periodSeconds` | How often to perform the probe | 15 |
| `startupProbe.timeoutSeconds` | When the probe times out | 5 |
| `startupProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 10 |
| `startupProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1 |
| `livenessProbe.enabled` | Enable livenessProbe | `true` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
| `livenessProbe.periodSeconds` | How often to perform the probe | 10 |
| `livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `readinessProbe.enabled` | Enable readinessProbe | `true` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 |
| `readinessProbe.periodSeconds` | How often to perform the probe | 10 |
| `readinessProbe.timeoutSeconds` | When the probe times out | 5 |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `tls.enabled` | Enable TLS traffic support | `false` |
| `tls.preferServerCiphers` | Whether to use the server's TLS cipher preferences rather than the client's | `true` |
| `tls.certificatesSecret` | Name of an existing secret that contains the certificates | `nil` |
| `tls.certFilename` | Certificate filename | `""` |
| `tls.certKeyFilename` | Certificate key filename | `""` |
| `tls.certCAFilename` | CA Certificate filename. If provided, PostgreSQL will authenticate TLS/SSL clients by requesting them a certificate. | `nil` |
| `tls.crlFilename` | File containing a Certificate Revocation List | `nil` |
| `metrics.enabled` | Start a prometheus exporter | `false` |
| `metrics.service.type` | Kubernetes Service type | `ClusterIP` |
| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` |
| `metrics.service.loadBalancerIP` | loadBalancerIP if redis metrics service type is `LoadBalancer` | `nil` |
| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.namespace` | Optional namespace in which to create ServiceMonitor | `nil` |
| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` |
| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as postgresql |
| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. | `[]` |
| `metrics.image.registry` | PostgreSQL Exporter Image registry | `docker.io` |
| `metrics.image.repository` | PostgreSQL Exporter Image name | `bitnami/postgres-exporter` |
| `metrics.image.tag` | PostgreSQL Exporter Image tag | `{TAG_NAME}` |
| `metrics.image.pullPolicy` | PostgreSQL Exporter Image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `metrics.customMetrics` | Additional custom metrics | `nil` |
| `metrics.extraEnvVars` | Extra environment variables to add to exporter | `{}` (evaluated as a template) |
| `metrics.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` |
| `metrics.securityContext.enabled` | Enable security context for metrics | `false` |
| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` |
| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 |
| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `metrics.readinessProbe.enabled` | Enable readinessProbe for the metrics exporter | `true` |
| `metrics.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 |
| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 |
| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 |
| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` |
| `psp.create` | Create Pod Security Policy | `false` |
| `rbac.create` | Create Role and RoleBinding (required for PSP to work) | `false` |
| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template). | `nil` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install my-release \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
```
The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`.
> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install my-release -f values.yaml bitnami/postgresql
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Configuration and installation details
### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart that updates its containers whenever a new version of the main container, a significant change, or a critical vulnerability is released.
### Customizing primary and read replica services in a replicated configuration
At the top level, there is a service object which defines the services for both the primary and read replicas. For deeper customization, there are separate service objects for the primary and read types that let you override the values in the top-level service object, so the primary and read services can use different service types, clusterIPs and nodePorts. If you want both the primary and read services to be of type NodePort, you will need to set their nodePorts to different values to prevent a collision. Values set deeper in the `primary.service` or `readReplicas.service` objects take precedence over the top-level service object.
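For example, a values snippet along these lines (a minimal sketch; the NodePort numbers are illustrative) exposes the primary and read services on different ports:
```yaml
service:
  type: ClusterIP
primary:
  service:
    type: NodePort
    nodePort: 30432
readReplicas:
  service:
    type: NodePort
    nodePort: 30433
```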
### Change PostgreSQL version
To modify the PostgreSQL version used in this chart you can specify a [valid image tag](https://hub.docker.com/r/bitnami/postgresql/tags/) using the `image.tag` parameter. For example, `image.tag=X.Y.Z`. This approach is also applicable to other images like exporters.
### postgresql.conf / pg_hba.conf files as configMap
This helm chart also supports customizing the whole configuration file.
Add your custom file to "files/postgresql.conf" in your working directory. This file will be mounted as a configMap to the containers and it will be used for configuring the PostgreSQL server.
Alternatively, you can add additional PostgreSQL configuration parameters using the `postgresqlExtendedConf` parameter as a dict, using camelCase, e.g. {"sharedBuffers": "500MB"}. Alternatively, to replace the entire default configuration use `postgresqlConfiguration`.
In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the `configurationConfigMap` parameter. Note that this will override the two previous options.
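For instance, a minimal sketch using `postgresqlExtendedConf` (the settings shown are illustrative):
```yaml
postgresqlExtendedConf:
  sharedBuffers: "500MB"
  maxConnections: "200"
```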
### Allow settings to be loaded from files other than the default `postgresql.conf`
If you don't want to provide the whole PostgreSQL configuration file and only specify certain parameters, you can add your extended `.conf` files to "files/conf.d/" in your working directory.
Those files will be mounted as a configMap to the containers, adding to or overriding the default configuration via the `include_dir` directive, which allows settings to be loaded from files other than the default `postgresql.conf`.
Alternatively, you can also set an external ConfigMap with all the extra configuration files. This is done by setting the `extendedConfConfigMap` parameter. Note that this will override the previous option.
### Initialize a fresh instance
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
Alternatively, you can specify custom scripts using the `initdbScripts` parameter as dict.
In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter.
The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
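As a sketch (script names and contents are illustrative), scripts can be provided inline via `initdbScripts`:
```yaml
initdbScripts:
  01_create_extension.sql: |
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
  02_init.sh: |
    #!/bin/bash
    echo "custom initialization finished"
```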
### Securing traffic using TLS
TLS support can be enabled in the chart by specifying the `tls.` parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the chart:
- `tls.enabled`: Enable TLS support. Defaults to `false`
- `tls.certificatesSecret`: Name of an existing secret that contains the certificates. No defaults.
- `tls.certFilename`: Certificate filename. No defaults.
- `tls.certKeyFilename`: Certificate key filename. No defaults.
For example:
* First, create the secret with the certificate files:
```console
kubectl create secret generic certificates-tls-secret --from-file=./cert.crt --from-file=./cert.key --from-file=./ca.crt
```
* Then, use the following parameters:
```console
volumePermissions.enabled=true
tls.enabled=true
tls.certificatesSecret="certificates-tls-secret"
tls.certFilename="cert.crt"
tls.certKeyFilename="cert.key"
```
> Note TLS and VolumePermissions: PostgreSQL requires certain permissions on sensitive files (such as certificate keys) to start up. Due to an on-going [issue](https://github.com/kubernetes/kubernetes/issues/57923) regarding kubernetes permissions and the use of `containerSecurityContext.runAsUser`, you must enable `volumePermissions` to ensure everything works as expected.
### Sidecars
If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
```yaml
# For the PostgreSQL primary
primary:
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
# For the PostgreSQL replicas
readReplicas:
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
### Metrics
The chart can optionally start a metrics exporter for [prometheus](https://prometheus.io). The metrics endpoint (port 9187) is not exposed and it is expected that the metrics are collected from inside the k8s cluster using something similar to what is described in the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml).
The exporter allows you to create custom metrics from additional SQL queries. See the Chart's `values.yaml` for an example and consult the [exporter's documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details.
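For reference, a hedged sketch of such a custom metric (the query and metric names are placeholders following the exporter's custom-queries format):
```yaml
metrics:
  customMetrics:
    pg_database:
      query: "SELECT datname AS name, pg_database_size(datname) AS size_bytes FROM pg_database"
      metrics:
        - name:
            usage: "LABEL"
            description: "Name of the database"
        - size_bytes:
            usage: "GAUGE"
            description: "Size of the database in bytes"
```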
### Use of global variables
In more complex scenarios, we may have the following tree of dependencies
```
+--------------+
| |
+------------+ Chart 1 +-----------+
| | | |
| --------+------+ |
| | |
| | |
| | |
| | |
v v v
+-------+------+ +--------+------+ +--------+------+
| | | | | |
| PostgreSQL | | Sub-chart 1 | | Sub-chart 2 |
| | | | | |
+--------------+ +---------------+ +---------------+
```
In this example, Chart 1 is the parent chart and PostgreSQL, sub-chart 1 and sub-chart 2 are its dependencies. However, sub-charts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, they need to know the PostgreSQL credentials, so one deployment option is to install Chart 1 with the following parameters:
```
postgresql.postgresqlPassword=testtest
subchart1.postgresql.postgresqlPassword=testtest
subchart2.postgresql.postgresqlPassword=testtest
postgresql.postgresqlDatabase=db1
subchart1.postgresql.postgresqlDatabase=db1
subchart2.postgresql.postgresqlDatabase=db1
```
If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows:
```
global.postgresql.postgresqlPassword=testtest
global.postgresql.postgresqlDatabase=db1
```
This way, the credentials will be available in all of the subcharts.
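Expressed as a values file, the same global settings would look like this (a minimal sketch):
```yaml
global:
  postgresql:
    postgresqlPassword: testtest
    postgresqlDatabase: db1
```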
## Persistence
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container.
Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
See the [Parameters](#parameters) section to configure the PVC or to disable persistence.
If the volume already contains data, replication to the standby nodes will fail for all commits; see the [code](https://github.com/bitnami/bitnami-docker-postgresql/blob/8725fe1d7d30ebe8d9a16e9175d05f7ad9260c93/9.6/debian-9/rootfs/libpostgresql.sh#L518-L556) for details. If you need to keep that data, convert it to SQL and import it after `helm install` has finished.
## NetworkPolicy
To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`.
For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace:
```console
$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
```
With NetworkPolicy enabled, traffic will be limited to just port 5432.
For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL.
This label will be displayed in the output of a successful install.
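As a minimal sketch, the corresponding values are:
```yaml
networkPolicy:
  enabled: true
  allowExternal: false
```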
## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image
- The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, it will be ignored. The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA. All the remaining environment variables are specific to the Bitnami PostgreSQL image.
- The Bitnami PostgreSQL image is non-root by default. This requires that you run the pod with `securityContext` and updates the permissions of the volume with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift.
- For OpenShift, one may either define the `runAsUser` and `fsGroup` accordingly, or try this more dynamic option: `volumePermissions.securityContext.runAsUser="auto",securityContext.enabled=false,containerSecurityContext.enabled=false,shmVolume.chmod.enabled=false`
### Deploy chart using Docker Official PostgreSQL Image
From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image.
Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. Basically, the PostgreSQL data dir cannot be the mount point directly; it has to be a subdirectory.
```
image.repository=postgres
image.tag=10.6
postgresqlDataDir=/data/pgdata
persistence.mountPath=/data/
```
### Setting Pod's affinity
This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
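For example (a minimal sketch; the components and preset values chosen are illustrative):
```yaml
primary:
  podAntiAffinityPreset: hard
readReplicas:
  podAntiAffinityPreset: soft
```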
## Troubleshooting
Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
## Upgrading
It's necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid randomly generated passwords. Remember to specify the existing values of the `postgresqlPassword` and `replication.password` parameters when upgrading the chart:
```bash
$ helm upgrade my-release bitnami/postgresql \
--set postgresqlPassword=[POSTGRESQL_PASSWORD] \
--set replication.password=[REPLICATION_PASSWORD]
```
> Note: you need to substitute the placeholders _[POSTGRESQL_PASSWORD]_, and _[REPLICATION_PASSWORD]_ with the values obtained from instructions in the installation notes.
### To 10.0.0
[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
**What changes were introduced in this major version?**
- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
- Move dependency information from the *requirements.yaml* to the *Chart.yaml*
- After running `helm dependency update`, a *Chart.lock* file is generated containing the same structure used in the previous *requirements.lock*
- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts.
**Considerations when upgrading to this version**
- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
**Useful links**
- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
- https://helm.sh/docs/topics/v2_v3_migration/
- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
#### Breaking changes
- The term `master` has been replaced with `primary` and `slave` with `readReplicas` throughout the chart. Role names have changed from `master` and `slave` to `primary` and `read`.
To upgrade to `10.0.0`, reuse the PVCs that hold the PostgreSQL data from your previous release. To do so, follow the instructions below (the following example assumes that the release name is `postgresql`):
> NOTE: Please create a backup of your database before running any of those actions.
Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release:
```console
$ export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
$ export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=postgresql,role=master -o jsonpath="{.items[0].metadata.name}")
```
Delete the PostgreSQL statefulset. Notice the option `--cascade=false`:
```console
$ kubectl delete statefulsets.apps postgresql-postgresql --cascade=false
```
Now the upgrade works:
```console
$ helm upgrade postgresql bitnami/postgresql --set postgresqlPassword=$POSTGRESQL_PASSWORD --set persistence.existingClaim=$POSTGRESQL_PVC
```
You will have to delete the existing PostgreSQL pod; the new statefulset will create a new one:
```console
$ kubectl delete pod postgresql-postgresql-0
```
Finally, you should see the lines below in PostgreSQL container logs:
```console
$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
...
postgresql 08:05:12.59 INFO ==> Deploying PostgreSQL with persisted data...
...
```
### To 9.0.0
In this version the chart was adapted to follow the Helm label best practices; see [PR 3021](https://github.com/bitnami/charts/pull/3021). This means backward compatibility is not guaranteed when upgrading the chart to this major version.
As a workaround, you can delete the existing statefulset (when using the `--cascade=false` flag, pods are not deleted) before upgrading the chart. For example, this can be a valid workflow:
- Deploy an old version (8.X.X)
```console
$ helm install postgresql bitnami/postgresql --version 8.10.14
```
- Old version is up and running
```console
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
postgresql default 1 2020-08-04 13:39:54.783480286 +0000 UTC deployed postgresql-8.10.14 11.8.0
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgresql-postgresql-0 1/1 Running 0 76s
```
- The upgrade to the latest one (9.X.X) is going to fail
```console
$ helm upgrade postgresql bitnami/postgresql
Error: UPGRADE FAILED: cannot patch "postgresql-postgresql" with kind StatefulSet: StatefulSet.apps "postgresql-postgresql" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
```
- Delete the statefulset
```console
$ kubectl delete statefulsets.apps --cascade=false postgresql-postgresql
statefulset.apps "postgresql-postgresql" deleted
```
- Now the upgrade works
```console
$ helm upgrade postgresql bitnami/postgresql
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
postgresql default 3 2020-08-04 13:42:08.020385884 +0000 UTC deployed postgresql-9.1.2 11.8.0
```
- We can kill the existing pod and the new statefulset is going to create a new one:
```console
$ kubectl delete pod postgresql-postgresql-0
pod "postgresql-postgresql-0" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgresql-postgresql-0 1/1 Running 0 19s
```
Please note that without the `--cascade=false` flag, both objects (statefulset and pod) are removed, and both are deployed again by the `helm upgrade` command.
### To 8.0.0
Prefixes the port names with their protocols to comply with Istio conventions.
If you depend on the port names in your setup, make sure to update them to reflect this change.
### To 7.1.0
Adds support for LDAP configuration.
### To 7.0.0
Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment). Also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly which has since been fixed to match the spec.
In https://github.com/helm/charts/pull/17281 the `apiVersion` of the statefulset resources was updated to `apps/v1` in line with the API deprecations, resulting in compatibility breakage.
This major version bump signifies this change.
### To 6.5.7
In this version, the chart will use PostgreSQL with the PostGIS extension included. The version used with PostgreSQL versions 10, 11 and 12 is PostGIS 2.5. It has been compiled with the following dependencies:
- protobuf
- protobuf-c
- json-c
- geos
- proj
### To 5.0.0
In this version, the **chart is using PostgreSQL 11 instead of PostgreSQL 10**. You can find the main difference and notable changes in the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/).
For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. You may see errors like the following in the logs:
```console
Welcome to the Bitnami postgresql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
Send us your feedback at containers@bitnami.com
INFO ==> ** Starting PostgreSQL setup **
INFO ==> Validating settings in POSTGRESQL_* env vars..
INFO ==> Initializing PostgreSQL database...
INFO ==> postgresql.conf file not detected. Generating it...
INFO ==> pg_hba.conf file not detected. Generating it...
INFO ==> Deploying PostgreSQL with persisted data...
INFO ==> Configuring replication parameters
INFO ==> Loading custom scripts...
INFO ==> Enabling remote connections
INFO ==> Stopping PostgreSQL...
INFO ==> ** PostgreSQL setup finished! **
INFO ==> ** Starting PostgreSQL **
[1] FATAL: database files are incompatible with server
[1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3.
```
In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) from the official documentation. Basically, create a database dump in the old chart, move and restore it in the new one.
### To 4.0.0
This chart will use by default the Bitnami PostgreSQL container starting from version `10.7.0-r68`. This version moves the initialization logic from node.js to bash. This new version of the chart requires setting the `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users from previous versions of the chart are advised to upgrade immediately.
IMPORTANT: If you do not want to upgrade the chart version then make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error
```
The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development
```
### To 3.0.0
This release makes it possible to specify different nodeSelector, affinity and tolerations for master and slave pods.
It also fixes an issue with `postgresql.master.fullname` helper template not obeying fullnameOverride.
#### Breaking changes
- `affinity` has been renamed to `master.affinity` and `slave.affinity`.
- `tolerations` has been renamed to `master.tolerations` and `slave.tolerations`.
- `nodeSelector` has been renamed to `master.nodeSelector` and `slave.nodeSelector`.
### To 2.0.0
In order to upgrade from the `0.X.X` branch to `1.X.X`, you should follow the below steps:
- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing postgresql chart. You can find the instructions to obtain the password in the NOTES.txt, the service name can be obtained by running
```console
$ kubectl get svc
```
- Install (not upgrade) the new version
```console
$ helm repo update
$ helm install my-release bitnami/postgresql
```
- Connect to the new pod (you can obtain the name by running `kubectl get pods`):
```console
$ kubectl exec -it NAME bash
```
- Once logged in, create a dump file of the previous database using `pg_dump`; to do so, connect to the previous postgresql chart:
```console
$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql
```
After running the above command you should be prompted for a password; this password is the previous chart's password (`OLD_PASSWORD`).
This operation could take some time depending on the database size.
- Once you have the backup file, you can restore it with a command like the one below:
```console
$ psql -U postgres DATABASE_NAME < /tmp/backup.sql
```
In this case, you are accessing the local postgresql, so the password should be the new one (you can find it in NOTES.txt).
If you want to restore the database and the database schema does not exist, it is necessary to first follow the steps described below.
```console
$ psql -U postgres
postgres=# drop database DATABASE_NAME;
postgres=# create database DATABASE_NAME;
postgres=# create user USER_NAME;
postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD';
postgres=# grant all privileges on database DATABASE_NAME to USER_NAME;
postgres=# alter database DATABASE_NAME owner to USER_NAME;
```
View File
@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
View File
@ -0,0 +1,23 @@
annotations:
category: Infrastructure
apiVersion: v2
appVersion: 1.4.2
description: A Library Helm Chart for grouping common logic between bitnami charts.
This chart is not deployable by itself.
home: https://github.com/bitnami/charts/tree/master/bitnami/common
icon: https://bitnami.com/downloads/logos/bitnami-mark.png
keywords:
- common
- helper
- template
- function
- bitnami
maintainers:
- email: containers@bitnami.com
name: Bitnami
name: common
sources:
- https://github.com/bitnami/charts
- http://www.bitnami.com/
type: library
version: 1.4.2
View File
@ -0,0 +1,322 @@
# Bitnami Common Library Chart
A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between bitnami charts.
## TL;DR
```yaml
dependencies:
- name: common
version: 0.x.x
repository: https://charts.bitnami.com/bitnami
```
```bash
$ helm dependency update
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "common.names.fullname" . }}
data:
myvalue: "Hello World"
```
## Introduction
This chart provides common template helpers which can be used to develop new charts using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.
## Prerequisites
- Kubernetes 1.12+
- Helm 3.1.0
## Parameters
The following table lists the helpers available in the library which are scoped in different sections.
### Affinities
| Helper identifier | Description | Expected Input |
|-------------------------------|------------------------------------------------------|------------------------------------------------|
| `common.affinities.node.soft` | Return a soft nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
| `common.affinities.node.hard` | Return a hard nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
| `common.affinities.pod.soft` | Return a soft podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` |
| `common.affinities.pod.hard` | Return a hard podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` |
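A minimal usage sketch (the helper names follow the definitions in the library's `_affinities.tpl` shown further below; the `component` value is illustrative):
```yaml
affinity:
  podAntiAffinity:
    {{- include "common.affinities.pods.soft" (dict "component" "primary" "context" $) | nindent 4 }}
```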
### Capabilities
| Helper identifier | Description | Expected Input |
|----------------------------------------------|------------------------------------------------------------------------------------------------|-------------------|
| `common.capabilities.kubeVersion` | Return the target Kubernetes version (using client default if .Values.kubeVersion is not set). | `.` Chart context |
| `common.capabilities.deployment.apiVersion` | Return the appropriate apiVersion for deployment. | `.` Chart context |
| `common.capabilities.statefulset.apiVersion` | Return the appropriate apiVersion for statefulset. | `.` Chart context |
| `common.capabilities.ingress.apiVersion` | Return the appropriate apiVersion for ingress. | `.` Chart context |
| `common.capabilities.rbac.apiVersion` | Return the appropriate apiVersion for RBAC resources. | `.` Chart context |
| `common.capabilities.crd.apiVersion` | Return the appropriate apiVersion for CRDs. | `.` Chart context |
| `common.capabilities.supportsHelmVersion` | Returns true if the used Helm version is 3.3+ | `.` Chart context |
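A minimal usage sketch (the resource kind is illustrative):
```yaml
apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }}
kind: StatefulSet
metadata:
  name: {{ include "common.names.fullname" . }}
```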
### Errors
| Helper identifier | Description | Expected Input |
|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|
| `common.errors.upgrade.passwords.empty` | It will ensure required passwords are given when we are upgrading a chart. If `validationErrors` is not empty it will throw an error and will stop the upgrade action. | `dict "validationErrors" (list $validationError00 $validationError01) "context" $` |
### Images
| Helper identifier | Description | Expected Input |
|-----------------------------|------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
| `common.images.image` | Return the proper and full image name | `dict "imageRoot" .Values.path.to.the.image "global" $`, see [ImageRoot](#imageroot) for the structure. |
| `common.images.pullSecrets` | Return the proper Docker Image Registry Secret Names | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global` |
### Ingress
| Helper identifier | Description | Expected Input |
|--------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `common.ingress.backend` | Generate a proper Ingress backend entry depending on the API version | `dict "serviceName" "foo" "servicePort" "bar"`, see the [Ingress deprecation notice](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for the syntax differences |
### Labels
| Helper identifier | Description | Expected Input |
|-----------------------------|------------------------------------------------------|-------------------|
| `common.labels.standard` | Return Kubernetes standard labels | `.` Chart context |
| `common.labels.matchLabels` | Labels to use on `deploy.spec.selector.matchLabels` and `svc.spec.selector` | `.` Chart context |
### Names
| Helper identifier | Description | Expected Input |
|-------------------------|------------------------------------------------------------|-------------------|
| `common.names.name` | Expand the name of the chart or use `.Values.nameOverride` | `.` Chart context |
| `common.names.fullname` | Create a default fully qualified app name. | `.` Chart context |
| `common.names.chart` | Chart name plus version | `.` Chart context |
### Secrets
| Helper identifier | Description | Expected Input |
|---------------------------|--------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `common.secrets.name` | Generate the name of the secret. | `dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $` see [ExistingSecret](#existingsecret) for the structure. |
| `common.secrets.key` | Generate secret key. | `dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName"` see [ExistingSecret](#existingsecret) for the structure. |
| `common.secrets.passwords.manage` | Generate secret password or retrieve one if already created. | `dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $`; the length, strong and chartName fields are optional. |
| `common.secrets.exists` | Returns whether a previous generated secret already exists. | `dict "secret" "secret-name" "context" $` |
### Storage
| Helper identifier | Description | Expected Input |
|-------------------------------|---------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| `common.storage.class` | Return the proper Storage Class | `dict "persistence" .Values.path.to.the.persistence "global" $`, see [Persistence](#persistence) for the structure. |
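A minimal usage sketch, passing the expected input described above (it assumes persistence settings live under `.Values.persistence`; the exact output depends on the storage class that is resolved):
```yaml
spec:
  {{- include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) | nindent 2 }}
```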
### TplValues
| Helper identifier | Description | Expected Input |
|---------------------------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| `common.tplvalues.render` | Renders a value that contains template | `dict "value" .Values.path.to.the.Value "context" $`; value is the value that should be rendered as a template, context is frequently the chart context `$` or `.` |
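A minimal usage sketch (`.Values.podAnnotations` is an assumed value path containing template strings):
```yaml
annotations: {{- include "common.tplvalues.render" (dict "value" .Values.podAnnotations "context" $) | nindent 2 }}
```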
### Utils
| Helper identifier | Description | Expected Input |
|--------------------------------|------------------------------------------------------------------------------------------|------------------------------------------------------------------------|
| `common.utils.fieldToEnvVar` | Build environment variable name given a field. | `dict "field" "my-password"` |
| `common.utils.secret.getvalue` | Print instructions to get a secret value. | `dict "secret" "secret-name" "field" "secret-value-field" "context" $` |
| `common.utils.getValueFromKey` | Gets a value from `.Values` object given its key path | `dict "key" "path.to.key" "context" $` |
| `common.utils.getKeyFromList` | Returns the first `.Values` key with a defined value, or the first key in the list if none are defined | `dict "keys" (list "path.to.key1" "path.to.key2") "context" $` |
### Validations
| Helper identifier | Description | Expected Input |
|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `common.validations.values.single.empty` | Validate a value must not be empty. | `dict "valueKey" "path.to.value" "secret" "secret.name" "field" "my-password" "subchart" "subchart" "context" $` secret, field and subchart are optional. In case they are given, the helper will generate a how to get instruction. See [ValidateValue](#validatevalue) |
| `common.validations.values.multiple.empty` | Validate a multiple values must not be empty. It returns a shared error for all the values. | `dict "required" (list $validateValueConf00 $validateValueConf01) "context" $`. See [ValidateValue](#validatevalue) |
| `common.validations.values.mariadb.passwords` | This helper will ensure required password for MariaDB are not empty. It returns a shared error for all the values. | `dict "secret" "mariadb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mariadb chart and the helper. |
| `common.validations.values.postgresql.passwords` | This helper will ensure required password for PostgreSQL are not empty. It returns a shared error for all the values. | `dict "secret" "postgresql-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use postgresql chart and the helper. |
| `common.validations.values.redis.passwords` | This helper will ensure required password for Redis<sup>TM</sup> are not empty. It returns a shared error for all the values. | `dict "secret" "redis-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use redis chart and the helper. |
| `common.validations.values.cassandra.passwords` | This helper will ensure required password for Cassandra are not empty. It returns a shared error for all the values. | `dict "secret" "cassandra-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use cassandra chart and the helper. |
| `common.validations.values.mongodb.passwords` | This helper will ensure required password for MongoDB&reg; are not empty. It returns a shared error for all the values. | `dict "secret" "mongodb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mongodb chart and the helper. |
### Warnings
| Helper identifier | Description | Expected Input |
|------------------------------|----------------------------------|------------------------------------------------------------|
| `common.warnings.rollingTag` | Warning about using rolling tag. | `ImageRoot` see [ImageRoot](#imageroot) for the structure. |
## Special input schemas
### ImageRoot
```yaml
registry:
type: string
description: Docker registry where the image is located
example: docker.io
repository:
type: string
description: Repository and image name
example: bitnami/nginx
tag:
type: string
description: image tag
example: 1.16.1-debian-10-r63
pullPolicy:
type: string
description: Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
pullSecrets:
type: array
items:
type: string
description: Optionally specify an array of imagePullSecrets.
debug:
type: boolean
description: Set to true if you would like to see extra information on logs
example: false
## An instance would be:
# registry: docker.io
# repository: bitnami/nginx
# tag: 1.16.1-debian-10-r63
# pullPolicy: IfNotPresent
# debug: false
```
### Persistence
```yaml
enabled:
type: boolean
description: Whether enable persistence.
example: true
storageClass:
type: string
description: Persistent Volume Storage Class. If set to "-", storageClassName: "", which disables dynamic provisioning.
example: "-"
accessMode:
type: string
description: Access mode for the Persistent Volume Storage.
example: ReadWriteOnce
size:
type: string
description: Size the Persistent Volume Storage.
example: 8Gi
path:
type: string
description: Path to be persisted.
example: /bitnami
## An instance would be:
# enabled: true
# storageClass: "-"
# accessMode: ReadWriteOnce
# size: 8Gi
# path: /bitnami
```
### ExistingSecret
```yaml
name:
type: string
description: Name of the existing secret.
example: mySecret
keyMapping:
description: Mapping between the expected key name and the name of the key in the existing secret.
type: object
## An instance would be:
# name: mySecret
# keyMapping:
# password: myPasswordKey
```
#### Example of use
When we store sensitive data for a deployment in a secret, we sometimes want to give users the possibility of using their existing secrets.
```yaml
# templates/secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "common.names.fullname" . }}
labels:
app: {{ include "common.names.fullname" . }}
type: Opaque
data:
password: {{ .Values.password | b64enc | quote }}
# templates/dpl.yaml
---
...
env:
- name: PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }}
key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }}
...
# values.yaml
---
name: mySecret
keyMapping:
password: myPasswordKey
```
### ValidateValue
#### NOTES.txt
```console
{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}}
{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}}
{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
```
If we force those values to be empty we will see some alerts
```console
$ helm install test mychart --set path.to.value00="",path.to.value01=""
'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value:
export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 --decode)
'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value:
export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 --decode)
```
## Upgrading
### To 1.0.0
[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
**What changes were introduced in this major version?**
- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
- Use `type: library`. [Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information.
- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts
**Considerations when upgrading to this version**
- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues
- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
**Useful links**
- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
- https://helm.sh/docs/topics/v2_v3_migration/
- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
View File
@ -0,0 +1,94 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Return a soft nodeAffinity definition
{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.nodes.soft" -}}
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: {{ .key }}
operator: In
values:
{{- range .values }}
- {{ . }}
{{- end }}
weight: 1
{{- end -}}
{{/*
Return a hard nodeAffinity definition
{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.nodes.hard" -}}
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: {{ .key }}
operator: In
values:
{{- range .values }}
- {{ . }}
{{- end }}
{{- end -}}
{{/*
Return a nodeAffinity definition
{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.nodes" -}}
{{- if eq .type "soft" }}
{{- include "common.affinities.nodes.soft" . -}}
{{- else if eq .type "hard" }}
{{- include "common.affinities.nodes.hard" . -}}
{{- end -}}
{{- end -}}
{{/*
Return a soft podAffinity/podAntiAffinity definition
{{ include "common.affinities.pods.soft" (dict "component" "FOO" "context" $) -}}
*/}}
{{- define "common.affinities.pods.soft" -}}
{{- $component := default "" .component -}}
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }}
{{- if not (empty $component) }}
{{ printf "app.kubernetes.io/component: %s" $component }}
{{- end }}
namespaces:
- {{ .context.Release.Namespace | quote }}
topologyKey: kubernetes.io/hostname
weight: 1
{{- end -}}
{{/*
Return a hard podAffinity/podAntiAffinity definition
{{ include "common.affinities.pods.hard" (dict "component" "FOO" "context" $) -}}
*/}}
{{- define "common.affinities.pods.hard" -}}
{{- $component := default "" .component -}}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }}
{{- if not (empty $component) }}
{{ printf "app.kubernetes.io/component: %s" $component }}
{{- end }}
namespaces:
- {{ .context.Release.Namespace | quote }}
topologyKey: kubernetes.io/hostname
{{- end -}}
{{/*
Return a podAffinity/podAntiAffinity definition
{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}}
*/}}
{{- define "common.affinities.pods" -}}
{{- if eq .type "soft" }}
{{- include "common.affinities.pods.soft" . -}}
{{- else if eq .type "hard" }}
{{- include "common.affinities.pods.hard" . -}}
{{- end -}}
{{- end -}}
View File
@ -0,0 +1,95 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Return the target Kubernetes version
*/}}
{{- define "common.capabilities.kubeVersion" -}}
{{- if .Values.global }}
{{- if .Values.global.kubeVersion }}
{{- .Values.global.kubeVersion -}}
{{- else }}
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
{{- end -}}
{{- else }}
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for deployment.
*/}}
{{- define "common.capabilities.deployment.apiVersion" -}}
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for statefulset.
*/}}
{{- define "common.capabilities.statefulset.apiVersion" -}}
{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "apps/v1beta1" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "common.capabilities.ingress.apiVersion" -}}
{{- if .Values.ingress -}}
{{- if .Values.ingress.apiVersion -}}
{{- .Values.ingress.apiVersion -}}
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "extensions/v1beta1" -}}
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "networking.k8s.io/v1" -}}
{{- end }}
{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "extensions/v1beta1" -}}
{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "networking.k8s.io/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for RBAC resources.
*/}}
{{- define "common.capabilities.rbac.apiVersion" -}}
{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "rbac.authorization.k8s.io/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for CRDs.
*/}}
{{- define "common.capabilities.crd.apiVersion" -}}
{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}}
{{- print "apiextensions.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "apiextensions.k8s.io/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Returns true if the used Helm version is 3.3+.
A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure.
This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in an "interface not found" error.
**To be removed when the catalog's minimum Helm version is 3.3**
*/}}
{{- define "common.capabilities.supportsHelmVersion" -}}
{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }}
{{- true -}}
{{- end -}}
{{- end -}}
View File
@ -0,0 +1,23 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Throw error when upgrading using empty password values that must not be empty.
Usage:
{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}}
{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}}
{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }}
Required password params:
- validationErrors - String - Required. List of validation strings to be returned; if it is empty it won't throw an error.
- context - Context - Required. Parent context.
*/}}
{{- define "common.errors.upgrade.passwords.empty" -}}
{{- $validationErrors := join "" .validationErrors -}}
{{- if and $validationErrors .context.Release.IsUpgrade -}}
{{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release." -}}
{{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims." -}}
{{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}}
{{- $errorString = print $errorString "\n%s" -}}
{{- printf $errorString $validationErrors | fail -}}
{{- end -}}
{{- end -}}
View File
@ -0,0 +1,47 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Return the proper image name
{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }}
*/}}
{{- define "common.images.image" -}}
{{- $registryName := .imageRoot.registry -}}
{{- $repositoryName := .imageRoot.repository -}}
{{- $tag := .imageRoot.tag | toString -}}
{{- if .global }}
{{- if .global.imageRegistry }}
{{- $registryName = .global.imageRegistry -}}
{{- end -}}
{{- end -}}
{{- if $registryName }}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- else -}}
{{- printf "%s:%s" $repositoryName $tag -}}
{{- end -}}
{{- end -}}
{{/*
Return the proper Docker Image Registry Secret Names
{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }}
*/}}
{{- define "common.images.pullSecrets" -}}
{{- $pullSecrets := list }}
{{- if .global }}
{{- range .global.imagePullSecrets -}}
{{- $pullSecrets = append $pullSecrets . -}}
{{- end -}}
{{- end -}}
{{- range .images -}}
{{- range .pullSecrets -}}
{{- $pullSecrets = append $pullSecrets . -}}
{{- end -}}
{{- end -}}
{{- if (not (empty $pullSecrets)) }}
imagePullSecrets:
{{- range $pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- end -}}
View File
@ -0,0 +1,42 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Generate backend entry that is compatible with all Kubernetes API versions.
Usage:
{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }}
Params:
- serviceName - String. Name of an existing service backend
- servicePort - String/Int. Port name (or number) of the service. It will be translated to different yaml depending if it is a string or an integer.
- context - Dict - Required. The context for the template evaluation.
*/}}
{{- define "common.ingress.backend" -}}
{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}}
{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}}
serviceName: {{ .serviceName }}
servicePort: {{ .servicePort }}
{{- else -}}
service:
name: {{ .serviceName }}
port:
{{- if typeIs "string" .servicePort }}
name: {{ .servicePort }}
{{- else if typeIs "int" .servicePort }}
number: {{ .servicePort }}
{{- end }}
{{- end -}}
{{- end -}}
{{/*
Print "true" if the API pathType field is supported
Usage:
{{ include "common.ingress.supportsPathType" . }}
*/}}
{{- define "common.ingress.supportsPathType" -}}
{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}}
{{- print "false" -}}
{{- else -}}
{{- print "true" -}}
{{- end -}}
{{- end -}}
View File
@ -0,0 +1,18 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Kubernetes standard labels
*/}}
{{- define "common.labels.standard" -}}
app.kubernetes.io/name: {{ include "common.names.name" . }}
helm.sh/chart: {{ include "common.names.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector
*/}}
{{- define "common.labels.matchLabels" -}}
app.kubernetes.io/name: {{ include "common.names.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
View File
@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "common.names.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "common.names.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "common.names.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
View File
@ -0,0 +1,129 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Generate secret name.
Usage:
{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }}
Params:
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
+info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret
- defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment.
- context - Dict - Required. The context for the template evaluation.
*/}}
{{- define "common.secrets.name" -}}
{{- $name := (include "common.names.fullname" .context) -}}
{{- if .defaultNameSuffix -}}
{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- with .existingSecret -}}
{{- if not (typeIs "string" .) -}}
{{- with .name -}}
{{- $name = . -}}
{{- end -}}
{{- else -}}
{{- $name = . -}}
{{- end -}}
{{- end -}}
{{- printf "%s" $name -}}
{{- end -}}
{{/*
Generate secret key.
Usage:
{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }}
Params:
- existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user
to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility.
+info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret
- key - String - Required. Name of the key in the secret.
*/}}
{{- define "common.secrets.key" -}}
{{- $key := .key -}}
{{- if .existingSecret -}}
{{- if not (typeIs "string" .existingSecret) -}}
{{- if .existingSecret.keyMapping -}}
{{- $key = index .existingSecret.keyMapping $.key -}}
{{- end -}}
{{- end }}
{{- end -}}
{{- printf "%s" $key -}}
{{- end -}}
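{{/*
Illustrative example (hypothetical values): with
  existingSecret:
    name: my-own-secret
    keyMapping:
      admin-password: MY_PASSWORD_KEY
a call like {{ include "common.secrets.key" (dict "existingSecret" .Values.auth.existingSecret "key" "admin-password") }}
returns "MY_PASSWORD_KEY"; without a keyMapping entry the requested key name is returned unchanged.
*/}}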
{{/*
Generate secret password or retrieve one if already created.
Usage:
{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }}
Params:
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
- key - String - Required - Name of the key in the secret.
- providedValues - List<String> - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". Will pick first parameter with a defined value.
- length - int - Optional - Length of the generated random password.
- strong - Boolean - Optional - Whether to add symbols to the generated random password.
- chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart.
- context - Context - Required - Parent context.
*/}}
{{- define "common.secrets.passwords.manage" -}}
{{- $password := "" }}
{{- $subchart := "" }}
{{- $chartName := default "" .chartName }}
{{- $passwordLength := default 10 .length }}
{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }}
{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }}
{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }}
{{- if $secret }}
{{- if index $secret.data .key }}
{{- $password = index $secret.data .key }}
{{- end -}}
{{- else if $providedPasswordValue }}
{{- $password = $providedPasswordValue | toString | b64enc | quote }}
{{- else }}
{{- if .context.Values.enabled }}
{{- $subchart = $chartName }}
{{- end -}}
{{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}}
{{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}}
{{- $passwordValidationErrors := list $requiredPasswordError -}}
{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}}
{{- if .strong }}
{{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }}
{{- $password = randAscii $passwordLength }}
{{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }}
{{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }}
{{- else }}
{{- $password = randAlphaNum $passwordLength | b64enc | quote }}
{{- end }}
{{- end -}}
{{- printf "%s" $password -}}
{{- end -}}
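{{/*
Behaviour sketch (informal, not authoritative): on install/upgrade the helper first
reuses the value already stored under "key" in the named Secret if it exists, then any
value found at one of the "providedValues" paths (base64-encoded), and only then
generates a random password of "length" characters (default 10); with "strong" enabled
the result is guaranteed to contain at least one lower-case letter, one digit and one
upper-case letter.
*/}}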
{{/*
Returns whether a previous generated secret already exists
Usage:
{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }}
Params:
- secret - String - Required - Name of the 'Secret' resource where the password is stored.
- context - Context - Required - Parent context.
*/}}
{{- define "common.secrets.exists" -}}
{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }}
{{- if $secret }}
{{- true -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,23 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Return the proper Storage Class
{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }}
*/}}
{{- define "common.storage.class" -}}
{{- $storageClass := .persistence.storageClass -}}
{{- if .global -}}
{{- if .global.storageClass -}}
{{- $storageClass = .global.storageClass -}}
{{- end -}}
{{- end -}}
{{- if $storageClass -}}
{{- if (eq "-" $storageClass) -}}
{{- printf "storageClassName: \"\"" -}}
{{- else }}
{{- printf "storageClassName: %s" $storageClass -}}
{{- end -}}
{{- end -}}
{{- end -}}
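{{/*
Illustrative example (hypothetical values):
  persistence.storageClass: "-"         -> storageClassName: ""    (disables dynamic provisioning)
  persistence.storageClass: "standard"  -> storageClassName: standard
  persistence.storageClass unset        -> nothing is rendered
and a non-empty global.storageClass always takes precedence over the per-chart value.
*/}}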


@ -0,0 +1,13 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Renders a value that contains a template.
Usage:
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }}
*/}}
{{- define "common.tplvalues.render" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (.value | toYaml) .context }}
{{- end }}
{{- end -}}
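{{/*
Illustrative example (hypothetical value): a map such as
  podAnnotations: { owner: "{{ .Release.Name }}-team" }
passed through "common.tplvalues.render" is serialised with toYaml (maps and lists) or
used as-is (plain strings) and then evaluated with tpl, so the annotation renders as
"my-release-team" for a release named "my-release".
*/}}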


@ -0,0 +1,62 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Print instructions to get a secret value.
Usage:
{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }}
*/}}
{{- define "common.utils.secret.getvalue" -}}
{{- $varname := include "common.utils.fieldToEnvVar" . -}}
export {{ $varname }}=$(kubectl get secret --namespace {{ .context.Release.Namespace | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 --decode)
{{- end -}}
{{/*
Build env var name given a field
Usage:
{{ include "common.utils.fieldToEnvVar" dict "field" "my-password" }}
*/}}
{{- define "common.utils.fieldToEnvVar" -}}
{{- $fieldNameSplit := splitList "-" .field -}}
{{- $upperCaseFieldNameSplit := list -}}
{{- range $fieldNameSplit -}}
{{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper . ) -}}
{{- end -}}
{{ join "_" $upperCaseFieldNameSplit }}
{{- end -}}
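{{/*
Illustrative example: a field name of "mariadb-root-password" is split on "-", each part
upper-cased and re-joined with "_", producing MARIADB_ROOT_PASSWORD. This is the variable
name used in the "to get the current value" hints emitted by the validation helpers.
*/}}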
{{/*
Gets a value from .Values given its key path
Usage:
{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }}
*/}}
{{- define "common.utils.getValueFromKey" -}}
{{- $splitKey := splitList "." .key -}}
{{- $value := "" -}}
{{- $latestObj := $.context.Values -}}
{{- range $splitKey -}}
{{- if not $latestObj -}}
{{- printf "please review the entire path of '%s' exists in values" $.key | fail -}}
{{- end -}}
{{- $value = ( index $latestObj . ) -}}
{{- $latestObj = $value -}}
{{- end -}}
{{- printf "%v" (default "" $value) -}}
{{- end -}}
{{/*
Returns the first .Values key with a defined value, or the first key in the list if none are defined
Usage:
{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }}
*/}}
{{- define "common.utils.getKeyFromList" -}}
{{- $key := first .keys -}}
{{- $reverseKeys := reverse .keys }}
{{- range $reverseKeys }}
{{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }}
{{- if $value -}}
{{- $key = . }}
{{- end -}}
{{- end -}}
{{- printf "%s" $key -}}
{{- end -}}
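{{/*
Illustrative example (hypothetical keys): with
  keys: (list "postgresql.auth.password" "global.postgresql.auth.password")
the list is walked in reverse and the result overwritten on each defined value, so the
helper effectively returns the first key in the list that resolves to a non-empty value,
falling back to the first key when none are set.
*/}}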


@ -0,0 +1,14 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Warning about using rolling tag.
Usage:
{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }}
*/}}
{{- define "common.warnings.rollingTag" -}}
{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }}
WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
{{- end }}
{{- end -}}
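{{/*
Illustrative example (hypothetical tags): bitnami/postgresql:11.14.0-debian-10 triggers
the warning because the tag carries neither an "-r<NN>" revision suffix nor a sha256
digest, whereas bitnami/postgresql:11.14.0-debian-10-r17 (or a digest-pinned reference)
does not.
*/}}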


@ -0,0 +1,72 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate Cassandra required passwords are not empty.
Usage:
{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
- secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret"
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.cassandra.passwords" -}}
{{- $existingSecret := include "common.cassandra.values.existingSecret" . -}}
{{- $enabled := include "common.cassandra.values.enabled" . -}}
{{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . -}}
{{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}}
{{- if and (not $existingSecret) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for existingSecret.
Usage:
{{ include "common.cassandra.values.existingSecret" (dict "context" $) }}
Params:
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
*/}}
{{- define "common.cassandra.values.existingSecret" -}}
{{- if .subchart -}}
{{- .context.Values.cassandra.dbUser.existingSecret | quote -}}
{{- else -}}
{{- .context.Values.dbUser.existingSecret | quote -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for enabled cassandra.
Usage:
{{ include "common.cassandra.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.cassandra.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.cassandra.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for the key dbUser
Usage:
{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false
*/}}
{{- define "common.cassandra.values.key.dbUser" -}}
{{- if .subchart -}}
cassandra.dbUser
{{- else -}}
dbUser
{{- end -}}
{{- end -}}


@ -0,0 +1,103 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate MariaDB required passwords are not empty.
Usage:
{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
- secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret"
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.mariadb.passwords" -}}
{{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}}
{{- $enabled := include "common.mariadb.values.enabled" . -}}
{{- $architecture := include "common.mariadb.values.architecture" . -}}
{{- $authPrefix := include "common.mariadb.values.key.auth" . -}}
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
{{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}}
{{- if and (not $existingSecret) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}
{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}
{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
{{- if not (empty $valueUsername) -}}
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
{{- end -}}
{{- if (eq $architecture "replication") -}}
{{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}}
{{- end -}}
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for existingSecret.
Usage:
{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }}
Params:
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.mariadb.values.auth.existingSecret" -}}
{{- if .subchart -}}
{{- .context.Values.mariadb.auth.existingSecret | quote -}}
{{- else -}}
{{- .context.Values.auth.existingSecret | quote -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for enabled mariadb.
Usage:
{{ include "common.mariadb.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.mariadb.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.mariadb.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for architecture
Usage:
{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.mariadb.values.architecture" -}}
{{- if .subchart -}}
{{- .context.Values.mariadb.architecture -}}
{{- else -}}
{{- .context.Values.architecture -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for the key auth
Usage:
{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false
*/}}
{{- define "common.mariadb.values.key.auth" -}}
{{- if .subchart -}}
mariadb.auth
{{- else -}}
auth
{{- end -}}
{{- end -}}


@ -0,0 +1,108 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate MongoDB(R) required passwords are not empty.
Usage:
{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
- secret - String - Required. Name of the secret where MongoDB(R) values are stored, e.g: "mongodb-passwords-secret"
- subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.mongodb.passwords" -}}
{{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}}
{{- $enabled := include "common.mongodb.values.enabled" . -}}
{{- $authPrefix := include "common.mongodb.values.key.auth" . -}}
{{- $architecture := include "common.mongodb.values.architecture" . -}}
{{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}}
{{- $valueKeyUsername := printf "%s.username" $authPrefix -}}
{{- $valueKeyDatabase := printf "%s.database" $authPrefix -}}
{{- $valueKeyPassword := printf "%s.password" $authPrefix -}}
{{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}}
{{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}}
{{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}}
{{- if and (not $existingSecret) (eq $enabled "true") (eq $authEnabled "true") -}}
{{- $requiredPasswords := list -}}
{{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}}
{{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }}
{{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }}
{{- if and $valueUsername $valueDatabase -}}
{{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPassword -}}
{{- end -}}
{{- if (eq $architecture "replicaset") -}}
{{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" .secret "field" "mongodb-replica-set-key" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}}
{{- end -}}
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for existingSecret.
Usage:
{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }}
Params:
- subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false
*/}}
{{- define "common.mongodb.values.auth.existingSecret" -}}
{{- if .subchart -}}
{{- .context.Values.mongodb.auth.existingSecret | quote -}}
{{- else -}}
{{- .context.Values.auth.existingSecret | quote -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for enabled mongodb.
Usage:
{{ include "common.mongodb.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.mongodb.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.mongodb.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for the key auth
Usage:
{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false
*/}}
{{- define "common.mongodb.values.key.auth" -}}
{{- if .subchart -}}
mongodb.auth
{{- else -}}
auth
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for architecture
Usage:
{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false
*/}}
{{- define "common.mongodb.values.architecture" -}}
{{- if .subchart -}}
{{- .context.Values.mongodb.architecture -}}
{{- else -}}
{{- .context.Values.architecture -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,131 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate PostgreSQL required passwords are not empty.
Usage:
{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
- secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret"
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.postgresql.passwords" -}}
{{- $existingSecret := include "common.postgresql.values.existingSecret" . -}}
{{- $enabled := include "common.postgresql.values.enabled" . -}}
{{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}}
{{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . -}}
{{- if and (not $existingSecret) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}
{{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}}
{{- $enabledReplication := include "common.postgresql.values.enabled.replication" . -}}
{{- if (eq $enabledReplication "true") -}}
{{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}}
{{- end -}}
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to decide whether to evaluate global values.
Usage:
{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }}
Params:
- key - String - Required. Field to be evaluated within global, e.g: "existingSecret"
*/}}
{{- define "common.postgresql.values.use.global" -}}
{{- if .context.Values.global -}}
{{- if .context.Values.global.postgresql -}}
{{- index .context.Values.global.postgresql .key | quote -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for existingSecret.
Usage:
{{ include "common.postgresql.values.existingSecret" (dict "context" $) }}
*/}}
{{- define "common.postgresql.values.existingSecret" -}}
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}}
{{- if .subchart -}}
{{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}}
{{- else -}}
{{- default (.context.Values.existingSecret | quote) $globalValue -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for enabled postgresql.
Usage:
{{ include "common.postgresql.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.postgresql.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.postgresql.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for the key postgressPassword.
Usage:
{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.postgresql.values.key.postgressPassword" -}}
{{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}}
{{- if not $globalValue -}}
{{- if .subchart -}}
postgresql.postgresqlPassword
{{- else -}}
postgresqlPassword
{{- end -}}
{{- else -}}
global.postgresql.postgresqlPassword
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for enabled.replication.
Usage:
{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.postgresql.values.enabled.replication" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.postgresql.replication.enabled -}}
{{- else -}}
{{- printf "%v" .context.Values.replication.enabled -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for the key replication.password.
Usage:
{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false
*/}}
{{- define "common.postgresql.values.key.replicationPassword" -}}
{{- if .subchart -}}
postgresql.replication.password
{{- else -}}
replication.password
{{- end -}}
{{- end -}}


@ -0,0 +1,72 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate Redis(TM) required passwords are not empty.
Usage:
{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }}
Params:
- secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret"
- subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
*/}}
{{- define "common.validations.values.redis.passwords" -}}
{{- $existingSecret := include "common.redis.values.existingSecret" . -}}
{{- $enabled := include "common.redis.values.enabled" . -}}
{{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . -}}
{{- $valueKeyRedisPassword := printf "%s%s" $valueKeyPrefix "password" -}}
{{- $valueKeyRedisUsePassword := printf "%s%s" $valueKeyPrefix "usePassword" -}}
{{- if and (not $existingSecret) (eq $enabled "true") -}}
{{- $requiredPasswords := list -}}
{{- $usePassword := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUsePassword "context" .context) -}}
{{- if eq $usePassword "true" -}}
{{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}}
{{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}}
{{- end -}}
{{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}}
{{- end -}}
{{- end -}}
{{/*
Redis Auxiliary function to get the right value for existingSecret.
Usage:
{{ include "common.redis.values.existingSecret" (dict "context" $) }}
Params:
- subchart - Boolean - Optional. Whether Redis(TM) is used as subchart or not. Default: false
*/}}
{{- define "common.redis.values.existingSecret" -}}
{{- if .subchart -}}
{{- .context.Values.redis.existingSecret | quote -}}
{{- else -}}
{{- .context.Values.existingSecret | quote -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right value for enabled redis.
Usage:
{{ include "common.redis.values.enabled" (dict "context" $) }}
*/}}
{{- define "common.redis.values.enabled" -}}
{{- if .subchart -}}
{{- printf "%v" .context.Values.redis.enabled -}}
{{- else -}}
{{- printf "%v" (not .context.Values.enabled) -}}
{{- end -}}
{{- end -}}
{{/*
Auxiliary function to get the right prefix path for the values
Usage:
{{ include "common.redis.values.key.prefix" (dict "subchart" "true" "context" $) }}
Params:
- subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false
*/}}
{{- define "common.redis.values.keys.prefix" -}}
{{- if .subchart -}}redis.{{- else -}}{{- end -}}
{{- end -}}


@ -0,0 +1,46 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Validate values must not be empty.
Usage:
{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}}
{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}}
{{ include "common.validations.values.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
Validate value params:
- valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
- secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
- field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
*/}}
{{- define "common.validations.values.multiple.empty" -}}
{{- range .required -}}
{{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}}
{{- end -}}
{{- end -}}
{{/*
Validate a value must not be empty.
Usage:
{{ include "common.validations.value.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }}
Validate value params:
- valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password"
- secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret"
- field - String - Optional. Name of the field in the secret data, e.g: "mysql-password"
- subchart - String - Optional - Name of the subchart that the validated password is part of.
*/}}
{{- define "common.validations.values.single.empty" -}}
{{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }}
{{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }}
{{- if not $value -}}
{{- $varname := "my-value" -}}
{{- $getCurrentValue := "" -}}
{{- if and .secret .field -}}
{{- $varname = include "common.utils.fieldToEnvVar" . -}}
{{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}}
{{- end -}}
{{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}}
{{- end -}}
{{- end -}}
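{{/*
Illustrative example (hypothetical values): if "auth.password" is empty and the helper is
called with secret "my-release-postgresql" and field "postgresql-password", the rendered
error asks the user to add '--set auth.password=$POSTGRESQL_PASSWORD' and, because both a
secret and a field were supplied, also prints the kubectl command built by
"common.utils.secret.getvalue" for recovering the current value.
*/}}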


@ -0,0 +1,3 @@
## bitnami/common
## It is required by CI/CD tools and processes.
exampleValue: common-chart


@ -0,0 +1,3 @@
commonAnnotations:
  helm.sh/hook: "\"pre-install, pre-upgrade\""
  helm.sh/hook-weight: "-1"


@ -0,0 +1 @@
# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml.


@ -0,0 +1,2 @@
shmVolume:
  enabled: false


@ -0,0 +1 @@
Copy here your postgresql.conf and/or pg_hba.conf files to use them as a config map.


@ -0,0 +1,4 @@
If you don't want to provide the whole configuration file and only specify certain parameters, you can copy here your extended `.conf` files.
These files will be injected as a config map and add to or overwrite the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`.
More info in the [bitnami-docker-postgresql README](https://github.com/bitnami/bitnami-docker-postgresql#configuration-file).


@ -0,0 +1,3 @@
You can copy here your custom `.sh`, `.sql` or `.sql.gz` files so they are executed during the first boot of the image.
More info in the [bitnami-docker-postgresql](https://github.com/bitnami/bitnami-docker-postgresql#initializing-a-new-instance) repository.


@ -0,0 +1,59 @@
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port {{ template "postgresql.port" . }} on the following DNS name from within your cluster:
{{ template "common.names.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - Read/Write connection
{{- if .Values.replication.enabled }}
{{ template "common.names.fullname" . }}-read.{{ .Release.Namespace }}.svc.cluster.local - Read only connection
{{- end }}
{{- if not (eq (include "postgresql.username" .) "postgres") }}
To get the password for "postgres" run:
export POSTGRES_ADMIN_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-postgres-password}" | base64 --decode)
{{- end }}
To get the password for "{{ template "postgresql.username" . }}" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
kubectl run {{ template "common.names.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image {{ template "postgresql.image" . }} --env="PGPASSWORD=$POSTGRES_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
--labels="{{ template "common.names.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "common.names.fullname" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }}
{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
Note: Since NetworkPolicy is enabled, only pods with label "{{ template "common.names.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster.
{{- end }}
To connect to your database from outside the cluster execute the following commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "common.names.fullname" . }})
{{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }}
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "common.names.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
{{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $SERVICE_IP --port {{ template "postgresql.port" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }}
{{- else if contains "ClusterIP" .Values.service.type }}
kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "common.names.fullname" . }} {{ template "postgresql.port" . }}:{{ template "postgresql.port" . }} &
{{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }}
{{- end }}
{{- include "postgresql.validateValues" . -}}
{{- include "common.warnings.rollingTag" .Values.image -}}
{{- $passwordValidationErrors := include "common.validations.values.postgresql.passwords" (dict "secret" (include "common.names.fullname" .) "context" $) -}}
{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $passwordValidationErrors) "context" $) -}}


@ -0,0 +1,337 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "postgresql.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "postgresql.primary.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- $fullname := default (printf "%s-%s" .Release.Name $name) .Values.fullnameOverride -}}
{{- if .Values.replication.enabled -}}
{{- printf "%s-%s" $fullname "primary" | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s" $fullname | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Return the proper PostgreSQL image name
*/}}
{{- define "postgresql.image" -}}
{{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}
{{- end -}}
{{/*
Return the proper PostgreSQL metrics image name
*/}}
{{- define "postgresql.metrics.image" -}}
{{ include "common.images.image" (dict "imageRoot" .Values.metrics.image "global" .Values.global) }}
{{- end -}}
{{/*
Return the proper image name (for the init container volume-permissions image)
*/}}
{{- define "postgresql.volumePermissions.image" -}}
{{ include "common.images.image" (dict "imageRoot" .Values.volumePermissions.image "global" .Values.global) }}
{{- end -}}
{{/*
Return the proper Docker Image Registry Secret Names
*/}}
{{- define "postgresql.imagePullSecrets" -}}
{{ include "common.images.pullSecrets" (dict "images" (list .Values.image .Values.metrics.image .Values.volumePermissions.image) "global" .Values.global) }}
{{- end -}}
{{/*
Return PostgreSQL postgres user password
*/}}
{{- define "postgresql.postgres.password" -}}
{{- if .Values.global.postgresql.postgresqlPostgresPassword }}
{{- .Values.global.postgresql.postgresqlPostgresPassword -}}
{{- else if .Values.postgresqlPostgresPassword -}}
{{- .Values.postgresqlPostgresPassword -}}
{{- else -}}
{{- randAlphaNum 10 -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL password
*/}}
{{- define "postgresql.password" -}}
{{- if .Values.global.postgresql.postgresqlPassword }}
{{- .Values.global.postgresql.postgresqlPassword -}}
{{- else if .Values.postgresqlPassword -}}
{{- .Values.postgresqlPassword -}}
{{- else -}}
{{- randAlphaNum 10 -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL replication password
*/}}
{{- define "postgresql.replication.password" -}}
{{- if .Values.global.postgresql.replicationPassword }}
{{- .Values.global.postgresql.replicationPassword -}}
{{- else if .Values.replication.password -}}
{{- .Values.replication.password -}}
{{- else -}}
{{- randAlphaNum 10 -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL username
*/}}
{{- define "postgresql.username" -}}
{{- if .Values.global.postgresql.postgresqlUsername }}
{{- .Values.global.postgresql.postgresqlUsername -}}
{{- else -}}
{{- .Values.postgresqlUsername -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL replication username
*/}}
{{- define "postgresql.replication.username" -}}
{{- if .Values.global.postgresql.replicationUser }}
{{- .Values.global.postgresql.replicationUser -}}
{{- else -}}
{{- .Values.replication.user -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL port
*/}}
{{- define "postgresql.port" -}}
{{- if .Values.global.postgresql.servicePort }}
{{- .Values.global.postgresql.servicePort -}}
{{- else -}}
{{- .Values.service.port -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL created database
*/}}
{{- define "postgresql.database" -}}
{{- if .Values.global.postgresql.postgresqlDatabase }}
{{- .Values.global.postgresql.postgresqlDatabase -}}
{{- else if .Values.postgresqlDatabase -}}
{{- .Values.postgresqlDatabase -}}
{{- end -}}
{{- end -}}
{{/*
Get the password secret.
*/}}
{{- define "postgresql.secretName" -}}
{{- if .Values.global.postgresql.existingSecret }}
{{- printf "%s" (tpl .Values.global.postgresql.existingSecret $) -}}
{{- else if .Values.existingSecret -}}
{{- printf "%s" (tpl .Values.existingSecret $) -}}
{{- else -}}
{{- printf "%s" (include "common.names.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Return true if we should use an existingSecret.
*/}}
{{- define "postgresql.useExistingSecret" -}}
{{- if or .Values.global.postgresql.existingSecret .Values.existingSecret -}}
{{- true -}}
{{- end -}}
{{- end -}}
{{/*
Return true if a secret object should be created
*/}}
{{- define "postgresql.createSecret" -}}
{{- if not (include "postgresql.useExistingSecret" .) -}}
{{- true -}}
{{- end -}}
{{- end -}}
{{/*
Get the configuration ConfigMap name.
*/}}
{{- define "postgresql.configurationCM" -}}
{{- if .Values.configurationConfigMap -}}
{{- printf "%s" (tpl .Values.configurationConfigMap $) -}}
{{- else -}}
{{- printf "%s-configuration" (include "common.names.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Get the extended configuration ConfigMap name.
*/}}
{{- define "postgresql.extendedConfigurationCM" -}}
{{- if .Values.extendedConfConfigMap -}}
{{- printf "%s" (tpl .Values.extendedConfConfigMap $) -}}
{{- else -}}
{{- printf "%s-extended-configuration" (include "common.names.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Return true if a configmap should be mounted with PostgreSQL configuration
*/}}
{{- define "postgresql.mountConfigurationCM" -}}
{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }}
{{- true -}}
{{- end -}}
{{- end -}}
{{/*
Get the initialization scripts ConfigMap name.
*/}}
{{- define "postgresql.initdbScriptsCM" -}}
{{- if .Values.initdbScriptsConfigMap -}}
{{- printf "%s" (tpl .Values.initdbScriptsConfigMap $) -}}
{{- else -}}
{{- printf "%s-init-scripts" (include "common.names.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Get the initialization scripts Secret name.
*/}}
{{- define "postgresql.initdbScriptsSecret" -}}
{{- printf "%s" (tpl .Values.initdbScriptsSecret $) -}}
{{- end -}}
{{/*
Get the metrics ConfigMap name.
*/}}
{{- define "postgresql.metricsCM" -}}
{{- printf "%s-metrics" (include "common.names.fullname" .) -}}
{{- end -}}
{{/*
Get the readiness probe command
*/}}
{{- define "postgresql.readinessProbeCommand" -}}
- |
{{- if (include "postgresql.database" .) }}
exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
{{- if contains "bitnami/" .Values.image.repository }}
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
{{- end -}}
{{- end -}}
{{/*
Compile all warnings into a single message, and call fail.
*/}}
{{- define "postgresql.validateValues" -}}
{{- $messages := list -}}
{{- $messages := append $messages (include "postgresql.validateValues.ldapConfigurationMethod" .) -}}
{{- $messages := append $messages (include "postgresql.validateValues.psp" .) -}}
{{- $messages := append $messages (include "postgresql.validateValues.tls" .) -}}
{{- $messages := without $messages "" -}}
{{- $message := join "\n" $messages -}}
{{- if $message -}}
{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}}
{{- end -}}
{{- end -}}
{{/*
Validate values of Postgresql - If ldap.url is used then you don't need the other settings for ldap
*/}}
{{- define "postgresql.validateValues.ldapConfigurationMethod" -}}
{{- if and .Values.ldap.enabled (and (not (empty .Values.ldap.url)) (not (empty .Values.ldap.server))) }}
postgresql: ldap.url, ldap.server
You cannot set both `ldap.url` and `ldap.server` at the same time.
Please provide a unique way to configure LDAP.
More info at https://www.postgresql.org/docs/current/auth-ldap.html
{{- end -}}
{{- end -}}
{{/*
Validate values of Postgresql - If PSP is enabled RBAC should be enabled too
*/}}
{{- define "postgresql.validateValues.psp" -}}
{{- if and .Values.psp.create (not .Values.rbac.create) }}
postgresql: psp.create, rbac.create
RBAC should be enabled if PSP is enabled in order for PSP to work.
More info at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for podsecuritypolicy.
*/}}
{{- define "podsecuritypolicy.apiVersion" -}}
{{- if semverCompare "<1.10-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "policy/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for networkpolicy.
*/}}
{{- define "postgresql.networkPolicy.apiVersion" -}}
{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}}
"extensions/v1beta1"
{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion -}}
"networking.k8s.io/v1"
{{- end -}}
{{- end -}}
{{/*
Validate values of Postgresql TLS - When TLS is enabled, so must be VolumePermissions
*/}}
{{- define "postgresql.validateValues.tls" -}}
{{- if and .Values.tls.enabled (not .Values.volumePermissions.enabled) }}
postgresql: tls.enabled, volumePermissions.enabled
When TLS is enabled you must enable volumePermissions as well to ensure certificate files have
the right permissions.
{{- end -}}
{{- end -}}
{{/*
Return the path to the cert file.
*/}}
{{- define "postgresql.tlsCert" -}}
{{- required "Certificate filename is required when TLS in enabled" .Values.tls.certFilename | printf "/opt/bitnami/postgresql/certs/%s" -}}
{{- end -}}
{{/*
Return the path to the cert key file.
*/}}
{{- define "postgresql.tlsCertKey" -}}
{{- required "Certificate Key filename is required when TLS in enabled" .Values.tls.certKeyFilename | printf "/opt/bitnami/postgresql/certs/%s" -}}
{{- end -}}
{{/*
Return the path to the CA cert file.
*/}}
{{- define "postgresql.tlsCACert" -}}
{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.certCAFilename -}}
{{- end -}}
{{/*
Return the path to the CRL file.
*/}}
{{- define "postgresql.tlsCRL" -}}
{{- if .Values.tls.crlFilename -}}
{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.crlFilename -}}
{{- end -}}
{{- end -}}
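{{/*
Illustrative example (hypothetical values): with tls.enabled=true,
tls.certFilename=tls.crt and tls.certKeyFilename=tls.key, the helpers above resolve to
/opt/bitnami/postgresql/certs/tls.crt and /opt/bitnami/postgresql/certs/tls.key; when a
CA file is also configured, the readiness probe appends the matching sslcert/sslkey
options to pg_isready.
*/}}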


@ -0,0 +1,31 @@
{{ if and (or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration) (not .Values.configurationConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "common.names.fullname" . }}-configuration
labels:
{{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
namespace: {{ .Release.Namespace }}
data:
{{- if (.Files.Glob "files/postgresql.conf") }}
{{ (.Files.Glob "files/postgresql.conf").AsConfig | indent 2 }}
{{- else if .Values.postgresqlConfiguration }}
postgresql.conf: |
{{- range $key, $value := default dict .Values.postgresqlConfiguration }}
{{- if kindIs "string" $value }}
{{ $key | snakecase }} = '{{ $value }}'
{{- else }}
{{ $key | snakecase }} = {{ $value }}
{{- end }}
{{- end }}
{{- end }}
{{- if (.Files.Glob "files/pg_hba.conf") }}
{{ (.Files.Glob "files/pg_hba.conf").AsConfig | indent 2 }}
{{- else if .Values.pgHbaConfiguration }}
pg_hba.conf: |
{{ .Values.pgHbaConfiguration | indent 4 }}
{{- end }}
{{ end }}
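{{/*
Illustrative example (hypothetical values): setting
  postgresqlConfiguration:
    maxConnections: 200
    sharedBuffers: "1GB"
renders a postgresql.conf containing "max_connections = 200" and "shared_buffers = '1GB'"
(keys are snake-cased, string values quoted), unless a files/postgresql.conf or an
external configurationConfigMap is provided, both of which take precedence.
*/}}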


@ -0,0 +1,26 @@
{{- if and (or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf) (not .Values.extendedConfConfigMap)}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "common.names.fullname" . }}-extended-configuration
labels:
{{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
namespace: {{ .Release.Namespace }}
data:
{{- with .Files.Glob "files/conf.d/*.conf" }}
{{ .AsConfig | indent 2 }}
{{- end }}
{{ with .Values.postgresqlExtendedConf }}
override.conf: |
{{- range $key, $value := . }}
{{- if kindIs "string" $value }}
{{ $key | snakecase }} = '{{ $value }}'
{{- else }}
{{ $key | snakecase }} = {{ $value }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,4 @@
{{- range .Values.extraDeploy }}
---
{{ include "common.tplvalues.render" (dict "value" . "context" $) }}
{{- end }}


@ -0,0 +1,25 @@
{{- if and (or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScripts) (not .Values.initdbScriptsConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "common.names.fullname" . }}-init-scripts
labels:
{{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
namespace: {{ .Release.Namespace }}
{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz" }}
binaryData:
{{- range $path, $bytes := . }}
{{ base $path }}: {{ $.Files.Get $path | b64enc | quote }}
{{- end }}
{{- end }}
data:
{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}" }}
{{ .AsConfig | indent 2 }}
{{- end }}
{{- with .Values.initdbScripts }}
{{ toYaml . | indent 2 }}
{{- end }}
{{- end }}


@ -0,0 +1,14 @@
{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "postgresql.metricsCM" . }}
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
  namespace: {{ .Release.Namespace }}
data:
  custom-metrics.yaml: {{ toYaml .Values.metrics.customMetrics | quote }}
{{- end }}


@ -0,0 +1,26 @@
{{- if .Values.metrics.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "common.names.fullname" . }}-metrics
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  annotations:
    {{- if .Values.commonAnnotations }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
    {{- end }}
    {{- toYaml .Values.metrics.service.annotations | nindent 4 }}
  namespace: {{ .Release.Namespace }}
spec:
  type: {{ .Values.metrics.service.type }}
  {{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }}
  {{- end }}
  ports:
    - name: http-metrics
      port: 9187
      targetPort: http-metrics
  selector:
    {{- include "common.labels.matchLabels" . | nindent 4 }}
    role: primary
{{- end }}


@ -0,0 +1,39 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: {{ template "postgresql.networkPolicy.apiVersion" . }}
metadata:
  name: {{ template "common.names.fullname" . }}
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      {{- include "common.labels.matchLabels" . | nindent 6 }}
  ingress:
    # Allow inbound connections
    - ports:
        - port: {{ template "postgresql.port" . }}
      {{- if not .Values.networkPolicy.allowExternal }}
      from:
        - podSelector:
            matchLabels:
              {{ template "common.names.fullname" . }}-client: "true"
          {{- if .Values.networkPolicy.explicitNamespacesSelector }}
          namespaceSelector:
{{ toYaml .Values.networkPolicy.explicitNamespacesSelector | indent 12 }}
          {{- end }}
        - podSelector:
            matchLabels:
              {{- include "common.labels.matchLabels" . | nindent 14 }}
              role: read
      {{- end }}
    {{- if .Values.metrics.enabled }}
    # Allow prometheus scrapes
    - ports:
        - port: 9187
    {{- end }}
{{- end }}


@ -0,0 +1,38 @@
{{- if .Values.psp.create }}
apiVersion: {{ include "podsecuritypolicy.apiVersion" . }}
kind: PodSecurityPolicy
metadata:
  name: {{ template "common.names.fullname" . }}
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
  namespace: {{ .Release.Namespace }}
spec:
  privileged: false
  volumes:
    - 'configMap'
    - 'secret'
    - 'persistentVolumeClaim'
    - 'emptyDir'
    - 'projected'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
{{- end }}


@ -0,0 +1,23 @@
{{- if and .Values.metrics.enabled .Values.metrics.prometheusRule.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: {{ template "common.names.fullname" . }}
  {{- with .Values.metrics.prometheusRule.namespace }}
  namespace: {{ . }}
  {{- end }}
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
    {{- with .Values.metrics.prometheusRule.additionalLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
spec:
  {{- with .Values.metrics.prometheusRule.rules }}
  groups:
    - name: {{ template "postgresql.name" $ }}
      rules: {{ tpl (toYaml .) $ | nindent 8 }}
  {{- end }}
{{- end }}


@ -0,0 +1,20 @@
{{- if .Values.rbac.create }}
kind: Role
apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }}
metadata:
  name: {{ template "common.names.fullname" . }}
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
  namespace: {{ .Release.Namespace }}
rules:
  {{- if .Values.psp.create }}
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames:
      - {{ template "common.names.fullname" . }}
  {{- end }}
{{- end }}


@ -0,0 +1,20 @@
{{- if .Values.rbac.create }}
kind: RoleBinding
apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }}
metadata:
  name: {{ template "common.names.fullname" . }}
  labels:
    {{- include "common.labels.standard" . | nindent 4 }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
  namespace: {{ .Release.Namespace }}
roleRef:
  kind: Role
  name: {{ template "common.names.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name }}
    namespace: {{ .Release.Namespace }}
{{- end }}

Some files were not shown because too many files have changed in this diff.