diff --git a/assets/cockroach-labs/cockroachdb-14.0.5.tgz b/assets/cockroach-labs/cockroachdb-14.0.5.tgz new file mode 100644 index 000000000..52873dc23 Binary files /dev/null and b/assets/cockroach-labs/cockroachdb-14.0.5.tgz differ diff --git a/assets/jfrog/artifactory-ha-107.90.15.tgz b/assets/jfrog/artifactory-ha-107.90.15.tgz new file mode 100644 index 000000000..abc25b4c0 Binary files /dev/null and b/assets/jfrog/artifactory-ha-107.90.15.tgz differ diff --git a/assets/jfrog/artifactory-jcr-107.90.15.tgz b/assets/jfrog/artifactory-jcr-107.90.15.tgz new file mode 100644 index 000000000..18c173412 Binary files /dev/null and b/assets/jfrog/artifactory-jcr-107.90.15.tgz differ diff --git a/assets/kuma/kuma-2.9.0.tgz b/assets/kuma/kuma-2.9.0.tgz new file mode 100644 index 000000000..64f3c7670 Binary files /dev/null and b/assets/kuma/kuma-2.9.0.tgz differ diff --git a/assets/nats/nats-1.2.6.tgz b/assets/nats/nats-1.2.6.tgz new file mode 100644 index 000000000..adeebb364 Binary files /dev/null and b/assets/nats/nats-1.2.6.tgz differ diff --git a/assets/speedscale/speedscale-operator-2.2.567.tgz b/assets/speedscale/speedscale-operator-2.2.567.tgz new file mode 100644 index 000000000..4e9e66402 Binary files /dev/null and b/assets/speedscale/speedscale-operator-2.2.567.tgz differ diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/CONTRIBUTING.md b/charts/cockroach-labs/cockroachdb/14.0.5/CONTRIBUTING.md new file mode 100644 index 000000000..e248d72e1 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/CONTRIBUTING.md @@ -0,0 +1,14 @@ +# Contributing + +Contributions are welcome! + +For every change, please increment the `version` contained in +[Chart.yaml](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml). +The `version` roughly follows the [SEMVER](https://semver.org/) versioning +pattern. For changes which do not affect backwards compatibility, the PATCH or +MINOR version must be incremented, e.g. `4.1.3` -> `4.1.4`. For changes which +affect the backwards compatibility of the chart, the major version must be +incremented, e.g. `4.1.3` -> `5.0.0`. Examples of changes which affect backwards +compatibility include any major version releases of CockroachDB, as well as any +breaking changes to the CockroachDB chart templates. + diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/Chart.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/Chart.yaml new file mode 100644 index 000000000..4084fbd99 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/Chart.yaml @@ -0,0 +1,18 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: CockroachDB + catalog.cattle.io/kube-version: '>=1.8-0' + catalog.cattle.io/release-name: cockroachdb +apiVersion: v1 +appVersion: 24.2.4 +description: CockroachDB is a scalable, survivable, strongly-consistent SQL database. +home: https://www.cockroachlabs.com +icon: file://assets/icons/cockroachdb.png +kubeVersion: '>=1.8-0' +maintainers: +- email: helm-charts@cockroachlabs.com + name: cockroachlabs +name: cockroachdb +sources: +- https://github.com/cockroachdb/cockroach +version: 14.0.5 diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/README.md b/charts/cockroach-labs/cockroachdb/14.0.5/README.md new file mode 100644 index 000000000..1ef8640bb --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/README.md @@ -0,0 +1,589 @@ + +# CockroachDB Helm Chart + +[CockroachDB](https://github.com/cockroachdb/cockroach) - the open source, cloud-native distributed SQL database. 
+ +## Documentation + +Below is a brief overview of operating the CockroachDB Helm Chart and some specific implementation details. For additional information on deploying CockroachDB, please see: +> + +Note that the documentation requires Helm 3.0 or higher. + +## Prerequisites Details + +* Kubernetes 1.8 +* PV support on the underlying infrastructure (only if using `storage.persistentVolume`). [Docker for windows hostpath provisioner is not supported](https://github.com/cockroachdb/docs/issues/3184). +* If you want to secure your cluster to use TLS certificates for all network communication, [Helm must be installed with RBAC privileges](https://helm.sh/docs/topics/rbac/) or else you will get an "attempt to grant extra privileges" error. + +## StatefulSet Details + +* + +## StatefulSet Caveats + +* + +## Chart Details + +This chart will do the following: + +* Set up a dynamically scalable CockroachDB cluster using a Kubernetes StatefulSet. + +## Add the CockroachDB Repository + +```shell +helm repo add cockroachdb https://charts.cockroachdb.com/ +``` + +## Installing the Chart + +To install the chart with the release name `my-release`: + +```shell +helm install my-release cockroachdb/cockroachdb +``` + +Note that for a production cluster, you will likely want to override the following parameters in [`values.yaml`](values.yaml) with your own values. + +- `statefulset.resources.requests.memory` and `statefulset.resources.limits.memory` allocate memory resources to CockroachDB pods in your cluster. +- `conf.cache` and `conf.max-sql-memory` are memory limits that we recommend setting to 1/4 of the above resource allocation. When running CockroachDB, you must set these limits explicitly to avoid running out of memory. +- `storage.persistentVolume.size` defaults to `100Gi` of disk space per pod, which you may increase or decrease for your use case. +- `storage.persistentVolume.storageClass` uses the default storage class for your environment. We strongly recommend that you specify a storage class which uses an SSD. +- `tls.enabled` must be set to `yes`/`true` to deploy in secure mode. + +For more information on overriding the `values.yaml` parameters, please see: +> + +Confirm that all pods are `Running` successfully and init has been completed: + +```shell +kubectl get pods +``` + +``` +NAME READY STATUS RESTARTS AGE +my-release-cockroachdb-0 1/1 Running 0 1m +my-release-cockroachdb-1 1/1 Running 0 1m +my-release-cockroachdb-2 1/1 Running 0 1m +my-release-cockroachdb-init-k6jcr 0/1 Completed 0 1m +``` + +Confirm that persistent volumes are created and claimed for each pod: + +```shell +kubectl get pv +``` + +``` +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-64878ebf-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 51s +pvc-64945b4f-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 51s +pvc-649d920d-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 51s +``` + +### Running in secure mode + +In order to set up a secure cockroachdb cluster set `tls.enabled` to `yes`/`true` + +There are 3 ways to configure a secure cluster, with this chart. This all relates to how the certificates are issued: + +* Self-signer (default) +* Cert-manager +* Manual + +#### Self-signer + +This is the default behaviour, and requires no configuration beyond setting certificate durations if user wants to set custom duration. 
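+
+If you do want custom durations, the certificate lifetimes can be overridden at install time. The command below is only a sketch: it uses the `tls.certs.selfSigner.*` keys documented in the Configuration table later in this README, and the `87648h` CA duration is an arbitrary illustrative value rather than a chart default.
+
+```shell
+helm install my-release cockroachdb/cockroachdb \
+--set tls.enabled=true \
+--set tls.certs.selfSigner.caCertDuration=87648h \
+--set tls.certs.selfSigner.caCertExpiryWindow=648h \
+--set tls.certs.selfSigner.clientCertDuration=672h \
+--set tls.certs.selfSigner.nodeCertDuration=8760h
+```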
+ +If you are running in this mode, self-signed certificates are created by self-signed utility for the nodes and root client and are stored in a secret. +You can look for the certificates created: +```shell +kubectl get secrets +``` + +```shell +crdb-cockroachdb-ca-secret Opaque 2 23s +crdb-cockroachdb-client-secret kubernetes.io/tls 3 22s +crdb-cockroachdb-node-secret kubernetes.io/tls 3 23s +``` + + +#### Manual + +If you wish to supply the certificates to the nodes yourself set `tls.certs.provided` to `yes`/`true`. You may want to use this if you want to use a different certificate authority from the one being used by Kubernetes or if your Kubernetes cluster doesn't fully support certificate-signing requests. To use this, first set up your certificates and load them into your Kubernetes cluster as Secrets using the commands below: + +```shell +$ mkdir certs +$ mkdir my-safe-directory +$ cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key +$ cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key +$ kubectl create secret generic cockroachdb-root --from-file=certs +secret/cockroachdb-root created +$ cockroach cert create-node --certs-dir=certs --ca-key=my-safe-directory/ca.key localhost 127.0.0.1 my-release-cockroachdb-public my-release-cockroachdb-public.my-namespace my-release-cockroachdb-public.my-namespace.svc.cluster.local *.my-release-cockroachdb *.my-release-cockroachdb.my-namespace *.my-release-cockroachdb.my-namespace.svc.cluster.local +$ kubectl create secret generic cockroachdb-node --from-file=certs +secret/cockroachdb-node created +``` + +> Note: The subject alternative names are based on a release called `my-release` in the `my-namespace` namespace. Make sure they match the services created with the release during `helm install` + +If your certificates are stored in tls secrets such as secrets generated by cert-manager, the secret will contain files named: + +* `ca.crt` +* `tls.crt` +* `tls.key` + +Cockroachdb, however, expects the files to be named like this: + +* `ca.crt` +* `node.crt` +* `node.key` +* `client.root.crt` +* `client.root.key` + +By enabling `tls.certs.tlsSecret` the tls secrets are projected on to the correct filenames, when they are mounted to the cockroachdb pods. + +#### Cert-manager + +If you wish to supply certificates with [cert-manager][3], set + +* `tls.certs.certManager` to `yes`/`true` +* `tls.certs.certManagerIssuer` to an IssuerRef (as they appear in certificate resources) pointing to a clusterIssuer or issuer, you have set up in the cluster + +Example issuer: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cockroachdb-ca + namespace: cockroachdb +data: + tls.crt: [BASE64 Encoded ca.crt] + tls.key: [BASE64 Encoded ca.key] +type: kubernetes.io/tls +--- +apiVersion: cert-manager.io/v1alpha3 +kind: Issuer +metadata: + name: cockroachdb-cert-issuer + namespace: cockroachdb +spec: + ca: + secretName: cockroachdb-ca +``` + +## Upgrading the cluster + +### Chart version 3.0.0 and after + +Launch a temporary interactive pod and start the built-in SQL client: + +```shell +kubectl run cockroachdb --rm -it \ +--image=cockroachdb/cockroach \ +--restart=Never \ +-- sql --insecure --host=my-release-cockroachdb-public +``` + +> If you are running in secure mode, you will have to provide a client certificate to the cluster in order to authenticate, so the above command will not work. 
See [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) for an example of how to set up an interactive SQL shell against a secure cluster or [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/example-app-secure.yaml) for an example application connecting to a secure cluster. + +Set `cluster.preserve_downgrade_option`, where `$current_version` is the CockroachDB version currently running (e.g., `19.2`): + +```sql +> SET CLUSTER SETTING cluster.preserve_downgrade_option = '$current_version'; +``` + +Exit the shell and delete the temporary pod: + +```sql +> \q +``` + +Kick off the upgrade process by changing the new Docker image, where `$new_version` is the CockroachDB version to which you are upgrading: + +```shell +helm upgrade my-release cockroachdb/cockroachdb \ +--set image.tag=$new_version \ +--reuse-values +``` + +Kubernetes will carry out a safe [rolling upgrade](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) of your CockroachDB nodes one-by-one. Monitor the cluster's pods until all have been successfully restarted: + +```shell +kubectl get pods +``` + +``` +NAME READY STATUS RESTARTS AGE +my-release-cockroachdb-0 1/1 Running 0 2m +my-release-cockroachdb-1 1/1 Running 0 3m +my-release-cockroachdb-2 1/1 Running 0 3m +my-release-cockroachdb-3 0/1 ContainerCreating 0 25s +my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s +``` + +```shell +kubectl get pods \ +-o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' +``` + +``` +my-release-cockroachdb-0 cockroachdb/cockroach:v24.2.4 +my-release-cockroachdb-1 cockroachdb/cockroach:v24.2.4 +my-release-cockroachdb-2 cockroachdb/cockroach:v24.2.4 +my-release-cockroachdb-3 cockroachdb/cockroach:v24.2.4 +``` + +Resume normal operations. Once you are comfortable that the stability and performance of the cluster is what you'd expect post-upgrade, finalize the upgrade: + +```shell +kubectl run cockroachdb --rm -it \ +--image=cockroachdb/cockroach \ +--restart=Never \ +-- sql --insecure --host=my-release-cockroachdb-public +``` + +```sql +> RESET CLUSTER SETTING cluster.preserve_downgrade_option; +> \q +``` + +### Chart versions prior to 3.0.0 + +Due to a change in the label format in version 3.0.0 of this chart, upgrading requires that you delete the StatefulSet. Luckily there is a way to do it without actually deleting all the resources managed by the StatefulSet. Use the workaround below to upgrade from charts versions previous to 3.0.0: + +Get the new labels from the specs rendered by Helm: + +```shell +helm template -f deploy.vals.yml cockroachdb/cockroachdb -x templates/statefulset.yaml \ +| yq r - spec.template.metadata.labels +``` + +``` +app.kubernetes.io/name: cockroachdb +app.kubernetes.io/instance: my-release +app.kubernetes.io/component: cockroachdb +``` + +Place the new labels on all pods of the StatefulSet (change `my-release-cockroachdb-0` to the name of each pod): + +```shell +kubectl label pods my-release-cockroachdb-0 \ +app.kubernetes.io/name=cockroachdb \ +app.kubernetes.io/instance=my-release \ +app.kubernetes.io/component=cockroachdb +``` + +Delete the StatefulSet without deleting pods: + +```shell +kubectl delete statefulset my-release-cockroachdb --cascade=false +``` + +Verify that no pod is deleted and then upgrade as normal. A new StatefulSet will be created, taking over the management of the existing pods and upgrading them if needed. 
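+
+For example, once the pods carry the new labels and the old StatefulSet has been deleted with `--cascade=false`, the upgrade itself is an ordinary `helm upgrade`. This is a sketch only; it assumes the release is named `my-release` and that you are moving to chart version `3.0.0` while keeping your existing values:
+
+```shell
+helm upgrade my-release cockroachdb/cockroachdb \
+--version 3.0.0 \
+--reuse-values
+```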
+ +### See also + +For more information about upgrading a cluster to the latest major release of CockroachDB, see [Upgrade to CockroachDB](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version.html). + +Note that there are sometimes backward-incompatible changes to SQL features between major CockroachDB releases. For details, see the [Upgrade Policy](https://www.cockroachlabs.com/docs/cockroachcloud/upgrade-policy). + +## Configuration + +The following table lists the configurable parameters of the CockroachDB chart and their default values. +For details see the [`values.yaml`](values.yaml) file. + +| Parameter | Description | Default | +| --------- | ----------- | ------- | +| `clusterDomain` | Cluster's default DNS domain | `cluster.local` | +| `conf.attrs` | CockroachDB node attributes | `[]` | +| `conf.cache` | Size of CockroachDB's in-memory cache | `25%` | +| `conf.cluster-name` | Name of CockroachDB cluster | `""` | +| `conf.disable-cluster-name-verification` | Disable CockroachDB cluster name verification | `no` | +| `conf.join` | List of already-existing CockroachDB instances | `[]` | +| `conf.max-disk-temp-storage` | Max storage capacity for temp data | `0` | +| `conf.max-offset` | Max allowed clock offset for CockroachDB cluster | `500ms` | +| `conf.max-sql-memory` | Max memory to use processing SQL querie | `25%` | +| `conf.locality` | Locality attribute for this deployment | `""` | +| `conf.single-node` | Disable CockroachDB clustering (standalone mode) | `no` | +| `conf.sql-audit-dir` | Directory for SQL audit log | `""` | +| `conf.port` | WARNING this parameter is deprecated and will be removed in future version. Use `service.ports.grpc.internal.port` instead | `""` | +| `conf.http-port` | WARNING this parameter is deprecated and will be removed in future version. 
Use `service.ports.http.port` instead | `""` | +| `conf.path` | CockroachDB data directory mount path | `cockroach-data` | +| `conf.store.enabled` | Enable store configuration for CockroachDB | `false` | +| `conf.store.type` | CockroachDB storage type | `""` | +| `conf.store.size` | CockroachDB storage size | `""` | +| `conf.store.attrs` | CockroachDB storage attributes | `""` | +| `image.repository` | Container image name | `cockroachdb/cockroach` | +| `image.tag` | Container image tag | `v24.2.4` | +| `image.pullPolicy` | Container pull policy | `IfNotPresent` | +| `image.credentials` | `registry`, `user` and `pass` credentials to pull private image | `{}` | +| `statefulset.replicas` | StatefulSet replicas number | `3` | +| `statefulset.updateStrategy` | Update strategy for StatefulSet Pods | `{"type": "RollingUpdate"}` | +| `statefulset.podManagementPolicy` | `OrderedReady`/`Parallel` Pods creation/deletion order | `Parallel` | +| `statefulset.budget.maxUnavailable` | k8s PodDisruptionBudget parameter | `1` | +| `statefulset.args` | Extra command-line arguments | `[]` | +| `statefulset.env` | Extra env vars | `[]` | +| `statefulset.secretMounts` | Additional Secrets to mount at cluster members | `[]` | +| `statefulset.labels` | Additional labels of StatefulSet and its Pods | `{"app.kubernetes.io/component": "cockroachdb"}` | +| `statefulset.annotations` | Additional annotations of StatefulSet Pods | `{}` | +| `statefulset.nodeAffinity` | [Node affinity rules][2] of StatefulSet Pods | `{}` | +| `statefulset.podAffinity` | [Inter-Pod affinity rules][1] of StatefulSet Pods | `{}` | +| `statefulset.podAntiAffinity` | [Anti-affinity rules][1] of StatefulSet Pods | auto | +| `statefulset.podAntiAffinity.topologyKey` | The topologyKey for auto [anti-affinity rules][1] | `kubernetes.io/hostname` | +| `statefulset.podAntiAffinity.type` | Type of auto [anti-affinity rules][1] | `soft` | +| `statefulset.podAntiAffinity.weight` | Weight for `soft` auto [anti-affinity rules][1] | `100` | +| `statefulset.nodeSelector` | Node labels for StatefulSet Pods assignment | `{}` | +| `statefulset.priorityClassName` | [PriorityClassName][4] for StatefulSet Pods | `""` | +| `statefulset.tolerations` | Node taints to tolerate by StatefulSet Pods | `[]` | +| `statefulset.topologySpreadConstraints` | [Topology Spread Constraints rules][5] of StatefulSet Pods | auto | +| `statefulset.topologySpreadConstraints.maxSkew` | Degree to which Pods may be unevenly distributed | `1` | +| `statefulset.topologySpreadConstraints.topologyKey` | The key of node labels | `topology.kubernetes.io/zone` | +| `statefulset.topologySpreadConstraints.whenUnsatisfiable` | `ScheduleAnyway`/`DoNotSchedule` for unsatisfiable constraints | `ScheduleAnyway` | +| `statefulset.resources` | Resource requests and limits for StatefulSet Pods | `{}` | +| `statefulset.customLivenessProbe` | Custom Liveness probe | `{}` | +| `statefulset.customReadinessProbe` | Custom Rediness probe | `{}` | +| `statefulset.customStartupProbe` | Custom Startup probe | `{}` | +| `statefulset.terminationGracePeriodSeconds` | Termination grace period for CRDB statefulset pods | `300` | +| `service.ports.grpc.external.port` | CockroachDB primary serving port in Services | `26257` | +| `service.ports.grpc.external.name` | CockroachDB primary serving port name in Services | `grpc` | +| `service.ports.grpc.internal.port` | CockroachDB inter-communication port in Pods and Services | `26257` | +| `service.ports.grpc.internal.name` | CockroachDB inter-communication port name 
in Services | `grpc-internal` | +| `service.ports.http.port` | CockroachDB HTTP port in Pods and Services | `8080` | +| `service.ports.http.name` | CockroachDB HTTP port name in Services | `http` | +| `service.public.type` | Public Service type | `ClusterIP` | +| `service.public.labels` | Additional labels of public Service | `{"app.kubernetes.io/component": "cockroachdb"}` | +| `service.public.annotations` | Additional annotations of public Service | `{}` | +| `service.discovery.labels` | Additional labels of discovery Service | `{"app.kubernetes.io/component": "cockroachdb"}` | +| `service.discovery.annotations` | Additional annotations of discovery Service | `{}` | +| `ingress.enabled` | Enable ingress resource for CockroachDB | `false` | +| `ingress.labels` | Additional labels of Ingress | `{}` | +| `ingress.annotations` | Additional annotations of Ingress | `{}` | +| `ingress.paths` | Paths for the default host | `[/]` | +| `ingress.hosts` | CockroachDB Ingress hostnames | `[]` | +| `ingress.tls[0].hosts` | CockroachDB Ingress tls hostnames | `nil` | +| `ingress.tls[0].secretName` | CockroachDB Ingress tls secret name | `nil` | +| `prometheus.enabled` | Enable automatic monitoring of all instances when Prometheus is running | `true` | +| `serviceMonitor.enabled` | Create [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#servicemonitor) Resource for scraping metrics using [PrometheusOperator](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#prometheus-operator) | `false` | +| `serviceMonitor.labels` | Additional labels of ServiceMonitor | `{}` | +| `serviceMonitor.annotations` | Additional annotations of ServiceMonitor | `{}` | +| `serviceMonitor.interval` | ServiceMonitor scrape metrics interval | `10s` | +| `serviceMonitor.scrapeTimeout` | ServiceMonitor scrape timeout | `nil` | +| `serviceMonitor.tlsConfig` | Additional TLS configuration of ServiceMonitor | `{}` | +| `serviceMonitor.namespaced` | Limit ServiceMonitor to current namespace | `false` | +| `storage.hostPath` | Absolute path on host to store data | `""` | +| `storage.persistentVolume.enabled` | Whether to use PersistentVolume to store data | `yes` | +| `storage.persistentVolume.size` | PersistentVolume size | `100Gi` | +| `storage.persistentVolume.storageClass` | PersistentVolume class | `""` | +| `storage.persistentVolume.labels` | Additional labels of PersistentVolumeClaim | `{}` | +| `storage.persistentVolume.annotations` | Additional annotations of PersistentVolumeClaim | `{}` | +| `init.labels` | Additional labels of init Job and its Pod | `{"app.kubernetes.io/component": "init"}` | +| `init.jobAnnotations` | Additional annotations of the init Job itself | `{}` | +| `init.annotations` | Additional annotations of the Pod of init Job | `{}` | +| `init.affinity` | [Affinity rules][2] of init Job Pod | `{}` | +| `init.nodeSelector` | Node labels for init Job Pod assignment | `{}` | +| `init.tolerations` | Node taints to tolerate by init Job Pod | `[]` | +| `init.resources` | Resource requests and limits for the `cluster-init` container | `{}` | +| `init.terminationGracePeriodSeconds` | Termination grace period for CRDB init job | `300` | +| `tls.enabled` | Whether to run securely using TLS certificates | `no` | +| `tls.serviceAccount.create` | Whether to create a new RBAC service account | `yes` | +| `tls.serviceAccount.name` | Name of RBAC service account to use | `""` | +| 
`tls.copyCerts.image` | Image used in copy certs init container | `busybox` | +| `tls.copyCerts.resources` | Resource requests and limits for the `copy-certs` container | `{}` | +| `tls.certs.provided` | Bring your own certs scenario, i.e certificates are provided | `no` | +| `tls.certs.clientRootSecret` | If certs are provided, secret name for client root cert | `cockroachdb-root` | +| `tls.certs.nodeSecret` | If certs are provided, secret name for node cert | `cockroachdb-node` | +| `tls.certs.tlsSecret` | Own certs are stored in TLS secret | `no` | +| `tls.certs.selfSigner.enabled` | Whether cockroachdb should generate its own self-signed certs | `true` | +| `tls.certs.selfSigner.caProvided` | Bring your own CA scenario. This CA will be used to generate node and client cert | `false` | +| `tls.certs.selfSigner.caSecret` | If CA is provided, secret name for CA cert | `""` | +| `tls.certs.selfSigner.minimumCertDuration` | Minimum cert duration for all the certs, all certs duration will be validated against this duration | `624h` | +| `tls.certs.selfSigner.caCertDuration` | Duration of CA cert in hour | `43824h` | +| `tls.certs.selfSigner.caCertExpiryWindow` | Expiry window of CA cert means a window before actual expiry in which CA cert should be rotated | `648h` | +| `tls.certs.selfSigner.clientCertDuration` | Duration of client cert in hour | `672h | +| `tls.certs.selfSigner.clientCertExpiryWindow` | Expiry window of client cert means a window before actual expiry in which client cert should be rotated | `48h` | +| `tls.certs.selfSigner.nodeCertDuration` | Duration of node cert in hour | `8760h` | +| `tls.certs.selfSigner.nodeCertExpiryWindow` | Expiry window of node cert means a window before actual expiry in which node certs should be rotated | `168h` | +| `tls.certs.selfSigner.rotateCerts` | Whether to rotate the certs generate by cockroachdb | `true` | +| `tls.certs.selfSigner.readinessWait` | Wait time for each cockroachdb replica to become ready once it comes in running state. Only considered when rotateCerts is set to true | `30s` | +| `tls.certs.selfSigner.podUpdateTimeout` | Wait time for each cockroachdb replica to get to running state. Only considered when rotateCerts is set to true | `2m` | +| `tls.certs.certManager` | Provision certificates with cert-manager | `false` | +| `tls.certs.certManagerIssuer.group` | IssuerRef group to use when generating certificates | `cert-manager.io` | +| `tls.certs.certManagerIssuer.kind` | IssuerRef kind to use when generating certificates | `Issuer` | +| `tls.certs.certManagerIssuer.name` | IssuerRef name to use when generating certificates | `cockroachdb` | +| `tls.certs.certManagerIssuer.caCertDuration` | Duration of CA cert in hour | `43824h` | +| `tls.certs.certManagerIssuer.caCertExpiryWindow` | Expiry window of CA cert means a window before actual expiry in which CA cert should be rotated | `648h` | +| `tls.certs.certManagerIssuer.clientCertDuration` | Duration of client cert in hours | `672h` | +| `tls.certs.certManagerIssuer.clientCertExpiryWindow` | Expiry window of client cert means a window before actual expiry in which client cert should be rotated | `48h` | +| `tls.certs.certManagerIssuer.nodeCertDuration` | Duration of node cert in hours | `8760h` | +| `tls.certs.certManagerIssuer.nodeCertExpiryWindow` | Expiry window of node certificates means a window before actual expiry in which node certs should be rotated. 
| `168h` | +| `tls.selfSigner.image.repository` | Image to use for self signing TLS certificates | `cockroachlabs-helm-charts/cockroach-self-signer-cert`| +| `tls.selfSigner.image.tag` | Image tag to use for self signing TLS certificates | `0.1` | +| `tls.selfSigner.image.pullPolicy` | Self signing TLS certificates container pull policy | `IfNotPresent` | +| `tls.selfSigner.image.credentials` | `registry`, `user` and `pass` credentials to pull private image | `{}` | +| `networkPolicy.enabled` | Enable NetworkPolicy for CockroachDB's Pods | `no` | +| `networkPolicy.ingress.grpc` | Whitelist resources to access gRPC port of CockroachDB's Pods | `[]` | +| `networkPolicy.ingress.http` | Whitelist resources to access gRPC port of CockroachDB's Pods | `[]` | + + +Override the default parameters using the `--set key=value[,key=value]` argument to `helm install`. + +Alternatively, a YAML file that specifies custom values for the parameters can be provided while installing the chart. For example: + +```shell +helm install my-release -f my-values.yaml cockroachdb/cockroachdb +``` + +> **Tip**: You can use the default [values.yaml](values.yaml) + +## Deep dive + +### Connecting to the CockroachDB cluster + +Once you've created the cluster, you can start talking to it by connecting to its `-public` Service. CockroachDB is PostgreSQL wire protocol compatible, so there's a [wide variety of supported clients](https://www.cockroachlabs.com/docs/install-client-drivers.html). As an example, we'll open up a SQL shell using CockroachDB's built-in shell and play around with it a bit, like this (likely needing to replace `my-release-cockroachdb-public` with the name of the `-public` Service that was created with your installed chart): + +```shell +kubectl run cockroach-client --rm -it \ +--image=cockroachdb/cockroach \ +--restart=Never \ +-- sql --insecure --host my-release-cockroachdb-public +``` + +``` +Waiting for pod default/cockroach-client to be running, status is Pending, +pod ready: false +If you don't see a command prompt, try pressing enter. +root@my-release-cockroachdb-public:26257> SHOW DATABASES; ++--------------------+ +| Database | ++--------------------+ +| information_schema | +| pg_catalog | +| system | ++--------------------+ +(3 rows) +root@my-release-cockroachdb-public:26257> CREATE DATABASE bank; +CREATE DATABASE +root@my-release-cockroachdb-public:26257> CREATE TABLE bank.accounts (id INT +PRIMARY KEY, balance DECIMAL); +CREATE TABLE +root@my-release-cockroachdb-public:26257> INSERT INTO bank.accounts VALUES +(1234, 10000.50); +INSERT 1 +root@my-release-cockroachdb-public:26257> SELECT * FROM bank.accounts; ++------+---------+ +| id | balance | ++------+---------+ +| 1234 | 10000.5 | ++------+---------+ +(1 row) +root@my-release-cockroachdb-public:26257> \q +Waiting for pod default/cockroach-client to terminate, status is Running +pod "cockroach-client" deleted +``` + +> If you are running in secure mode, you will have to provide a client certificate to the cluster in order to authenticate, so the above command will not work. See [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) for an example of how to set up an interactive SQL shell against a secure cluster or [here](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/example-app-secure.yaml) for an example application connecting to a secure cluster. 
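+
+Because the cluster speaks the PostgreSQL wire protocol, you can also connect with a standard PostgreSQL client instead of the built-in shell. The following is only a sketch, assuming an insecure cluster (`tls.enabled` is `no`/`false`) and the `my-release-cockroachdb-public` Service from the examples above; the `postgres` image is used here simply because it ships with `psql`:
+
+```shell
+kubectl run psql-client --rm -it \
+--image=postgres \
+--restart=Never \
+-- psql "postgresql://root@my-release-cockroachdb-public:26257/defaultdb?sslmode=disable"
+```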
+ +### Cluster health + +Because our pod spec includes regular health checks of the CockroachDB processes, simply running `kubectl get pods` and looking at the `STATUS` column is sufficient to determine the health of each instance in the cluster. + +If you want more detailed information about the cluster, the best place to look is the Admin UI. + +### Accessing the Admin UI + +If you want to see information about how the cluster is doing, you can try pulling up the CockroachDB Admin UI by port-forwarding from your local machine to one of the pods (replacing `my-release-cockroachdb-0` with the name of one of your pods: + +```shell +kubectl port-forward my-release-cockroachdb-0 8080 +``` + +You should then be able to access the Admin UI by visiting in your web browser. + +### Failover + +If any CockroachDB member fails, it is restarted or recreated automatically by the Kubernetes infrastructure, and will re-join the cluster automatically when it comes back up. You can test this scenario by killing any of the CockroachDB pods: + +```shell +kubectl delete pod my-release-cockroachdb-1 +``` + +```shell +kubectl get pods -l "app.kubernetes.io/instance=my-release,app.kubernetes.io/component=cockroachdb" +``` + +``` +NAME READY STATUS RESTARTS AGE +my-release-cockroachdb-0 1/1 Running 0 5m +my-release-cockroachdb-2 1/1 Running 0 5m +``` + +After a while: + +```shell +kubectl get pods -l "app.kubernetes.io/instance=my-release,app.kubernetes.io/component=cockroachdb" +``` + +``` +NAME READY STATUS RESTARTS AGE +my-release-cockroachdb-0 1/1 Running 0 5m +my-release-cockroachdb-1 1/1 Running 0 20s +my-release-cockroachdb-2 1/1 Running 0 5m +``` + +You can check the state of re-joining from the new pod's logs: + +```shell +kubectl logs my-release-cockroachdb-1 +``` + +``` +[...] +I161028 19:32:09.754026 1 server/node.go:586 [n1] node connected via gossip and +verified as part of cluster {"35ecbc27-3f67-4e7d-9b8f-27c31aae17d6"} +[...] +cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 +build: beta-20161027-55-gd2d3c7f @ 2016/10/28 19:27:25 (go1.7.3) +admin: http://0.0.0.0:8080 +sql: +postgresql://root@my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257?sslmode=disable +logs: cockroach-data/logs +store[0]: path=cockroach-data +status: restarted pre-existing node +clusterID: {35ecbc27-3f67-4e7d-9b8f-27c31aae17d6} +nodeID: 2 +[...] +``` + +### NetworkPolicy + +To enable NetworkPolicy for CockroachDB, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `yes`/`true`. + +For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the `DefaultDeny` Namespace annotation. Note: this will enforce policy for _all_ pods in the Namespace: + +```shell +kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}" +``` + +For more precise policy, set `networkPolicy.ingress.grpc` and `networkPolicy.ingress.http` rules. This will only allow pods that match the provided rules to connect to CockroachDB. + +### Scaling + +Scaling should be managed via the `helm upgrade` command. After resizing your cluster on your cloud environment (e.g., GKE or EKS), run the following command to add a pod. 
This assumes you scaled from 3 to 4 nodes: + +```shell +helm upgrade \ +my-release \ +cockroachdb/cockroachdb \ +--set statefulset.replicas=4 \ +--reuse-values +``` + +Note, that if you are running in secure mode (`tls.enabled` is `yes`/`true`) and increase the size of your cluster, you will also have to approve the CSR (certificate-signing request) of each new node (using `kubectl get csr` and `kubectl certificate approve`). + +[1]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity +[2]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity +[3]: https://cert-manager.io/ +[4]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass +[5]: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/app-readme.md b/charts/cockroach-labs/cockroachdb/14.0.5/app-readme.md new file mode 100644 index 000000000..8fcc1fd6f --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/app-readme.md @@ -0,0 +1,9 @@ +# CockroachDB Chart + +CockroachDB is a Distributed SQL database that runs natively in Kubernetes. It gives you resilient, horizontal scale across multiple clouds with always-on availability and data partitioned by location. + +CockroachDB scales horizontally without reconfiguration or need for a massive architectural overhaul. Simply add a new node to the cluster and CockroachDB takes care of the underlying complexity. + + - Scale by simply adding new nodes to a CockroachDB cluster + - Automate balancing and distribution of ranges, not shards + - Optimize server utilization evenly across all nodes diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/NOTES.txt b/charts/cockroach-labs/cockroachdb/14.0.5/templates/NOTES.txt new file mode 100644 index 000000000..13b421f62 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/NOTES.txt @@ -0,0 +1,50 @@ +CockroachDB can be accessed via port {{ .Values.service.ports.grpc.external.port }} at the +following DNS name from within your cluster: + +{{ template "cockroachdb.fullname" . }}-public.{{ .Release.Namespace }}.svc.cluster.local + +Because CockroachDB supports the PostgreSQL wire protocol, you can connect to +the cluster using any available PostgreSQL client. + +{{- if not .Values.tls.enabled }} + +For example, you can open up a SQL shell to the cluster by running: + + kubectl run -it --rm cockroach-client \ + --image=cockroachdb/cockroach \ + --restart=Never \ + {{- if .Values.networkPolicy.enabled }} + --labels="{{ template "cockroachdb.fullname" . }}-client=true" \ + {{- end }} + --command -- \ + ./cockroach sql --insecure --host={{ template "cockroachdb.fullname" . }}-public.{{ .Release.Namespace }} + +From there, you can interact with the SQL shell as you would any other SQL +shell, confident that any data you write will be safe and available even if +parts of your cluster fail. +{{- else }} + +Note that because the cluster is running in secure mode, any client application +that you attempt to connect will either need to have a valid client certificate +or a valid username and password. +{{- end }} + +{{- if and (.Values.networkPolicy.enabled) (not (empty .Values.networkPolicy.ingress.grpc)) }} + +Note: Since NetworkPolicy is enabled, the only Pods allowed to connect to this +CockroachDB cluster are: + +1. Having the label: "{{ template "cockroachdb.fullname" . }}-client=true" + +2. 
Matching the following rules: {{- toYaml .Values.networkPolicy.ingress.grpc | nindent 0 }} +{{- end }} + +Finally, to open up the CockroachDB admin UI, you can port-forward from your +local machine into one of the instances in the cluster: + + kubectl port-forward -n {{ .Release.Namespace }} {{ template "cockroachdb.fullname" . }}-0 {{ index .Values.conf `http-port` | int64 }} + +Then you can access the admin UI at http{{ if .Values.tls.enabled }}s{{ end }}://localhost:{{ index .Values.conf `http-port` | int64 }}/ in your web browser. + +For more information on using CockroachDB, please see the project's docs at: +https://www.cockroachlabs.com/docs/ diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/_helpers.tpl b/charts/cockroach-labs/cockroachdb/14.0.5/templates/_helpers.tpl new file mode 100644 index 000000000..9ef769a70 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/_helpers.tpl @@ -0,0 +1,291 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "cockroachdb.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 56 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "cockroachdb.fullname" -}} +{{- if .Values.fullnameOverride -}} + {{- .Values.fullnameOverride | trunc 56 | trimSuffix "-" -}} +{{- else -}} + {{- $name := default .Chart.Name .Values.nameOverride -}} + {{- if contains $name .Release.Name -}} + {{- .Release.Name | trunc 56 | trimSuffix "-" -}} + {{- else -}} + {{- printf "%s-%s" .Release.Name $name | trunc 56 | trimSuffix "-" -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create a default fully qualified app name for cluster scope resource. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name with release namespace appended at the end. +*/}} +{{- define "cockroachdb.clusterfullname" -}} +{{- if .Values.fullnameOverride -}} + {{- printf "%s-%s" .Values.fullnameOverride .Release.Namespace | trunc 56 | trimSuffix "-" -}} +{{- else -}} + {{- $name := default .Chart.Name .Values.nameOverride -}} + {{- if contains $name .Release.Name -}} + {{- printf "%s-%s" .Release.Name .Release.Namespace | trunc 56 | trimSuffix "-" -}} + {{- else -}} + {{- printf "%s-%s-%s" .Release.Name $name .Release.Namespace | trunc 56 | trimSuffix "-" -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "cockroachdb.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 56 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create the name of the ServiceAccount to use. +*/}} +{{- define "cockroachdb.serviceAccount.name" -}} +{{- if .Values.statefulset.serviceAccount.create -}} + {{- default (include "cockroachdb.fullname" .) .Values.statefulset.serviceAccount.name -}} +{{- else -}} + {{- default "default" .Values.statefulset.serviceAccount.name -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for NetworkPolicy. 
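+(Selects extensions/v1beta1 on Kubernetes 1.4-1.7 and networking.k8s.io/v1 on newer clusters, based on the semverCompare checks below.)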
+*/}} +{{- define "cockroachdb.networkPolicy.apiVersion" -}} +{{- if semverCompare ">=1.4-0, <=1.7-0" .Capabilities.KubeVersion.Version -}} + {{- print "extensions/v1beta1" -}} +{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.Version -}} + {{- print "networking.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for StatefulSets +*/}} +{{- define "cockroachdb.statefulset.apiVersion" -}} +{{- if semverCompare "<1.12-0" .Capabilities.KubeVersion.Version -}} + {{- print "apps/v1beta1" -}} +{{- else -}} + {{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return CockroachDB store expression +*/}} +{{- define "cockroachdb.conf.store" -}} +{{- $isInMemory := eq (.Values.conf.store.type | toString) "mem" -}} +{{- $persistentSize := empty .Values.conf.store.size | ternary .Values.storage.persistentVolume.size .Values.conf.store.size -}} + +{{- $store := dict -}} +{{- $_ := set $store "type" ($isInMemory | ternary "type=mem" "") -}} +{{- $_ := set $store "path" ($isInMemory | ternary "" (print "path=" .Values.conf.path)) -}} +{{- $_ := set $store "size" (print "size=" ($isInMemory | ternary .Values.conf.store.size $persistentSize)) -}} +{{- $_ := set $store "attrs" (empty .Values.conf.store.attrs | ternary "" (print "attrs=" .Values.conf.store.attrs)) -}} + +{{ compact (values $store) | join "," }} +{{- end -}} + +{{/* +Define the default values for the certificate selfSigner inputs +*/}} +{{- define "selfcerts.fullname" -}} + {{- printf "%s-%s" (include "cockroachdb.fullname" .) "self-signer" | trunc 56 | trimSuffix "-" -}} +{{- end -}} + +{{- define "rotatecerts.fullname" -}} + {{- printf "%s-%s" (include "cockroachdb.fullname" .) "rotate-self-signer" | trunc 56 | trimSuffix "-" -}} +{{- end -}} + +{{- define "selfcerts.minimumCertDuration" -}} + {{- if .Values.tls.certs.selfSigner.minimumCertDuration -}} + {{- print (.Values.tls.certs.selfSigner.minimumCertDuration | trimSuffix "h") -}} + {{- else }} + {{- $minCertDuration := min (sub (.Values.tls.certs.selfSigner.clientCertDuration | trimSuffix "h" ) (.Values.tls.certs.selfSigner.clientCertExpiryWindow | trimSuffix "h")) (sub (.Values.tls.certs.selfSigner.nodeCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.nodeCertExpiryWindow | trimSuffix "h")) -}} + {{- print $minCertDuration -}} + {{- end }} +{{- end -}} + +{{/* +Define the cron schedules for certificate rotate jobs and converting from hours to valid cron string. +We assume that each month has 31 days, hence the cron job may run few days earlier in a year. In a cron schedule, +we can not set a cron of more than a year, hence we try to run the cron in such a way that the cron run comes to +as close possible to the expiry window. However, it is possible that cron may run earlier than the expiry window. 
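+For example (an illustrative calculation, not a chart value): with the default caCertDuration of 43824h and caCertExpiryWindow of 648h, the remaining window is 43176h, which the logic below converts to the schedule "0 0 2 */11 *", i.e. midnight on day 2 of every 11th month.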
+*/}} +{{- define "selfcerts.caRotateSchedule" -}} +{{- $tempHours := sub (.Values.tls.certs.selfSigner.caCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.caCertExpiryWindow | trimSuffix "h") -}} +{{- $days := "*" -}} +{{- $months := "*" -}} +{{- $hours := mod $tempHours 24 -}} +{{- if not (eq $hours $tempHours) -}} +{{- $tempDays := div $tempHours 24 -}} +{{- $days = mod $tempDays 31 -}} +{{- if not (eq $days $tempDays) -}} +{{- $days = add $days 1 -}} +{{- $tempMonths := div $tempDays 31 -}} +{{- $months = mod $tempMonths 12 -}} +{{- if not (eq $months $tempMonths) -}} +{{- $months = add $months 1 -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- if ne (toString $months) "*" -}} +{{- $months = printf "*/%s" (toString $months) -}} +{{- else -}} +{{- if ne (toString $days) "*" -}} +{{- $days = printf "*/%s" (toString $days) -}} +{{- else -}} +{{- if ne $hours 0 -}} +{{- $hours = printf "*/%s" (toString $hours) -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- printf "0 %s %s %s *" (toString $hours) (toString $days) (toString $months) -}} +{{- end -}} + +{{- define "selfcerts.clientRotateSchedule" -}} +{{- $tempHours := int64 (include "selfcerts.minimumCertDuration" .) -}} +{{- $days := "*" -}} +{{- $months := "*" -}} +{{- $hours := mod $tempHours 24 -}} +{{- if not (eq $hours $tempHours) -}} +{{- $tempDays := div $tempHours 24 -}} +{{- $days = mod $tempDays 31 -}} +{{- if not (eq $days $tempDays) -}} +{{- $days = add $days 1 -}} +{{- $tempMonths := div $tempDays 31 -}} +{{- $months = mod $tempMonths 12 -}} +{{- if not (eq $months $tempMonths) -}} +{{- $months = add $months 1 -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- if ne (toString $months) "*" -}} +{{- $months = printf "*/%s" (toString $months) -}} +{{- else -}} +{{- if ne (toString $days) "*" -}} +{{- $days = printf "*/%s" (toString $days) -}} +{{- else -}} +{{- if ne $hours 0 -}} +{{- $hours = printf "*/%s" (toString $hours) -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- printf "0 %s %s %s *" (toString $hours) (toString $days) (toString $months) -}} +{{- end -}} + +{{/* +Define the appropriate validations for the certificate selfSigner inputs +*/}} + +{{/* +Validate that if caProvided is true, then the caSecret must not be empty and secret must be present in the namespace. +*/}} +{{- define "cockroachdb.tls.certs.selfSigner.caProvidedValidation" -}} +{{- if .Values.tls.certs.selfSigner.caProvided -}} +{{- if eq "" .Values.tls.certs.selfSigner.caSecret -}} + {{ fail "CA secret can't be empty if caProvided is set to true" }} +{{- else -}} + {{- if not (lookup "v1" "Secret" .Release.Namespace .Values.tls.certs.selfSigner.caSecret) }} + {{ fail "CA secret is not present in the release namespace" }} + {{- end }} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Validate that if caCertDuration or caCertExpiryWindow must not be empty and caCertExpiryWindow must be greater than +minimumCertDuration. 
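+For example, with the defaults documented in the chart README (minimumCertDuration 624h, caCertDuration 43824h, caCertExpiryWindow 648h) both checks pass: 648h >= 624h, and 43824h - 648h = 43176h >= 624h. These numbers are illustrative only.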
+*/}} +{{- define "cockroachdb.tls.certs.selfSigner.caCertValidation" -}} +{{- if not .Values.tls.certs.selfSigner.caProvided -}} +{{- if or (not .Values.tls.certs.selfSigner.caCertDuration) (not .Values.tls.certs.selfSigner.caCertExpiryWindow) }} + {{ fail "CA cert duration or CA cert expiry window can not be empty" }} +{{- else }} +{{- if gt (int64 (include "selfcerts.minimumCertDuration" .)) (int64 (.Values.tls.certs.selfSigner.caCertExpiryWindow | trimSuffix "h")) -}} + {{ fail "CA cert expiration window should not be less than minimum Cert duration" }} +{{- end -}} +{{- if gt (int64 (include "selfcerts.minimumCertDuration" .)) (sub (.Values.tls.certs.selfSigner.caCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.caCertExpiryWindow | trimSuffix "h")) -}} + {{ fail "CA cert Duration minus CA cert expiration window should not be less than minimum Cert duration" }} +{{- end -}} +{{- end -}} +{{- end }} +{{- end -}} + +{{/* +Validate that if clientCertDuration must not be empty and it must be greater than minimumCertDuration. +*/}} +{{- define "cockroachdb.tls.certs.selfSigner.clientCertValidation" -}} +{{- if or (not .Values.tls.certs.selfSigner.clientCertDuration) (not .Values.tls.certs.selfSigner.clientCertExpiryWindow) }} + {{ fail "Client cert duration can not be empty" }} +{{- else }} +{{- if lt (sub (.Values.tls.certs.selfSigner.clientCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.clientCertExpiryWindow | trimSuffix "h")) (int64 (include "selfcerts.minimumCertDuration" .)) }} + {{ fail "Client cert duration minus client cert expiry window should not be less than minimum Cert duration" }} +{{- end }} +{{- end }} +{{- end -}} + +{{/* +Validate that nodeCertDuration must not be empty and nodeCertDuration minus nodeCertExpiryWindow must be greater than minimumCertDuration. +*/}} +{{- define "cockroachdb.tls.certs.selfSigner.nodeCertValidation" -}} +{{- if or (not .Values.tls.certs.selfSigner.nodeCertDuration) (not .Values.tls.certs.selfSigner.nodeCertExpiryWindow) }} + {{ fail "Node cert duration can not be empty" }} +{{- else }} +{{- if lt (sub (.Values.tls.certs.selfSigner.nodeCertDuration | trimSuffix "h") (.Values.tls.certs.selfSigner.nodeCertExpiryWindow | trimSuffix "h")) (int64 (include "selfcerts.minimumCertDuration" .))}} + {{ fail "Node cert duration minus node cert expiry window should not be less than minimum Cert duration" }} +{{- end }} +{{- end }} +{{- end -}} + +{{/* +Validate that if user enabled tls, then either self-signed certificates or certificate manager is enabled +*/}} +{{- define "cockroachdb.tlsValidation" -}} +{{- if .Values.tls.enabled -}} +{{- if and .Values.tls.certs.selfSigner.enabled .Values.tls.certs.certManager -}} + {{ fail "Can not enable the self signed certificates and certificate manager at the same time" }} +{{- end -}} +{{- if and (not .Values.tls.certs.selfSigner.enabled) (not .Values.tls.certs.certManager) -}} + {{- if not .Values.tls.certs.provided -}} + {{ fail "You have to enable either self signed certificates or certificate manager, if you have enabled tls" }} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + +{{- define "cockroachdb.tls.certs.selfSigner.validation" -}} +{{ include "cockroachdb.tls.certs.selfSigner.caProvidedValidation" . }} +{{ include "cockroachdb.tls.certs.selfSigner.caCertValidation" . }} +{{ include "cockroachdb.tls.certs.selfSigner.clientCertValidation" . }} +{{ include "cockroachdb.tls.certs.selfSigner.nodeCertValidation" . 
}} +{{- end -}} + +{{- define "cockroachdb.securityContext.versionValidation" }} +{{- /* Allow using `securityContext` for custom images. */}} +{{- if ne "cockroachdb/cockroach" .Values.image.repository -}} + {{ print true }} +{{- else -}} +{{- if semverCompare ">=22.1.2" .Values.image.tag -}} + {{ print true }} +{{- else -}} +{{- if semverCompare ">=21.2.13, <22.1.0" .Values.image.tag -}} + {{ print true }} +{{- else -}} + {{ print false }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/backendconfig.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/backendconfig.yaml new file mode 100644 index 000000000..2edc88619 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/backendconfig.yaml @@ -0,0 +1,21 @@ +{{- if .Values.iap.enabled }} +apiVersion: cloud.google.com/v1beta1 +kind: BackendConfig +metadata: + name: {{ template "cockroachdb.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + iap: + enabled: true + oauthclientCredentials: + secretName: {{ template "cockroachdb.fullname" . }}.iap + timeoutSec: 120 +{{- end }} \ No newline at end of file diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.ca.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.ca.yaml new file mode 100644 index 000000000..4043fafb0 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.ca.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.certManager }} + {{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }} +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: {{ template "cockroachdb.fullname" . }}-ca-cert + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + duration: {{ .Values.tls.certs.certManagerIssuer.caCertDuration }} + renewBefore: {{ .Values.tls.certs.certManagerIssuer.caCertExpiryWindow }} + isCA: true + secretName: {{ .Values.tls.certs.caSecret }} + privateKey: + algorithm: ECDSA + size: 256 + commonName: root + subject: + organizations: + - Cockroach + issuerRef: + name: {{ .Values.tls.certs.certManagerIssuer.name }} + kind: {{ .Values.tls.certs.certManagerIssuer.kind }} + group: {{ .Values.tls.certs.certManagerIssuer.group }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.client.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.client.yaml new file mode 100644 index 000000000..dd0272f3e --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.client.yaml @@ -0,0 +1,40 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.certManager }} +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: {{ template "cockroachdb.fullname" . 
}}-root-client + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + duration: {{ .Values.tls.certs.certManagerIssuer.clientCertDuration }} + renewBefore: {{ .Values.tls.certs.certManagerIssuer.clientCertExpiryWindow }} + usages: + - digital signature + - key encipherment + - client auth + privateKey: + algorithm: RSA + size: 2048 + commonName: root + subject: + organizations: + - Cockroach + secretName: {{ .Values.tls.certs.clientRootSecret }} + issuerRef: + {{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }} + name: {{ template "cockroachdb.fullname" . }}-ca-issuer + kind: Issuer + group: cert-manager.io + {{- else }} + name: {{ .Values.tls.certs.certManagerIssuer.name }} + kind: {{ .Values.tls.certs.certManagerIssuer.kind }} + group: {{ .Values.tls.certs.certManagerIssuer.group }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.issuer.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.issuer.yaml new file mode 100644 index 000000000..5cf579ff9 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.issuer.yaml @@ -0,0 +1,20 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.certManager }} + {{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }} +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: {{ template "cockroachdb.fullname" . }}-ca-issuer + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + ca: + secretName: {{ .Values.tls.certs.caSecret }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.node.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.node.yaml new file mode 100644 index 000000000..05e909d0b --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/certificate.node.yaml @@ -0,0 +1,50 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.certManager }} +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: {{ template "cockroachdb.fullname" . }}-node + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + duration: {{ .Values.tls.certs.certManagerIssuer.nodeCertDuration }} + renewBefore: {{ .Values.tls.certs.certManagerIssuer.nodeCertExpiryWindow }} + usages: + - digital signature + - key encipherment + - server auth + - client auth + privateKey: + algorithm: RSA + size: 2048 + commonName: node + subject: + organizations: + - Cockroach + dnsNames: + - "localhost" + - "127.0.0.1" + - {{ printf "%s-public" (include "cockroachdb.fullname" .) 
| quote }} + - {{ printf "%s-public.%s" (include "cockroachdb.fullname" .) .Release.Namespace | quote }} + - {{ printf "%s-public.%s.svc.%s" (include "cockroachdb.fullname" .) .Release.Namespace .Values.clusterDomain | quote }} + - {{ printf "*.%s" (include "cockroachdb.fullname" .) | quote }} + - {{ printf "*.%s.%s" (include "cockroachdb.fullname" .) .Release.Namespace | quote }} + - {{ printf "*.%s.%s.svc.%s" (include "cockroachdb.fullname" .) .Release.Namespace .Values.clusterDomain | quote }} + secretName: {{ .Values.tls.certs.nodeSecret }} + issuerRef: + {{- if .Values.tls.certs.certManagerIssuer.isSelfSignedIssuer }} + name: {{ template "cockroachdb.fullname" . }}-ca-issuer + kind: Issuer + group: cert-manager.io + {{- else }} + name: {{ .Values.tls.certs.certManagerIssuer.name }} + kind: {{ .Values.tls.certs.certManagerIssuer.kind }} + group: {{ .Values.tls.certs.certManagerIssuer.group }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/clusterrole.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/clusterrole.yaml new file mode 100644 index 000000000..6b8a3dc5f --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/clusterrole.yaml @@ -0,0 +1,19 @@ +{{- if and .Values.tls.enabled (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }} +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "cockroachdb.clusterfullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +rules: + - apiGroups: ["certificates.k8s.io"] + resources: ["certificatesigningrequests"] + verbs: ["create", "get", "watch"] +{{- end }} \ No newline at end of file diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/clusterrolebinding.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/clusterrolebinding.yaml new file mode 100644 index 000000000..3c18694ef --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/clusterrolebinding.yaml @@ -0,0 +1,23 @@ +{{- if and .Values.tls.enabled (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }} +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "cockroachdb.clusterfullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ template "cockroachdb.clusterfullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "cockroachdb.serviceAccount.name" . 
}} + namespace: {{ .Release.Namespace | quote }} +{{- end }} \ No newline at end of file diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/cronjob-ca-certSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/cronjob-ca-certSelfSigner.yaml new file mode 100644 index 000000000..4cd53900c --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/cronjob-ca-certSelfSigner.yaml @@ -0,0 +1,62 @@ +{{- if and .Values.tls.enabled (and .Values.tls.certs.selfSigner.enabled (not .Values.tls.certs.selfSigner.caProvided)) }} + {{- if .Values.tls.certs.selfSigner.rotateCerts }} + {{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }} +apiVersion: batch/v1 + {{- else }} +apiVersion: batch/v1beta1 + {{- end }} +kind: CronJob +metadata: + name: {{ template "rotatecerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} +spec: + schedule: {{ template "selfcerts.caRotateSchedule" . }} + jobTemplate: + spec: + backoffLimit: 1 + template: + metadata: + {{- with .Values.tls.selfSigner.labels }} + labels: {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.tls.selfSigner.annotations }} + annotations: {{- toYaml . | nindent 12 }} + {{- end }} + spec: + restartPolicy: Never + {{- with .Values.tls.selfSigner.affinity }} + affinity: {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.tls.selfSigner.nodeSelector }} + nodeSelector: {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.tls.selfSigner.tolerations }} + tolerations: {{- toYaml . | nindent 12 }} + {{- end }} + containers: + - name: cert-rotate-job + image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}" + imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}" + args: + - rotate + - --ca + - --ca-duration={{ .Values.tls.certs.selfSigner.caCertDuration }} + - --ca-expiry={{ .Values.tls.certs.selfSigner.caCertExpiryWindow }} + - --ca-cron={{ template "selfcerts.caRotateSchedule" . }} + - --readiness-wait={{ .Values.tls.certs.selfSigner.readinessWait }} + - --pod-update-timeout={{ .Values.tls.certs.selfSigner.podUpdateTimeout }} + env: + - name: STATEFULSET_NAME + value: {{ template "cockroachdb.fullname" . }} + - name: NAMESPACE + value: {{ .Release.Namespace }} + - name: CLUSTER_DOMAIN + value: {{ .Values.clusterDomain}} + serviceAccountName: {{ template "rotatecerts.fullname" . }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/cronjob-client-node-certSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/cronjob-client-node-certSelfSigner.yaml new file mode 100644 index 000000000..d500cbeb6 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/cronjob-client-node-certSelfSigner.yaml @@ -0,0 +1,69 @@ +{{- if and .Values.tls.certs.selfSigner.enabled .Values.tls.certs.selfSigner.rotateCerts }} + {{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }} +apiVersion: batch/v1 + {{- else }} +apiVersion: batch/v1beta1 + {{- end }} +kind: CronJob +metadata: + name: {{ template "rotatecerts.fullname" . }}-client + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . 
}} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} +spec: + schedule: {{ template "selfcerts.clientRotateSchedule" . }} + jobTemplate: + spec: + backoffLimit: 1 + template: + metadata: + {{- with .Values.tls.selfSigner.labels }} + labels: {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.tls.selfSigner.annotations }} + annotations: {{- toYaml . | nindent 12 }} + {{- end }} + spec: + restartPolicy: Never + {{- with .Values.tls.selfSigner.affinity }} + affinity: {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.tls.selfSigner.nodeSelector }} + nodeSelector: {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.tls.selfSigner.tolerations }} + tolerations: {{- toYaml . | nindent 12 }} + {{- end }} + containers: + - name: cert-rotate-job + image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}" + imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}" + args: + - rotate + {{- if .Values.tls.certs.selfSigner.caProvided }} + - --ca-secret={{ .Values.tls.certs.selfSigner.caSecret }} + {{- else }} + - --ca-duration={{ .Values.tls.certs.selfSigner.caCertDuration }} + - --ca-expiry={{ .Values.tls.certs.selfSigner.caCertExpiryWindow }} + {{- end }} + - --client + - --client-duration={{ .Values.tls.certs.selfSigner.clientCertDuration }} + - --client-expiry={{ .Values.tls.certs.selfSigner.clientCertExpiryWindow }} + - --node + - --node-duration={{ .Values.tls.certs.selfSigner.nodeCertDuration }} + - --node-expiry={{ .Values.tls.certs.selfSigner.nodeCertExpiryWindow }} + - --node-client-cron={{ template "selfcerts.clientRotateSchedule" . }} + - --readiness-wait={{ .Values.tls.certs.selfSigner.readinessWait }} + - --pod-update-timeout={{ .Values.tls.certs.selfSigner.podUpdateTimeout }} + env: + - name: STATEFULSET_NAME + value: {{ template "cockroachdb.fullname" . }} + - name: NAMESPACE + value: {{ .Release.Namespace }} + - name: CLUSTER_DOMAIN + value: {{ .Values.clusterDomain}} + serviceAccountName: {{ template "rotatecerts.fullname" . }} + {{- end}} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/ingress.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/ingress.yaml new file mode 100644 index 000000000..2fa6373c8 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/ingress.yaml @@ -0,0 +1,90 @@ +{{- if .Values.ingress.enabled -}} +{{- $paths := .Values.ingress.paths -}} +{{- $ports := .Values.service.ports -}} +{{- $fullName := include "cockroachdb.fullname" . -}} +{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }} +apiVersion: networking.k8s.io/v1 +{{- else if $.Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }} +apiVersion: networking.k8s.io/v1beta1 +{{- else -}} +apiVersion: extensions/v1beta1 +{{- end }} +kind: Ingress +metadata: +{{- if or .Values.ingress.annotations .Values.iap.enabled }} + annotations: + {{- range $key, $value := .Values.ingress.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + {{- if .Values.iap.enabled }} + kubernetes.io/ingress.class: "gce" + kubernetes.io/ingress.allow-http: "false" + {{- end }} +{{- end }} + name: {{ $fullName }}-ingress + namespace: {{ .Release.Namespace }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . 
}} + app.kubernetes.io/instance: {{ $.Release.Name | quote }} + app.kubernetes.io/managed-by: {{ $.Release.Service | quote }} +{{- if .Values.ingress.labels }} +{{- toYaml .Values.ingress.labels | nindent 4 }} +{{- end }} +spec: + rules: + {{- if .Values.ingress.hosts }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host }} + http: + paths: + {{- range $path := $paths }} + - path: {{ $path | quote }} + {{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }} + {{- if $.Values.iap.enabled }} + pathType: ImplementationSpecific + {{- else }} + pathType: Prefix + {{- end }} + {{- end }} + backend: + {{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }} + service: + name: {{ $fullName }}-public + port: + name: {{ $ports.http.name | quote }} + {{- else }} + serviceName: {{ $fullName }}-public + servicePort: {{ $ports.http.name | quote }} + {{- end }} + {{- end }} + {{- end }} + {{- else }} + - http: + paths: + {{- range $path := $paths }} + - path: {{ $path | quote }} + {{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }} + {{- if $.Values.iap.enabled }} + pathType: ImplementationSpecific + {{- else }} + pathType: Prefix + {{- end }} + {{- end }} + backend: + {{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }} + service: + name: {{ $fullName }}-public + port: + name: {{ $ports.http.name | quote }} + {{- else }} + serviceName: {{ $fullName }}-public + servicePort: {{ $ports.http.name | quote }} + {{- end }} + {{- end }} + {{- end }} + {{- if .Values.ingress.tls }} + tls: +{{- toYaml .Values.ingress.tls | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/job-certSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/job-certSelfSigner.yaml new file mode 100644 index 000000000..54ed2cad3 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/job-certSelfSigner.yaml @@ -0,0 +1,83 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "selfcerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + annotations: + # This is what defines this resource as a hook. Without this line, the + # job is considered part of the release. + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "4" + "helm.sh/hook-delete-policy": hook-succeeded,hook-failed + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} +spec: + template: + metadata: + name: {{ template "selfcerts.fullname" . }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.tls.selfSigner.labels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tls.selfSigner.annotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if and .Values.tls.certs.selfSigner.securityContext.enabled }} + securityContext: + seccompProfile: + type: "RuntimeDefault" + runAsGroup: 1000 + runAsUser: 1000 + fsGroup: 1000 + runAsNonRoot: true + {{- end }} + restartPolicy: Never + {{- with .Values.tls.selfSigner.affinity }} + affinity: {{- toYaml . 
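The Ingress above selects `networking.k8s.io/v1` when the cluster supports it, falls back to the beta/extensions groups otherwise, and switches `pathType` to `ImplementationSpecific` when IAP is enabled (Prefix otherwise). A `values.yaml` sketch for exposing the admin UI; host, path, and secret names are examples:

```yaml
ingress:
  enabled: true
  hosts:
    - cockroachdb.example.com            # example host
  paths:
    - /
  annotations:
    kubernetes.io/ingress.class: nginx   # example; iap.enabled forces "gce" instead
  tls:
    - hosts:
        - cockroachdb.example.com
      secretName: cockroachdb-ui-tls     # example TLS secret

# Optional Google Cloud IAP; both fields are required when enabled.
iap:
  enabled: false
  clientId: ""
  clientSecret: ""
```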
| nindent 8 }} + {{- end }} + {{- with .Values.tls.selfSigner.nodeSelector }} + nodeSelector: {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tls.selfSigner.tolerations }} + tolerations: {{- toYaml . | nindent 8 }} + {{- end }} + containers: + - name: cert-generate-job + image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}" + imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}" + args: + - generate + {{- if .Values.tls.certs.selfSigner.caProvided }} + - --ca-secret={{ .Values.tls.certs.selfSigner.caSecret }} + {{- else }} + - --ca-duration={{ .Values.tls.certs.selfSigner.caCertDuration }} + - --ca-expiry={{ .Values.tls.certs.selfSigner.caCertExpiryWindow }} + {{- end }} + - --client-duration={{ .Values.tls.certs.selfSigner.clientCertDuration }} + - --client-expiry={{ .Values.tls.certs.selfSigner.clientCertExpiryWindow }} + - --node-duration={{ .Values.tls.certs.selfSigner.nodeCertDuration }} + - --node-expiry={{ .Values.tls.certs.selfSigner.nodeCertExpiryWindow }} + env: + - name: STATEFULSET_NAME + value: {{ template "cockroachdb.fullname" . }} + - name: NAMESPACE + value: {{ .Release.Namespace | quote }} + - name: CLUSTER_DOMAIN + value: {{ .Values.clusterDomain}} + {{- if and .Values.tls.certs.selfSigner.securityContext.enabled }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: ["ALL"] + {{- end }} + serviceAccountName: {{ template "selfcerts.fullname" . }} +{{- end}} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/job-cleaner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/job-cleaner.yaml new file mode 100644 index 000000000..1503ac459 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/job-cleaner.yaml @@ -0,0 +1,70 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "selfcerts.fullname" . }}-cleaner + namespace: {{ .Release.Namespace | quote }} + annotations: + # This is what defines this resource as a hook. Without this line, the + # job is considered part of the release. + "helm.sh/hook": pre-delete + "helm.sh/hook-delete-policy": hook-succeeded,hook-failed + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} +spec: + backoffLimit: 1 + template: + metadata: + name: {{ template "selfcerts.fullname" . }}-cleaner + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.tls.selfSigner.labels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tls.selfSigner.annotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if and .Values.tls.certs.selfSigner.securityContext.enabled }} + securityContext: + seccompProfile: + type: "RuntimeDefault" + runAsGroup: 1000 + runAsUser: 1000 + fsGroup: 1000 + runAsNonRoot: true + {{- end }} + restartPolicy: Never + {{- with .Values.tls.selfSigner.affinity }} + affinity: {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tls.selfSigner.nodeSelector }} + nodeSelector: {{- toYaml . 
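The pre-install Job above runs the self-signer's `generate` command to create the CA (unless one is provided), node, and client certificates; the CronJobs earlier in this diff rotate them when `rotateCerts` is on. A sketch of the relevant values; all durations and windows here are illustrative, not defaults:

```yaml
tls:
  enabled: true
  certs:
    selfSigner:
      enabled: true
      caProvided: false            # set true and fill caSecret to reuse an existing CA
      caSecret: ""
      caCertDuration: 43800h       # example (~5 years)
      caCertExpiryWindow: 720h
      clientCertDuration: 8760h    # example (1 year)
      clientCertExpiryWindow: 240h
      nodeCertDuration: 8760h
      nodeCertExpiryWindow: 240h
      rotateCerts: true            # enables the rotation CronJobs
      readinessWait: 30s
      podUpdateTimeout: 2m
```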
| nindent 8 }} + {{- end }} + {{- with .Values.tls.selfSigner.tolerations }} + tolerations: {{- toYaml . | nindent 8 }} + {{- end }} + containers: + - name: cleaner + image: "{{ .Values.tls.selfSigner.image.registry }}/{{ .Values.tls.selfSigner.image.repository }}:{{ .Values.tls.selfSigner.image.tag }}" + imagePullPolicy: "{{ .Values.tls.selfSigner.image.pullPolicy }}" + args: + - cleanup + - --namespace={{ .Release.Namespace }} + env: + - name: STATEFULSET_NAME + value: {{ template "cockroachdb.fullname" . }} + {{- if and .Values.tls.certs.selfSigner.securityContext.enabled }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: ["ALL"] + {{- end }} + serviceAccountName: {{ template "rotatecerts.fullname" . }} +{{- end}} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/job.init.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/job.init.yaml new file mode 100644 index 000000000..dbc1eaa17 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/job.init.yaml @@ -0,0 +1,303 @@ +{{ $isClusterInitEnabled := and (eq (len .Values.conf.join) 0) (not (index .Values.conf `single-node`)) }} +{{ $isDatabaseProvisioningEnabled := .Values.init.provisioning.enabled }} +{{- if or $isClusterInitEnabled $isDatabaseProvisioningEnabled }} + {{ template "cockroachdb.tlsValidation" . }} +kind: Job +apiVersion: batch/v1 +metadata: + name: {{ template "cockroachdb.fullname" . }}-init + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.init.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + helm.sh/hook: post-install,post-upgrade + helm.sh/hook-delete-policy: before-hook-creation + {{- with .Values.init.jobAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + template: + metadata: + labels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.init.labels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.init.annotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if eq (include "cockroachdb.securityContext.versionValidation" .) "true" }} + {{- if and .Values.init.securityContext.enabled }} + securityContext: + seccompProfile: + type: "RuntimeDefault" + runAsGroup: 1000 + runAsUser: 1000 + fsGroup: 1000 + runAsNonRoot: true + {{- end }} + {{- end }} + restartPolicy: OnFailure + terminationGracePeriodSeconds: {{ .Values.init.terminationGracePeriodSeconds }} + {{- if or .Values.image.credentials (and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager)) }} + imagePullSecrets: + {{- if .Values.image.credentials }} + - name: {{ template "cockroachdb.fullname" . }}.db.registry + {{- end }} + {{- if and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }} + - name: {{ template "cockroachdb.fullname" . }}.self-signed-certs.registry + {{- end }} + {{- end }} + serviceAccountName: {{ template "cockroachdb.serviceAccount.name" . 
}} + {{- if .Values.tls.enabled }} + initContainers: + - name: copy-certs + image: {{ .Values.tls.copyCerts.image | quote }} + imagePullPolicy: {{ .Values.tls.selfSigner.image.pullPolicy | quote }} + command: + - /bin/sh + - -c + - "cp -f /certs/* /cockroach-certs/; chmod 0400 /cockroach-certs/*.key" + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + {{- if and .Values.init.securityContext.enabled }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: ["ALL"] + {{- end }} + volumeMounts: + - name: client-certs + mountPath: /cockroach-certs/ + - name: certs-secret + mountPath: /certs/ + {{- with .Values.tls.copyCerts.resources }} + resources: {{- toYaml . | nindent 12 }} + {{- end }} + {{- end }} + {{- with .Values.init.affinity }} + affinity: {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.init.nodeSelector }} + nodeSelector: {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.init.tolerations }} + tolerations: {{- toYaml . | nindent 8 }} + {{- end }} + containers: + - name: cluster-init + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + # Run the command in an `while true` loop because this Job is bound + # to come up before the CockroachDB Pods (due to the time needed to + # get PersistentVolumes attached to Nodes), and sleeping 5 seconds + # between attempts is much better than letting the Pod fail when + # the init command does and waiting out Kubernetes' non-configurable + # exponential back-off for Pod restarts. + # Command completes either when cluster initialization succeeds, + # or when cluster has been initialized already. + command: + - /bin/bash + - -c + - >- + {{- if $isClusterInitEnabled }} + initCluster() { + while true; do + local output=$( + set -x; + + /cockroach/cockroach init \ + {{- if .Values.tls.enabled }} + --certs-dir=/cockroach-certs/ \ + {{- else }} + --insecure \ + {{- end }} + {{- with index .Values.conf "cluster-name" }} + --cluster-name={{.}} \ + {{- end }} + --host={{ template "cockroachdb.fullname" . }}-0.{{ template "cockroachdb.fullname" . -}} + :{{ .Values.service.ports.grpc.internal.port | int64 }} \ + {{- if .Values.init.pcr.enabled -}} + {{- if .Values.init.pcr.isPrimary }} + --virtualized \ + {{- else }} + --virtualized-empty \ + {{- end }} + {{- end }} + 2>&1); + + local exitCode="$?"; + echo $output; + + if [[ "$output" =~ .*"Cluster successfully initialized".* || "$output" =~ .*"cluster has already been initialized".* ]]; then + break; + fi + + echo "Cluster is not ready to be initialized, retrying in 5 seconds" + sleep 5; + done + } + + initCluster; + {{- end }} + + {{- if $isDatabaseProvisioningEnabled }} + provisionCluster() { + while true; do + /cockroach/cockroach sql \ + {{- if .Values.tls.enabled }} + --certs-dir=/cockroach-certs/ \ + {{- else }} + --insecure \ + {{- end }} + --host={{ template "cockroachdb.fullname" . }}-0.{{ template "cockroachdb.fullname" . -}} + :{{ .Values.service.ports.grpc.internal.port | int64 }} \ + --execute=" + {{- range $clusterSetting, $clusterSettingValue := .Values.init.provisioning.clusterSettings }} + SET CLUSTER SETTING {{ $clusterSetting }} = '${{ $clusterSetting | replace "." 
"_" }}_CLUSTER_SETTING'; + {{- end }} + + {{- range $user := .Values.init.provisioning.users }} + CREATE USER IF NOT EXISTS {{ $user.name }} WITH + {{- if $user.password }} + PASSWORD '${{ $user.name }}_PASSWORD' + {{- else }} + PASSWORD null + {{- end }} + {{ join " " $user.options }} + ; + {{- end }} + + {{- range $database := .Values.init.provisioning.databases }} + CREATE DATABASE IF NOT EXISTS {{ $database.name }} + {{- if $database.options }} + {{ join " " $database.options }} + {{- end }} + ; + + {{- range $owner := $database.owners }} + GRANT ALL ON DATABASE {{ $database.name }} TO {{ $owner }}; + {{- end }} + + {{- range $owner := $database.owners_with_grant_option }} + GRANT ALL ON DATABASE {{ $database.name }} TO {{ $owner }} WITH GRANT OPTION; + {{- end }} + + {{- if $database.backup }} + CREATE SCHEDULE IF NOT EXISTS {{ $database.name }}_scheduled_backup + FOR BACKUP DATABASE {{ $database.name }} INTO '{{ $database.backup.into }}' + + {{- if $database.backup.options }} + WITH {{ join "," $database.backup.options }} + {{- end }} + RECURRING '{{ $database.backup.recurring }}' + {{- if $database.backup.fullBackup }} + FULL BACKUP '{{ $database.backup.fullBackup }}' + {{- else }} + FULL BACKUP ALWAYS + {{- end }} + + {{- if and $database.backup.schedule $database.backup.schedule.options }} + WITH SCHEDULE OPTIONS {{ join "," $database.backup.schedule.options }} + {{- end }} + ; + {{- end }} + {{- end }} + " + &>/dev/null; + + local exitCode="$?"; + + if [[ "$exitCode" -eq "0" ]] + then break; + fi + + sleep 5; + done + + echo "Provisioning completed successfully"; + } + + provisionCluster; + {{- end }} + env: + {{- $secretName := printf "%s-init" (include "cockroachdb.fullname" .) }} + {{- range $user := .Values.init.provisioning.users }} + {{- if $user.password }} + - name: {{ $user.name }}_PASSWORD + valueFrom: + secretKeyRef: + name: {{ $secretName }} + key: {{ $user.name }}-password + {{- end }} + {{- end }} + {{- range $clusterSetting, $clusterSettingValue := .Values.init.provisioning.clusterSettings }} + {{- if $clusterSettingValue }} + - name: {{ $clusterSetting | replace "." "_" }}_CLUSTER_SETTING + valueFrom: + secretKeyRef: + name: {{ $secretName }} + key: {{ $clusterSetting | replace "." "-" }}-cluster-setting + {{- end }} + {{- end }} + {{- if .Values.tls.enabled }} + volumeMounts: + - name: client-certs + mountPath: /cockroach-certs/ + {{- end }} + {{- with .Values.init.resources }} + resources: {{- toYaml . | nindent 12 }} + {{- end }} + {{- if and .Values.init.securityContext.enabled }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: ["ALL"] + {{- end }} + {{- if .Values.tls.enabled }} + volumes: + - name: client-certs + emptyDir: {} + {{- if or .Values.tls.certs.provided .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }} + - name: certs-secret + {{- if or .Values.tls.certs.tlsSecret .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }} + projected: + sources: + - secret: + {{- if .Values.tls.certs.selfSigner.enabled }} + name: {{ template "cockroachdb.fullname" . 
}}-client-secret + {{ else }} + name: {{ .Values.tls.certs.clientRootSecret }} + {{ end -}} + items: + - key: ca.crt + path: ca.crt + mode: 0400 + - key: tls.crt + path: client.root.crt + mode: 0400 + - key: tls.key + path: client.root.key + mode: 0400 + {{- else }} + secret: + secretName: {{ .Values.tls.certs.clientRootSecret }} + defaultMode: 0400 + {{- end }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/networkpolicy.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/networkpolicy.yaml new file mode 100644 index 000000000..d41afa32b --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/networkpolicy.yaml @@ -0,0 +1,59 @@ +{{- if .Values.networkPolicy.enabled }} +kind: NetworkPolicy +apiVersion: {{ template "cockroachdb.networkPolicy.apiVersion" . }} +metadata: + name: {{ template "cockroachdb.serviceAccount.name" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + podSelector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 6 }} + {{- end }} + ingress: + - ports: + - port: grpc + {{- with .Values.networkPolicy.ingress.grpc }} + from: + # Allow connections via custom rules. + {{- toYaml . | nindent 8 }} + # Allow client connection via pre-considered label. + - podSelector: + matchLabels: + {{ template "cockroachdb.fullname" . }}-client: "true" + # Allow other CockroachDBs to connect to form a cluster. + - podSelector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 14 }} + {{- end }} + {{- if gt (.Values.statefulset.replicas | int64) 1 }} + # Allow init Job to connect to bootstrap a cluster. + - podSelector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.init.labels }} + {{- toYaml . | nindent 14 }} + {{- end }} + {{- end }} + {{- end }} + # Allow connections to admin UI and for Prometheus. + - ports: + - port: http + {{- with .Values.networkPolicy.ingress.http }} + from: {{- toYaml . | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/poddisruptionbudget.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/poddisruptionbudget.yaml new file mode 100644 index 000000000..f707e4054 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/poddisruptionbudget.yaml @@ -0,0 +1,26 @@ +kind: PodDisruptionBudget +{{- if or (.Capabilities.APIVersions.Has "policy/v1") (semverCompare ">=1.21-0" .Capabilities.KubeVersion.Version) }} +apiVersion: policy/v1 +{{- else }} +apiVersion: policy/v1beta1 +{{- end }} +metadata: + name: {{ template "cockroachdb.fullname" . }}-budget + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . 
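With `networkPolicy.enabled`, the policy above opens the gRPC port to the peers listed under `networkPolicy.ingress.grpc` plus the chart's own selectors (pods labelled `<fullname>-client: "true"`, other CockroachDB pods, and the init Job), and the HTTP port to the peers under `networkPolicy.ingress.http`. A sketch; the selectors below are examples:

```yaml
networkPolicy:
  enabled: true
  ingress:
    grpc:
      - namespaceSelector:
          matchLabels:
            name: app-namespace      # example: allow SQL clients from this namespace
    http:
      - podSelector:
          matchLabels:
            app: prometheus          # example: allow a metrics scraper
```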
}} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + selector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 6 }} + {{- end }} + maxUnavailable: {{ .Values.statefulset.budget.maxUnavailable | int64 }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/role-certRotateSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/role-certRotateSelfSigner.yaml new file mode 100644 index 000000000..f0e2b90ce --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/role-certRotateSelfSigner.yaml @@ -0,0 +1,27 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "rotatecerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +rules: + - apiGroups: [""] + resources: ["secrets"] + verbs: ["create", "get", "update", "delete"] + - apiGroups: ["apps"] + resources: ["statefulsets"] + verbs: ["get"] + resourceNames: + - {{ template "cockroachdb.fullname" . }} + - apiGroups: [""] + resources: ["pods"] + verbs: ["delete", "get"] +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/role-certSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/role-certSelfSigner.yaml new file mode 100644 index 000000000..1cbaab3dd --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/role-certSelfSigner.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "selfcerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + annotations: + # This is what defines this resource as a hook. Without this line, the + # job is considered part of the release. + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "2" + "helm.sh/hook-delete-policy": hook-succeeded,hook-failed + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +rules: + - apiGroups: [""] + resources: ["secrets"] + verbs: ["create", "get", "update", "delete"] + - apiGroups: ["apps"] + resources: ["statefulsets"] + verbs: ["get"] + resourceNames: + - {{ template "cockroachdb.fullname" . 
}} + - apiGroups: [""] + resources: ["pods"] + verbs: ["delete", "get"] +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/role.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/role.yaml new file mode 100644 index 000000000..ebe5ce8ae --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/role.yaml @@ -0,0 +1,23 @@ +{{- if .Values.tls.enabled }} +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "cockroachdb.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +rules: + - apiGroups: [""] + resources: ["secrets"] + {{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }} + verbs: ["get"] + {{- else }} + verbs: ["create", "get"] + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding-certRotateSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding-certRotateSelfSigner.yaml new file mode 100644 index 000000000..c1a45f797 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding-certRotateSelfSigner.yaml @@ -0,0 +1,23 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "rotatecerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ template "rotatecerts.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "rotatecerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding-certSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding-certSelfSigner.yaml new file mode 100644 index 000000000..5725d02a4 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding-certSelfSigner.yaml @@ -0,0 +1,29 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "selfcerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + annotations: + # This is what defines this resource as a hook. Without this line, the + # job is considered part of the release. + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "3" + "helm.sh/hook-delete-policy": hook-succeeded,hook-failed + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ template "selfcerts.fullname" . 
}} +subjects: + - kind: ServiceAccount + name: {{ template "selfcerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding.yaml new file mode 100644 index 000000000..00d9f9a55 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/rolebinding.yaml @@ -0,0 +1,23 @@ +{{- if .Values.tls.enabled }} +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "cockroachdb.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ template "cockroachdb.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "cockroachdb.serviceAccount.name" . }} + namespace: {{ .Release.Namespace | quote }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.backendconfig.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.backendconfig.yaml new file mode 100644 index 000000000..61103060a --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.backendconfig.yaml @@ -0,0 +1,25 @@ +{{- if .Values.iap.enabled }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "cockroachdb.fullname" . }}.iap + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +type: Opaque +data: + {{- if eq "" .Values.iap.clientId }} + {{ fail "iap.clientID can't be empty if iap.enabled is set to true" }} + {{- end }} + client_id: {{ .Values.iap.clientId | b64enc }} + {{- if eq "" .Values.iap.clientSecret }} + {{ fail "iap.clientSecret can't be empty if iap.enabled is set to true" }} + {{- end }} + client_secret: {{ .Values.iap.clientSecret | b64enc }} +{{- end }} \ No newline at end of file diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.logconfig.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.logconfig.yaml new file mode 100644 index 000000000..40b929ae7 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.logconfig.yaml @@ -0,0 +1,19 @@ +{{- if .Values.conf.log.enabled }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "cockroachdb.fullname" . }}-log-config + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . 
| nindent 4 }} + {{- end }} +type: Opaque +stringData: + log-config.yaml: | + {{- toYaml .Values.conf.log.config | nindent 4 }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.registry.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.registry.yaml new file mode 100644 index 000000000..a054069fb --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secret.registry.yaml @@ -0,0 +1,23 @@ +{{- range $name, $cred := dict "db" (.Values.image.credentials) "init-certs" (.Values.tls.selfSigner.image.credentials) }} +{{- if not (empty $cred) }} +{{- if or (and (eq $name "init-certs") $.Values.tls.enabled) (ne $name "init-certs") }} +--- +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "cockroachdb.fullname" $ }}.{{ $name }}.registry + namespace: {{ $.Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" $ }} + app.kubernetes.io/name: {{ template "cockroachdb.name" $ }} + app.kubernetes.io/instance: {{ $.Release.Name | quote }} + app.kubernetes.io/managed-by: {{ $.Release.Service | quote }} + {{- with $.Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +type: kubernetes.io/dockerconfigjson +data: + .dockerconfigjson: {{ printf `{"auths":{%s:{"auth":"%s"}}}` ($cred.registry | quote) (printf "%s:%s" $cred.username $cred.password | b64enc) | b64enc | quote }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/secrets.init.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secrets.init.yaml new file mode 100644 index 000000000..4d13a35ff --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/secrets.init.yaml @@ -0,0 +1,20 @@ +{{- if .Values.init.provisioning.enabled }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "cockroachdb.fullname" . }}-init + namespace: {{ .Release.Namespace | quote }} +type: Opaque +stringData: + +{{- range $user := .Values.init.provisioning.users }} +{{- if $user.password }} + {{ $user.name }}-password: {{ $user.password | quote }} +{{- end }} +{{- end }} + +{{- range $clusterSetting, $clusterSettingValue := .Values.init.provisioning.clusterSettings }} + {{ $clusterSetting | replace "." "-" }}-cluster-setting: {{ $clusterSettingValue | quote }} +{{- end }} + +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/service.discovery.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/service.discovery.yaml new file mode 100644 index 000000000..8fe2a427a --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/service.discovery.yaml @@ -0,0 +1,64 @@ +# This service only exists to create DNS entries for each pod in +# the StatefulSet such that they can resolve each other's IP addresses. +# It does not create a load-balanced ClusterIP and should not be used directly +# by clients in most circumstances. +kind: Service +apiVersion: v1 +metadata: + name: {{ template "cockroachdb.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.service.discovery.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . 
| nindent 4 }} + {{- end }} + annotations: + # Use this annotation in addition to the actual field below because the + # annotation will stop being respected soon, but the field is broken in + # some versions of Kubernetes: + # https://github.com/kubernetes/kubernetes/issues/58662 + service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" + # Enable automatic monitoring of all instances when Prometheus is running + # in the cluster. + {{- if .Values.prometheus.enabled }} + prometheus.io/scrape: "true" + prometheus.io/path: _status/vars + prometheus.io/port: {{ .Values.service.ports.http.port | quote }} + {{- end }} + {{- with .Values.service.discovery.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + clusterIP: None + # We want all Pods in the StatefulSet to have their addresses published for + # the sake of the other CockroachDB Pods even before they're ready, since they + # have to be able to talk to each other in order to become ready. + publishNotReadyAddresses: true + ports: + {{- $ports := .Values.service.ports }} + # The main port, served by gRPC, serves Postgres-flavor SQL, inter-node + # traffic and the CLI. + - name: {{ $ports.grpc.external.name | quote }} + port: {{ $ports.grpc.external.port | int64 }} + targetPort: grpc + {{- if ne ($ports.grpc.internal.port | int64) ($ports.grpc.external.port | int64) }} + - name: {{ $ports.grpc.internal.name | quote }} + port: {{ $ports.grpc.internal.port | int64 }} + targetPort: grpc + {{- end }} + # The secondary port serves the UI as well as health and debug endpoints. + - name: {{ $ports.http.name | quote }} + port: {{ $ports.http.port | int64 }} + targetPort: http + selector: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/service.public.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/service.public.yaml new file mode 100644 index 000000000..251e9ab08 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/service.public.yaml @@ -0,0 +1,55 @@ +# This Service is meant to be used by clients of the database. +# It exposes a ClusterIP that will automatically load balance connections +# to the different database Pods. +kind: Service +apiVersion: v1 +metadata: + name: {{ template "cockroachdb.fullname" . }}-public + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.service.public.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if or .Values.service.public.annotations .Values.tls.enabled .Values.iap.enabled }} + annotations: + {{- with .Values.service.public.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if .Values.tls.enabled }} + service.alpha.kubernetes.io/app-protocols: '{"http":"HTTPS"}' + {{- end }} + {{- if .Values.iap.enabled }} + beta.cloud.google.com/backend-config: '{"default": "{{ template "cockroachdb.fullname" . 
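Both the headless discovery Service and the public Service expose the same two ports: gRPC (SQL, intra-cluster traffic, CLI) and HTTP (admin UI, health, metrics); a second gRPC entry appears only when the internal and external port numbers differ. A sketch of the port layout, using the conventional CockroachDB port numbers as examples:

```yaml
service:
  ports:
    grpc:
      external:
        name: grpc
        port: 26257
      internal:
        name: grpc-internal
        port: 26257        # only rendered separately if it differs from external
    http:
      name: http
      port: 8080
  public:
    type: ClusterIP        # example; LoadBalancer or NodePort also work
    labels: {}
    annotations: {}
  discovery:
    labels: {}
    annotations: {}
```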
}}"}' + {{- end }} + {{- end }} +spec: + type: {{ .Values.service.public.type | quote }} + ports: + {{- $ports := .Values.service.ports }} + # The main port, served by gRPC, serves Postgres-flavor SQL, inter-node + # traffic and the CLI. + - name: {{ $ports.grpc.external.name | quote }} + port: {{ $ports.grpc.external.port | int64 }} + targetPort: grpc + {{- if ne ($ports.grpc.internal.port | int64) ($ports.grpc.external.port | int64) }} + - name: {{ $ports.grpc.internal.name | quote }} + port: {{ $ports.grpc.internal.port | int64 }} + targetPort: grpc + {{- end }} + # The secondary port serves the UI as well as health and debug endpoints. + - name: {{ $ports.http.name | quote }} + port: {{ $ports.http.port | int64 }} + targetPort: http + selector: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceMonitor.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceMonitor.yaml new file mode 100644 index 000000000..42f2390b4 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceMonitor.yaml @@ -0,0 +1,54 @@ +{{- $serviceMonitor := .Values.serviceMonitor -}} +{{- $ports := .Values.service.ports -}} +{{- if $serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ template "cockroachdb.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- if $serviceMonitor.labels }} + {{- toYaml $serviceMonitor.labels | nindent 4 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if $serviceMonitor.annotations }} + annotations: + {{- toYaml $serviceMonitor.annotations | nindent 4 }} + {{- end }} +spec: + selector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.service.discovery.labels }} + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . | nindent 6 }} + {{- end }} + namespaceSelector: + {{- if $serviceMonitor.namespaced }} + matchNames: + - {{ .Release.Namespace }} + {{- else }} + any: true + {{- end }} + endpoints: + - port: {{ $ports.http.name | quote }} + path: /_status/vars + {{- if $serviceMonitor.interval }} + interval: {{ $serviceMonitor.interval }} + {{- end }} + {{- if $serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ $serviceMonitor.scrapeTimeout }} + {{- end }} + {{- if .Values.serviceMonitor.tlsConfig }} + tlsConfig: {{ toYaml .Values.serviceMonitor.tlsConfig | nindent 6 }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount-certRotateSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount-certRotateSelfSigner.yaml new file mode 100644 index 000000000..a27cba921 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount-certRotateSelfSigner.yaml @@ -0,0 +1,22 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} + {{ template "cockroachdb.tls.certs.selfSigner.validation" . 
}} +kind: ServiceAccount +apiVersion: v1 +metadata: + name: {{ template "rotatecerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if .Values.tls.certs.selfSigner.svcAccountAnnotations }} + annotations: + {{- with .Values.tls.certs.selfSigner.svcAccountAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount-certSelfSigner.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount-certSelfSigner.yaml new file mode 100644 index 000000000..3ce2d63e9 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount-certSelfSigner.yaml @@ -0,0 +1,25 @@ +{{- if and .Values.tls.enabled .Values.tls.certs.selfSigner.enabled }} + {{ template "cockroachdb.tls.certs.selfSigner.validation" . }} +kind: ServiceAccount +apiVersion: v1 +metadata: + name: {{ template "selfcerts.fullname" . }} + namespace: {{ .Release.Namespace | quote }} + annotations: + # This is what defines this resource as a hook. Without this line, the + # job is considered part of the release. + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "1" + "helm.sh/hook-delete-policy": hook-succeeded,hook-failed + {{- with .Values.tls.certs.selfSigner.svcAccountAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount.yaml new file mode 100644 index 000000000..3af9be9aa --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/serviceaccount.yaml @@ -0,0 +1,21 @@ +{{- if .Values.statefulset.serviceAccount.create }} +kind: ServiceAccount +apiVersion: v1 +metadata: + name: {{ template "cockroachdb.serviceAccount.name" . }} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if .Values.statefulset.serviceAccount.annotations }} + annotations: + {{- with .Values.statefulset.serviceAccount.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/statefulset.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/statefulset.yaml new file mode 100644 index 000000000..318ae7709 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/statefulset.yaml @@ -0,0 +1,435 @@ +kind: StatefulSet +apiVersion: {{ template "cockroachdb.statefulset.apiVersion" . }} +metadata: + name: {{ template "cockroachdb.fullname" . 
}} + namespace: {{ .Release.Namespace | quote }} + labels: + helm.sh/chart: {{ template "cockroachdb.chart" . }} + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + app.kubernetes.io/managed-by: {{ .Release.Service | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + serviceName: {{ template "cockroachdb.fullname" . }} + replicas: {{ .Values.statefulset.replicas | int64 }} + updateStrategy: {{- toYaml .Values.statefulset.updateStrategy | nindent 4 }} + podManagementPolicy: {{ .Values.statefulset.podManagementPolicy | quote }} + selector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 6 }} + {{- end }} + template: + metadata: + labels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.statefulset.annotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if or .Values.image.credentials (and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager)) }} + imagePullSecrets: + {{- if .Values.image.credentials }} + - name: {{ template "cockroachdb.fullname" . }}.db.registry + {{- end }} + {{- if and .Values.tls.enabled .Values.tls.selfSigner.image.credentials (not .Values.tls.certs.provided) (not .Values.tls.certs.certManager) }} + - name: {{ template "cockroachdb.fullname" . }}.self-signed-certs.registry + {{- end }} + {{- end }} + serviceAccountName: {{ template "cockroachdb.serviceAccount.name" . }} + {{- if .Values.tls.enabled }} + initContainers: + - name: copy-certs + image: {{ .Values.tls.copyCerts.image | quote }} + imagePullPolicy: {{ .Values.tls.selfSigner.image.pullPolicy | quote }} + command: + - /bin/sh + - -c + - "cp -f /certs/* /cockroach-certs/; chmod 0400 /cockroach-certs/*.key" + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + {{- if .Values.statefulset.securityContext.enabled }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + {{- end }} + volumeMounts: + - name: certs + mountPath: /cockroach-certs/ + - name: certs-secret + mountPath: /certs/ + {{- with .Values.tls.copyCerts.resources }} + resources: {{- toYaml . | nindent 12 }} + {{- end }} + {{- range $ic := .Values.statefulset.initContainers }} + - {{- toYaml $ic | nindent 10 }} + {{ with $.Values.statefulset.volumeMounts}} + volumeMounts: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- end }} + {{- end }} + {{- if or .Values.statefulset.nodeAffinity .Values.statefulset.podAffinity .Values.statefulset.podAntiAffinity }} + affinity: + {{- with .Values.statefulset.nodeAffinity }} + nodeAffinity: {{- toYaml . | nindent 10 }} + {{- end }} + {{- with .Values.statefulset.podAffinity }} + podAffinity: {{- toYaml . 
| nindent 10 }} + {{- end }} + {{- if .Values.statefulset.podAntiAffinity }} + podAntiAffinity: + {{- if .Values.statefulset.podAntiAffinity.type }} + {{- if eq .Values.statefulset.podAntiAffinity.type "hard" }} + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: {{ .Values.statefulset.podAntiAffinity.topologyKey }} + labelSelector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 18 }} + {{- end }} + {{- else if eq .Values.statefulset.podAntiAffinity.type "soft" }} + preferredDuringSchedulingIgnoredDuringExecution: + - weight: {{ .Values.statefulset.podAntiAffinity.weight | int64 }} + podAffinityTerm: + topologyKey: {{ .Values.statefulset.podAntiAffinity.topologyKey }} + labelSelector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 20 }} + {{- end }} + {{- end }} + {{- else }} + {{- toYaml .Values.statefulset.podAntiAffinity | nindent 10 }} + {{- end }} + {{- end }} + {{- end }} + {{- if semverCompare ">=1.16-0" .Capabilities.KubeVersion.Version }} + topologySpreadConstraints: + - labelSelector: + matchLabels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.statefulset.labels }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.statefulset.topologySpreadConstraints }} + maxSkew: {{ .maxSkew }} + topologyKey: {{ .topologyKey }} + whenUnsatisfiable: {{ .whenUnsatisfiable }} + {{- end }} + {{- end }} + {{- with .Values.statefulset.nodeSelector }} + nodeSelector: {{- toYaml . | nindent 8 }} + {{- end }} + {{- if .Values.statefulset.priorityClassName }} + priorityClassName: {{ .Values.statefulset.priorityClassName }} + {{- end }} + {{- with .Values.statefulset.tolerations }} + tolerations: {{- toYaml . | nindent 8 }} + {{- end }} + # No pre-stop hook is required, a SIGTERM plus some time is all that's + # needed for graceful shutdown of a node. + terminationGracePeriodSeconds: {{ .Values.init.terminationGracePeriodSeconds }} + containers: + - name: db + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + args: + - shell + - -ecx + # The use of qualified `hostname -f` is crucial: + # Other nodes aren't able to look up the unqualified hostname. + # + # `--join` CLI flag is hardcoded to exactly 3 Pods, because: + # 1. Having `--join` value depending on `statefulset.replicas` + # will trigger undesired restart of existing Pods when + # StatefulSet is scaled up/down. We want to scale without + # restarting existing Pods. + # 2. At least one Pod in `--join` is enough to successfully + # join CockroachDB cluster and gossip with all other existing + # Pods, even if there are 3 or more Pods. + # 3. It's harmless for `--join` to have 3 Pods even for 1-Pod + # clusters, while it gives us opportunity to scale up even if + # some Pods of existing cluster are down (for whatever reason). 
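The chart accepts a shorthand `statefulset.podAntiAffinity` (`type`, `weight`, `topologyKey`) that it expands into the full required/preferred scheduling stanza above, or a raw stanza passed through verbatim; on Kubernetes 1.16+ it also renders a topology spread constraint. A sketch of the shorthand form with example values:

```yaml
statefulset:
  replicas: 3
  podAntiAffinity:
    type: soft                              # "hard" renders the required... form instead
    weight: 100
    topologyKey: kubernetes.io/hostname
  topologySpreadConstraints:
    maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
  nodeSelector: {}
  tolerations: []
```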
+ # See details explained here: + # https://github.com/helm/charts/pull/18993#issuecomment-558795102 + - >- + exec /cockroach/cockroach + {{- if index .Values.conf `single-node` }} + start-single-node + {{- else }} + start --join= + {{- if .Values.conf.join }} + {{- join `,` .Values.conf.join -}} + {{- else }} + {{- range $i, $_ := until 3 -}} + {{- if gt $i 0 -}},{{- end -}} + ${STATEFULSET_NAME}-{{ $i }}.${STATEFULSET_FQDN}:{{ $.Values.service.ports.grpc.internal.port | int64 -}} + {{- end -}} + {{- end }} + {{- with index .Values.conf `cluster-name` }} + --cluster-name={{ . }} + {{- if index $.Values.conf `disable-cluster-name-verification` }} + --disable-cluster-name-verification + {{- end }} + {{- end }} + {{- end }} + --advertise-host=$(hostname).${STATEFULSET_FQDN} + {{- if .Values.tls.enabled }} + --certs-dir=/cockroach/cockroach-certs/ + {{- else }} + --insecure + {{- end }} + {{- with .Values.conf.attrs }} + --attrs={{ join `:` . }} + {{- end }} + {{- if index .Values.conf `http-port` }} + --http-port={{ index .Values.conf `http-port` | int64 }} + {{- else }} + --http-port={{ index .Values.service.ports.http.port | int64 }} + {{- end }} + {{ if .Values.conf.port }} + --port={{ .Values.conf.port | int64 }} + {{- else }} + --port={{ .Values.service.ports.grpc.internal.port | int64 }} + {{- end }} + --cache={{ .Values.conf.cache }} + {{- with index .Values.conf `max-disk-temp-storage` }} + --max-disk-temp-storage={{ . }} + {{- end }} + {{- with index .Values.conf `max-offset` }} + --max-offset={{ . }} + {{- end }} + --max-sql-memory={{ index .Values.conf `max-sql-memory` }} + {{- with .Values.conf.locality }} + --locality={{ . }} + {{- end }} + {{- with index .Values.conf `sql-audit-dir` }} + --sql-audit-dir={{ . }} + {{- end }} + {{- if .Values.conf.store.enabled }} + --store={{ template "cockroachdb.conf.store" . }} + {{- end }} + {{- if .Values.conf.log.enabled }} + --log-config-file=/cockroach/log-config/log-config.yaml + {{- else }} + --logtostderr={{ .Values.conf.logtostderr }} + {{- end }} + {{- range .Values.statefulset.args }} + {{ . }} + {{- end }} + env: + - name: STATEFULSET_NAME + value: {{ template "cockroachdb.fullname" . }} + - name: STATEFULSET_FQDN + value: {{ template "cockroachdb.fullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }} + - name: COCKROACH_CHANNEL + value: kubernetes-helm + {{- with .Values.statefulset.env }} + {{- toYaml . | nindent 12 }} + {{- end }} + ports: + - name: grpc + {{ if .Values.conf.port }} + containerPort: {{ .Values.conf.port | int64 }} + {{- else }} + containerPort: {{ .Values.service.ports.grpc.internal.port | int64 }} + {{- end }} + protocol: TCP + - name: http + {{- if index .Values.conf `http-port` }} + containerPort: {{ index .Values.conf `http-port` | int64 }} + {{- else }} + containerPort: {{ index .Values.service.ports.http.port | int64 }} + {{- end }} + protocol: TCP + volumeMounts: + - name: datadir + mountPath: /cockroach/{{ .Values.conf.path }}/ + {{- if .Values.tls.enabled }} + - name: certs + mountPath: /cockroach/cockroach-certs/ + {{- if .Values.tls.certs.provided }} + - name: certs-secret + mountPath: /cockroach/certs/ + {{- end }} + {{- end }} + {{- range .Values.statefulset.secretMounts }} + - name: {{ printf "secret-%s" . | quote }} + mountPath: {{ printf "/etc/cockroach/secrets/%s" . 
| quote }} + readOnly: true + {{- end }} + {{- if .Values.conf.log.enabled }} + - name: log-config + mountPath: /cockroach/log-config + readOnly: true + {{- end }} + {{ with .Values.statefulset.volumeMounts }} + {{ toYaml . | nindent 12 }} + {{- end }} + {{- if .Values.statefulset.customStartupProbe }} + startupProbe: + {{ toYaml .Values.statefulset.customStartupProbe | nindent 12 }} + {{- end }} + livenessProbe: + {{- if .Values.statefulset.customLivenessProbe }} + {{ toYaml .Values.statefulset.customLivenessProbe | nindent 12 }} + {{- else }} + httpGet: + path: /health + port: http + {{- if .Values.tls.enabled }} + scheme: HTTPS + {{- end }} + initialDelaySeconds: 30 + periodSeconds: 5 + {{- end }} + readinessProbe: + {{- if .Values.statefulset.customReadinessProbe }} + {{ toYaml .Values.statefulset.customReadinessProbe | nindent 12 }} + {{- else }} + httpGet: + path: /health?ready=1 + port: http + {{- if .Values.tls.enabled }} + scheme: HTTPS + {{- end }} + initialDelaySeconds: 10 + periodSeconds: 5 + failureThreshold: 2 + {{- end }} + {{- if eq (include "cockroachdb.securityContext.versionValidation" .) "true" }} + {{- if .Values.statefulset.securityContext.enabled }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + {{- end }} + {{- end }} + {{- with .Values.statefulset.resources }} + resources: {{- toYaml . | nindent 12 }} + {{- end }} + volumes: + - name: datadir + {{- if .Values.storage.persistentVolume.enabled }} + persistentVolumeClaim: + claimName: datadir + {{- else if .Values.storage.hostPath }} + hostPath: + path: {{ .Values.storage.hostPath | quote }} + {{- else }} + emptyDir: {} + {{- end }} + {{ with .Values.statefulset.volumes }} + {{ toYaml . | nindent 8 }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: certs + emptyDir: {} + {{- if or .Values.tls.certs.provided .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }} + - name: certs-secret + {{- if or .Values.tls.certs.tlsSecret .Values.tls.certs.certManager .Values.tls.certs.selfSigner.enabled }} + projected: + sources: + - secret: + {{- if .Values.tls.certs.selfSigner.enabled }} + name: {{ template "cockroachdb.fullname" . }}-node-secret + {{ else }} + name: {{ .Values.tls.certs.nodeSecret }} + {{ end -}} + items: + - key: ca.crt + path: ca.crt + mode: 256 + - key: tls.crt + path: node.crt + mode: 256 + - key: tls.key + path: node.key + mode: 256 + {{- else }} + secret: + secretName: {{ .Values.tls.certs.nodeSecret }} + defaultMode: 256 + {{- end }} + {{- end }} + {{- end }} + {{- range .Values.statefulset.secretMounts }} + - name: {{ printf "secret-%s" . | quote }} + secret: + secretName: {{ . | quote }} + {{- end }} + {{- if .Values.conf.log.enabled }} + - name: log-config + secret: + secretName: {{ template "cockroachdb.fullname" . }}-log-config + {{- end }} + {{- if eq (include "cockroachdb.securityContext.versionValidation" .) "true" }} + {{- if and .Values.securityContext.enabled }} + securityContext: + seccompProfile: + type: "RuntimeDefault" + fsGroup: 1000 + runAsGroup: 1000 + runAsUser: 1000 + runAsNonRoot: true + {{- end }} + {{- end }} +{{- if .Values.storage.persistentVolume.enabled }} + volumeClaimTemplates: + - metadata: + name: datadir + labels: + app.kubernetes.io/name: {{ template "cockroachdb.name" . }} + app.kubernetes.io/instance: {{ .Release.Name | quote }} + {{- with .Values.storage.persistentVolume.labels }} + {{- toYaml . 
| nindent 10 }} + {{- end }} + {{- with .Values.labels }} + {{- toYaml . | nindent 10 }} + {{- end }} + {{- with .Values.storage.persistentVolume.annotations }} + annotations: {{- toYaml . | nindent 10 }} + {{- end }} + spec: + accessModes: ["ReadWriteOnce"] + {{- if .Values.storage.persistentVolume.storageClass }} + {{- if (eq "-" .Values.storage.persistentVolume.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: {{ .Values.storage.persistentVolume.storageClass | quote}} + {{- end }} + {{- end }} + resources: + requests: + storage: {{ .Values.storage.persistentVolume.size | quote }} +{{- end }} diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/templates/tests/client.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/templates/tests/client.yaml new file mode 100644 index 000000000..8656b8ed6 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/templates/tests/client.yaml @@ -0,0 +1,65 @@ +kind: Pod +apiVersion: v1 +metadata: + name: {{ template "cockroachdb.fullname" . }}-test + namespace: {{ .Release.Namespace | quote }} +{{- if .Values.networkPolicy.enabled }} + labels: + {{ template "cockroachdb.fullname" . }}-client: "true" +{{- end }} + annotations: + helm.sh/hook: test-success +spec: + restartPolicy: Never +{{- if .Values.image.credentials }} + imagePullSecrets: + - name: {{ template "cockroachdb.fullname" . }}.db.registry +{{- end }} + {{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }} + volumes: + - name: client-certs + {{- if or .Values.tls.certs.tlsSecret .Values.tls.certs.certManager }} + projected: + sources: + - secret: + name: {{ .Values.tls.certs.clientRootSecret }} + items: + - key: ca.crt + path: ca.crt + mode: 0400 + - key: tls.crt + path: client.root.crt + mode: 0400 + - key: tls.key + path: client.root.key + mode: 0400 + {{- else }} + secret: + secretName: {{ .Values.tls.certs.clientRootSecret }} + defaultMode: 0400 + {{- end }} + {{- end }} + containers: + - name: client-test + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + {{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }} + volumeMounts: + - name: client-certs + mountPath: /cockroach-certs + {{- end }} + command: + - /cockroach/cockroach + - sql + {{- if or .Values.tls.certs.provided .Values.tls.certs.certManager }} + - --certs-dir + - /cockroach-certs + {{- else }} + - --insecure + {{- end}} + - --host + - {{ template "cockroachdb.fullname" . 
}}-public.{{ .Release.Namespace }} + - --port + - {{ .Values.service.ports.grpc.external.port | quote }} + - -e + - SHOW DATABASES; diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/values.schema.json b/charts/cockroach-labs/cockroachdb/14.0.5/values.schema.json new file mode 100644 index 000000000..b23c47974 --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/values.schema.json @@ -0,0 +1,97 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "properties": { + "tls": { + "type": "object", + "properties": { + "certs": { + "type": "object", + "properties": { + "selfSigner": { + "type": "object", + "required": ["enabled", "caProvided"], + "properties": { + "enabled": { + "type": "boolean" + }, + "caProvided": { + "type": "boolean" + } + }, + "if": { + "properties": { + "enabled": { + "const": true + } + } + }, + "then": { + "if": { + "properties": { + "caProvided": { + "const": false + } + } + }, + "then": { + "properties": { + "caCertDuration" : { + "type": "string", + "pattern": "^[0-9]*h$" + }, + "caCertExpiryWindow": { + "type": "string", + "pattern": "^[0-9]*h$" + } + } + }, + "properties": { + "clientCertDuration": { + "type": "string", + "pattern": "^[0-9]*h$" + }, + "clientCertExpiryWindow": { + "type": "string", + "pattern": "^[0-9]*h$" + }, + "nodeCertDuration": { + "type": "string", + "pattern": "^[0-9]*h$" + }, + "nodeCertExpiryWindow": { + "type": "string", + "pattern": "^[0-9]*h$" + }, + "rotateCerts": { + "type": "boolean" + } + } + } + } + } + }, + "selfSigner": { + "type": "object", + "properties": { + "image": { + "type": "object", + "required": ["repository", "tag", "pullPolicy"], + "properties": { + "repository": { + "type": "string" + }, + "tag": { + "type": "string" + }, + "pullPolicy": { + "type": "string", + "pattern": "^(Always|Never|IfNotPresent)$" + } + } + } + } + } + } + } + } +} \ No newline at end of file diff --git a/charts/cockroach-labs/cockroachdb/14.0.5/values.yaml b/charts/cockroach-labs/cockroachdb/14.0.5/values.yaml new file mode 100644 index 000000000..137f8f22e --- /dev/null +++ b/charts/cockroach-labs/cockroachdb/14.0.5/values.yaml @@ -0,0 +1,651 @@ +# Generated file, DO NOT EDIT. Source: build/templates/values.yaml +# Overrides the chart name against the label "app.kubernetes.io/name: " placed on every resource this chart creates. +nameOverride: "" + +# Override the resource names created by this chart which originally is generated using release and chart name. +fullnameOverride: "" + +image: + repository: cockroachdb/cockroach + tag: v24.2.4 + pullPolicy: IfNotPresent + credentials: {} + # registry: docker.io + # username: john_doe + # password: changeme + + +# Additional labels to apply to all Kubernetes resources created by this chart. +labels: {} + # app.kubernetes.io/part-of: my-app + + +# Cluster's default DNS domain. +# You should overwrite it if you're using a different one, +# otherwise CockroachDB nodes discovery won't work. +clusterDomain: cluster.local + + +conf: + # An ordered list of CockroachDB node attributes. + # Attributes are arbitrary strings specifying machine capabilities. + # Machine capabilities might include specialized hardware or number of cores + # (e.g. "gpu", "x16c"). + attrs: [] + # - x16c + # - gpu + + # Total size in bytes for caches, shared evenly if there are multiple + # storage devices. Size suffixes are supported (e.g. `1GB` and `1GiB`). + # A percentage of physical memory can also be specified (e.g. `.25`). + cache: 25% + + # Sets a name to verify the identity of a cluster. 
+ # The value must match between all nodes specified via `conf.join`. + # This can be used as an additional verification when either the node or + # cluster, or both, have not yet been initialized and do not yet know their + # cluster ID. + # To introduce a cluster name into an already-initialized cluster, pair this + # option with `conf.disable-cluster-name-verification: yes`. + cluster-name: "" + + # Tell the server to ignore `conf.cluster-name` mismatches. + # This is meant for use when opting an existing cluster into starting to use + # cluster name verification, or when changing the cluster name. + # The cluster should be restarted once with `conf.cluster-name` and + # `conf.disable-cluster-name-verification: yes` combined, and once all nodes + # have been updated to know the new cluster name, the cluster can be restarted + # again with `conf.disable-cluster-name-verification: no`. + # This option has no effect if `conf.cluster-name` is not specified. + disable-cluster-name-verification: false + + # The addresses for connecting a CockroachDB nodes to an existing cluster. + # If you are deploying a second CockroachDB instance that should join a first + # one, use the below list to join to the existing instance. + # Each item in the array should be a FQDN (and port if needed) resolvable by + # new Pods. + join: [] + + # New logging configuration. + log: + enabled: false + # https://www.cockroachlabs.com/docs/v21.1/configure-logs + config: {} + # file-defaults: + # dir: /custom/dir/path/ + # fluent-defaults: + # format: json-fluent + # sinks: + # stderr: + # channels: [DEV] + + # Logs at or above this threshold to STDERR. Ignored when "log" is enabled + logtostderr: INFO + + # Maximum storage capacity available to store temporary disk-based data for + # SQL queries that exceed the memory budget (e.g. join, sorts, etc are + # sometimes able to spill intermediate results to disk). + # Accepts numbers interpreted as bytes, size suffixes (e.g. `32GB` and + # `32GiB`) or a percentage of disk size (e.g. `10%`). + # The location of the temporary files is within the first store dir. + # If expressed as a percentage, `max-disk-temp-storage` is interpreted + # relative to the size of the storage device on which the first store is + # placed. The temp space usage is never counted towards any store usage + # (although it does share the device with the first store) so, when + # configuring this, make sure that the size of this temp storage plus the size + # of the first store don't exceed the capacity of the storage device. + # If the first store is an in-memory one (i.e. `type=mem`), then this + # temporary "disk" data is also kept in-memory. + # A percentage value is interpreted as a percentage of the available internal + # memory. + # max-disk-temp-storage: 0GB + + # Maximum allowed clock offset for the cluster. If observed clock offsets + # exceed this limit, servers will crash to minimize the likelihood of + # reading inconsistent data. Increasing this value will increase the time + # to recovery of failures as well as the frequency of uncertainty-based + # read restarts. + # Note, that this value must be the same on all nodes in the cluster. + # In order to change it, all nodes in the cluster must be stopped + # simultaneously and restarted with the new value. + # max-offset: 500ms + + # Maximum memory capacity available to store temporary data for SQL clients, + # including prepared queries and intermediate data rows during query + # execution. 
Accepts numbers interpreted as bytes, size suffixes + # (e.g. `1GB` and `1GiB`) or a percentage of physical memory (e.g. `.25`). + max-sql-memory: 25% + + # An ordered, comma-separated list of key-value pairs that describe the + # topography of the machine. Topography might include country, datacenter + # or rack designations. Data is automatically replicated to maximize + # diversities of each tier. The order of tiers is used to determine + # the priority of the diversity, so the more inclusive localities like + # country should come before less inclusive localities like datacenter. + # The tiers and order must be the same on all nodes. Including more tiers + # is better than including fewer. For example: + # locality: country=us,region=us-west,datacenter=us-west-1b,rack=12 + # locality: country=ca,region=ca-east,datacenter=ca-east-2,rack=4 + # locality: planet=earth,province=manitoba,colo=secondary,power=3 + locality: "" + + # Run CockroachDB instances in standalone mode with replication disabled + # (replication factor = 1). + # Enabling this option makes the following values to be ignored: + # - `conf.cluster-name` + # - `conf.disable-cluster-name-verification` + # - `conf.join` + # + # WARNING: Enabling this option makes each deployed Pod as a STANDALONE + # CockroachDB instance, so the StatefulSet does NOT FORM A CLUSTER. + # Don't use this option for production deployments unless you clearly + # understand what you're doing. + # Usually, this option is intended to be used in conjunction with + # `statefulset.replicas: 1` for temporary one-time deployments (like + # running E2E tests, for example). + single-node: false + + # If non-empty, create a SQL audit log in the specified directory. + sql-audit-dir: "" + + # WARNING this parameter is deprecated and will be removed in a future version. Use `.service.ports.grpc.internal.port` instead + port: "" + + # WARNING this parameter is deprecated and will be removed in a future version. Use `.service.ports.http.port` instead + http-port: "" + + # CockroachDB's data mount path. + path: cockroach-data + + # CockroachDB's storage configuration https://www.cockroachlabs.com/docs/v21.1/cockroach-start.html#storage + # Uses --store flag + store: + enabled: false + # Should be empty or 'mem' + type: + # Required for type=mem. If type and size is empty - storage.persistentVolume.size is used + size: + # Arbitrary strings, separated by colons, specifying disk type or capability + attrs: + +statefulset: + replicas: 3 + updateStrategy: + type: RollingUpdate + podManagementPolicy: Parallel + budget: + maxUnavailable: 1 + + # List of additional command-line arguments you want to pass to the + # `cockroach start` command. + args: [] + # - --disable-cluster-name-verification + + # List of extra environment variables to pass into container + env: [] + # - name: COCKROACH_ENGINE_MAX_SYNC_DURATION + # value: "24h" + + # List of Secrets names in the same Namespace as the CockroachDB cluster, + # which shall be mounted into `/etc/cockroach/secrets/` for every cluster + # member. + secretMounts: [] + + # Additional labels to apply to this StatefulSet and all its Pods. + labels: + app.kubernetes.io/component: cockroachdb + + # Additional annotations to apply to the Pods of this StatefulSet. + annotations: {} + + # Affinity rules for scheduling Pods of this StatefulSet on Nodes. + # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity + nodeAffinity: {} + # Inter-Pod Affinity rules for scheduling Pods of this StatefulSet. 
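+  # A sketch of a possible value (illustrative only; "my-app" and the weight are
+  # placeholders, not chart defaults). The value is rendered as-is into the Pod
+  # spec, so it uses the plain Kubernetes podAffinity schema:
+  # podAffinity:
+  #   preferredDuringSchedulingIgnoredDuringExecution:
+  #   - weight: 50
+  #     podAffinityTerm:
+  #       topologyKey: kubernetes.io/hostname
+  #       labelSelector:
+  #         matchLabels:
+  #           app.kubernetes.io/name: my-app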
+ # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity + podAffinity: {} + # Anti-affinity rules for scheduling Pods of this StatefulSet. + # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity + # You may either toggle options below for default anti-affinity rules, + # or specify the whole set of anti-affinity rules instead of them. + podAntiAffinity: + # The topologyKey to be used. + # Can be used to spread across different nodes, AZs, regions etc. + topologyKey: kubernetes.io/hostname + # Type of anti-affinity rules: either `soft`, `hard` or empty value (which + # disables anti-affinity rules). + type: soft + # Weight for `soft` anti-affinity rules. + # Does not apply for other anti-affinity types. + weight: 100 + + # Node selection constraints for scheduling Pods of this StatefulSet. + # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector + nodeSelector: {} + + # PriorityClassName given to Pods of this StatefulSet + # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass + priorityClassName: "" + + # Taints to be tolerated by Pods of this StatefulSet. + # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + tolerations: [] + + # https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + topologySpreadConstraints: + maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + + # Uncomment the following resources definitions or pass them from + # command line to control the CPU and memory resources allocated + # by Pods of this StatefulSet. + resources: {} + # limits: + # cpu: 100m + # memory: 512Mi + # requests: + # cpu: 100m + # memory: 512Mi + + # terminationGracePeriodSeconds is the duration in seconds the Pod needs to terminate gracefully. + terminationGracePeriodSeconds: 300 + + # Custom Liveness probe + # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request + customLivenessProbe: {} + # httpGet: + # path: /health + # port: http + # scheme: HTTPS + # initialDelaySeconds: 30 + # periodSeconds: 5 + + # Custom Rediness probe + # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes + customReadinessProbe: {} + # httpGet: + # path: /health + # port: http + # scheme: HTTPS + # initialDelaySeconds: 30 + # periodSeconds: 5 + + # Custom Startup Probe + # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes + customStartupProbe: {} + # httpGet: + # path: /health + # port: http + # scheme: HTTPS + # initialDelaySeconds: 30 + # periodSeconds: 5 + + securityContext: + enabled: true + + serviceAccount: + # Specifies whether this ServiceAccount should be created. + create: true + # The name of this ServiceAccount to use. + # If not set and `create` is `true`, then service account is auto-generated. + # If not set and `create` is `false`, then it uses default service account. + name: "" + # Additional serviceAccount annotations (e.g. for attaching AWS IAM roles to pods) + annotations: {} + + # initContainers allows you to add additional containers to cockroachdb statefulset. 
+ initContainers: [] +# - name: "fetch-metadata" +# image: "badouralix/curl-jq" +# command: +# - "sh" +# - "-c" +# - "curl -s -H \"Metadata:true\" --noproxy \"*\" \"http://169.254.169.254/metadata/instance?api-version=2021-02-01\" | jq '.' > /metadata/instance_metadata.json" +# resources: {} +# # requests: +# # cpu: "10m" +# # memory: "128Mi" +# # limits: +# # cpu: "10m" +# # memory: "128Mi" +# securityContext: +# allowPrivilegeEscalation: false +# capabilities: +# drop: +# - ALL +# privileged: false +# readOnlyRootFilesystem: true + + # volumeMounts are mounted on the same path in the main crdb container and all init containers. + volumeMounts: [] +# - name: metadata +# mountPath: /metadata + + # volumes allows you to add additional volumes to cockroachdb statefulset. + volumes: [] +# - name: metadata +# emptyDir: {} + +service: + ports: + # You can set a different external and internal gRPC ports and their name. + grpc: + external: + port: 26257 + name: grpc + # If the port number is different than `external.port`, then it will be + # named as `internal.name` in Service. + internal: + # CockroachDB's port to listen to inter-communications and client connections. + port: 26257 + # If using Istio set it to `cockroach`. + name: grpc-internal + http: + # CockroachDB's port to listen to HTTP requests. + port: 8080 + name: http + + # This Service is meant to be used by clients of the database. + # It exposes a ClusterIP that will automatically load balance connections + # to the different database Pods. + public: + type: ClusterIP + # Additional labels to apply to this Service. + labels: + app.kubernetes.io/component: cockroachdb + # Additional annotations to apply to this Service. + annotations: {} + + # This service only exists to create DNS entries for each pod in + # the StatefulSet such that they can resolve each other's IP addresses. + # It does not create a load-balanced ClusterIP and should not be used directly + # by clients in most circumstances. + discovery: + # Additional labels to apply to this Service. + labels: + app.kubernetes.io/component: cockroachdb + # Additional annotations to apply to this Service. + annotations: {} + +# CockroachDB's ingress for web ui. +ingress: + enabled: false + labels: {} + annotations: {} + # kubernetes.io/ingress.class: nginx + # cert-manager.io/cluster-issuer: letsencrypt + paths: [/] + hosts: [] + # - cockroachlabs.com + tls: [] + # - hosts: [cockroachlabs.com] + # secretName: cockroachlabs-tls + +prometheus: + enabled: true + +securityContext: + enabled: true + +# CockroachDB's Prometheus operator ServiceMonitor support +serviceMonitor: + enabled: false + labels: {} + annotations: {} + interval: 10s + # scrapeTimeout: 10s + # Limits the ServiceMonitor to the current namespace if set to `true`. + namespaced: false + + # tlsConfig: TLS configuration to use when scraping the endpoint. + # Of type: https://github.com/coreos/prometheus-operator/blob/main/Documentation/api.md#tlsconfig + tlsConfig: {} + +# CockroachDB's data persistence. +# If neither `persistentVolume` nor `hostPath` is used, then data will be +# persisted in ad-hoc `emptyDir`. +storage: + # Absolute path on host to store CockroachDB's data. + # If not specified, then `emptyDir` will be used instead. + # If specified, but `persistentVolume.enabled` is `true`, then has no effect. + hostPath: "" + + # If `enabled` is `true` then a PersistentVolumeClaim will be created and + # used to store CockroachDB's data, otherwise `hostPath` is used. 
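+  # A sketch of a production-style override (illustrative values only; pick a
+  # storage class that actually exists in your cluster, ideally SSD-backed, and
+  # size it for your workload):
+  # persistentVolume:
+  #   enabled: true
+  #   size: 500Gi
+  #   storageClass: fast-ssd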
+ persistentVolume: + enabled: true + + size: 100Gi + + # If defined, then `storageClassName: `. + # If set to "-", then `storageClassName: ""`, which disables dynamic + # provisioning. + # If undefined or empty (default), then no `storageClassName` spec is set, + # so the default provisioner will be chosen (gp2 on AWS, standard on + # GKE, AWS & OpenStack). + storageClass: "" + + # Additional labels to apply to the created PersistentVolumeClaims. + labels: {} + # Additional annotations to apply to the created PersistentVolumeClaims. + annotations: {} + + +# Kubernetes Job which initializes multi-node CockroachDB cluster. +# It's not created if `statefulset.replicas` is `1`. +init: + # Additional labels to apply to this Job and its Pod. + labels: + app.kubernetes.io/component: init + + # Additional annotations to apply to this Job. + jobAnnotations: {} + + # Additional annotations to apply to the Pod of this Job. + annotations: {} + + # Affinity rules for scheduling the Pod of this Job. + # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity + affinity: {} + + # Node selection constraints for scheduling the Pod of this Job. + # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector + nodeSelector: {} + + # Taints to be tolerated by the Pod of this Job. + # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + tolerations: [] + + # The init Pod runs at cluster creation to initialize CockroachDB. It finishes + # quickly and doesn't continue to consume resources in the Kubernetes + # cluster. Normally, you should leave this section commented out, but if your + # Kubernetes cluster uses Resource Quotas and requires all pods to specify + # resource requests or limits, you can set those here. + resources: {} + # requests: + # cpu: "10m" + # memory: "128Mi" + # limits: + # cpu: "10m" + # memory: "128Mi" + + # terminationGracePeriodSeconds is the duration in seconds the Pod needs to terminate gracefully. + terminationGracePeriodSeconds: 300 + + securityContext: + enabled: true + + # Setup Physical Cluster Replication (PCR) between primary and standby cluster. + # If isPrimary is set to true, the CockroachDB cluster created is the primary cluster. + # If isPrimary is set to false, the CockroachDB cluster created is the standby cluster. + pcr: + enabled: false + # isPrimary: true + + provisioning: + enabled: false + # https://www.cockroachlabs.com/docs/stable/cluster-settings.html + clusterSettings: + # cluster.organization: "'FooCorp - Local Testing'" + # enterprise.license: "'xxxxx'" + users: [] + # - name: + # password: + # # https://www.cockroachlabs.com/docs/stable/create-user.html#parameters + # options: [LOGIN] + databases: [] + # - name: + # # https://www.cockroachlabs.com/docs/stable/create-database.html#parameters + # options: [encoding='utf-8'] + # owners: [] + # # https://www.cockroachlabs.com/docs/stable/grant.html#parameters + # owners_with_grant_option: [] + # # Backup schedules are not idemponent for now and will fail on next run + # # https://github.com/cockroachdb/cockroach/issues/57892 + # backup: + # into: s3:// + # # Enterprise-only option (revision_history) + # # https://www.cockroachlabs.com/docs/stable/create-schedule-for-backup.html#backup-options + # options: [revision_history] + # recurring: '@always' + # # Enterprise-only feature. 
Remove this value to use `FULL BACKUP ALWAYS` + # fullBackup: '@daily' + # schedule: + # # https://www.cockroachlabs.com/docs/stable/create-schedule-for-backup.html#schedule-options + # options: [first_run = 'now'] + + +# Whether to run securely using TLS certificates. +tls: + enabled: true + copyCerts: + image: busybox + certs: + # Bring your own certs scenario. If provided, tls.init section will be ignored. + provided: false + # Secret name for the client root cert. + clientRootSecret: cockroachdb-root + # Secret name for node cert. + nodeSecret: cockroachdb-node + # Secret name for CA cert + caSecret: cockroach-ca + # Enable if the secret is a dedicated TLS. + # TLS secrets are created by cert-mananger, for example. + tlsSecret: false + # Enable if the you want cockroach db to create its own certificates + selfSigner: + # If set, the cockroach db will generate its own certificates + enabled: true + # Run selfSigner as non-root + securityContext: + enabled: true + # If set, the user should provide the CA certificate to sign other certificates. + caProvided: false + # It holds the name of the secret with caCerts. If caProvided is set, this can not be empty. + caSecret: "" + # Minimum Certificate duration for all the certificates, all certs duration will be validated against this. + minimumCertDuration: 624h + # Duration of CA certificates in hour + caCertDuration: 43800h + # Expiry window of CA certificates means a window before actual expiry in which CA certs should be rotated. + caCertExpiryWindow: 648h + # Duration of Client certificates in hour + clientCertDuration: 672h + # Expiry window of client certificates means a window before actual expiry in which client certs should be rotated. + clientCertExpiryWindow: 48h + # Duration of node certificates in hour + nodeCertDuration: 8760h + # Expiry window of node certificates means a window before actual expiry in which node certs should be rotated. + nodeCertExpiryWindow: 168h + # If set, the cockroachdb cert selfSigner will rotate the certificates before expiry. + rotateCerts: true + # Wait time for each cockroachdb replica to become ready once it comes in running state. Only considered when rotateCerts is set to true + readinessWait: 30s + # Wait time for each cockroachdb replica to get to running state. Only considered when rotateCerts is set to true + podUpdateTimeout: 2m + # ServiceAccount annotations for selfSigner jobs (e.g. for attaching AWS IAM roles to pods) + svcAccountAnnotations: {} + + # Use cert-manager to issue certificates for mTLS. + certManager: false + # Specify an Issuer or a ClusterIssuer to use, when issuing + # node and client certificates. The values correspond to the + # issuerRef specified in the certificate. + certManagerIssuer: + group: cert-manager.io + kind: Issuer + name: cockroachdb + # Make it false when you are providing your own CA issuer + isSelfSignedIssuer: true + # Duration of CA certificates in hour + caCertDuration: 43800h + # Expiry window of CA certificates means a window before actual expiry in which CA certs should be rotated. + caCertExpiryWindow: 648h + # Duration of Client certificates in hours + clientCertDuration: 672h + # Expiry window of client certificates means a window before actual expiry in which client certs should be rotated. + clientCertExpiryWindow: 48h + # Duration of node certificates in hours + nodeCertDuration: 8760h + # Expiry window of node certificates means a window before actual expiry in which node certs should be rotated. 
+ nodeCertExpiryWindow: 168h + + selfSigner: + # Additional labels to apply to the Pod of this Job. + labels: {} + + # Additional annotations to apply to the Pod of this Job. + annotations: {} + + # Affinity rules for scheduling the Pod of this Job. + # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity + affinity: {} + + # Node selection constraints for scheduling the Pod of this Job. + # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector + nodeSelector: {} + + # Taints to be tolerated by the Pod of this Job. + # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + tolerations: [] + + # Image Placeholder for the selfSigner utility. This will be changed once the CI workflows for the image is in place. + image: + repository: cockroachlabs-helm-charts/cockroach-self-signer-cert + tag: "1.5" + pullPolicy: IfNotPresent + credentials: {} + registry: gcr.io + # username: john_doe + # password: changeme + +networkPolicy: + enabled: false + + ingress: + # List of sources which should be able to access the CockroachDB Pods via + # gRPC port. Items in this list are combined using a logical OR operation. + # Rules for allowing inter-communication are applied automatically. + # If empty, then connections from any Pod is allowed. + grpc: [] + # - podSelector: + # matchLabels: + # app.kubernetes.io/name: my-app-django + # app.kubernetes.io/instance: my-app + + # List of sources which should be able to access the CockroachDB Pods via + # HTTP port. Items in this list are combined using a logical OR operation. + # If empty, then connections from any Pod is allowed. + http: [] + # - namespaceSelector: + # matchLabels: + # project: my-project + +# To put the admin interface behind Identity Aware Proxy (IAP) on Google Cloud Platform +# make sure to set ingress.paths: ['/*'] +iap: + enabled: false + # Create Google Cloud OAuth credentials and set client id and secret + # clientId: + # clientSecret: diff --git a/charts/jfrog/artifactory-ha/107.90.15/.helmignore b/charts/jfrog/artifactory-ha/107.90.15/.helmignore new file mode 100644 index 000000000..b6e97f07f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/.helmignore @@ -0,0 +1,24 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +OWNERS + +tests/ \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/CHANGELOG.md b/charts/jfrog/artifactory-ha/107.90.15/CHANGELOG.md new file mode 100644 index 000000000..1370bd3ff --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/CHANGELOG.md @@ -0,0 +1,1466 @@ +# JFrog Artifactory-ha Chart Changelog +All changes to this chart will be documented in this file. 
+ +## [107.90.15] - July 18, 2024 +* Fixed #adding colon in image registry which breaks deployment [GH-1892](https://github.com/jfrog/charts/pull/1892) +* Added new `nginx.hosts` to use Nginx server_name directive instead of `ingress.hosts` +* Added a deprecation notice of ingress.hosts when `ngnix.enabled` is true +* Added new evidence service +* Corrected database connection values based on sizing +* **IMPORTANT** +* Separate access from artifactory tomcat to run on its own dedicated tomcat + * With this change access will be running in its own dedicated container + * This will give the ability to control resources and java options specific to access + Can be done by passing the following, + `access.javaOpts.other` + `access.resources` + `access.extraEnvironmentVariables` +* Updating the example link for downloading the DB driver +* Added Binary Provider recommendations + +## [107.89.0] - May 30, 2024 +* Fix the indentation of the commented-out sections in the values.yaml file + +## [107.88.0] - May 29, 2024 +* **IMPORTANT** +* Refactored `nginx.artifactoryConf` and `nginx.mainConf` configuration (moved to files/nginx-artifactory-conf.yaml and files/nginx-main-conf.yaml instead of keys in values.yaml) + +## [107.87.0] - May 29, 2024 +* Renamed `.Values.artifactory.openMetrics` to `.Values.artifactory.metrics` +* Align all liveness and readiness probes (Removed hard-coded values) + +## [107.85.0] - May 29, 2024 +* Changed `migration.enabled` to false by default. For 6.x to 7.x migration, this flag needs to be set to `true` + +## [107.84.0] - May 29, 2024 +* Added image section for `initContainers` instead of `initContainerImage` +* Renamed `router.image.imagePullPolicy` to `router.image.pullPolicy` +* Removed loggers.image section +* Added support for `global.verisons.initContainers` to override `initContainers.image.tag` +* Fixed an issue with extraSystemYaml merge +* **IMPORTANT** +* Renamed `artifactory.setSecurityContext` to `artifactory.podSecurityContext` +* Renamed `artifactory.uid` to `artifactory.podSecurityContext.runAsUser` +* Renamed `artifactory.gid` to `artifactory.podSecurityContext.runAsGroup` and `artifactory.podSecurityContext.fsGroup` +* Renamed `artifactory.fsGroupChangePolicy` to `artifactory.podSecurityContext.fsGroupChangePolicy` +* Renamed `artifactory.seLinuxOptions` to `artifactory.podSecurityContext.seLinuxOptions` +* Added flag `allowNonPostgresql` defaults to false +* Update postgresql tag version to `15.6.0-debian-12-r5` +* Added a check if `initContainerImage` exists +* Fixed a wrong imagePullPolicy configuration +* Fixed an issue to generate unified secret to support artifactory fullname [GH-1882](https://github.com/jfrog/charts/issues/1882) +* Fixed an issue template render on loggers [GH-1883](https://github.com/jfrog/charts/issues/1883) +* Override metadata and observability image tag with `global.verisons.artifactory` value +* Fixed resource constraints for "setup" initContainer of nginx deployment [GH-962] (https://github.com/jfrog/charts/issues/962) +* Added .Values.artifactory.unifiedSecretsPrependReleaseName` for unified secret to prepend release name +* Fixed maxCacheSize and cacheProviderDir mix up under azure-blob-storage-v2-direct template in binarystore.xml + +## [107.83.0] - Mar 12, 2024 +* Added image section for `metadata` and `observability` + +## [107.82.0] - Mar 04, 2024 +* Added `disableRouterBypass` flag as experimental feature, to disable the artifactoryPath /artifactory/ and route all traffic through the Router. 
+* Removed Replicator Service + +## [107.81.0] - Feb 20, 2024 +* **IMPORTANT** +* Refactored systemYaml configuration (moved to files/system.yaml instead of key in values.yaml) +* Added ability to provide `extraSystemYaml` configuration in values.yaml which will merge with the existing system yaml when `systemYamlOverride` is not given [GH-1848](https://github.com/jfrog/charts/pull/1848) +* Added option to modify the new cache configs, maxFileSizeLimit and skipDuringUpload +* Added IPV4/IPV6 Dualstack flag support for Artifactory and nginx service +* Added `singleStackIPv6Cluster` flag, which manages the Nginx configuration to enable listening on IPv6 and proxying +* Fixing broken link for creating additional kubernetes resources. Refer [here](https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-ha-values.yaml) +* Refactored installerInfo configuration (moved to files/installer-info.json instead of key in values.yaml) + +## [107.80.0] - Feb 20, 2024 +* Updated README.md to create a namespace using `--create-namespace` as part of helm install + +## [107.79.0] - Feb 20, 2024 +* **IMPORTANT** +* Added `unifiedSecretInstallation` flag which enables single unified secret holding all internal (chart) secrets to `true` by default +* Added support for azure-blob-storage-v2-direct config +* Added option to set Nginx to write access_log to container STDOUT +* **Important change:** +* Update postgresql tag version to `15.2.0-debian-11-r23` +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default bundles PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x/13.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true + +## [107.77.0] - April 22, 2024 +* Removed integration service +* Added recommended postgresql sizing configurations under sizing directory +* Updated artifactory-federation (probes, port, embedded mode) +* **IMPORTANT** +* setSecurityContext has been renamed to podSecurityContext. 
+* Moved podSecurityContext to values.yaml +* Fixing broken nginx port [GH-1860](https://github.com/jfrog/charts/issues/1860) +* Added nginx.customCommand to use custom commands for the nginx container + +## [107.76.0] - Dec 13, 2023 +* Added connectionTimeout and socketTimeout paramaters under AWSS3 binarystore section +* Reduced nginx startupProbe initialDelaySeconds + +## [107.74.0] - Nov 30, 2023 +* Added recommended sizing configurations under sizing directory, please refer [here](README.md/#apply-sizing-configurations-to-the-chart) +* **IMPORTANT** +* Added min kubeVersion ">= 1.19.0-0" in chart.yaml + +## [107.70.0] - Nov 30, 2023 +* Fixed - StatefulSet pod annotations changed from range to toYaml [GH-1828](https://github.com/jfrog/charts/issues/1828) +* Fixed - Invalid format for awsS3V3 `multiPartLimit,multipartElementSize` in binarystore.xml +* Fixed - Artifactory primary service condition +* Fixed - SecurityContext with runAsGroup in artifactory-ha [GH-1838](https://github.com/jfrog/charts/issues/1838) +* Added support for custom labels in the Nginx pods [GH-1836](https://github.com/jfrog/charts/pull/1836) +* Added podSecurityContext and containerSecurityContext for nginx +* Added support for nginx on openshift, set `podSecurityContext` and `containerSecurityContext` to false +* Renamed nginx internalPort 80,443 to 8080,8443 to support openshift + +## [107.69.0] - Sep 18, 2023 +* Adjust rtfs context +* Fixed - Metadata service does not respect customVolumeMounts for DB CAs [GH-1815](https://github.com/jfrog/charts/issues/1815) + +## [107.68.8] - Sep 18, 2023 +* Reverted - Enabled `unifiedSecretInstallation` by default [GH-1819](https://github.com/jfrog/charts/issues/1819) +* Removed unused `artifactory.javaOpts` from values.yaml +* Removed openshift condition check from NOTES.txt +* Fixed an issue with artifactory node replicaCount [GH-1808](https://github.com/jfrog/charts/issues/1808) + +## [107.68.7] - Aug 28, 2023 +* Enabled `unifiedSecretInstallation` by default +* Removed unused `artifactory.javaOpts` from values.yaml + +## [107.67.0] - Aug 28, 2023 +* Add 'extraJavaOpts' and 'port' values to federation service + +## [107.66.0] - Aug 28, 2023 +* Added federation service container in artifactory +* Add rtfs service to ingress in artifactory + +## [107.64.0] - Aug 28,2023 +* Added support to configure event.webhooks within generated system.yaml +* Fixed an issue to generate ssl certificate should support artifactory-ha fullname +* Added 'multiPartLimit' and 'multipartElementSize' parameters to awsS3V3 binary providers. +* Increased default Artifactory Tomcat acceptCount config to 400 +* Fixed Illegal Strict-Transport-Security header in nginx config + +## [107.63.0] - Aug 28, 2023 +* Added support for Openshift by adding the securityContext in container level. +* **IMPORTANT** +* Disable securityContext in container and pod level to deploy postgres on openshift. +* Fixed support for fsGroup in non openshift environment and runAsGroup in openshift environment. 
+* Fixed - Helm Template Error when using artifactory.loggers [GH-1791](https://github.com/jfrog/charts/issues/1791) +* Removed the nginx disable condition for openshift +* Fixed jfconnect disabling as micro-service on splitcontainers [GH-1806](https://github.com/jfrog/charts/issues/1806) + +## [107.62.0] - Jun 5, 2023 +* Added support for 'port' and 'useHttp' parameters for s3-storage-v3 binary provider [GH-1767](https://github.com/jfrog/charts/issues/1767) + +## [107.61.0] - May 31, 2023 +* Added new binary provider `google-storage-v2-direct` + +## [107.60.0] - May 31, 2023 +* Enabled `splitServicesToContainers` to true by default +* Updated the recommended values for small, medium and large installations to support the 'splitServicesToContainers' + +## [107.59.0] - May 31, 2023 +* Fixed reference of `terminationGracePeriodSeconds` +* **Breaking change** +* Updated the defaults of replicaCount (Values.artifactory.primary.replicaCount and Values.artifactory.node.replicaCount) to support Cloud-Native High Availability. Refer [Cloud-Native High Availability](https://jfrog.com/help/r/jfrog-installation-setup-documentation/cloud-native-high-availability) +* Updated the values of the recommended resources - values-small, values-medium and values-large according to the Cloud-Native HA support. +* **IMPORTANT** +* In the absence of custom parameters for primary.replicaCount and node.replicaCount on your deployment, it is recommended to specify the current values explicitly to prevent any undesired changes to the deployment structure. +* Please be advised that the configuration for resources allocation (requests, limits, javaOpts, affinity rules, etc) will now be applied solely under Values.artifactory.primary when using the new defaults. +* **Upgrade** +* Upgrade from primary-members to primary-only is recommended, and can be done by deploy the chart with the new values. +* During the upgrade, members pods should be deleted and new primary pods should be created. This might trigger the creation of new PVCs. 
+* Added Support for Cold Artifact Storage as part of the systemYaml configuration (disabled by default) +* Added new binary provider `s3-storage-v3-archive` +* Fixed jfconnect disabling as micro-service on non-splitcontainers +* Fixed an issue whereby, Artifactory failed to start when using persistence storage type `nfs` due to missing binarystore.xml + + +## [107.58.0] - Mar 23, 2023 +* Updated postgresql multi-arch tag version to `13.10.0-debian-11-r14` +* Removed obselete remove-lost-found initContainer` +* Added env JF_SHARED_NODE_HAENABLED under frontend when running in the container split mode + +## [107.57.0] - Mar 02, 2023 +* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1793` + +## [107.55.0] - Feb 21, 2023 +* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1760` +* Adding a custom preStop to Artifactory router for allowing graceful termination to complete +* Fixed an invalid reference of node selector on artifactory-ha chart + +## [107.53.0] - Jan 20, 2023 +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049` + +## [107.50.0] - Jan 20, 2023 +* Updated postgresql tag version to `13.9.0-debian-11-r11` +* Fixed make lint issue on artifactory-ha chart [GH-1714](https://github.com/jfrog/charts/issues/1714) +* Fixed an issue for capabilities check of ingress +* Updated jfrogUrl text path in migrate.sh file +* Added a note that from 107.46.x chart versions, `copyOnEveryStartup` is not needed for binarystore.xml, it is always copied via initContainers. For more Info, Refer [GH-1723](https://github.com/jfrog/charts/issues/1723) + +## [107.49.0] - Jan 16, 2023 +* Changed logic in wait-for-primary container to use /dev/tcp instead of curl +* Added support for setting `seLinuxOptions` in `securityContext` [GH-1700](https://github.com/jfrog/charts/pull/1700) +* Added option to enable/disable proxy_request_buffering and proxy_buffering_off [GH-1686](https://github.com/jfrog/charts/pull/1686) +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049` + +## [107.48.0] - Oct 27, 2022 +* Updated router version to `7.51.0` + +## [107.47.0] - Sep 29, 2022 +* Updated initContainerImage to `ubi8/ubi-minimal:8.6-941` +* Added support for annotations for artifactory statefulset and nginx deployment [GH-1665](https://github.com/jfrog/charts/pull/1665) +* Updated router version to `7.49.0` + +## [107.46.0] - Sep 14, 2022 +* **IMPORTANT** +* Added support for lifecycle hooks for all containers, changed `artifactory.postStartCommand` to `.Values.artifactory.lifecycle.postStart.exec.command` +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.6-902` +* Update nginx configuration to allow websocket requests when using pipelines +* Fixed an issue to allow artifactory to make direct API calls to store instead via jfconnect service when `splitServicesToContainers=true` +* Refactor binarystore.xml configuration (moved to `files/binarystore.xml` instead of key in values.yaml) +* Added new binary providers `s3-storage-v3-direct`, `azure-blob-storage-direct`, `google-storage-v2` +* Deprecated (removed) `aws-s3` binary provider [JetS3t library](https://www.jfrog.com/confluence/display/JFROG/Configuring+the+Filestore#ConfiguringtheFilestore-BinaryProvider) +* Deprecated (removed) `google-storage` binary provider and force persistence storage type `google-storage` to work with `google-storage-v2` only +* Copy binarystore.xml in init Container to fix existing persistence on file system in clear text +* Removed 
obselete `.Values.artifactory.binarystore.enabled` key +* Removed `newProbes.enabled`, default to new probes +* Added nginx.customCommand using inotifyd to reload nginx's config upon ssl secret or configmap changes [GH-1640](https://github.com/jfrog/charts/pull/1640) + +## [107.43.0] - Aug 25, 2022 +* Added flag `artifactory.replicator.ingress.enabled` to enable/disable ingress for replicator +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.6-854` +* Updated router version to `7.45.0` +* Added flag `artifactory.schedulerName` to set for the pods the value of schedulerName field [GH-1606](https://github.com/jfrog/charts/issues/1606) +* Enabled TLS based on access or router in values.yaml + +## [107.42.0] - Aug 25, 2022 +* Enabled database creds secret to use from unified secret +* Updated router version to `7.42.0` +* Added support to truncate (> 63 chars) for unifiedCustomSecretVolumeName + +## [107.41.0] - June 27, 2022 +* Added support for nginx.terminationGracePeriodSeconds [GH-1645](https://github.com/jfrog/charts/issues/1645) +* Fix nginx lifecycle values [GH-1646](https://github.com/jfrog/charts/pull/1646) +* Use an alternate command for `find` to copy custom certificates +* Added support for circle of trust using `circleOfTrustCertificatesSecret` secret name [GH-1623](https://github.com/jfrog/charts/pull/1623) + +## [107.40.0] - Jun 16, 2022 +* Deprecated k8s PodDisruptionBudget api policy/v1beta1 [GH-1618](https://github.com/jfrog/charts/issues/1618) +* Disabled node PodDisruptionBudget, statefulset and artifactory-primary service from artifactory-ha chart when member nodes are 0 +* From artifactory 7.38.x, joinKey can be retrived from Admin > User Management > Settings in UI +* Fixed template name for artifactory-ha database creds [GH-1602](https://github.com/jfrog/charts/pull/1602) +* Allow templating for pod annotations [GH-1634](https://github.com/jfrog/charts/pull/1634) +* Added flags to control enable/disable infra services in splitServicesToContainers + +## [107.39.0] - May 16, 2022 +* Fix default `artifactory.async.corePoolSize` [GH-1612](https://github.com/jfrog/charts/issues/1612) +* Added support of nginx annotations +* Reduce startupProbe `initialDelaySeconds` +* Align all liveness and readiness probes failureThreshold to `5` seconds +* Added new flag `unifiedSecretInstallation` to enables single unified secret holding all the artifactory-ha secrets +* Updated router version to `7.38.0` + +## [107.38.0] - May 04, 2022 +* Added support for `global.nodeSelector` to artifactory and nginx pods +* Updated router version to `7.36.1` +* Added support for custom global probes timeout +* Updated frontend container command +* Added topologySpreadConstraints to artifactory and nginx, and add lifecycle hooks to nginx [GH-1596](https://github.com/jfrog/charts/pull/1596) +* Added support of extraEnvironmentVariables for all infra services containers +* Enabled the consumption (jfconnect) flag by default +* Fix jfconnect disabling on non-splitcontainers + +## [107.37.0] - Mar 08, 2022 +* Added support for customPorts in nginx deployment +* Bugfix - Wrong proxy_pass configurations for /artifactory/ in the default artifactory.conf +* Added signedUrlExpirySeconds option to artifactory.persistence.type aws-S3-V3 +* Updated router version to `7.35.0` +* Added useInstanceCredentials,enableSignedUrlRedirect option to google-storage-v2 +* Changed dependency charts repo to `charts.jfrog.io` + +## [107.36.0] - Mar 03, 2022 +* Remove pdn tracker which starts replicator 
service +* Added silent option for curl probes +* Added readiness health check for the artifactory container for k8s version < 1.20 +* Fix property file migration issue to system.yaml 6.x to 7.x + +## [107.35.0] - Feb 08, 2022 +* Updated router version to `7.32.1` + +## [107.33.0] - Jan 11, 2022 +* Make default value of anti-affinity to soft +* Readme fixes +* Added support for setting `fsGroupChangePolicy` +* Added nginx customInitContainers, customVolumes, customSidecarContainers [GH-1565](https://github.com/jfrog/charts/pull/1565) +* Updated router version to `7.30.0` + +## [107.32.0] - Dec 23, 2021 +* Updated logger image to `jfrog/ubi-minimal:8.5-204` +* Added default `8091` as `artifactory.tomcat.maintenanceConnector.port` for probes check +* Refactored probes to replace httpGet probes with basic exec + curl +* Refactored `database-creds` secret to create only when database values are passed +* Added new endpoints for probes `/artifactory/api/v1/system/liveness` and `/artifactory/api/v1/system/readiness` +* Enabled `newProbes:true` by default to use these endpoints +* Fix filebeat sidecar spool file permissions +* Updated filebeat sidecar container to `7.16.2` + +## [107.31.0] - Dec 17, 2021 +* Remove integration service feature flag to make it mandatory service +* Update postgresql tag version to `13.4.0-debian-10-r39` +* Refactored `router.requiredServiceTypes` to support platform chart + +## [107.30.0] - Nov 30, 2021 +* Fixed incorrect permission for filebeat.yaml +* Updated healthcheck (liveness/readiness) api for integration service +* Disable readiness health check for the artifactory container when running in the container split mode +* Ability to start replicator on enabling pdn tracker + +## [107.29.0] - Nov 30, 2021 +* Added integration service container in artifactory +* Add support for Ingress Class Name in Ingress Spec [GH-1516](https://github.com/jfrog/charts/pull/1516) +* Fixed chart values to use curl instead of wget [GH-1529](https://github.com/jfrog/charts/issues/1529) +* Updated nginx config to allow websockets when pipelines is enabled +* Moved router.topology.local.requireqservicetypes from system.yaml to router as environment variable +* Added jfconnect in system.yaml +* Updated artifactory container’s health probes to use artifactory api on rt-split +* Updated initContainerImage to `jfrog/ubi-minimal:8.5-204` +* Updated router version to `7.28.2` +* Set Jfconnect enabled to `false` in the artifactory container when running in the container split mode + +## [107.28.0] - Nov 11, 2021 +* Added default values cpu and memeory in initContainers +* Updated router version to `7.26.0` +* Bug fix - jmx port not exposed in artifactory service +* Updated (`rbac.create` and `serviceAccount.create` to false by default) for least privileges +* Fixed incorrect data type for `Values.router.serviceRegistry.insecure` in default values.yaml [GH-1514](https://github.com/jfrog/charts/pull/1514/files) +* **IMPORTANT** +* Changed init-container images from `alpine` to `ubi8/ubi-minimal` +* Added support for AWS License Manager using `.Values.aws.licenseConfigSecretName` + +## [107.27.0] - Oct 6, 2021 +* **Breaking change** +* Aligned probe structure (moved probes variables under config block) +* Added support for new probes(set to false by default) +* Bugfix - Invalid format for `multiPartLimit,multipartElementSize,maxCacheSize` in binarystore.xml [GH-1466](https://github.com/jfrog/charts/issues/1466) +* Added missioncontrol container in artifactory +* Dropped NET_RAW capability for 
the containers +* Added resources to migration-artifactory init container +* Added resources to all rt split containers +* Updated router version to `7.25.1` +* Added support for Ingress networking.k8s.io/v1/Ingress for k8s >=1.22 [GH-1487](https://github.com/jfrog/charts/pull/1487) +* Added min kubeVersion ">= 1.14.0-0" in chart.yaml +* Update alpine tag version to `3.14.2` +* Update busybox tag version to `1.33.1` +* Update postgresql tag version to `13.4.0-debian-10-r39` + +## [107.26.0] - Aug 20, 2021 +* Added Observability container (only when `splitServicesToContainers` is enabled) +* Added min kubeVersion ">= 1.12.0-0" in chart.yaml + +## [107.25.0] - Aug 13, 2021 +* Updated readme of chart to point to wiki. Refer [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory) +* Added startupProbe and livenessProbe for RT-split containers +* Updated router version to 7.24.1 +* Added security hardening fixes +* Enabled startup probes for k8s >= 1.20.x +* Changed network policy to allow all ingress and egress traffic +* Added Observability changes +* Added support for global.versions.router (only when `splitServicesToContainers` is enabled) + +## [107.24.0] - July 27, 2021 +* Support global and product specific tags at the same time +* Added support for artifactory containers split + +## [107.23.0] - July 8, 2021 +* Bug fix - logger sideCar picks up Wrong File in helm +* Allow filebeat metrics configuration in values.yaml + +## [107.22.0] - July 6, 2021 +* Update alpine tag version to `3.14.0` +* Added `nodePort` support to artifactory-service and nginx-service templates +* Removed redundant `terminationGracePeriodSeconds` in statefulset +* Increased `startupProbe.failureThreshold` time + +## [107.21.3] - July 2, 2021 +* Added ability to change sendreasonphrase value in server.xml via system yaml + +## [107.19.3] - May 20, 2021 +* Fix broken support for startupProbe for k8s < 1.18.x +* Removed an extraneous resources block from the prepare-custom-persistent-volume container in the primary statefulset +* Added support for `nameOverride` and `fullnameOverride` in values.yaml + +## [107.18.6] - May 4, 2021 +* Removed `JF_SHARED_NODE_PRIMARY` env to support for Cloud Native HA +* Bumping chart version to align with app version +* Add `securityContext` option on nginx container + +## [5.0.0] - April 22, 2021 +* **Breaking change:** +* Increased default postgresql persistence size to `200Gi` +* Update postgresql tag version to `13.2.0-debian-10-r55` +* Update postgresql chart version to `10.3.18` in chart.yaml - [10.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#to-1000) +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! 
+* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true +* **IMPORTANT** +* This chart is only helm v3 compatible +* Fix support for Cloud Native HA +* Fixed filebeat-configmap naming +* Explicitly set ServiceAccount `automountServiceAccountToken` to 'true' +* Update alpine tag version to `3.13.5` + +## [4.13.2] - April 15, 2021 +* Updated Artifactory version to 7.17.9 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.9) + +## [4.13.1] - April 6, 2021 +* Updated Artifactory version to 7.17.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.6) +* Update alpine tag version to `3.13.4` + +## [4.13.0] - April 5, 2021 +* **IMPORTANT** +* Added `charts.jfrog.io` as default JFrog Helm repository +* Updated Artifactory version to 7.17.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.5) + +## [4.12.2] - Mar 31, 2021 +* Updated Artifactory version to 7.17.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.4) + +## [4.12.1] - Mar 30, 2021 +* Updated Artifactory version to 7.17.3 +* Add `timeoutSeconds` to all exec probes - Please refer [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) + +## [4.12.0] - Mar 24, 2021 +* Updated Artifactory version to 7.17.2 +* Optimized startupProbe time + +## [4.11.0] - Mar 18, 2021 +* Add support to startupProbe + +## [4.10.0] - Mar 15, 2021 +* Updated Artifactory version to 7.16.3 + +## [4.9.5] - Mar 09, 2021 +* Added HSTS header to nginx conf + +## [4.9.4] - Mar 9, 2021 +* Removed bintray URL references in the chart + +## [4.9.3] - Mar 04, 2021 +* Updated Artifactory version to 7.15.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.4) + +## [4.9.2] - Mar 04, 2021 +* Fixed creation of nginx-certificate-secret when Nginx is disabled + +## [4.9.1] - Feb 19, 2021 +* Update busybox tag version to `1.32.1` + +## [4.9.0] - Feb 18, 2021 +* Updated Artifactory version to 7.15.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.3) +* Add option to specify update strategy for Artifactory statefulset + +## [4.8.1] - Feb 11, 2021 +* Exposed "multiPartLimit" and "multipartElementSize" for the Azure Blob Storage Binary Provider + +## [4.8.0] - Feb 08, 2021 +* Updated Artifactory version to 7.12.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.8) +* Support for custom certificates using secrets +* **Important:** Switched docker images download from `docker.bintray.io` to `releases-docker.jfrog.io` +* Update alpine tag version to `3.13.1` + +## [4.7.9] - Feb 3, 2021 +* Fix copyOnEveryStartup for HA cluster license + +## [4.7.8] - Jan 25, 2021 +* Add support for hostAliases + +## [4.7.7] - Jan 11, 2021 +* Fix failures when using creds file for configurating google storage + +## [4.7.6] - Jan 11, 2021 +* Updated Artifactory version to 7.12.6 - [Release 
Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.6) + +## [4.7.5] - Jan 07, 2021 +* Added support for optional tracker dedicated ingress `.Values.artifactory.replicator.trackerIngress.enabled` (defaults to false) + +## [4.7.4] - Jan 04, 2021 +* Fixed gid support for statefulset + +## [4.7.3] - Dec 31, 2020 +* Added gid support for statefulset +* Add setSecurityContext flag to allow securityContext block to be removed from artifactory statefulset + +## [4.7.2] - Dec 29, 2020 +* **Important:** Removed `.Values.metrics` and `.Values.fluentd` (Fluentd and Prometheus integrations) +* Add support for creating additional kubernetes resources - [refer here](https://github.com/jfrog/log-analytics-prometheus/blob/master/artifactory-ha-values.yaml) +* Updated Artifactory version to 7.12.5 + +## [4.7.1] - Dec 21, 2020 +* Updated Artifactory version to 7.12.3 + +## [4.7.0] - Dec 18, 2020 +* Updated Artifactory version to 7.12.2 +* Added `.Values.artifactory.openMetrics.enabled` + +## [4.6.1] - Dec 11, 2020 +* Added configurable `.Values.global.versions.artifactory` in values.yaml + +## [4.6.0] - Dec 10, 2020 +* Update postgresql tag version to `12.5.0-debian-10-r25` +* Fixed `artifactory.persistence.googleStorage.endpoint` from `storage.googleapis.com` to `commondatastorage.googleapis.com` +* Updated chart maintainers email + +## [4.5.5] - Dec 4, 2020 +* **Important:** Renamed `.Values.systemYaml` to `.Values.systemYamlOverride` + +## [4.5.4] - Dec 1, 2020 +* Improve error message returned when attempting helm upgrade command + +## [4.5.3] - Nov 30, 2020 +* Updated Artifactory version to 7.11.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) + +# [4.5.2] - Nov 23, 2020 +* Updated Artifactory version to 7.11.2 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) +* Updated port namings on services and pods to allow for istio protocol discovery +* Change semverCompare checks to support hosted Kubernetes +* Add flag to disable creation of ServiceMonitor when enabling prometheus metrics +* Prevent the PostHook command to be executed if the user did not specify a command in the values file +* Fix issue with tls file generation when nginx.https.enabled is false + +## [4.5.1] - Nov 19, 2020 +* Updated Artifactory version to 7.11.2 +* Bugfix - access.config.import.xml override Access Federation configurations + +## [4.5.0] - Nov 17, 2020 +* Updated Artifactory version to 7.11.1 +* Update alpine tag version to `3.12.1` + +## [4.4.6] - Nov 10, 2020 +* Pass system.yaml via external secret for advanced usecases +* Added support for custom ingress +* Bugfix - stateful set not picking up changes to database secrets + +## [4.4.5] - Nov 9, 2020 +* Updated Artifactory version to 7.10.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.6) + +## [4.4.4] - Nov 2, 2020 +* Add enablePathStyleAccess property for aws-s3-v3 binary provider template + +## [4.4.3] - Nov 2, 2020 +* Updated Artifactory version to 7.10.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.5) + +## [4.4.2] - Oct 22, 2020 +* Chown bug fix where Linux capability cannot chown all files causing log line warnings +* Fix Frontend timeout linting issue + +## [4.4.1] - 
Oct 20, 2020 +* Add flag to disable prepare-custom-persistent-volume init container + +## [4.4.0] - Oct 19, 2020 +* Updated Artifactory version to 7.10.2 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.2) + +## [4.3.4] - Oct 19, 2020 +* Add support to specify priorityClassName for nginx deployment + +## [4.3.3] - Oct 15, 2020 +* Fixed issue with node PodDisruptionBudget which was also getting applied on the primary +* Fix mandatory masterKey check issue when upgrading from 6.x to 7.x + +## [4.3.2] - Oct 14, 2020 +* Add support to allow more than 1 Primary in Artifactory-ha STS + +## [4.3.1] - Oct 9, 2020 +* Add global support for customInitContainersBegin + +## [4.3.0] - Oct 07, 2020 +* Updated Artifactory version to 7.9.1 +* **Breaking change:** Fix `storageClass` to correct `storageClassName` in values.yaml + +## [4.2.0] - Oct 5, 2020 +* Expose Prometheus metrics via a ServiceMonitor +* Parse log files for metric data with Fluentd + +## [4.1.0] - Sep 30, 2020 +* Updated Artifactory version to 7.9.0 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.9) + +## [4.0.12] - Sep 25, 2020 +* Update to use linux capability CAP_CHOWN instead of a root-based init container to avoid any use of root containers to pass Redhat security requirements + +## [4.0.11] - Sep 28, 2020 +* Setting chart coordinates in mitigation yaml + +## [4.0.10] - Sep 25, 2020 +* Update filebeat version to `7.9.2` + +## [4.0.9] - Sep 24, 2020 +* Fixed broken issue - when setting `waitForDatabase:false` container startup still waits for DB + +## [4.0.8] - Sep 22, 2020 +* Updated readme + +## [4.0.7] - Sep 22, 2020 +* Fix lint issue in mitigation yaml + +## [4.0.6] - Sep 22, 2020 +* Fix broken mitigation yaml + +## [4.0.5] - Sep 21, 2020 +* Added mitigation yaml for Artifactory - [More info](https://github.com/jfrog/chartcenter/blob/master/docs/securitymitigationspec.md) + +## [4.0.4] - Sep 17, 2020 +* Added configurable session (UI) timeout in frontend microservice + +## [4.0.3] - Sep 17, 2020 +* Fix small typo in README and added proper required text to be shown while postgres upgrades + +## [4.0.2] - Sep 14, 2020 +* Updated Artifactory version to 7.7.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7.8) + +## [4.0.1] - Sep 8, 2020 +* Added support for artifactory pro license (single node) installation. + +## [4.0.0] - Sep 2, 2020 +* **Breaking change:** Changed `imagePullSecrets` value from string to list +* **Breaking change:** Added `image.registry` and changed `image.version` to `image.tag` for docker images +* Added support for global values +* Updated maintainers in chart.yaml +* Update postgresql tag version to `12.3.0-debian-10-r71` +* Update postgresql sub-chart version to `9.3.4` - [9.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#900) +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x's postgresql.image.tag and databaseUpgradeReady=true. 
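For reference, the PostgreSQL-related upgrade notes above translate into extra values on the `helm upgrade` command line. A minimal sketch only; the image tag shown is taken from the 3.0.0 entry further down and must be replaced with whatever PostgreSQL tag your existing deployment is actually running:

```bash
# Hypothetical upgrade that keeps the bundled PostgreSQL (postgresql.enabled=true):
# pin the previously deployed PostgreSQL image tag and confirm the database is ready to be upgraded.
helm upgrade artifactory-ha jfrog/artifactory-ha \
  --set postgresql.image.tag=9.6.18-debian-10-r7 \
  --set databaseUpgradeReady=true
```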
+ +## [3.1.0] - Aug 13, 2020 +* Updated Artifactory version to 7.7.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7) + +## [3.0.15] - Aug 10, 2020 +* Added enableSignedUrlRedirect for persistent storage type aws-s3-v3. + +## [3.0.14] - Jul 31, 2020 +* Update the README section on Nginx SSL termination to reflect the actual YAML structure. + +## [3.0.13] - Jul 30, 2020 +* Added condition to disable the migration scripts. + +## [3.0.12] - Jul 29, 2020 +* Document Artifactory node affinity. + +## [3.0.11] - Jul 28, 2020 +* Added maxConnections for persistent storage type aws-s3-v3. + +## [3.0.10] - Jul 28, 2020 +Bugfix / support for userPluginSecrets with Artifactory 7 + +## [3.0.9] - Jul 27, 2020 +* Add tpl to external database secrets. +* Modified `scheme` to `artifactory-ha.scheme` + +## [3.0.8] - Jul 23, 2020 +* Added condition to disable the migration init container. + +## [3.0.7] - Jul 21, 2020 +* Updated Artifactory-ha Chart to add node and primary labels to pods and service objects. + +## [3.0.6] - Jul 20, 2020 +* Support custom CA and certificates + +## [3.0.5] - Jul 13, 2020 +* Updated Artifactory version to 7.6.3 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.3 +* Fixed Mysql database jar path in `preStartCommand` in README + +## [3.0.4] - Jul 8, 2020 +* Move some postgresql values to where they should be according to the subchart + +## [3.0.3] - Jul 8, 2020 +* Set Artifactory access client connections to the same value as the access threads. + +## [3.0.2] - Jul 6, 2020 +* Updated Artifactory version to 7.6.2 +* **IMPORTANT** +* Added ChartCenter Helm repository in README + +## [3.0.1] - Jul 01, 2020 +* Add dedicated ingress object for Replicator service when enabled + +## [3.0.0] - Jun 30, 2020 +* Update postgresql tag version to `10.13.0-debian-10-r38` +* Update alpine tag version to `3.12` +* Update busybox tag version to `1.31.1` +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass postgresql.image.tag=9.6.18-debian-10-r7 and databaseUpgradeReady=true + +## [2.6.0] - Jun 29, 2020 +* Updated Artifactory version to 7.6.1 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.1 +* Add tpl for external database secrets + +## [2.5.8] - Jun 25, 2020 +* Stop loading the Nginx stream module because it is now a core module + +## [2.5.7] - Jun 18, 2020 +* Fixes bootstrap configMap issue on member node + +## [2.5.6] - Jun 11, 2020 +* Support list of custom secrets + +## [2.5.5] - Jun 11, 2020 +* NOTES.txt fixed incorrect information + +## [2.5.4] - Jun 12, 2020 +* Updated Artifactory version to 7.5.7 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5.7 + +## [2.5.3] - Jun 8, 2020 +* Statically setting primary service type to ClusterIP. +* Prevents primary service from being exposed publicly when using LoadBalancer type on cloud providers. 
+ +## [2.5.2] - Jun 8, 2020 +* Readme update - configuring Artifactory with oracledb + +## [2.5.1] - Jun 5, 2020 +* Fixes broken PDB issue upgrading from 6.x to 7.x + +## [2.5.0] - Jun 1, 2020 +* Updated Artifactory version to 7.5.5 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5 +* Fixes bootstrap configMap permission issue +* Update postgresql tag version to `9.6.18-debian-10-r7` + +## [2.4.10] - May 27, 2020 +* Added Tomcat maxThreads & acceptCount + +## [2.4.9] - May 25, 2020 +* Fixed postgresql README `image` Parameters + +## [2.4.8] - May 24, 2020 +* Fixed typo in README regarding migration timeout + +## [2.4.7] - May 19, 2020 +* Added metadata maxOpenConnections + +## [2.4.6] - May 07, 2020 +* Fix `installerInfo` string format + +## [2.4.5] - Apr 27, 2020 +* Updated Artifactory version to 7.4.3 + +## [2.4.4] - Apr 27, 2020 +* Change customInitContainers order to run before the "migration-ha-artifactory" initContainer + +## [2.4.3] - Apr 24, 2020 +* Fix `artifactory.persistence.awsS3V3.useInstanceCredentials` incorrect conditional logic +* Bump postgresql tag version to `9.6.17-debian-10-r72` in values.yaml + +## [2.4.2] - Apr 16, 2020 +* Custom volume mounts in migration init container. + +## [2.4.1] - Apr 16, 2020 +* Fix broken support for gcpServiceAccount for googleStorage + +## [2.4.0] - Apr 14, 2020 +* Updated Artifactory version to 7.4.1 + +## [2.3.1] - April 13, 2020 +* Update README with helm v3 commands + +## [2.3.0] - April 10, 2020 +* Use dependency charts from `https://charts.bitnami.com/bitnami` +* Bump postgresql chart version to `8.7.3` in requirements.yaml +* Bump postgresql tag version to `9.6.17-debian-10-r21` in values.yaml + +## [2.2.11] - Apr 8, 2020 +* Added recommended ingress annotation to avoid 413 errors + +## [2.2.10] - Apr 8, 2020 +* Moved migration scripts under `files` directory +* Support preStartCommand in migration Init container as `artifactory.migration.preStartCommand` + +## [2.2.9] - Apr 01, 2020 +* Support masterKey and joinKey as secrets + +## [2.2.8] - Apr 01, 2020 +* Ensure that the join key is also copied when provided by an external secret +* Migration container in primary and node statefulset now respects custom versions and the specified node/primary resources + +## [2.2.7] - Apr 01, 2020 +* Added cache-layer in chain definition of Google Cloud Storage template +* Fix readme use to `-hex 32` instead of `-hex 16` + +## [2.2.6] - Mar 31, 2020 +* Change the way the artifactory `command:` is set so it will properly pass a SIGTERM to java + +## [2.2.5] - Mar 31, 2020 +* Removed duplicate `artifactory-license` volume from primary node + +## [2.2.4] - Mar 31, 2020 +* Restore `artifactory-license` volume for the primary node + +## [2.2.3] - Mar 29, 2020 +* Add Nginx log options: stderr as logfile and log level + +## [2.2.2] - Mar 30, 2020 +* Apply initContainers.resources to `copy-system-yaml`, `prepare-custom-persistent-volume`, and `migration-artifactory-ha` containers +* Use the same defaulting mechanism used for the artifactory version used elsewhere in the chart +* Removed duplicate `artifactory-license` volume that prevented using an external secret + +## [2.2.1] - Mar 29, 2020 +* Fix loggers sidecars configurations to support new file system layout and new log names + +## [2.2.0] - Mar 29, 2020 +* Fix broken admin user bootstrap configuration +* **Breaking change:** renamed `artifactory.accessAdmin` to `artifactory.admin` + +## [2.1.3] - Mar 24, 2020 +* Use 
`postgresqlExtendedConf` for setting custom PostgreSQL configuration (instead of `postgresqlConfiguration`) + +## [2.1.2] - Mar 21, 2020 +* Support for SSL offload in Nginx service(LoadBalancer) layer. Introduced `nginx.service.ssloffload` field with boolean type. + +## [2.1.1] - Mar 23, 2020 +* Moved installer info to values.yaml so it is fully customizable + +## [2.1.0] - Mar 23, 2020 +* Updated Artifactory version to 7.3.2 + +## [2.0.36] - Mar 20, 2020 +* Add support GCP credentials.json authentication + +## [2.0.35] - Mar 20, 2020 +* Add support for masterKey trim during 6.x to 7.x migration if 6.x masterKey is 32 hex (64 characters) + +## [2.0.34] - Mar 19, 2020 +* Add support for NFS directories `haBackupDir` and `haDataDir` + +## [2.0.33] - Mar 18, 2020 +* Increased Nginx proxy_buffers size + +## [2.0.32] - Mar 17, 2020 +* Changed all single quotes to double quotes in values files +* useInstanceCredentials variable was declared in S3 settings but not used in chart. Now it is being used. + +## [2.0.31] - Mar 17, 2020 +* Fix rendering of Service Account annotations + +## [2.0.30] - Mar 16, 2020 +* Add Unsupported message from 6.18 to 7.2.x (migration) + +## [2.0.29] - Mar 11, 2020 +* Upgrade Docs update + +## [2.0.28] - Mar 11, 2020 +* Unified charts public release + +## [2.0.27] - Mar 8, 2020 +* Add an optional wait for primary node to be ready with a proper test for http status + +## [2.0.23] - Mar 6, 2020 +* Fix path to `/artifactory_bootstrap` +* Add support for controlling the name of the ingress and allow to set more than one cname + +## [2.0.22] - Mar 4, 2020 +* Add support for disabling `consoleLog` in `system.yaml` file + +## [2.0.21] - Feb 28, 2020 +* Add support to process `valueFrom` for extraEnvironmentVariables + +## [2.0.20] - Feb 26, 2020 +* Store join key to secret + +## [2.0.19] - Feb 26, 2020 +* Updated Artifactory version to 7.2.1 + +## [2.0.12] - Feb 07, 2020 +* Remove protection flag `databaseUpgradeReady` which was added to check internal postgres upgrade + +## [2.0.0] - Feb 07, 2020 +* Updated Artifactory version to 7.0.0 + +## [1.4.10] - Feb 13, 2020 +* Add support for SSH authentication to Artifactory + +## [1.4.9] - Feb 10, 2020 +* Fix custom DB password indention + +## [1.4.8] - Feb 9, 2020 +* Add support for `tpl` in the `postStartCommand` + +## [1.4.7] - Feb 4, 2020 +* Support customisable Nginx kind + +## [1.4.6] - Feb 2, 2020 +* Add a comment stating that it is recommended to use an external PostgreSQL with a static password for production installations + +## [1.4.5] - Feb 2, 2020 +* Add support for primary or member node specific preStartCommand + +## [1.4.4] - Jan 30, 2020 +* Add the option to configure resources for the logger containers + +## [1.4.3] - Jan 26, 2020 +* Improve `database.user` and `database.password` logic in order to support more use cases and make the configuration less repetitive + +## [1.4.2] - Jan 22, 2020 +* Refined pod disruption budgets to separate nginx and Artifactory pods + +## [1.4.1] - Jan 19, 2020 +* Fix replicator port config in nginx replicator configmap + +## [1.4.0] - Jan 19, 2020 +* Updated Artifactory version to 6.17.0 + +## [1.3.8] - Jan 16, 2020 +* Added example for external nginx-ingress + +## [1.3.7] - Jan 07, 2020 +* Add support for customizable `mountOptions` of NFS PVs + +## [1.3.6] - Dec 30, 2019 +* Fix for nginx probes failing when launched with http disabled + +## [1.3.5] - Dec 24, 2019 +* Better support for custom `artifactory.internalPort` + +## [1.3.4] - Dec 23, 2019 +* Mark empty map values with `{}` 
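Several of the entries above expose simple values that can be toggled without editing values.yaml, for example `nginx.service.ssloffload` introduced in 2.1.2. A minimal sketch of flipping such a flag on an existing release, assuming the release and chart names used in the examples elsewhere in this changelog:

```bash
# Hypothetical: enable SSL offload on the Nginx LoadBalancer service,
# keeping all other values as currently deployed.
helm upgrade artifactory-ha jfrog/artifactory-ha \
  --reuse-values \
  --set nginx.service.ssloffload=true
```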
+ +## [1.3.3] - Dec 16, 2019 +* Another fix for toggling nginx service ports + +## [1.3.2] - Dec 12, 2019 +* Fix for toggling nginx service ports + +## [1.3.1] - Dec 10, 2019 +* Add support for toggling nginx service ports + +## [1.3.0] - Dec 1, 2019 +* Updated Artifactory version to 6.16.0 + +## [1.2.4] - Nov 28, 2019 +* Add support for using existing PriorityClass + +## [1.2.3] - Nov 27, 2019 +* Add support for PriorityClass + +## [1.2.2] - Nov 20, 2019 +* Update Artifactory logo + +## [1.2.1] - Nov 18, 2019 +* Add the option to provide service account annotations (in order to support stuff like https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html) + +## [1.2.0] - Nov 18, 2019 +* Updated Artifactory version to 6.15.0 + +## [1.1.12] - Nov 17, 2019 +* Fix `README.md` format (broken table) + +## [1.1.11] - Nov 17, 2019 +* Update comment on Artifactory master key + +## [1.1.10] - Nov 17, 2019 +* Fix creation of double slash in nginx artifactory configuration + +## [1.1.9] - Nov 14, 2019 +* Set explicit `postgresql.postgresqlPassword=""` to avoid helm v3 error + +## [1.1.8] - Nov 12, 2019 +* Updated Artifactory version to 6.14.1 + +## [1.1.7] - Nov 11, 2019 +* Additional documentation for masterKey + +## [1.1.6] - Nov 10, 2019 +* Update PostgreSQL chart version to 7.0.1 +* Use formal PostgreSQL configuration format + +## [1.1.5] - Nov 8, 2019 +* Add support `artifactory.service.loadBalancerSourceRanges` for whitelisting when setting `artifactory.service.type=LoadBalancer` + +## [1.1.4] - Nov 6, 2019 +* Add support for any type of environment variable by using `extraEnvironmentVariables` as-is + +## [1.1.3] - Nov 6, 2019 +* Add nodeselector support for Postgresql + +## [1.1.2] - Nov 5, 2019 +* Add support for the aws-s3-v3 filestore, which adds support for pod IAM roles + +## [1.1.1] - Nov 4, 2019 +* When using `copyOnEveryStartup`, make sure that the target base directories are created before copying the files + +## [1.1.0] - Nov 3, 2019 +* Updated Artifactory version to 6.14.0 + +## [1.0.1] - Nov 3, 2019 +* Make sure the artifactory pod exits when one of the pre-start stages fail + +## [1.0.0] - Oct 27, 2019 +**IMPORTANT - BREAKING CHANGES!**
+**DOWNTIME MIGHT BE REQUIRED FOR AN UPGRADE!** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), must use the upgrade instructions in [UPGRADE_NOTES.md](UPGRADE_NOTES.md)! +* PostgreSQL sub chart was upgraded to version `6.5.x`. This version is **not backward compatible** with the old version (`0.9.5`)! +* Note the following **PostgreSQL** Helm chart changes + * The chart configuration has changed! See [values.yaml](values.yaml) for the new keys used + * **PostgreSQL** is deployed as a StatefulSet + * See [PostgreSQL helm chart](https://hub.helm.sh/charts/stable/postgresql) for all available configurations + +## [0.17.3] - Oct 24, 2019 +* Change the preStartCommand to support templating + +## [0.17.2] - Oct 21, 2019 +* Add support for setting `artifactory.primary.labels` +* Add support for setting `artifactory.node.labels` +* Add support for setting `nginx.labels` + +## [0.17.1] - Oct 10, 2019 +* Updated Artifactory version to 6.13.1 + +## [0.17.0] - Oct 7, 2019 +* Updated Artifactory version to 6.13.0 + +## [0.16.7] - Sep 24, 2019 +* Option to skip wait-for-db init container with '--set waitForDatabase=false' + +## [0.16.6] - Sep 24, 2019 +* Add support for setting `nginx.service.labels` + +## [0.16.5] - Sep 23, 2019 +* Add support for setting `artifactory.customInitContainersBegin` + +## [0.16.4] - Sep 20, 2019 +* Add support for setting `initContainers.resources` + +## [0.16.3] - Sep 11, 2019 +* Updated Artifactory version to 6.12.2 + +## [0.16.2] - Sep 9, 2019 +* Updated Artifactory version to 6.12.1 + +## [0.16.1] - Aug 22, 2019 +* Fix the nginx server_name directive used with ingress.hosts + +## [0.16.0] - Aug 21, 2019 +* Updated Artifactory version to 6.12.0 + +## [0.15.15] - Aug 18, 2019 +* Fix existingSharedClaim permissions issue and example + +## [0.15.14] - Aug 14, 2019 +* Updated Artifactory version to 6.11.6 + +## [0.15.13] - Aug 11, 2019 +* Fix Ingress routing and add an example + +## [0.15.12] - Aug 6, 2019 +* Do not mount `access/etc/bootstrap.creds` unless user specifies a custom password or secret (Access already generates a random password if not provided one) +* If custom `bootstrap.creds` is provided (using keys or custom secret), prepare it with an init container so the temp file does not persist + +## [0.15.11] - Aug 5, 2019 +* Improve binarystore config + 1. Convert to a secret + 2. Move config to values.yaml + 3. 
Support an external secret + +## [0.15.10] - Aug 5, 2019 +* Don't create the nginx configmaps when nginx.enabled is false + +## [0.15.9] - Aug 1, 2019 +* Fix masterkey/masterKeySecretName not specified warning render logic in NOTES.txt + +## [0.15.8] - Jul 28, 2019 +* Simplify nginx setup and shorten initial wait for probes + +## [0.15.7] - Jul 25, 2019 +* Updated README about how to apply Artifactory licenses + +## [0.15.6] - Jul 22, 2019 +* Change Ingress API to be compatible with recent kubernetes versions + +## [0.15.5] - Jul 22, 2019 +* Updated Artifactory version to 6.11.3 + +## [0.15.4] - Jul 11, 2019 +* Add `artifactory.customVolumeMounts` support to member node statefulset template + +## [0.15.3] - Jul 11, 2019 +* Add ingress.hosts to the Nginx server_name directive when ingress is enabled to help with Docker repository sub domain configuration + +## [0.15.2] - Jul 3, 2019 +* Add the option for changing nginx config using values.yaml and remove outdated reverse proxy documentation + +## [0.15.1] - Jul 1, 2019 +* Updated Artifactory version to 6.11.1 + +## [0.15.0] - Jun 27, 2019 +* Updated Artifactory version to 6.11.0 and Restart Primary node when bootstrap.creds file has been modified in artifactory-ha + +## [0.14.4] - Jun 24, 2019 +* Add the option to provide an IP for the access-admin endpoints + +## [0.14.3] - Jun 24, 2019 +* Update chart maintainers + +## [0.14.2] - Jun 24, 2019 +* Change Nginx to point to the artifactory externalPort + +## [0.14.1] - Jun 23, 2019 +* Add values files for small, medium and large installations + +## [0.14.0] - Jun 20, 2019 +* Use ConfigMaps for nginx configuration and remove nginx postStart command + +## [0.13.10] - Jun 19, 2019 +* Updated Artifactory version to 6.10.4 + +## [0.13.9] - Jun 18, 2019 +* Add the option to provide additional ingress rules + +## [0.13.8] - Jun 14, 2019 +* Updated readme with improved external database setup example + +## [0.13.7] - Jun 6, 2019 +* Updated Artifactory version to 6.10.3 +* Updated installer-info template + +## [0.13.6] - Jun 6, 2019 +* Updated Google Cloud Storage API URL and https settings + +## [0.13.5] - Jun 5, 2019 +* Delete the db.properties file on Artifactory startup + +## [0.13.4] - Jun 3, 2019 +* Updated Artifactory version to 6.10.2 + +## [0.13.3] - May 21, 2019 +* Updated Artifactory version to 6.10.1 + +## [0.13.2] - May 19, 2019 +* Fix missing logger image tag + +## [0.13.1] - May 15, 2019 +* Support `artifactory.persistence.cacheProviderDir` for on-premise cluster + +## [0.13.0] - May 7, 2019 +* Updated Artifactory version to 6.10.0 + +## [0.12.23] - May 5, 2019 +* Add support for setting `artifactory.async.corePoolSize` + +## [0.12.22] - May 2, 2019 +* Remove unused property `artifactory.releasebundle.feature.enabled` + +## [0.12.21] - Apr 30, 2019 +* Add support for JMX monitoring + +## [0.12.20] - Apr29, 2019 +* Added support for headless services + +## [0.12.19] - Apr 28, 2019 +* Added support for `cacheProviderDir` + +## [0.12.18] - Apr 18, 2019 +* Changing API StatefulSet version to `v1` and permission fix for custom `artifactory.conf` for Nginx + +## [0.12.17] - Apr 16, 2019 +* Updated documentation for Reverse Proxy Configuration + +## [0.12.16] - Apr 12, 2019 +* Added support for `customVolumeMounts` + +## [0.12.15] - Aprl 12, 2019 +* Added support for `bucketExists` flag for googleStorage + +## [0.12.14] - Apr 11, 2019 +* Replace `curl` examples with `wget` due to the new base image + +## [0.12.13] - Aprl 07, 2019 +* Add support for providing the Artifactory license as a 
parameter + +## [0.12.12] - Apr 10, 2019 +* Updated Artifactory version to 6.9.1 + +## [0.12.11] - Aprl 04, 2019 +* Add support for templated extraEnvironmentVariables + +## [0.12.10] - Aprl 07, 2019 +* Change network policy API group + +## [0.12.9] - Aprl 04, 2019 +* Apply the existing PVC for members (in addition to primary) + +## [0.12.8] - Aprl 03, 2019 +* Bugfix for userPluginSecrets + +## [0.12.7] - Apr 4, 2019 +* Add information about upgrading Artifactory with auto-generated postgres password + +## [0.12.6] - Aprl 03, 2019 +* Added installer info + +## [0.12.5] - Aprl 03, 2019 +* Allow secret names for user plugins to contain template language + +## [0.12.4] - Apr 02, 2019 +* Fix issue #253 (use existing PVC for data and backup storage) + +## [0.12.3] - Apr 02, 2019 +* Allow NetworkPolicy configurations (defaults to allow all) + +## [0.12.2] - Aprl 01, 2019 +* Add support for user plugin secret + +## [0.12.1] - Mar 26, 2019 +* Add the option to copy a list of files to ARTIFACTORY_HOME on startup + +## [0.12.0] - Mar 26, 2019 +* Updated Artifactory version to 6.9.0 + +## [0.11.18] - Mar 25, 2019 +* Add CI tests for persistence, ingress support and nginx + +## [0.11.17] - Mar 22, 2019 +* Add the option to change the default access-admin password + +## [0.11.16] - Mar 22, 2019 +* Added support for `.Probe.path` to customise the paths used for health probes + +## [0.11.15] - Mar 21, 2019 +* Added support for `artifactory.customSidecarContainers` to create custom sidecar containers +* Added support for `artifactory.customVolumes` to create custom volumes + +## [0.11.14] - Mar 21, 2019 +* Make ingress path configurable + +## [0.11.13] - Mar 19, 2019 +* Move the copy of bootstrap config from postStart to preStart for Primary + +## [0.11.12] - Mar 19, 2019 +* Fix existingClaim example + +## [0.11.11] - Mar 18, 2019 +* Disable the option to use nginx PVC with more than one replica + +## [0.11.10] - Mar 15, 2019 +* Wait for nginx configuration file before using it + +## [0.11.9] - Mar 15, 2019 +* Revert securityContext changes since they were causing issues + +## [0.11.8] - Mar 15, 2019 +* Fix issue #247 (init container failing to run) + +## [0.11.7] - Mar 14, 2019 +* Updated Artifactory version to 6.8.7 + +## [0.11.6] - Mar 13, 2019 +* Move securityContext to container level + +## [0.11.5] - Mar 11, 2019 +* Add the option to use existing volume claims for Artifactory storage + +## [0.11.4] - Mar 11, 2019 +* Updated Artifactory version to 6.8.6 + +## [0.11.3] - Mar 5, 2019 +* Updated Artifactory version to 6.8.4 + +## [0.11.2] - Mar 4, 2019 +* Add support for catalina logs sidecars + +## [0.11.1] - Feb 27, 2019 +* Updated Artifactory version to 6.8.3 + +## [0.11.0] - Feb 25, 2019 +* Add nginx support for tail sidecars + +## [0.10.3] - Feb 21, 2019 +* Add s3AwsVersion option to awsS3 configuration for use with IAM roles + +## [0.10.2] - Feb 19, 2019 +* Updated Artifactory version to 6.8.2 + +## [0.10.1] - Feb 17, 2019 +* Updated Artifactory version to 6.8.1 +* Add example of `SERVER_XML_EXTRA_CONNECTOR` usage + +## [0.10.0] - Feb 15, 2019 +* Updated Artifactory version to 6.8.0 + +## [0.9.7] - Feb 13, 2019 +* Updated Artifactory version to 6.7.3 + +## [0.9.6] - Feb 7, 2019 +* Add support for tail sidecars to view logs from k8s api + +## [0.9.5] - Feb 6, 2019 +* Fix support for customizing statefulset `terminationGracePeriodSeconds` + +## [0.9.4] - Feb 5, 2019 +* Add support for customizing statefulset `terminationGracePeriodSeconds` + +## [0.9.3] - Feb 5, 2019 +* Remove the inactive server 
remove plugin + +## [0.9.2] - Feb 3, 2019 +* Updated Artifactory version to 6.7.2 + +## [0.9.1] - Jan 27, 2019 +* Fix support for Azure Blob Storage Binary provider + +## [0.9.0] - Jan 23, 2019 +* Updated Artifactory version to 6.7.0 + +## [0.8.10] - Jan 22, 2019 +* Added support for `artifactory.customInitContainers` to create custom init containers + +## [0.8.9] - Jan 18, 2019 +* Added support of values ingress.labels + +## [0.8.8] - Jan 16, 2019 +* Mount replicator.yaml (config) directly to /replicator_extra_conf + +## [0.8.7] - Jan 15, 2018 +* Add support for Azure Blob Storage Binary provider + +## [0.8.6] - Jan 13, 2019 +* Fix documentation about nginx group id + +## [0.8.5] - Jan 13, 2019 +* Updated Artifactory version to 6.6.5 + +## [0.8.4] - Jan 8, 2019 +* Make artifactory.replicator.publicUrl required when the replicator is enabled + +## [0.8.3] - Jan 1, 2019 +* Updated Artifactory version to 6.6.3 +* Add support for `artifactory.extraEnvironmentVariables` to pass more environment variables to Artifactory + +## [0.8.2] - Dec 28, 2018 +* Fix location `replicator.yaml` is copied to + +## [0.8.1] - Dec 27, 2018 +* Updated Artifactory version to 6.6.1 + +## [0.8.0] - Dec 20, 2018 +* Updated Artifactory version to 6.6.0 + +## [0.7.17] - Dec 17, 2018 +* Updated Artifactory version to 6.5.13 + +## [0.7.16] - Dec 12, 2018 +* Fix documentation about Artifactory license setup using secret + +## [0.7.15] - Dec 9, 2018 +* AWS S3 add `roleName` for using IAM role + +## [0.7.14] - Dec 6, 2018 +* AWS S3 `identity` and `credential` are now added only if have a value to allow using IAM role + +## [0.7.13] - Dec 5, 2018 +* Remove Distribution certificates creation. + +## [0.7.12] - Dec 2, 2018 +* Remove Java option "-Dartifactory.locking.provider.type=db". This is already the default setting. 
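Entry 0.8.3 above adds `artifactory.extraEnvironmentVariables`. A minimal sketch of passing extra environment variables through a values override file, assuming the list follows the standard Kubernetes name/value env format; the variable shown is purely illustrative, not a real Artifactory setting:

```bash
# Hypothetical values override consumed with -f.
cat > extra-env-values.yaml <<'EOF'
artifactory:
  extraEnvironmentVariables:
    - name: MY_EXAMPLE_FLAG   # placeholder name for illustration only
      value: "true"
EOF

helm upgrade --install artifactory-ha jfrog/artifactory-ha -f extra-env-values.yaml
```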
+ +## [0.7.11] - Nov 30, 2018 +* Updated Artifactory version to 6.5.9 + +## [0.7.10] - Nov 29, 2018 +* Fixed the volumeMount for the replicator.yaml + +## [0.7.9] - Nov 29, 2018 +* Optionally include primary node into poddisruptionbudget + +## [0.7.8] - Nov 29, 2018 +* Updated postgresql version to 9.6.11 + +## [0.7.7] - Nov 27, 2018 +* Updated Artifactory version to 6.5.8 + +## [0.7.6] - Nov 18, 2018 +* Added support for configMap to use custom Reverse Proxy Configuration with Nginx + +## [0.7.5] - Nov 14, 2018 +* Updated Artifactory version to 6.5.3 + +## [0.7.4] - Nov 13, 2018 +* Allow pod anti-affinity settings to include primary node + +## [0.7.3] - Nov 12, 2018 +* Support artifactory.preStartCommand for running command before entrypoint starts + +## [0.7.2] - Nov 7, 2018 +* Support database.url parameter (DB_URL) + +## [0.7.1] - Oct 29, 2018 +* Change probes port to 8040 (so they will not be blocked when all tomcat threads on 8081 are exhausted) + +## [0.7.0] - Oct 28, 2018 +* Update postgresql chart to version 0.9.5 to be able and use `postgresConfig` options + +## [0.6.9] - Oct 23, 2018 +* Fix providing external secret for database credentials + +## [0.6.8] - Oct 22, 2018 +* Allow user to configure externalTrafficPolicy for Loadbalancer + +## [0.6.7] - Oct 22, 2018 +* Updated ingress annotation support (with examples) to support docker registry v2 + +## [0.6.6] - Oct 21, 2018 +* Updated Artifactory version to 6.5.2 + +## [0.6.5] - Oct 19, 2018 +* Allow providing pre-existing secret containing master key +* Allow arbitrary annotations on primary and member node pods +* Enforce size limits when using local storage with `emptyDir` +* Allow `soft` or `hard` specification of member node anti-affinity +* Allow providing pre-existing secrets containing external database credentials +* Fix `s3` binary store provider to properly use the `cache-fs` provider +* Allow arbitrary properties when using the `s3` binary store provider + +## [0.6.4] - Oct 18, 2018 +* Updated Artifactory version to 6.5.1 + +## [0.6.3] - Oct 17, 2018 +* Add Apache 2.0 license + +## [0.6.2] - Oct 14, 2018 +* Make S3 endpoint configurable (was hardcoded with `s3.amazonaws.com`) + +## [0.6.1] - Oct 11, 2018 +* Allows ingress default `backend` to be enabled or disabled (defaults to enabled) + +## [0.6.0] - Oct 11, 2018 +* Updated Artifactory version to 6.5.0 + +## [0.5.3] - Oct 9, 2018 +* Quote ingress hosts to support wildcard names + +## [0.5.2] - Oct 2, 2018 +* Add `helm repo add jfrog https://charts.jfrog.io` to README + +## [0.5.1] - Oct 2, 2018 +* Set Artifactory to 6.4.1 + +## [0.5.0] - Sep 27, 2018 +* Set Artifactory to 6.4.0 + +## [0.4.7] - Sep 26, 2018 +* Add ci/test-values.yaml + +## [0.4.6] - Sep 25, 2018 +* Add PodDisruptionBudget for member nodes, defaulting to minAvailable of 1 + +## [0.4.4] - Sep 2, 2018 +* Updated Artifactory version to 6.3.2 + +## [0.4.0] - Aug 22, 2018 +* Added support to run as non root +* Updated Artifactory version to 6.2.0 + +## [0.3.0] - Aug 22, 2018 +* Enabled RBAC Support +* Added support for PostStartCommand (To download Database JDBC connector) +* Increased postgresql max_connections +* Added support for `nginx.conf` ConfigMap +* Updated Artifactory version to 6.1.0 diff --git a/charts/jfrog/artifactory-ha/107.90.15/Chart.lock b/charts/jfrog/artifactory-ha/107.90.15/Chart.lock new file mode 100644 index 000000000..eb9409971 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: postgresql + repository: 
https://charts.jfrog.io/ + version: 10.3.18 +digest: sha256:404ce007353baaf92a6c5f24b249d5b336c232e5fd2c29f8a0e4d0095a09fd53 +generated: "2022-03-08T08:54:51.805126+05:30" diff --git a/charts/jfrog/artifactory-ha/107.90.15/Chart.yaml b/charts/jfrog/artifactory-ha/107.90.15/Chart.yaml new file mode 100644 index 000000000..18acb5eae --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/Chart.yaml @@ -0,0 +1,30 @@ +annotations: + artifactoryServiceVersion: 7.90.21 + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: JFrog Artifactory HA + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-ha +apiVersion: v2 +appVersion: 7.90.15 +dependencies: +- condition: postgresql.enabled + name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 +description: Universal Repository Manager supporting all major packaging formats, + build tools and CI servers. +home: https://www.jfrog.com/artifactory/ +icon: file://assets/icons/artifactory-ha.png +keywords: +- artifactory +- jfrog +- devops +kubeVersion: '>= 1.19.0-0' +maintainers: +- email: installers@jfrog.com + name: Chart Maintainers at JFrog +name: artifactory-ha +sources: +- https://github.com/jfrog/charts +type: application +version: 107.90.15 diff --git a/charts/jfrog/artifactory-ha/107.90.15/LICENSE b/charts/jfrog/artifactory-ha/107.90.15/LICENSE new file mode 100644 index 000000000..8dada3eda --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. 
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/charts/jfrog/artifactory-ha/107.90.15/README.md b/charts/jfrog/artifactory-ha/107.90.15/README.md new file mode 100644 index 000000000..49155926e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/README.md @@ -0,0 +1,69 @@ +# JFrog Artifactory High Availability Helm Chart + +**IMPORTANT!** Our Helm Chart docs have moved to our main documentation site. Below you will find the basic instructions for installing, uninstalling, and deleting Artifactory. For all other information, refer to [Installing Artifactory - Helm HA Installation](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-HelmHAInstallation). + +**Note:** From Artifactory 7.17.4 and above, the Helm HA installation can be installed so that each node you install can run all tasks in the cluster. + +Below you will find the basic instructions for installing, uninstalling, and deleting Artifactory. For all other information, refer to the documentation site. 
+ +## Prerequisites Details + +* Kubernetes 1.19+ +* Artifactory HA license + +## Chart Details +This chart will do the following: + +* Deploy an Artifactory highly available cluster with 1 primary node and 2 member nodes. +* Deploy a PostgreSQL database. **NOTE:** For production-grade installations it is recommended to use an external PostgreSQL +* Deploy an Nginx server + +## Installing the Chart + +### Add JFrog Helm repository + +Before installing JFrog helm charts, you need to add the [JFrog helm repository](https://charts.jfrog.io) to your helm client: + +```bash +helm repo add jfrog https://charts.jfrog.io +``` +Next, create a unique Master Key (Artifactory requires a unique master key) and pass it to the template during installation. +Then, update the repository: + +```bash +helm repo update +``` + +### Install Chart +To install the chart with the release name `artifactory-ha`: +```bash +helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha --create-namespace +``` + +### Apply Sizing configurations to the Chart +To apply the chart with the recommended sizing configurations: +For small configurations: +```bash +helm upgrade --install artifactory-ha jfrog/artifactory-ha -f sizing/artifactory-small-extra-config.yaml -f sizing/artifactory-small.yaml --namespace artifactory-ha --create-namespace +``` + +## Uninstalling Artifactory + +Uninstall is supported only on Helm v3 and above. + +Uninstall Artifactory using the following command: + +```bash +helm uninstall artifactory-ha && sleep 90 && kubectl delete pvc -l app=artifactory-ha +``` + +## Deleting Artifactory + +**IMPORTANT:** Deleting Artifactory will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion. You do not need to uninstall Artifactory before deleting it. + +To delete Artifactory, use the following command: + +```bash +helm delete artifactory-ha --namespace artifactory-ha +``` + diff --git a/charts/jfrog/artifactory-ha/107.90.15/app-readme.md b/charts/jfrog/artifactory-ha/107.90.15/app-readme.md new file mode 100644 index 000000000..a5aa5fd47 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/app-readme.md @@ -0,0 +1,16 @@ +# JFrog Artifactory High Availability Helm Chart + +Universal Repository Manager supporting all major packaging formats, build tools and CI servers. + +## Chart Details +This chart will do the following: + +* Deploy an Artifactory highly available cluster with 1 primary node and 2 member nodes. +* Deploy a PostgreSQL database +* Deploy an Nginx server (optional) + +## Useful links +Blog: [Herd Trust Into Your Rancher Labs Multi-Cloud Strategy with Artifactory](https://jfrog.com/blog/herd-trust-into-your-rancher-labs-multi-cloud-strategy-with-artifactory/) + +## Activate Your Artifactory Instance +Don't have a license? Please send an email to [rancher-jfrog-licenses@jfrog.com](mailto:rancher-jfrog-licenses@jfrog.com) to get it. diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/.helmignore b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/.helmignore new file mode 100644 index 000000000..f0c131944 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/Chart.lock b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/Chart.lock new file mode 100644 index 000000000..3687f52df --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.4.2 +digest: sha256:dce0349883107e3ff103f4f17d3af4ad1ea3c7993551b1c28865867d3e53d37c +generated: "2021-03-30T09:13:28.360322819Z" diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/Chart.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/Chart.yaml new file mode 100644 index 000000000..4b197b207 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/Chart.yaml @@ -0,0 +1,29 @@ +annotations: + category: Database +apiVersion: v2 +appVersion: 11.11.0 +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.x.x +description: Chart for PostgreSQL, an object-relational database management system + (ORDBMS) with an emphasis on extensibility and on standards-compliance. +home: https://github.com/bitnami/charts/tree/master/bitnami/postgresql +icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-220x234.png +keywords: +- postgresql +- postgres +- database +- sql +- replication +- cluster +maintainers: +- email: containers@bitnami.com + name: Bitnami +- email: cedric@desaintmartin.fr + name: desaintmartin +name: postgresql +sources: +- https://github.com/bitnami/bitnami-docker-postgresql +- https://www.postgresql.org/ +version: 10.3.18 diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/README.md b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/README.md new file mode 100644 index 000000000..63d3605bb --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/README.md @@ -0,0 +1,770 @@ +# PostgreSQL + +[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance. + +For HA, please see [this repo](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha) + +## TL;DR + +```console +$ helm repo add bitnami https://charts.bitnami.com/bitnami +$ helm install my-release bitnami/postgresql +``` + +## Introduction + +This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. + +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/). + +## Prerequisites + +- Kubernetes 1.12+ +- Helm 3.1.0 +- PV provisioner support in the underlying infrastructure + +## Installing the Chart +To install the chart with the release name `my-release`: + +```console +$ helm install my-release bitnami/postgresql +``` + +The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation. 
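+
+If you only need to adjust a handful of settings, they can also be overridden directly at install time. The snippet below is a minimal, illustrative sketch that uses parameter names from the [Parameters](#parameters) section; the values themselves are placeholders, not recommendations:
+
+```console
+$ helm install my-release bitnami/postgresql \
+  --set postgresqlUsername=my_user \
+  --set postgresqlDatabase=my_database \
+  --set persistence.size=20Gi
+```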
+ +> **Tip**: List all releases using `helm list` + +## Uninstalling the Chart + +To uninstall/delete the `my-release` deployment: + +```console +$ helm delete my-release +``` + +The command removes all the Kubernetes components but PVC's associated with the chart and deletes the release. + +To delete the PVC's associated with `my-release`: + +```console +$ kubectl delete pvc -l release=my-release +``` + +> **Note**: Deleting the PVC's will delete postgresql data as well. Please be cautious before doing it. + +## Parameters + +The following tables lists the configurable parameters of the PostgreSQL chart and their default values. + +| Parameter | Description | Default | +|-----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------| +| `global.imageRegistry` | Global Docker Image registry | `nil` | +| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` | +| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` | +| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` | +| `global.postgresql.postgresqlPassword` | PostgreSQL admin password (overrides `postgresqlPassword`) | `nil` | +| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` | +| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` | +| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | +| `global.storageClass` | Global storage class for dynamic provisioning | `nil` | +| `image.registry` | PostgreSQL Image registry | `docker.io` | +| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` | +| `image.tag` | PostgreSQL Image tag | `{TAG_NAME}` | +| `image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` | +| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `image.debug` | Specify if debug values should be set | `false` | +| `nameOverride` | String to partially override common.names.fullname template with a string (will prepend the release name) | `nil` | +| `fullnameOverride` | String to fully override common.names.fullname template with a string | `nil` | +| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` | +| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` | +| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` | +| `volumePermissions.image.tag` | Init container volume-permissions image tag | `"10"` | +| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` | +| `volumePermissions.securityContext.*` | 
Other container security context to be included as-is in the container spec | `{}` | +| `volumePermissions.securityContext.runAsUser` | User ID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | `0` | +| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` | +| `ldap.enabled` | Enable LDAP support | `false` | +| `ldap.existingSecret` | Name of existing secret to use for LDAP passwords | `nil` | +| `ldap.url` | LDAP URL beginning in the form `ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]]` | `nil` | +| `ldap.server` | IP address or name of the LDAP server. | `nil` | +| `ldap.port` | Port number on the LDAP server to connect to | `nil` | +| `ldap.scheme` | Set to `ldaps` to use LDAPS. | `nil` | +| `ldap.tls` | Set to `1` to use TLS encryption | `nil` | +| `ldap.prefix` | String to prepend to the user name when forming the DN to bind | `nil` | +| `ldap.suffix` | String to append to the user name when forming the DN to bind | `nil` | +| `ldap.search_attr` | Attribute to match against the user name in the search | `nil` | +| `ldap.search_filter` | The search filter to use when doing search+bind authentication | `nil` | +| `ldap.baseDN` | Root DN to begin the search for the user in | `nil` | +| `ldap.bindDN` | DN of user to bind to LDAP | `nil` | +| `ldap.bind_password` | Password for the user to bind to LDAP | `nil` | +| `replication.enabled` | Enable replication | `false` | +| `replication.user` | Replication user | `repl_user` | +| `replication.password` | Replication user password | `repl_password` | +| `replication.readReplicas` | Number of read replicas replicas | `1` | +| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` | +| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.readReplicas`. | `0` | +| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` | +| `existingSecret` | Name of existing secret to use for PostgreSQL passwords. The secret has to contain the keys `postgresql-password` which is the password for `postgresqlUsername` when it is different of `postgres`, `postgresql-postgres-password` which will override `postgresqlPassword`, `postgresql-replication-password` which will override `replication.password` and `postgresql-ldap-password` which will be used to authenticate on LDAP. The value is evaluated as a template. | `nil` | +| `postgresqlPostgresPassword` | PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`, in which case`postgres` is the admin username). | _random 10 character alphanumeric string_ | +| `postgresqlUsername` | PostgreSQL user (creates a non-admin user when `postgresqlUsername` is not `postgres`) | `postgres` | +| `postgresqlPassword` | PostgreSQL user password | _random 10 character alphanumeric string_ | +| `postgresqlDatabase` | PostgreSQL database | `nil` | +| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) | +| `extraEnv` | Any extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `[]` | +| `extraEnvVarsCM` | Name of a Config Map containing extra environment variables you would like to pass on to the pod. The value is evaluated as a template. 
| `nil` | +| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` | +| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` | +| `postgresqlConfiguration` | Runtime Config Parameters | `nil` | +| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` | +| `pgHbaConfiguration` | Content of pg_hba.conf | `nil (do not create pg_hba.conf)` | +| `postgresqlSharedPreloadLibraries` | Shared preload libraries (comma-separated list) | `pgaudit` | +| `postgresqlMaxConnections` | Maximum total connections | `nil` | +| `postgresqlPostgresConnectionLimit` | Maximum total connections for the postgres user | `nil` | +| `postgresqlDbUserConnectionLimit` | Maximum total connections for the non-admin user | `nil` | +| `postgresqlTcpKeepalivesInterval` | TCP keepalives interval | `nil` | +| `postgresqlTcpKeepalivesIdle` | TCP keepalives idle | `nil` | +| `postgresqlTcpKeepalivesCount` | TCP keepalives count | `nil` | +| `postgresqlStatementTimeout` | Statement timeout | `nil` | +| `postgresqlPghbaRemoveFilters` | Comma-separated list of patterns to remove from the pg_hba.conf file | `nil` | +| `customStartupProbe` | Override default startup probe | `nil` | +| `customLivenessProbe` | Override default liveness probe | `nil` | +| `customReadinessProbe` | Override default readiness probe | `nil` | +| `audit.logHostname` | Add client hostnames to the log file | `false` | +| `audit.logConnections` | Add client log-in operations to the log file | `false` | +| `audit.logDisconnections` | Add client log-outs operations to the log file | `false` | +| `audit.pgAuditLog` | Add operations to log using the pgAudit extension | `nil` | +| `audit.clientMinMessages` | Message log level to share with the user | `nil` | +| `audit.logLinePrefix` | Template string for the log line prefix | `nil` | +| `audit.logTimezone` | Timezone for the log timestamps | `nil` | +| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` | +| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` | +| `initdbScripts` | Dictionary of initdb scripts | `nil` | +| `initdbUser` | PostgreSQL user to execute the .sql and sql.gz scripts | `nil` | +| `initdbPassword` | Password for the user specified in `initdbUser` | `nil` | +| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` | +| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). The value is evaluated as a template. 
| `nil` | +| `service.type` | Kubernetes Service type | `ClusterIP` | +| `service.port` | PostgreSQL port | `5432` | +| `service.nodePort` | Kubernetes Service nodePort | `nil` | +| `service.annotations` | Annotations for PostgreSQL service | `{}` (evaluated as a template) | +| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` | +| `service.loadBalancerSourceRanges` | Address that are allowed when svc is LoadBalancer | `[]` (evaluated as a template) | +| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` | +| `shmVolume.enabled` | Enable emptyDir volume for /dev/shm for primary and read replica(s) Pod(s) | `true` | +| `shmVolume.chmod.enabled` | Run at init chmod 777 of the /dev/shm (ignored if `volumePermissions.enabled` is `false`) | `true` | +| `persistence.enabled` | Enable persistence using PVC | `true` | +| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` | +| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` | +| `persistence.subPath` | Subdirectory of the volume to mount at | `""` | +| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` | +| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` | +| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` | +| `persistence.annotations` | Annotations for the PVC | `{}` | +| `persistence.selector` | Selector to match an existing Persistent Volume (this value is evaluated as a template) | `{}` | +| `commonAnnotations` | Annotations to be added to all deployed resources (rendered as a template) | `{}` | +| `primary.podAffinityPreset` | PostgreSQL primary pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.podAntiAffinityPreset` | PostgreSQL primary pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `primary.nodeAffinityPreset.type` | PostgreSQL primary node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.nodeAffinityPreset.key` | PostgreSQL primary node label key to match Ignored if `primary.affinity` is set. | `""` | +| `primary.nodeAffinityPreset.values` | PostgreSQL primary node label values to match. Ignored if `primary.affinity` is set. 
| `[]` | +| `primary.affinity` | Affinity for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.nodeSelector` | Node labels for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.tolerations` | Tolerations for PostgreSQL primary pods assignment | `[]` (evaluated as a template) | +| `primary.anotations` | Map of annotations to add to the statefulset (postgresql primary) | `{}` | +| `primary.labels` | Map of labels to add to the statefulset (postgresql primary) | `{}` | +| `primary.podAnnotations` | Map of annotations to add to the pods (postgresql primary) | `{}` | +| `primary.podLabels` | Map of labels to add to the pods (postgresql primary) | `{}` | +| `primary.priorityClassName` | Priority Class to use for each pod (postgresql primary) | `nil` | +| `primary.extraInitContainers` | Additional init containers to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumes` | Additional volumes to add to the pods (postgresql primary) | `[]` | +| `primary.sidecars` | Add additional containers to the pod | `[]` | +| `primary.service.type` | Allows using a different service type for primary | `nil` | +| `primary.service.nodePort` | Allows using a different nodePort for primary | `nil` | +| `primary.service.clusterIP` | Allows using a different clusterIP for primary | `nil` | +| `primaryAsStandBy.enabled` | Whether to enable current cluster's primary as standby server of another cluster or not. | `false` | +| `primaryAsStandBy.primaryHost` | The Host of replication primary in the other cluster. | `nil` | +| `primaryAsStandBy.primaryPort ` | The Port of replication primary in the other cluster. | `nil` | +| `readReplicas.podAffinityPreset` | PostgreSQL read only pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.podAntiAffinityPreset` | PostgreSQL read only pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `readReplicas.nodeAffinityPreset.type` | PostgreSQL read only node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.nodeAffinityPreset.key` | PostgreSQL read only node label key to match Ignored if `primary.affinity` is set. | `""` | +| `readReplicas.nodeAffinityPreset.values` | PostgreSQL read only node label values to match. Ignored if `primary.affinity` is set. | `[]` | +| `readReplicas.affinity` | Affinity for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.nodeSelector` | Node labels for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.anotations` | Map of annotations to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.resources` | CPU/Memory resource requests/limits override for readReplicass. Will fallback to `values.resources` if not defined. 
| `{}` | +| `readReplicas.labels` | Map of labels to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.podAnnotations` | Map of annotations to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.podLabels` | Map of labels to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.priorityClassName` | Priority Class to use for each pod (postgresql readReplicas) | `nil` | +| `readReplicas.extraInitContainers` | Additional init containers to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumes` | Additional volumes to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.sidecars` | Add additional containers to the pod | `[]` | +| `readReplicas.service.type` | Allows using a different service type for readReplicas | `nil` | +| `readReplicas.service.nodePort` | Allows using a different nodePort for readReplicas | `nil` | +| `readReplicas.service.clusterIP` | Allows using a different clusterIP for readReplicas | `nil` | +| `readReplicas.persistence.enabled` | Whether to enable readReplicas replicas persistence | `true` | +| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` | +| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` | +| `securityContext.*` | Other pod security context to be included as-is in the pod spec | `{}` | +| `securityContext.enabled` | Enable security context | `true` | +| `securityContext.fsGroup` | Group ID for the pod | `1001` | +| `containerSecurityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `containerSecurityContext.enabled` | Enable container security context | `true` | +| `containerSecurityContext.runAsUser` | User ID for the container | `1001` | +| `serviceAccount.enabled` | Enable service account (Note: Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` | +| `serviceAccount.name` | Name of existing service account | `nil` | +| `networkPolicy.enabled` | Enable NetworkPolicy | `false` | +| `networkPolicy.allowExternal` | Don't require client label for connections | `true` | +| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which ingress traffic could be allowed | `{}` | +| `startupProbe.enabled` | Enable startupProbe | `false` | +| `startupProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `startupProbe.periodSeconds` | How often to perform the probe | 15 | +| `startupProbe.timeoutSeconds` | When the probe times | 5 | +| `startupProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 10 | +| `startupProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1 | +| `livenessProbe.enabled` | Enable livenessProbe | `true` | +| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. 
| 6 | +| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `readinessProbe.enabled` | Enable readinessProbe | `true` | +| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 | +| `readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `tls.enabled` | Enable TLS traffic support | `false` | +| `tls.preferServerCiphers` | Whether to use the server's TLS cipher preferences rather than the client's | `true` | +| `tls.certificatesSecret` | Name of an existing secret that contains the certificates | `nil` | +| `tls.certFilename` | Certificate filename | `""` | +| `tls.certKeyFilename` | Certificate key filename | `""` | +| `tls.certCAFilename` | CA Certificate filename. If provided, PostgreSQL will authenticate TLS/SSL clients by requesting them a certificate. | `nil` | +| `tls.crlFilename` | File containing a Certificate Revocation List | `nil` | +| `metrics.enabled` | Start a prometheus exporter | `false` | +| `metrics.service.type` | Kubernetes Service type | `ClusterIP` | +| `service.clusterIP` | Static clusterIP or None for headless services | `nil` | +| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` | +| `metrics.service.loadBalancerIP` | loadBalancerIP if redis metrics service type is `LoadBalancer` | `nil` | +| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` | +| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` | +| `metrics.serviceMonitor.namespace` | Optional namespace in which to create ServiceMonitor | `nil` | +| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` | +| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` | +| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` | +| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` | +| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as postgresql | +| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. 
| `[]` | +| `metrics.image.registry` | PostgreSQL Exporter Image registry | `docker.io` | +| `metrics.image.repository` | PostgreSQL Exporter Image name | `bitnami/postgres-exporter` | +| `metrics.image.tag` | PostgreSQL Exporter Image tag | `{TAG_NAME}` | +| `metrics.image.pullPolicy` | PostgreSQL Exporter Image pull policy | `IfNotPresent` | +| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `metrics.customMetrics` | Additional custom metrics | `nil` | +| `metrics.extraEnvVars` | Extra environment variables to add to exporter | `{}` (evaluated as a template) | +| `metrics.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `metrics.securityContext.enabled` | Enable security context for metrics | `false` | +| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` | +| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `metrics.readinessProbe.enabled` | would you like a readinessProbe to be enabled | `true` | +| `metrics.readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 | +| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` | +| `psp.create` | Create Pod Security Policy | `false` | +| `rbac.create` | Create Role and RoleBinding (required for PSP to work) | `false` | +| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template). | `nil` | + +Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, + +```console +$ helm install my-release \ + --set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \ + bitnami/postgresql +``` + +The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`. + +> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available. + +Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. 
For example, + +```console +$ helm install my-release -f values.yaml bitnami/postgresql +``` + +> **Tip**: You can use the default [values.yaml](values.yaml) + +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. + +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. + +### Customizing primary and read replica services in a replicated configuration + +At the top level, there is a service object which defines the services for both primary and readReplicas. For deeper customization, there are service objects for both the primary and read types individually. This allows you to override the values in the top level service object so that the primary and read can be of different service types and with different clusterIPs / nodePorts. Also in the case you want the primary and read to be of type nodePort, you will need to set the nodePorts to different values to prevent a collision. The values that are deeper in the primary.service or readReplicas.service objects will take precedence over the top level service object. + +### Change PostgreSQL version + +To modify the PostgreSQL version used in this chart you can specify a [valid image tag](https://hub.docker.com/r/bitnami/postgresql/tags/) using the `image.tag` parameter. For example, `image.tag=X.Y.Z`. This approach is also applicable to other images like exporters. + +### postgresql.conf / pg_hba.conf files as configMap + +This helm chart also supports to customize the whole configuration file. + +Add your custom file to "files/postgresql.conf" in your working directory. This file will be mounted as configMap to the containers and it will be used for configuring the PostgreSQL server. + +Alternatively, you can add additional PostgreSQL configuration parameters using the `postgresqlExtendedConf` parameter as a dict, using camelCase, e.g. {"sharedBuffers": "500MB"}. Alternatively, to replace the entire default configuration use `postgresqlConfiguration`. + +In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the `configurationConfigMap` parameter. Note that this will override the two previous options. + +### Allow settings to be loaded from files other than the default `postgresql.conf` + +If you don't want to provide the whole PostgreSQL configuration file and only specify certain parameters, you can add your extended `.conf` files to "files/conf.d/" in your working directory. +Those files will be mounted as configMap to the containers adding/overwriting the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`. + +Alternatively, you can also set an external ConfigMap with all the extra configuration files. This is done by setting the `extendedConfConfigMap` parameter. Note that this will override the previous option. + +### Initialize a fresh instance + +The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. 
In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap. + +Alternatively, you can specify custom scripts using the `initdbScripts` parameter as dict. + +In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter. + +The allowed extensions are `.sh`, `.sql` and `.sql.gz`. + +### Securing traffic using TLS + +TLS support can be enabled in the chart by specifying the `tls.` parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the chart: + +- `tls.enabled`: Enable TLS support. Defaults to `false` +- `tls.certificatesSecret`: Name of an existing secret that contains the certificates. No defaults. +- `tls.certFilename`: Certificate filename. No defaults. +- `tls.certKeyFilename`: Certificate key filename. No defaults. + +For example: + +* First, create the secret with the cetificates files: + + ```console + kubectl create secret generic certificates-tls-secret --from-file=./cert.crt --from-file=./cert.key --from-file=./ca.crt + ``` + +* Then, use the following parameters: + + ```console + volumePermissions.enabled=true + tls.enabled=true + tls.certificatesSecret="certificates-tls-secret" + tls.certFilename="cert.crt" + tls.certKeyFilename="cert.key" + ``` + + > Note TLS and VolumePermissions: PostgreSQL requires certain permissions on sensitive files (such as certificate keys) to start up. Due to an on-going [issue](https://github.com/kubernetes/kubernetes/issues/57923) regarding kubernetes permissions and the use of `containerSecurityContext.runAsUser`, you must enable `volumePermissions` to ensure everything works as expected. + +### Sidecars + +If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec. + +```yaml +# For the PostgreSQL primary +primary: + sidecars: + - name: your-image-name + image: your-image + imagePullPolicy: Always + ports: + - name: portname + containerPort: 1234 +# For the PostgreSQL replicas +readReplicas: + sidecars: + - name: your-image-name + image: your-image + imagePullPolicy: Always + ports: + - name: portname + containerPort: 1234 +``` + +### Metrics + +The chart optionally can start a metrics exporter for [prometheus](https://prometheus.io). The metrics endpoint (port 9187) is not exposed and it is expected that the metrics are collected from inside the k8s cluster using something similar as the described in the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml). + +The exporter allows to create custom metrics from additional SQL queries. See the Chart's `values.yaml` for an example and consult the [exporters documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details. 
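+
+As a starting point, the sketch below shows a values file that enables the exporter together with a ServiceMonitor for the Prometheus operator. It only uses the `metrics.*` parameters documented above; the `release: my-prometheus` label is an illustrative assumption about how your Prometheus installation selects ServiceMonitors and should be adjusted to your setup:
+
+```yaml
+metrics:
+  enabled: true
+  serviceMonitor:
+    enabled: true
+    # Labels your Prometheus operator uses to discover ServiceMonitors
+    # (illustrative value, adjust to your installation)
+    additionalLabels:
+      release: my-prometheus
+```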
+ +### Use of global variables + +In more complex scenarios, we may have the following tree of dependencies + +``` + +--------------+ + | | + +------------+ Chart 1 +-----------+ + | | | | + | --------+------+ | + | | | + | | | + | | | + | | | + v v v ++-------+------+ +--------+------+ +--------+------+ +| | | | | | +| PostgreSQL | | Sub-chart 1 | | Sub-chart 2 | +| | | | | | ++--------------+ +---------------+ +---------------+ +``` + +The three charts below depend on the parent chart Chart 1. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, subcharts 1 and 2 need to know the PostgreSQL credentials, so one option for deploying could be deploy Chart 1 with the following parameters: + +``` +postgresql.postgresqlPassword=testtest +subchart1.postgresql.postgresqlPassword=testtest +subchart2.postgresql.postgresqlPassword=testtest +postgresql.postgresqlDatabase=db1 +subchart1.postgresql.postgresqlDatabase=db1 +subchart2.postgresql.postgresqlDatabase=db1 +``` + +If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows: + +``` +global.postgresql.postgresqlPassword=testtest +global.postgresql.postgresqlDatabase=db1 +``` + +This way, the credentials will be available in all of the subcharts. + +## Persistence + +The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container. + +Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. +See the [Parameters](#parameters) section to configure the PVC or to disable persistence. + +If you already have data in it, you will fail to sync to standby nodes for all commits, details can refer to [code](https://github.com/bitnami/bitnami-docker-postgresql/blob/8725fe1d7d30ebe8d9a16e9175d05f7ad9260c93/9.6/debian-9/rootfs/libpostgresql.sh#L518-L556). If you need to use those data, please covert them to sql and import after `helm install` finished. + +## NetworkPolicy + +To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`. + +For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace: + +```console +$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}" +``` + +With NetworkPolicy enabled, traffic will be limited to just port 5432. + +For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL. +This label will be displayed in the output of a successful install. + +## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image + +- The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, this would be ignored. The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA. 
All the remaining environment variables are specific to the Bitnami PostgreSQL image.
+- The Bitnami PostgreSQL image is non-root by default. This requires that you run the pod with `securityContext` and update the permissions of the volume with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift.
+- For OpenShift, one may either define the runAsUser and fsGroup accordingly, or try this more dynamic option: volumePermissions.securityContext.runAsUser="auto",securityContext.enabled=false,containerSecurityContext.enabled=false,shmVolume.chmod.enabled=false
+
+### Deploy chart using Docker Official PostgreSQL Image
+
+From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image.
+Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. Basically, the PostgreSQL data dir cannot be the mount point directly; it has to be a subdirectory.
+
+```
+image.repository=postgres
+image.tag=10.6
+postgresqlDataDir=/data/pgdata
+persistence.mountPath=/data/
+```
+
+### Setting Pod's affinity
+
+This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod's affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
+
+As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
+
+## Troubleshooting
+
+Find more information about how to deal with common errors related to Bitnami’s Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
+
+## Upgrading
+
+It's necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid randomly generated passwords. Remember to specify the existing values of the `postgresqlPassword` and `replication.password` parameters when upgrading the chart:
+
+```bash
+$ helm upgrade my-release bitnami/postgresql \
+  --set postgresqlPassword=[POSTGRESQL_PASSWORD] \
+  --set replication.password=[REPLICATION_PASSWORD]
+```
+
+> Note: you need to substitute the placeholders _[POSTGRESQL_PASSWORD]_, and _[REPLICATION_PASSWORD]_ with the values obtained from instructions in the installation notes.
+
+### To 10.0.0
+
+[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
+
+**What changes were introduced in this major version?**
+
+- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3); this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
+- Move dependency information from the *requirements.yaml* to the *Chart.yaml* +- After running `helm dependency update`, a *Chart.lock* file is generated containing the same structure used in the previous *requirements.lock* +- The different fields present in the *Chart.yaml* file has been ordered alphabetically in a homogeneous way for all the Bitnami Helm Chart. + +**Considerations when upgrading to this version** + +- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore +- If you installed the previous version with Helm v2 and wants to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3 + +**Useful links** + +- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/ +- https://helm.sh/docs/topics/v2_v3_migration/ +- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/ + +#### Breaking changes + +- The term `master` has been replaced with `primary` and `slave` with `readReplicas` throughout the chart. Role names have changed from `master` and `slave` to `primary` and `read`. + +To upgrade to `10.0.0`, it should be done reusing the PVCs used to hold the PostgreSQL data on your previous release. To do so, follow the instructions below (the following example assumes that the release name is `postgresql`): + +> NOTE: Please, create a backup of your database before running any of those actions. + +Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release: + +```console +$ export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode) +$ export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=postgresql,role=master -o jsonpath="{.items[0].metadata.name}") +``` + +Delete the PostgreSQL statefulset. Notice the option `--cascade=false`: + +```console +$ kubectl delete statefulsets.apps postgresql-postgresql --cascade=false +``` + +Now the upgrade works: + +```console +$ helm upgrade postgresql bitnami/postgresql --set postgresqlPassword=$POSTGRESQL_PASSWORD --set persistence.existingClaim=$POSTGRESQL_PVC +``` + +You will have to delete the existing PostgreSQL pod and the new statefulset is going to create a new one + +```console +$ kubectl delete pod postgresql-postgresql-0 +``` + +Finally, you should see the lines below in PostgreSQL container logs: + +```console +$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}") +... +postgresql 08:05:12.59 INFO ==> Deploying PostgreSQL with persisted data... +... +``` + +### To 9.0.0 + +In this version the chart was adapted to follow the Helm label best practices, see [PR 3021](https://github.com/bitnami/charts/pull/3021). That means the backward compatibility is not guarantee when upgrading the chart to this major version. + +As a workaround, you can delete the existing statefulset (using the `--cascade=false` flag pods are not deleted) before upgrade the chart. 
For example, this can be a valid workflow: + +- Deploy an old version (8.X.X) + +```console +$ helm install postgresql bitnami/postgresql --version 8.10.14 +``` + +- Old version is up and running + +```console +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 1 2020-08-04 13:39:54.783480286 +0000 UTC deployed postgresql-8.10.14 11.8.0 + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 76s +``` + +- The upgrade to the latest one (9.X.X) is going to fail + +```console +$ helm upgrade postgresql bitnami/postgresql +Error: UPGRADE FAILED: cannot patch "postgresql-postgresql" with kind StatefulSet: StatefulSet.apps "postgresql-postgresql" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden +``` + +- Delete the statefulset + +```console +$ kubectl delete statefulsets.apps --cascade=false postgresql-postgresql +statefulset.apps "postgresql-postgresql" deleted +``` + +- Now the upgrade works + +```console +$ helm upgrade postgresql bitnami/postgresql +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 3 2020-08-04 13:42:08.020385884 +0000 UTC deployed postgresql-9.1.2 11.8.0 +``` + +- We can kill the existing pod and the new statefulset is going to create a new one: + +```console +$ kubectl delete pod postgresql-postgresql-0 +pod "postgresql-postgresql-0" deleted + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 19s +``` + +Please, note that without the `--cascade=false` both objects (statefulset and pod) are going to be removed and both objects will be deployed again with the `helm upgrade` command + +### To 8.0.0 + +Prefixes the port names with their protocols to comply with Istio conventions. + +If you depend on the port names in your setup, make sure to update them to reflect this change. + +### To 7.1.0 + +Adds support for LDAP configuration. + +### To 7.0.0 + +Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment). Also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly which has since been fixed to match the spec. + +In https://github.com/helm/charts/pull/17281 the `apiVersion` of the statefulset resources was updated to `apps/v1` in tune with the api's deprecated, resulting in compatibility breakage. + +This major version bump signifies this change. + +### To 6.5.7 + +In this version, the chart will use PostgreSQL with the Postgis extension included. The version used with Postgresql version 10, 11 and 12 is Postgis 2.5. It has been compiled with the following dependencies: + +- protobuf +- protobuf-c +- json-c +- geos +- proj + +### To 5.0.0 + +In this version, the **chart is using PostgreSQL 11 instead of PostgreSQL 10**. You can find the main difference and notable changes in the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/). 
+ +For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades, you can see some errors like the following one in the logs: + +```console +Welcome to the Bitnami postgresql container +Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql +Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues +Send us your feedback at containers@bitnami.com + +INFO ==> ** Starting PostgreSQL setup ** +NFO ==> Validating settings in POSTGRESQL_* env vars.. +INFO ==> Initializing PostgreSQL database... +INFO ==> postgresql.conf file not detected. Generating it... +INFO ==> pg_hba.conf file not detected. Generating it... +INFO ==> Deploying PostgreSQL with persisted data... +INFO ==> Configuring replication parameters +INFO ==> Loading custom scripts... +INFO ==> Enabling remote connections +INFO ==> Stopping PostgreSQL... +INFO ==> ** PostgreSQL setup finished! ** + +INFO ==> ** Starting PostgreSQL ** + [1] FATAL: database files are incompatible with server + [1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3. +``` + +In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) from the official documentation. Basically, create a database dump in the old chart, move and restore it in the new one. + +### To 4.0.0 + +This chart will use by default the Bitnami PostgreSQL container starting from version `10.7.0-r68`. This version moves the initialization logic from node.js to bash. This new version of the chart requires setting the `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users from previous versions of the chart are advised to upgrade immediately. + +IMPORTANT: If you do not want to upgrade the chart version then make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error + +``` +The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development +``` + +### To 3.0.0 + +This releases make it possible to specify different nodeSelector, affinity and tolerations for master and slave pods. +It also fixes an issue with `postgresql.master.fullname` helper template not obeying fullnameOverride. + +#### Breaking changes + +- `affinty` has been renamed to `master.affinity` and `slave.affinity`. +- `tolerations` has been renamed to `master.tolerations` and `slave.tolerations`. +- `nodeSelector` has been renamed to `master.nodeSelector` and `slave.nodeSelector`. + +### To 2.0.0 + +In order to upgrade from the `0.X.X` branch to `1.X.X`, you should follow the below steps: + +- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing postgresql chart. 
You can find the instructions to obtain the password in the NOTES.txt, the service name can be obtained by running + +```console +$ kubectl get svc +``` + +- Install (not upgrade) the new version + +```console +$ helm repo update +$ helm install my-release bitnami/postgresql +``` + +- Connect to the new pod (you can obtain the name by running `kubectl get pods`): + +```console +$ kubectl exec -it NAME bash +``` + +- Once logged in, create a dump file from the previous database using `pg_dump`, for that we should connect to the previous postgresql chart: + +```console +$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql +``` + +After run above command you should be prompted for a password, this password is the previous chart password (`OLD_PASSWORD`). +This operation could take some time depending on the database size. + +- Once you have the backup file, you can restore it with a command like the one below: + +```console +$ psql -U postgres DATABASE_NAME < /tmp/backup.sql +``` + +In this case, you are accessing to the local postgresql, so the password should be the new one (you can find it in NOTES.txt). + +If you want to restore the database and the database schema does not exist, it is necessary to first follow the steps described below. + +```console +$ psql -U postgres +postgres=# drop database DATABASE_NAME; +postgres=# create database DATABASE_NAME; +postgres=# create user USER_NAME; +postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD'; +postgres=# grant all privileges on database DATABASE_NAME to USER_NAME; +postgres=# alter database DATABASE_NAME owner to USER_NAME; +``` diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/.helmignore b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/.helmignore new file mode 100644 index 000000000..50af03172 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/.helmignore @@ -0,0 +1,22 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/Chart.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/Chart.yaml new file mode 100644 index 000000000..bcc3808d0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/Chart.yaml @@ -0,0 +1,23 @@ +annotations: + category: Infrastructure +apiVersion: v2 +appVersion: 1.4.2 +description: A Library Helm Chart for grouping common logic between bitnami charts. + This chart is not deployable by itself. 
+home: https://github.com/bitnami/charts/tree/master/bitnami/common +icon: https://bitnami.com/downloads/logos/bitnami-mark.png +keywords: +- common +- helper +- template +- function +- bitnami +maintainers: +- email: containers@bitnami.com + name: Bitnami +name: common +sources: +- https://github.com/bitnami/charts +- http://www.bitnami.com/ +type: library +version: 1.4.2 diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/README.md b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/README.md new file mode 100644 index 000000000..7287cbb5f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/README.md @@ -0,0 +1,322 @@ +# Bitnami Common Library Chart + +A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between bitnami charts. + +## TL;DR + +```yaml +dependencies: + - name: common + version: 0.x.x + repository: https://charts.bitnami.com/bitnami +``` + +```bash +$ helm dependency update +``` + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "common.names.fullname" . }} +data: + myvalue: "Hello World" +``` + +## Introduction + +This chart provides a common template helpers which can be used to develop new charts using [Helm](https://helm.sh) package manager. + +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications. + +## Prerequisites + +- Kubernetes 1.12+ +- Helm 3.1.0 + +## Parameters + +The following table lists the helpers available in the library which are scoped in different sections. + +### Affinities + +| Helper identifier | Description | Expected Input | +|-------------------------------|------------------------------------------------------|------------------------------------------------| +| `common.affinities.node.soft` | Return a soft nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` | +| `common.affinities.node.hard` | Return a hard nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` | +| `common.affinities.pod.soft` | Return a soft podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` | +| `common.affinities.pod.hard` | Return a hard podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` | + +### Capabilities + +| Helper identifier | Description | Expected Input | +|----------------------------------------------|------------------------------------------------------------------------------------------------|-------------------| +| `common.capabilities.kubeVersion` | Return the target Kubernetes version (using client default if .Values.kubeVersion is not set). | `.` Chart context | +| `common.capabilities.deployment.apiVersion` | Return the appropriate apiVersion for deployment. | `.` Chart context | +| `common.capabilities.statefulset.apiVersion` | Return the appropriate apiVersion for statefulset. | `.` Chart context | +| `common.capabilities.ingress.apiVersion` | Return the appropriate apiVersion for ingress. | `.` Chart context | +| `common.capabilities.rbac.apiVersion` | Return the appropriate apiVersion for RBAC resources. | `.` Chart context | +| `common.capabilities.crd.apiVersion` | Return the appropriate apiVersion for CRDs. 
| `.` Chart context | +| `common.capabilities.supportsHelmVersion` | Returns true if the used Helm version is 3.3+ | `.` Chart context | + +### Errors + +| Helper identifier | Description | Expected Input | +|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| +| `common.errors.upgrade.passwords.empty` | It will ensure required passwords are given when we are upgrading a chart. If `validationErrors` is not empty it will throw an error and will stop the upgrade action. | `dict "validationErrors" (list $validationError00 $validationError01) "context" $` | + +### Images + +| Helper identifier | Description | Expected Input | +|-----------------------------|------------------------------------------------------|---------------------------------------------------------------------------------------------------------| +| `common.images.image` | Return the proper and full image name | `dict "imageRoot" .Values.path.to.the.image "global" $`, see [ImageRoot](#imageroot) for the structure. | +| `common.images.pullSecrets` | Return the proper Docker Image Registry Secret Names | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global` | + +### Ingress + +| Helper identifier | Description | Expected Input | +|--------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.ingress.backend` | Generate a proper Ingress backend entry depending on the API version | `dict "serviceName" "foo" "servicePort" "bar"`, see the [Ingress deprecation notice](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for the syntax differences | + +### Labels + +| Helper identifier | Description | Expected Input | +|-----------------------------|------------------------------------------------------|-------------------| +| `common.labels.standard` | Return Kubernetes standard labels | `.` Chart context | +| `common.labels.matchLabels` | Return the proper Docker Image Registry Secret Names | `.` Chart context | + +### Names + +| Helper identifier | Description | Expected Inpput | +|-------------------------|------------------------------------------------------------|-------------------| +| `common.names.name` | Expand the name of the chart or use `.Values.nameOverride` | `.` Chart context | +| `common.names.fullname` | Create a default fully qualified app name. | `.` Chart context | +| `common.names.chart` | Chart name plus version | `.` Chart context | + +### Secrets + +| Helper identifier | Description | Expected Input | +|---------------------------|--------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.secrets.name` | Generate the name of the secret. | `dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $` see [ExistingSecret](#existingsecret) for the structure. | +| `common.secrets.key` | Generate secret key. 
| `dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName"` see [ExistingSecret](#existingsecret) for the structure. | +| `common.passwords.manage` | Generate secret password or retrieve one if already created. | `dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $`, length, strong and chartNAme fields are optional. | +| `common.secrets.exists` | Returns whether a previous generated secret already exists. | `dict "secret" "secret-name" "context" $` | + +### Storage + +| Helper identifier | Description | Expected Input | +|-------------------------------|---------------------------------------|---------------------------------------------------------------------------------------------------------------------| +| `common.affinities.node.soft` | Return a soft nodeAffinity definition | `dict "persistence" .Values.path.to.the.persistence "global" $`, see [Persistence](#persistence) for the structure. | + +### TplValues + +| Helper identifier | Description | Expected Input | +|---------------------------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.tplvalues.render` | Renders a value that contains template | `dict "value" .Values.path.to.the.Value "context" $`, value is the value should rendered as template, context frequently is the chart context `$` or `.` | + +### Utils + +| Helper identifier | Description | Expected Input | +|--------------------------------|------------------------------------------------------------------------------------------|------------------------------------------------------------------------| +| `common.utils.fieldToEnvVar` | Build environment variable name given a field. | `dict "field" "my-password"` | +| `common.utils.secret.getvalue` | Print instructions to get a secret value. | `dict "secret" "secret-name" "field" "secret-value-field" "context" $` | +| `common.utils.getValueFromKey` | Gets a value from `.Values` object given its key path | `dict "key" "path.to.key" "context" $` | +| `common.utils.getKeyFromList` | Returns first `.Values` key with a defined value or first of the list if all non-defined | `dict "keys" (list "path.to.key1" "path.to.key2") "context" $` | + +### Validations + +| Helper identifier | Description | Expected Input | +|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.validations.values.single.empty` | Validate a value must not be empty. | `dict "valueKey" "path.to.value" "secret" "secret.name" "field" "my-password" "subchart" "subchart" "context" $` secret, field and subchart are optional. In case they are given, the helper will generate a how to get instruction. See [ValidateValue](#validatevalue) | +| `common.validations.values.multiple.empty` | Validate a multiple values must not be empty. It returns a shared error for all the values. | `dict "required" (list $validateValueConf00 $validateValueConf01) "context" $`. 
See [ValidateValue](#validatevalue) | +| `common.validations.values.mariadb.passwords` | This helper will ensure required password for MariaDB are not empty. It returns a shared error for all the values. | `dict "secret" "mariadb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mariadb chart and the helper. | +| `common.validations.values.postgresql.passwords` | This helper will ensure required password for PostgreSQL are not empty. It returns a shared error for all the values. | `dict "secret" "postgresql-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use postgresql chart and the helper. | +| `common.validations.values.redis.passwords` | This helper will ensure required password for RedisTM are not empty. It returns a shared error for all the values. | `dict "secret" "redis-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use redis chart and the helper. | +| `common.validations.values.cassandra.passwords` | This helper will ensure required password for Cassandra are not empty. It returns a shared error for all the values. | `dict "secret" "cassandra-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use cassandra chart and the helper. | +| `common.validations.values.mongodb.passwords` | This helper will ensure required password for MongoDB® are not empty. It returns a shared error for all the values. | `dict "secret" "mongodb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mongodb chart and the helper. | + +### Warnings + +| Helper identifier | Description | Expected Input | +|------------------------------|----------------------------------|------------------------------------------------------------| +| `common.warnings.rollingTag` | Warning about using rolling tag. | `ImageRoot` see [ImageRoot](#imageroot) for the structure. | + +## Special input schemas + +### ImageRoot + +```yaml +registry: + type: string + description: Docker registry where the image is located + example: docker.io + +repository: + type: string + description: Repository and image name + example: bitnami/nginx + +tag: + type: string + description: image tag + example: 1.16.1-debian-10-r63 + +pullPolicy: + type: string + description: Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + +pullSecrets: + type: array + items: + type: string + description: Optionally specify an array of imagePullSecrets. + +debug: + type: boolean + description: Set to true if you would like to see extra information on logs + example: false + +## An instance would be: +# registry: docker.io +# repository: bitnami/nginx +# tag: 1.16.1-debian-10-r63 +# pullPolicy: IfNotPresent +# debug: false +``` + +### Persistence + +```yaml +enabled: + type: boolean + description: Whether enable persistence. + example: true + +storageClass: + type: string + description: Ghost data Persistent Volume Storage Class, If set to "-", storageClassName: "" which disables dynamic provisioning. + example: "-" + +accessMode: + type: string + description: Access mode for the Persistent Volume Storage. + example: ReadWriteOnce + +size: + type: string + description: Size the Persistent Volume Storage. 
+ example: 8Gi + +path: + type: string + description: Path to be persisted. + example: /bitnami + +## An instance would be: +# enabled: true +# storageClass: "-" +# accessMode: ReadWriteOnce +# size: 8Gi +# path: /bitnami +``` + +### ExistingSecret + +```yaml +name: + type: string + description: Name of the existing secret. + example: mySecret +keyMapping: + description: Mapping between the expected key name and the name of the key in the existing secret. + type: object + +## An instance would be: +# name: mySecret +# keyMapping: +# password: myPasswordKey +``` + +#### Example of use + +When we store sensitive data for a deployment in a secret, some times we want to give to users the possibility of using theirs existing secrets. + +```yaml +# templates/secret.yaml +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "common.names.fullname" . }} + labels: + app: {{ include "common.names.fullname" . }} +type: Opaque +data: + password: {{ .Values.password | b64enc | quote }} + +# templates/dpl.yaml +--- +... + env: + - name: PASSWORD + valueFrom: + secretKeyRef: + name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }} + key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }} +... + +# values.yaml +--- +name: mySecret +keyMapping: + password: myPasswordKey +``` + +### ValidateValue + +#### NOTES.txt + +```console +{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}} +{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}} + +{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }} +``` + +If we force those values to be empty we will see some alerts + +```console +$ helm install test mychart --set path.to.value00="",path.to.value01="" + 'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value: + + export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 --decode) + + 'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value: + + export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 --decode) +``` + +## Upgrading + +### To 1.0.0 + +[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project), this major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL. + +**What changes were introduced in this major version?** + +- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field. +- Use `type: library`. [Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information. 
+- The different fields present in the *Chart.yaml* file has been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts + +**Considerations when upgrading to this version** + +- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues +- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore +- If you installed the previous version with Helm v2 and wants to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3 + +**Useful links** + +- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/ +- https://helm.sh/docs/topics/v2_v3_migration/ +- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/ diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_affinities.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_affinities.tpl new file mode 100644 index 000000000..493a6dc7e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_affinities.tpl @@ -0,0 +1,94 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Return a soft nodeAffinity definition +{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes.soft" -}} +preferredDuringSchedulingIgnoredDuringExecution: + - preference: + matchExpressions: + - key: {{ .key }} + operator: In + values: + {{- range .values }} + - {{ . }} + {{- end }} + weight: 1 +{{- end -}} + +{{/* +Return a hard nodeAffinity definition +{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes.hard" -}} +requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: {{ .key }} + operator: In + values: + {{- range .values }} + - {{ . }} + {{- end }} +{{- end -}} + +{{/* +Return a nodeAffinity definition +{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.nodes.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.nodes.hard" . 
-}} + {{- end -}} +{{- end -}} + +{{/* +Return a soft podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.soft" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.soft" -}} +{{- $component := default "" .component -}} +preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ .context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname + weight: 1 +{{- end -}} + +{{/* +Return a hard podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.hard" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.hard" -}} +{{- $component := default "" .component -}} +requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ .context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname +{{- end -}} + +{{/* +Return a podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.pods" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.pods.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.pods.hard" . -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_capabilities.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_capabilities.tpl new file mode 100644 index 000000000..4dde56a38 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_capabilities.tpl @@ -0,0 +1,95 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Return the target Kubernetes version +*/}} +{{- define "common.capabilities.kubeVersion" -}} +{{- if .Values.global }} + {{- if .Values.global.kubeVersion }} + {{- .Values.global.kubeVersion -}} + {{- else }} + {{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} + {{- end -}} +{{- else }} +{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for deployment. +*/}} +{{- define "common.capabilities.deployment.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for statefulset. +*/}} +{{- define "common.capabilities.statefulset.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "apps/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for ingress. +*/}} +{{- define "common.capabilities.ingress.apiVersion" -}} +{{- if .Values.ingress -}} +{{- if .Values.ingress.apiVersion -}} +{{- .Values.ingress.apiVersion -}} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) 
-}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end }} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for RBAC resources. +*/}} +{{- define "common.capabilities.rbac.apiVersion" -}} +{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "rbac.authorization.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "rbac.authorization.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for CRDs. +*/}} +{{- define "common.capabilities.crd.apiVersion" -}} +{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "apiextensions.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "apiextensions.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Returns true if the used Helm version is 3.3+. +A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure. +This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in a "interface not found" error. +**To be removed when the catalog's minimun Helm version is 3.3** +*/}} +{{- define "common.capabilities.supportsHelmVersion" -}} +{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }} + {{- true -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_errors.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_errors.tpl new file mode 100644 index 000000000..a79cc2e32 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_errors.tpl @@ -0,0 +1,23 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Through error when upgrading using empty passwords values that must not be empty. + +Usage: +{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}} +{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}} +{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }} + +Required password params: + - validationErrors - String - Required. List of validation strings to be return, if it is empty it won't throw error. + - context - Context - Required. Parent context. +*/}} +{{- define "common.errors.upgrade.passwords.empty" -}} + {{- $validationErrors := join "" .validationErrors -}} + {{- if and $validationErrors .context.Release.IsUpgrade -}} + {{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release." -}} + {{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims." 
-}} + {{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}} + {{- $errorString = print $errorString "\n%s" -}} + {{- printf $errorString $validationErrors | fail -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_images.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_images.tpl new file mode 100644 index 000000000..60f04fd6e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_images.tpl @@ -0,0 +1,47 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper image name +{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }} +*/}} +{{- define "common.images.image" -}} +{{- $registryName := .imageRoot.registry -}} +{{- $repositoryName := .imageRoot.repository -}} +{{- $tag := .imageRoot.tag | toString -}} +{{- if .global }} + {{- if .global.imageRegistry }} + {{- $registryName = .global.imageRegistry -}} + {{- end -}} +{{- end -}} +{{- if $registryName }} +{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- else -}} +{{- printf "%s:%s" $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }} +*/}} +{{- define "common.images.pullSecrets" -}} + {{- $pullSecrets := list }} + + {{- if .global }} + {{- range .global.imagePullSecrets -}} + {{- $pullSecrets = append $pullSecrets . -}} + {{- end -}} + {{- end -}} + + {{- range .images -}} + {{- range .pullSecrets -}} + {{- $pullSecrets = append $pullSecrets . -}} + {{- end -}} + {{- end -}} + + {{- if (not (empty $pullSecrets)) }} +imagePullSecrets: + {{- range $pullSecrets }} + - name: {{ . }} + {{- end }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_ingress.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_ingress.tpl new file mode 100644 index 000000000..622ef50e3 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_ingress.tpl @@ -0,0 +1,42 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Generate backend entry that is compatible with all Kubernetes API versions. + +Usage: +{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }} + +Params: + - serviceName - String. Name of an existing service backend + - servicePort - String/Int. Port name (or number) of the service. It will be translated to different yaml depending if it is a string or an integer. + - context - Dict - Required. The context for the template evaluation. 
+*/}} +{{- define "common.ingress.backend" -}} +{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}} +{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}} +serviceName: {{ .serviceName }} +servicePort: {{ .servicePort }} +{{- else -}} +service: + name: {{ .serviceName }} + port: + {{- if typeIs "string" .servicePort }} + name: {{ .servicePort }} + {{- else if typeIs "int" .servicePort }} + number: {{ .servicePort }} + {{- end }} +{{- end -}} +{{- end -}} + +{{/* +Print "true" if the API pathType field is supported +Usage: +{{ include "common.ingress.supportsPathType" . }} +*/}} +{{- define "common.ingress.supportsPathType" -}} +{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}} +{{- print "false" -}} +{{- else -}} +{{- print "true" -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_labels.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_labels.tpl new file mode 100644 index 000000000..252066c7e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_labels.tpl @@ -0,0 +1,18 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Kubernetes standard labels +*/}} +{{- define "common.labels.standard" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +helm.sh/chart: {{ include "common.names.chart" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end -}} + +{{/* +Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector +*/}} +{{- define "common.labels.matchLabels" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_names.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_names.tpl new file mode 100644 index 000000000..adf2a74f4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_names.tpl @@ -0,0 +1,32 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "common.names.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "common.names.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "common.names.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_secrets.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_secrets.tpl new file mode 100644 index 000000000..60b84a701 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_secrets.tpl @@ -0,0 +1,129 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Generate secret name. + +Usage: +{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment. + - context - Dict - Required. The context for the template evaluation. +*/}} +{{- define "common.secrets.name" -}} +{{- $name := (include "common.names.fullname" .context) -}} + +{{- if .defaultNameSuffix -}} +{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{- with .existingSecret -}} +{{- if not (typeIs "string" .) -}} +{{- with .name -}} +{{- $name = . -}} +{{- end -}} +{{- else -}} +{{- $name = . -}} +{{- end -}} +{{- end -}} + +{{- printf "%s" $name -}} +{{- end -}} + +{{/* +Generate secret key. + +Usage: +{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - key - String - Required. Name of the key in the secret. +*/}} +{{- define "common.secrets.key" -}} +{{- $key := .key -}} + +{{- if .existingSecret -}} + {{- if not (typeIs "string" .existingSecret) -}} + {{- if .existingSecret.keyMapping -}} + {{- $key = index .existingSecret.keyMapping $.key -}} + {{- end -}} + {{- end }} +{{- end -}} + +{{- printf "%s" $key -}} +{{- end -}} + +{{/* +Generate secret password or retrieve one if already created. + +Usage: +{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - key - String - Required - Name of the key in the secret. + - providedValues - List - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". Will pick first parameter with a defined value. 
+ - length - int - Optional - Length of the generated random password. + - strong - Boolean - Optional - Whether to add symbols to the generated random password. + - chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart. + - context - Context - Required - Parent context. +*/}} +{{- define "common.secrets.passwords.manage" -}} + +{{- $password := "" }} +{{- $subchart := "" }} +{{- $chartName := default "" .chartName }} +{{- $passwordLength := default 10 .length }} +{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }} +{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- if index $secret.data .key }} + {{- $password = index $secret.data .key }} + {{- end -}} +{{- else if $providedPasswordValue }} + {{- $password = $providedPasswordValue | toString | b64enc | quote }} +{{- else }} + + {{- if .context.Values.enabled }} + {{- $subchart = $chartName }} + {{- end -}} + + {{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}} + {{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}} + {{- $passwordValidationErrors := list $requiredPasswordError -}} + {{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}} + + {{- if .strong }} + {{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }} + {{- $password = randAscii $passwordLength }} + {{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }} + {{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }} + {{- else }} + {{- $password = randAlphaNum $passwordLength | b64enc | quote }} + {{- end }} +{{- end -}} +{{- printf "%s" $password -}} +{{- end -}} + +{{/* +Returns whether a previous generated secret already exists + +Usage: +{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - context - Context - Required - Parent context. 
+*/}} +{{- define "common.secrets.exists" -}} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- true -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_storage.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_storage.tpl new file mode 100644 index 000000000..60e2a844f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_storage.tpl @@ -0,0 +1,23 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper Storage Class +{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }} +*/}} +{{- define "common.storage.class" -}} + +{{- $storageClass := .persistence.storageClass -}} +{{- if .global -}} + {{- if .global.storageClass -}} + {{- $storageClass = .global.storageClass -}} + {{- end -}} +{{- end -}} + +{{- if $storageClass -}} + {{- if (eq "-" $storageClass) -}} + {{- printf "storageClassName: \"\"" -}} + {{- else }} + {{- printf "storageClassName: %s" $storageClass -}} + {{- end -}} +{{- end -}} + +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_tplvalues.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_tplvalues.tpl new file mode 100644 index 000000000..2db166851 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_tplvalues.tpl @@ -0,0 +1,13 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Renders a value that contains template. +Usage: +{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }} +*/}} +{{- define "common.tplvalues.render" -}} + {{- if typeIs "string" .value }} + {{- tpl .value .context }} + {{- else }} + {{- tpl (.value | toYaml) .context }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_utils.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_utils.tpl new file mode 100644 index 000000000..ea083a249 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_utils.tpl @@ -0,0 +1,62 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Print instructions to get a secret value. +Usage: +{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }} +*/}} +{{- define "common.utils.secret.getvalue" -}} +{{- $varname := include "common.utils.fieldToEnvVar" . -}} +export {{ $varname }}=$(kubectl get secret --namespace {{ .context.Release.Namespace | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 --decode) +{{- end -}} + +{{/* +Build env var name given a field +Usage: +{{ include "common.utils.fieldToEnvVar" dict "field" "my-password" }} +*/}} +{{- define "common.utils.fieldToEnvVar" -}} + {{- $fieldNameSplit := splitList "-" .field -}} + {{- $upperCaseFieldNameSplit := list -}} + + {{- range $fieldNameSplit -}} + {{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper . ) -}} + {{- end -}} + + {{ join "_" $upperCaseFieldNameSplit }} +{{- end -}} + +{{/* +Gets a value from .Values given +Usage: +{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }} +*/}} +{{- define "common.utils.getValueFromKey" -}} +{{- $splitKey := splitList "." 
.key -}} +{{- $value := "" -}} +{{- $latestObj := $.context.Values -}} +{{- range $splitKey -}} + {{- if not $latestObj -}} + {{- printf "please review the entire path of '%s' exists in values" $.key | fail -}} + {{- end -}} + {{- $value = ( index $latestObj . ) -}} + {{- $latestObj = $value -}} +{{- end -}} +{{- printf "%v" (default "" $value) -}} +{{- end -}} + +{{/* +Returns first .Values key with a defined value or first of the list if all non-defined +Usage: +{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }} +*/}} +{{- define "common.utils.getKeyFromList" -}} +{{- $key := first .keys -}} +{{- $reverseKeys := reverse .keys }} +{{- range $reverseKeys }} + {{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }} + {{- if $value -}} + {{- $key = . }} + {{- end -}} +{{- end -}} +{{- printf "%s" $key -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_warnings.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_warnings.tpl new file mode 100644 index 000000000..ae10fa41e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/_warnings.tpl @@ -0,0 +1,14 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Warning about using rolling tag. +Usage: +{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }} +*/}} +{{- define "common.warnings.rollingTag" -}} + +{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }} +WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment. ++info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/ +{{- end }} + +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_cassandra.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_cassandra.tpl new file mode 100644 index 000000000..8679ddffb --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_cassandra.tpl @@ -0,0 +1,72 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Cassandra required passwords are not empty. + +Usage: +{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret" + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.cassandra.passwords" -}} + {{- $existingSecret := include "common.cassandra.values.existingSecret" . -}} + {{- $enabled := include "common.cassandra.values.enabled" . -}} + {{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . 
-}} + {{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.cassandra.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.cassandra.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.cassandra.dbUser.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.dbUser.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled cassandra. + +Usage: +{{ include "common.cassandra.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.cassandra.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.cassandra.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key dbUser + +Usage: +{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.cassandra.values.key.dbUser" -}} + {{- if .subchart -}} + cassandra.dbUser + {{- else -}} + dbUser + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_mariadb.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_mariadb.tpl new file mode 100644 index 000000000..bb5ed7253 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_mariadb.tpl @@ -0,0 +1,103 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MariaDB required passwords are not empty. + +Usage: +{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret" + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.mariadb.passwords" -}} + {{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mariadb.values.enabled" . -}} + {{- $architecture := include "common.mariadb.values.architecture" . -}} + {{- $authPrefix := include "common.mariadb.values.key.auth" . 
-}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- if not (empty $valueUsername) -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replication") -}} + {{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mariadb. + +Usage: +{{ include "common.mariadb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mariadb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mariadb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. 
Default: false +*/}} +{{- define "common.mariadb.values.key.auth" -}} + {{- if .subchart -}} + mariadb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_mongodb.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_mongodb.tpl new file mode 100644 index 000000000..7d5ecbccb --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_mongodb.tpl @@ -0,0 +1,108 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MongoDB(R) required passwords are not empty. + +Usage: +{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where MongoDB(R) values are stored, e.g: "mongodb-passwords-secret" + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.mongodb.passwords" -}} + {{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mongodb.values.enabled" . -}} + {{- $authPrefix := include "common.mongodb.values.key.auth" . -}} + {{- $architecture := include "common.mongodb.values.architecture" . -}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyDatabase := printf "%s.database" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}} + {{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}} + + {{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}} + + {{- if and (not $existingSecret) (eq $enabled "true") (eq $authEnabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }} + {{- if and $valueUsername $valueDatabase -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replicaset") -}} + {{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" .secret "field" "mongodb-replica-set-key" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDb is used as subchart or not. 
Default: false +*/}} +{{- define "common.mongodb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mongodb. + +Usage: +{{ include "common.mongodb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mongodb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mongodb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.mongodb.values.key.auth" -}} + {{- if .subchart -}} + mongodb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mongodb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_postgresql.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_postgresql.tpl new file mode 100644 index 000000000..992bcd390 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_postgresql.tpl @@ -0,0 +1,131 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate PostgreSQL required passwords are not empty. + +Usage: +{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret" + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.postgresql.passwords" -}} + {{- $existingSecret := include "common.postgresql.values.existingSecret" . -}} + {{- $enabled := include "common.postgresql.values.enabled" . -}} + {{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}} + {{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}} + + {{- $enabledReplication := include "common.postgresql.values.enabled.replication" . 
-}} + {{- if (eq $enabledReplication "true") -}} + {{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to decide whether evaluate global values. + +Usage: +{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }} +Params: + - key - String - Required. Field to be evaluated within global, e.g: "existingSecret" +*/}} +{{- define "common.postgresql.values.use.global" -}} + {{- if .context.Values.global -}} + {{- if .context.Values.global.postgresql -}} + {{- index .context.Values.global.postgresql .key | quote -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.postgresql.values.existingSecret" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.existingSecret" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}} + + {{- if .subchart -}} + {{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}} + {{- else -}} + {{- default (.context.Values.existingSecret | quote) $globalValue -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled postgresql. + +Usage: +{{ include "common.postgresql.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key postgressPassword. + +Usage: +{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.key.postgressPassword" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}} + + {{- if not $globalValue -}} + {{- if .subchart -}} + postgresql.postgresqlPassword + {{- else -}} + postgresqlPassword + {{- end -}} + {{- else -}} + global.postgresql.postgresqlPassword + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled.replication. + +Usage: +{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.enabled.replication" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.replication.enabled -}} + {{- else -}} + {{- printf "%v" .context.Values.replication.enabled -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key replication.password. + +Usage: +{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. 
Default: false +*/}} +{{- define "common.postgresql.values.key.replicationPassword" -}} + {{- if .subchart -}} + postgresql.replication.password + {{- else -}} + replication.password + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_redis.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_redis.tpl new file mode 100644 index 000000000..3e2a47c03 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_redis.tpl @@ -0,0 +1,72 @@ + +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Redis(TM) required passwords are not empty. + +Usage: +{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret" + - subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.redis.passwords" -}} + {{- $existingSecret := include "common.redis.values.existingSecret" . -}} + {{- $enabled := include "common.redis.values.enabled" . -}} + {{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . -}} + {{- $valueKeyRedisPassword := printf "%s%s" $valueKeyPrefix "password" -}} + {{- $valueKeyRedisUsePassword := printf "%s%s" $valueKeyPrefix "usePassword" -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $usePassword := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUsePassword "context" .context) -}} + {{- if eq $usePassword "true" -}} + {{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Redis Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.redis.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Redis(TM) is used as subchart or not. Default: false +*/}} +{{- define "common.redis.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.redis.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled redis. + +Usage: +{{ include "common.redis.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.redis.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.redis.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right prefix path for the values + +Usage: +{{ include "common.redis.values.key.prefix" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether redis is used as subchart or not. 
Default: false +*/}} +{{- define "common.redis.values.keys.prefix" -}} + {{- if .subchart -}}redis.{{- else -}}{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_validations.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_validations.tpl new file mode 100644 index 000000000..9a814cf40 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/templates/validations/_validations.tpl @@ -0,0 +1,46 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate values must not be empty. + +Usage: +{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}} +{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}} +{{ include "common.validations.values.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }} + +Validate value params: + - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" +*/}} +{{- define "common.validations.values.multiple.empty" -}} + {{- range .required -}} + {{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}} + {{- end -}} +{{- end -}} + +{{/* +Validate a value must not be empty. + +Usage: +{{ include "common.validations.value.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }} + +Validate value params: + - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" + - subchart - String - Optional - Name of the subchart that the validated password is part of. +*/}} +{{- define "common.validations.values.single.empty" -}} + {{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }} + {{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }} + + {{- if not $value -}} + {{- $varname := "my-value" -}} + {{- $getCurrentValue := "" -}} + {{- if and .secret .field -}} + {{- $varname = include "common.utils.fieldToEnvVar" . -}} + {{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}} + {{- end -}} + {{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/values.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/values.yaml new file mode 100644 index 000000000..9ecdc93f5 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/charts/common/values.yaml @@ -0,0 +1,3 @@ +## bitnami/common +## It is required by CI/CD tools and processes. 
+exampleValue: common-chart diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/commonAnnotations.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/commonAnnotations.yaml new file mode 100644 index 000000000..97e18a4cc --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/commonAnnotations.yaml @@ -0,0 +1,3 @@ +commonAnnotations: + helm.sh/hook: "\"pre-install, pre-upgrade\"" + helm.sh/hook-weight: "-1" diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/default-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/default-values.yaml new file mode 100644 index 000000000..fc2ba605a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/default-values.yaml @@ -0,0 +1 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/shmvolume-disabled-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/shmvolume-disabled-values.yaml new file mode 100644 index 000000000..347d3b40a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/ci/shmvolume-disabled-values.yaml @@ -0,0 +1,2 @@ +shmVolume: + enabled: false diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/README.md b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/README.md new file mode 100644 index 000000000..1813a2fea --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/README.md @@ -0,0 +1 @@ +Copy here your postgresql.conf and/or pg_hba.conf files to use it as a config map. diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/conf.d/README.md b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/conf.d/README.md new file mode 100644 index 000000000..184c1875d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/conf.d/README.md @@ -0,0 +1,4 @@ +If you don't want to provide the whole configuration file and only specify certain parameters, you can copy here your extended `.conf` files. +These files will be injected as a config maps and add/overwrite the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`. + +More info in the [bitnami-docker-postgresql README](https://github.com/bitnami/bitnami-docker-postgresql#configuration-file). diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/docker-entrypoint-initdb.d/README.md b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/docker-entrypoint-initdb.d/README.md new file mode 100644 index 000000000..cba38091e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/files/docker-entrypoint-initdb.d/README.md @@ -0,0 +1,3 @@ +You can copy here your custom `.sh`, `.sql` or `.sql.gz` file so they are executed during the first boot of the image. + +More info in the [bitnami-docker-postgresql](https://github.com/bitnami/bitnami-docker-postgresql#initializing-a-new-instance) repository. 
\ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/NOTES.txt b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/NOTES.txt new file mode 100644 index 000000000..4e98958c1 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/NOTES.txt @@ -0,0 +1,59 @@ +** Please be patient while the chart is being deployed ** + +PostgreSQL can be accessed via port {{ template "postgresql.port" . }} on the following DNS name from within your cluster: + + {{ template "common.names.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - Read/Write connection +{{- if .Values.replication.enabled }} + {{ template "common.names.fullname" . }}-read.{{ .Release.Namespace }}.svc.cluster.local - Read only connection +{{- end }} + +{{- if not (eq (include "postgresql.username" .) "postgres") }} + +To get the password for "postgres" run: + + export POSTGRES_ADMIN_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-postgres-password}" | base64 --decode) +{{- end }} + +To get the password for "{{ template "postgresql.username" . }}" run: + + export POSTGRES_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-password}" | base64 --decode) + +To connect to your database run the following command: + + kubectl run {{ template "common.names.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image {{ template "postgresql.image" . }} --env="PGPASSWORD=$POSTGRES_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} + --labels="{{ template "common.names.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "common.names.fullname" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }} + +{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} +Note: Since NetworkPolicy is enabled, only pods with label {{ template "common.names.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster. +{{- end }} + +To connect to your database from outside the cluster execute the following commands: + +{{- if contains "NodePort" .Values.service.type }} + + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "common.names.fullname" . }}) + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "LoadBalancer" .Values.service.type }} + + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "common.names.fullname" . }}' + + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . 
}} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $SERVICE_IP --port {{ template "postgresql.port" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "ClusterIP" .Values.service.type }} + + kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "common.names.fullname" . }} {{ template "postgresql.port" . }}:{{ template "postgresql.port" . }} & + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }} + +{{- end }} + +{{- include "postgresql.validateValues" . -}} + +{{- include "common.warnings.rollingTag" .Values.image -}} + +{{- $passwordValidationErrors := include "common.validations.values.postgresql.passwords" (dict "secret" (include "common.names.fullname" .) "context" $) -}} + +{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $passwordValidationErrors) "context" $) -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/_helpers.tpl b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/_helpers.tpl new file mode 100644 index 000000000..1f98efe78 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/_helpers.tpl @@ -0,0 +1,337 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Expand the name of the chart. +*/}} +{{- define "postgresql.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+*/}} +{{- define "postgresql.primary.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- $fullname := default (printf "%s-%s" .Release.Name $name) .Values.fullnameOverride -}} +{{- if .Values.replication.enabled -}} +{{- printf "%s-%s" $fullname "primary" | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s" $fullname | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper PostgreSQL image name +*/}} +{{- define "postgresql.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper PostgreSQL metrics image name +*/}} +{{- define "postgresql.metrics.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.metrics.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper image name (for the init container volume-permissions image) +*/}} +{{- define "postgresql.volumePermissions.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.volumePermissions.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +*/}} +{{- define "postgresql.imagePullSecrets" -}} +{{ include "common.images.pullSecrets" (dict "images" (list .Values.image .Values.metrics.image .Values.volumePermissions.image) "global" .Values.global) }} +{{- end -}} + +{{/* +Return PostgreSQL postgres user password +*/}} +{{- define "postgresql.postgres.password" -}} +{{- if .Values.global.postgresql.postgresqlPostgresPassword }} + {{- .Values.global.postgresql.postgresqlPostgresPassword -}} +{{- else if .Values.postgresqlPostgresPassword -}} + {{- .Values.postgresqlPostgresPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL password +*/}} +{{- define "postgresql.password" -}} +{{- if .Values.global.postgresql.postgresqlPassword }} + {{- .Values.global.postgresql.postgresqlPassword -}} +{{- else if .Values.postgresqlPassword -}} + {{- .Values.postgresqlPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication password +*/}} +{{- define "postgresql.replication.password" -}} +{{- if .Values.global.postgresql.replicationPassword }} + {{- .Values.global.postgresql.replicationPassword -}} +{{- else if .Values.replication.password -}} + {{- .Values.replication.password -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL username +*/}} +{{- define "postgresql.username" -}} +{{- if .Values.global.postgresql.postgresqlUsername }} + {{- .Values.global.postgresql.postgresqlUsername -}} +{{- else -}} + {{- .Values.postgresqlUsername -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication username +*/}} +{{- define "postgresql.replication.username" -}} +{{- if .Values.global.postgresql.replicationUser }} + {{- .Values.global.postgresql.replicationUser -}} +{{- else -}} + {{- .Values.replication.user -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL port +*/}} +{{- define "postgresql.port" -}} +{{- if .Values.global.postgresql.servicePort }} + {{- .Values.global.postgresql.servicePort -}} +{{- else -}} + {{- .Values.service.port -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL created database +*/}} +{{- define "postgresql.database" -}} +{{- if .Values.global.postgresql.postgresqlDatabase }} + {{- .Values.global.postgresql.postgresqlDatabase -}} +{{- else if .Values.postgresqlDatabase -}} + {{- 
.Values.postgresqlDatabase -}} +{{- end -}} +{{- end -}} + +{{/* +Get the password secret. +*/}} +{{- define "postgresql.secretName" -}} +{{- if .Values.global.postgresql.existingSecret }} + {{- printf "%s" (tpl .Values.global.postgresql.existingSecret $) -}} +{{- else if .Values.existingSecret -}} + {{- printf "%s" (tpl .Values.existingSecret $) -}} +{{- else -}} + {{- printf "%s" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if we should use an existingSecret. +*/}} +{{- define "postgresql.useExistingSecret" -}} +{{- if or .Values.global.postgresql.existingSecret .Values.existingSecret -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a secret object should be created +*/}} +{{- define "postgresql.createSecret" -}} +{{- if not (include "postgresql.useExistingSecret" .) -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the configuration ConfigMap name. +*/}} +{{- define "postgresql.configurationCM" -}} +{{- if .Values.configurationConfigMap -}} +{{- printf "%s" (tpl .Values.configurationConfigMap $) -}} +{{- else -}} +{{- printf "%s-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the extended configuration ConfigMap name. +*/}} +{{- define "postgresql.extendedConfigurationCM" -}} +{{- if .Values.extendedConfConfigMap -}} +{{- printf "%s" (tpl .Values.extendedConfConfigMap $) -}} +{{- else -}} +{{- printf "%s-extended-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a configmap should be mounted with PostgreSQL configuration +*/}} +{{- define "postgresql.mountConfigurationCM" -}} +{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts ConfigMap name. +*/}} +{{- define "postgresql.initdbScriptsCM" -}} +{{- if .Values.initdbScriptsConfigMap -}} +{{- printf "%s" (tpl .Values.initdbScriptsConfigMap $) -}} +{{- else -}} +{{- printf "%s-init-scripts" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts Secret name. +*/}} +{{- define "postgresql.initdbScriptsSecret" -}} +{{- printf "%s" (tpl .Values.initdbScriptsSecret $) -}} +{{- end -}} + +{{/* +Get the metrics ConfigMap name. +*/}} +{{- define "postgresql.metricsCM" -}} +{{- printf "%s-metrics" (include "common.names.fullname" .) -}} +{{- end -}} + +{{/* +Get the readiness probe command +*/}} +{{- define "postgresql.readinessProbeCommand" -}} +- | +{{- if (include "postgresql.database" .) }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} +{{- else }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . 
}} +{{- end }} +{{- if contains "bitnami/" .Values.image.repository }} + [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] +{{- end -}} +{{- end -}} + +{{/* +Compile all warnings into a single message, and call fail. +*/}} +{{- define "postgresql.validateValues" -}} +{{- $messages := list -}} +{{- $messages := append $messages (include "postgresql.validateValues.ldapConfigurationMethod" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.psp" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.tls" .) -}} +{{- $messages := without $messages "" -}} +{{- $message := join "\n" $messages -}} + +{{- if $message -}} +{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}} +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If ldap.url is used then you don't need the other settings for ldap +*/}} +{{- define "postgresql.validateValues.ldapConfigurationMethod" -}} +{{- if and .Values.ldap.enabled (and (not (empty .Values.ldap.url)) (not (empty .Values.ldap.server))) }} +postgresql: ldap.url, ldap.server + You cannot set both `ldap.url` and `ldap.server` at the same time. + Please provide a unique way to configure LDAP. + More info at https://www.postgresql.org/docs/current/auth-ldap.html +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If PSP is enabled RBAC should be enabled too +*/}} +{{- define "postgresql.validateValues.psp" -}} +{{- if and .Values.psp.create (not .Values.rbac.create) }} +postgresql: psp.create, rbac.create + RBAC should be enabled if PSP is enabled in order for PSP to work. + More info at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for podsecuritypolicy. +*/}} +{{- define "podsecuritypolicy.apiVersion" -}} +{{- if semverCompare "<1.10-0" .Capabilities.KubeVersion.GitVersion -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "policy/v1beta1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for networkpolicy. +*/}} +{{- define "postgresql.networkPolicy.apiVersion" -}} +{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}} +"extensions/v1beta1" +{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion -}} +"networking.k8s.io/v1" +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql TLS - When TLS is enabled, so must be VolumePermissions +*/}} +{{- define "postgresql.validateValues.tls" -}} +{{- if and .Values.tls.enabled (not .Values.volumePermissions.enabled) }} +postgresql: tls.enabled, volumePermissions.enabled + When TLS is enabled you must enable volumePermissions as well to ensure certificates files have + the right permissions. +{{- end -}} +{{- end -}} + +{{/* +Return the path to the cert file. +*/}} +{{- define "postgresql.tlsCert" -}} +{{- required "Certificate filename is required when TLS in enabled" .Values.tls.certFilename | printf "/opt/bitnami/postgresql/certs/%s" -}} +{{- end -}} + +{{/* +Return the path to the cert key file. +*/}} +{{- define "postgresql.tlsCertKey" -}} +{{- required "Certificate Key filename is required when TLS in enabled" .Values.tls.certKeyFilename | printf "/opt/bitnami/postgresql/certs/%s" -}} +{{- end -}} + +{{/* +Return the path to the CA cert file. 
+*/}} +{{- define "postgresql.tlsCACert" -}} +{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.certCAFilename -}} +{{- end -}} + +{{/* +Return the path to the CRL file. +*/}} +{{- define "postgresql.tlsCRL" -}} +{{- if .Values.tls.crlFilename -}} +{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.crlFilename -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/configmap.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/configmap.yaml new file mode 100644 index 000000000..3a5ea18ae --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/configmap.yaml @@ -0,0 +1,31 @@ +{{ if and (or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration) (not .Values.configurationConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- if (.Files.Glob "files/postgresql.conf") }} +{{ (.Files.Glob "files/postgresql.conf").AsConfig | indent 2 }} +{{- else if .Values.postgresqlConfiguration }} + postgresql.conf: | +{{- range $key, $value := default dict .Values.postgresqlConfiguration }} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- if (.Files.Glob "files/pg_hba.conf") }} +{{ (.Files.Glob "files/pg_hba.conf").AsConfig | indent 2 }} +{{- else if .Values.pgHbaConfiguration }} + pg_hba.conf: | +{{ .Values.pgHbaConfiguration | indent 4 }} +{{- end }} +{{ end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/extended-config-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/extended-config-configmap.yaml new file mode 100644 index 000000000..b0dad253b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/extended-config-configmap.yaml @@ -0,0 +1,26 @@ +{{- if and (or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf) (not .Values.extendedConfConfigMap)}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-extended-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- with .Files.Glob "files/conf.d/*.conf" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{ with .Values.postgresqlExtendedConf }} + override.conf: | +{{- range $key, $value := . 
}} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/extra-list.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/extra-list.yaml new file mode 100644 index 000000000..9ac65f9e1 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/extra-list.yaml @@ -0,0 +1,4 @@ +{{- range .Values.extraDeploy }} +--- +{{ include "common.tplvalues.render" (dict "value" . "context" $) }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/initialization-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/initialization-configmap.yaml new file mode 100644 index 000000000..7796c67a9 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/initialization-configmap.yaml @@ -0,0 +1,25 @@ +{{- if and (or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScripts) (not .Values.initdbScriptsConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-init-scripts + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz" }} +binaryData: +{{- range $path, $bytes := . }} + {{ base $path }}: {{ $.Files.Get $path | b64enc | quote }} +{{- end }} +{{- end }} +data: +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{- with .Values.initdbScripts }} +{{ toYaml . | indent 2 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/metrics-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/metrics-configmap.yaml new file mode 100644 index 000000000..fa539582b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/metrics-configmap.yaml @@ -0,0 +1,14 @@ +{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "postgresql.metricsCM" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: + custom-metrics.yaml: {{ toYaml .Values.metrics.customMetrics | quote }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/metrics-svc.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/metrics-svc.yaml new file mode 100644 index 000000000..af8b67e2f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/metrics-svc.yaml @@ -0,0 +1,26 @@ +{{- if .Values.metrics.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-metrics + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- toYaml .Values.metrics.service.annotations | nindent 4 }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ .Values.metrics.service.type }} + {{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }} + loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }} + {{- end }} + ports: + - name: http-metrics + port: 9187 + targetPort: http-metrics + selector: + {{- include "common.labels.matchLabels" . | nindent 4 }} + role: primary +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/networkpolicy.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/networkpolicy.yaml new file mode 100644 index 000000000..4f2740ea0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/networkpolicy.yaml @@ -0,0 +1,39 @@ +{{- if .Values.networkPolicy.enabled }} +kind: NetworkPolicy +apiVersion: {{ template "postgresql.networkPolicy.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + ingress: + # Allow inbound connections + - ports: + - port: {{ template "postgresql.port" . }} + {{- if not .Values.networkPolicy.allowExternal }} + from: + - podSelector: + matchLabels: + {{ template "common.names.fullname" . }}-client: "true" + {{- if .Values.networkPolicy.explicitNamespacesSelector }} + namespaceSelector: +{{ toYaml .Values.networkPolicy.explicitNamespacesSelector | indent 12 }} + {{- end }} + - podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 14 }} + role: read + {{- end }} + {{- if .Values.metrics.enabled }} + # Allow prometheus scrapes + - ports: + - port: 9187 + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/podsecuritypolicy.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/podsecuritypolicy.yaml new file mode 100644 index 000000000..0c49694fa --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/podsecuritypolicy.yaml @@ -0,0 +1,38 @@ +{{- if .Values.psp.create }} +apiVersion: {{ include "podsecuritypolicy.apiVersion" . }} +kind: PodSecurityPolicy +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + privileged: false + volumes: + - 'configMap' + - 'secret' + - 'persistentVolumeClaim' + - 'emptyDir' + - 'projected' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/prometheusrule.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/prometheusrule.yaml new file mode 100644 index 000000000..d0f408c78 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/prometheusrule.yaml @@ -0,0 +1,23 @@ +{{- if and .Values.metrics.enabled .Values.metrics.prometheusRule.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: PrometheusRule +metadata: + name: {{ template "common.names.fullname" . }} +{{- with .Values.metrics.prometheusRule.namespace }} + namespace: {{ . }} +{{- end }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- with .Values.metrics.prometheusRule.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} +spec: +{{- with .Values.metrics.prometheusRule.rules }} + groups: + - name: {{ template "postgresql.name" $ }} + rules: {{ tpl (toYaml .) $ | nindent 8 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/role.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/role.yaml new file mode 100644 index 000000000..017a5716b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/role.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: Role +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +rules: + {{- if .Values.psp.create }} + - apiGroups: ["extensions"] + resources: ["podsecuritypolicies"] + verbs: ["use"] + resourceNames: + - {{ template "common.names.fullname" . }} + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/rolebinding.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/rolebinding.yaml new file mode 100644 index 000000000..189775a15 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/rolebinding.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: RoleBinding +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: {{ template "common.names.fullname" . }} + apiGroup: rbac.authorization.k8s.io +subjects: + - kind: ServiceAccount + name: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/secrets.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/secrets.yaml new file mode 100644 index 000000000..d492cd593 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/secrets.yaml @@ -0,0 +1,24 @@ +{{- if (include "postgresql.createSecret" .) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +type: Opaque +data: + {{- if not (eq (include "postgresql.username" .) "postgres") }} + postgresql-postgres-password: {{ include "postgresql.postgres.password" . | b64enc | quote }} + {{- end }} + postgresql-password: {{ include "postgresql.password" . | b64enc | quote }} + {{- if .Values.replication.enabled }} + postgresql-replication-password: {{ include "postgresql.replication.password" . | b64enc | quote }} + {{- end }} + {{- if (and .Values.ldap.enabled .Values.ldap.bind_password)}} + postgresql-ldap-password: {{ .Values.ldap.bind_password | b64enc | quote }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/serviceaccount.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/serviceaccount.yaml new file mode 100644 index 000000000..03f0f50e7 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/serviceaccount.yaml @@ -0,0 +1,12 @@ +{{- if and (.Values.serviceAccount.enabled) (not .Values.serviceAccount.name) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "common.labels.standard" . | nindent 4 }} + name: {{ template "common.names.fullname" . }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/servicemonitor.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/servicemonitor.yaml new file mode 100644 index 000000000..587ce85b8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/servicemonitor.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "common.names.fullname" . }} + {{- if .Values.metrics.serviceMonitor.namespace }} + namespace: {{ .Values.metrics.serviceMonitor.namespace }} + {{- end }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.metrics.serviceMonitor.additionalLabels }} + {{- toYaml .Values.metrics.serviceMonitor.additionalLabels | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + +spec: + endpoints: + - port: http-metrics + {{- if .Values.metrics.serviceMonitor.interval }} + interval: {{ .Values.metrics.serviceMonitor.interval }} + {{- end }} + {{- if .Values.metrics.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout }} + {{- end }} + namespaceSelector: + matchNames: + - {{ .Release.Namespace }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/statefulset-readreplicas.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/statefulset-readreplicas.yaml new file mode 100644 index 000000000..b038299bf --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/statefulset-readreplicas.yaml @@ -0,0 +1,411 @@ +{{- if .Values.replication.enabled }} +{{- $readReplicasResources := coalesce .Values.readReplicas.resources .Values.resources -}} +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: "{{ template "common.names.fullname" . }}-read" + labels: {{- include "common.labels.standard" . | nindent 4 }} + app.kubernetes.io/component: read +{{- with .Values.readReplicas.labels }} +{{ toYaml . | indent 4 }} +{{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.readReplicas.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . }}-headless + replicas: {{ .Values.replication.readReplicas }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: read + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 8 }} + app.kubernetes.io/component: read + role: read +{{- with .Values.readReplicas.podLabels }} +{{ toYaml . | indent 8 }} +{{- end }} +{{- with .Values.readReplicas.podAnnotations }} + annotations: +{{ toYaml . | indent 8 }} +{{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . 
| indent 6 }} + {{- if .Values.readReplicas.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAffinityPreset "component" "read" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAntiAffinityPreset "component" "read" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.readReplicas.nodeAffinityPreset.type "key" .Values.readReplicas.nodeAffinityPreset.key "values" .Values.readReplicas.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.readReplicas.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.readReplicas.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name}} + {{- end }} + {{- if or .Values.readReplicas.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . }} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) 
}} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{ if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.readReplicas.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.readReplicas.priorityClassName }} + priorityClassName: {{ .Values.readReplicas.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if $readReplicasResources }} + resources: {{- toYaml $readReplicasResources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . }}" + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + - name: POSTGRES_REPLICATION_MODE + value: "slave" + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + - name: POSTGRES_MASTER_HOST + value: {{ template "common.names.fullname" . 
}} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ include "postgresql.port" . | quote }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-postgres-password + {{- end }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . }} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ 
.Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . | nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{ end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.readReplicas.extraVolumeMounts }} + {{- toYaml .Values.readReplicas.extraVolumeMounts | nindent 12 }} + {{- end }} +{{- if .Values.readReplicas.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.sidecars "context" $ 
) | nindent 8 }} +{{- end }} + volumes: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . }} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} + {{- if or (not .Values.persistence.enabled) (not .Values.readReplicas.persistence.enabled) }} + - name: data + emptyDir: {} + {{- end }} + {{- if .Values.readReplicas.extraVolumes }} + {{- toYaml .Values.readReplicas.extraVolumes | nindent 8 }} + {{- end }} + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} +{{- if and .Values.persistence.enabled .Values.readReplicas.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . | quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/statefulset.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/statefulset.yaml new file mode 100644 index 000000000..f8163fd99 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/statefulset.yaml @@ -0,0 +1,609 @@ +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: {{ template "postgresql.primary.fullname" . }} + labels: {{- include "common.labels.standard" . | nindent 4 }} + app.kubernetes.io/component: primary + {{- with .Values.primary.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.primary.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . 
}}-headless + replicas: 1 + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: primary + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 8 }} + role: primary + app.kubernetes.io/component: primary + {{- with .Values.primary.podLabels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.primary.podAnnotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . | indent 6 }} + {{- if .Values.primary.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.primary.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAffinityPreset "component" "primary" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAntiAffinityPreset "component" "primary" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.primary.nodeAffinityPreset.type "key" .Values.primary.nodeAffinityPreset.key "values" .Values.primary.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.primary.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.primary.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.primary.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.primary.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name }} + {{- end }} + {{- if or .Values.primary.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . }} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) 
}} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) }} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.primary.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.primary.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.primary.priorityClassName }} + priorityClassName: {{ .Values.primary.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . 
}}" + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + {{- if .Values.postgresqlInitdbArgs }} + - name: POSTGRES_INITDB_ARGS + value: {{ .Values.postgresqlInitdbArgs | quote }} + {{- end }} + {{- if .Values.postgresqlInitdbWalDir }} + - name: POSTGRES_INITDB_WALDIR + value: {{ .Values.postgresqlInitdbWalDir | quote }} + {{- end }} + {{- if .Values.initdbUser }} + - name: POSTGRESQL_INITSCRIPTS_USERNAME + value: {{ .Values.initdbUser }} + {{- end }} + {{- if .Values.initdbPassword }} + - name: POSTGRESQL_INITSCRIPTS_PASSWORD + value: {{ .Values.initdbPassword }} + {{- end }} + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + {{- if .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_MASTER_HOST + value: {{ .Values.primaryAsStandBy.primaryHost }} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ .Values.primaryAsStandBy.primaryPort | quote }} + {{- end }} + {{- if or .Values.replication.enabled .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_REPLICATION_MODE + {{- if .Values.primaryAsStandBy.enabled }} + value: "slave" + {{- else }} + value: "master" + {{- end }} + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + {{- if not (eq .Values.replication.synchronousCommit "off")}} + - name: POSTGRES_SYNCHRONOUS_COMMIT_MODE + value: {{ .Values.replication.synchronousCommit | quote }} + - name: POSTGRES_NUM_SYNCHRONOUS_REPLICAS + value: {{ .Values.replication.numSynchronousReplicas | quote }} + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + {{- end }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-postgres-password + {{- end }} + {{- end }} + - name: POSTGRES_USER + value: {{ include "postgresql.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + {{- if (include "postgresql.database" .) }} + - name: POSTGRES_DB + value: {{ (include "postgresql.database" .) 
| quote }} + {{- end }} + {{- if .Values.extraEnv }} + {{- include "common.tplvalues.render" (dict "value" .Values.extraEnv "context" $) | nindent 12 }} + {{- end }} + - name: POSTGRESQL_ENABLE_LDAP + value: {{ ternary "yes" "no" .Values.ldap.enabled | quote }} + {{- if .Values.ldap.enabled }} + - name: POSTGRESQL_LDAP_SERVER + value: {{ .Values.ldap.server }} + - name: POSTGRESQL_LDAP_PORT + value: {{ .Values.ldap.port | quote }} + - name: POSTGRESQL_LDAP_SCHEME + value: {{ .Values.ldap.scheme }} + {{- if .Values.ldap.tls }} + - name: POSTGRESQL_LDAP_TLS + value: "1" + {{- end }} + - name: POSTGRESQL_LDAP_PREFIX + value: {{ .Values.ldap.prefix | quote }} + - name: POSTGRESQL_LDAP_SUFFIX + value: {{ .Values.ldap.suffix | quote }} + - name: POSTGRESQL_LDAP_BASE_DN + value: {{ .Values.ldap.baseDN }} + - name: POSTGRESQL_LDAP_BIND_DN + value: {{ .Values.ldap.bindDN }} + {{- if (not (empty .Values.ldap.bind_password)) }} + - name: POSTGRESQL_LDAP_BIND_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-ldap-password + {{- end}} + - name: POSTGRESQL_LDAP_SEARCH_ATTR + value: {{ .Values.ldap.search_attr }} + - name: POSTGRESQL_LDAP_SEARCH_FILTER + value: {{ .Values.ldap.search_filter }} + - name: POSTGRESQL_LDAP_URL + value: {{ .Values.ldap.url }} + {{- end}} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . 
}} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ .Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + {{- if .Values.extraEnvVarsCM }} + envFrom: + - configMapRef: + name: {{ tpl .Values.extraEnvVarsCM . }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.startupProbe.enabled }} + startupProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . 
}} + {{- end }} + initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.startupProbe.periodSeconds }} + timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds }} + successThreshold: {{ .Values.startupProbe.successThreshold }} + failureThreshold: {{ .Values.startupProbe.failureThreshold }} + {{- else if .Values.customStartupProbe }} + startupProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customStartupProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . 
| nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + mountPath: /docker-entrypoint-initdb.d/ + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + mountPath: /docker-entrypoint-initdb.d/secret + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.primary.extraVolumeMounts }} + {{- toYaml .Values.primary.extraVolumeMounts | nindent 12 }} + {{- end }} +{{- if .Values.primary.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.primary.sidecars "context" $ ) | nindent 8 }} +{{- end }} +{{- if .Values.metrics.enabled }} + - name: metrics + image: {{ template "postgresql.metrics.image" . }} + imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }} + {{- if .Values.metrics.securityContext.enabled }} + securityContext: {{- omit .Values.metrics.securityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + {{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase)" (include "postgresql.database" .) }} + {{- $sslmode := ternary "require" "disable" .Values.tls.enabled }} + {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} + - name: DATA_SOURCE_NAME + value: {{ printf "host=127.0.0.1 port=%d user=%s sslmode=%s sslcert=%s sslkey=%s" (int (include "postgresql.port" .)) (include "postgresql.username" .) $sslmode (include "postgresql.tlsCert" .) (include "postgresql.tlsCertKey" .) }} + {{- else }} + - name: DATA_SOURCE_URI + value: {{ printf "127.0.0.1:%d/%s?sslmode=%s" (int (include "postgresql.port" .)) $database $sslmode }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: DATA_SOURCE_PASS_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: DATA_SOURCE_PASS + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . 
}} + key: postgresql-password + {{- end }} + - name: DATA_SOURCE_USER + value: {{ template "postgresql.username" . }} + {{- if .Values.metrics.extraEnvVars }} + {{- include "common.tplvalues.render" (dict "value" .Values.metrics.extraEnvVars "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.metrics.customMetrics }} + - name: custom-metrics + mountPath: /conf + readOnly: true + args: ["--extend.query-path", "/conf/custom-metrics.yaml"] + {{- end }} + ports: + - name: http-metrics + containerPort: 9187 + {{- if .Values.metrics.resources }} + resources: {{- toYaml .Values.metrics.resources | nindent 12 }} + {{- end }} +{{- end }} + volumes: + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . }} + {{- end }} + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + configMap: + name: {{ template "postgresql.initdbScriptsCM" . }} + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + secret: + secretName: {{ template "postgresql.initdbScriptsSecret" . }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.primary.extraVolumes }} + {{- toYaml .Values.primary.extraVolumes | nindent 8 }} + {{- end }} + {{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} + - name: custom-metrics + configMap: + name: {{ template "postgresql.metricsCM" . 
}} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} +{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }} + - name: data + persistentVolumeClaim: +{{- with .Values.persistence.existingClaim }} + claimName: {{ tpl . $ }} +{{- end }} +{{- else if not .Values.persistence.enabled }} + - name: data + emptyDir: {} +{{- else if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . | quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc-headless.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc-headless.yaml new file mode 100644 index 000000000..6f5f3b9ee --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc-headless.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-headless + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + # Use this annotation in addition to the actual publishNotReadyAddresses + # field below because the annotation will stop being respected soon but the + # field is broken in some versions of Kubernetes: + # https://github.com/kubernetes/kubernetes/issues/58662 + service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" + namespace: {{ .Release.Namespace }} +spec: + type: ClusterIP + clusterIP: None + # We want all pods in the StatefulSet to have their addresses published for + # the sake of the other Postgresql pods even before they're ready, since they + # have to be able to talk to each other in order to become ready. + publishNotReadyAddresses: true + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc-read.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc-read.yaml new file mode 100644 index 000000000..56195ea1e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc-read.yaml @@ -0,0 +1,43 @@ +{{- if .Values.replication.enabled }} +{{- $serviceAnnotations := coalesce .Values.readReplicas.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.readReplicas.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.readReplicas.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.readReplicas.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.readReplicas.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.readReplicas.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-read + labels: + {{- include "common.labels.standard" . | nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . | nindent 4 }} + role: read +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc.yaml new file mode 100644 index 000000000..a29431b6a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/templates/svc.yaml @@ -0,0 +1,41 @@ +{{- $serviceAnnotations := coalesce .Values.primary.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.primary.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.primary.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.primary.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.primary.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.primary.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . | nindent 4 }} + role: primary diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/values.schema.json b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/values.schema.json new file mode 100644 index 000000000..66a2a9dd0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/values.schema.json @@ -0,0 +1,103 @@ +{ + "$schema": "http://json-schema.org/schema#", + "type": "object", + "properties": { + "postgresqlUsername": { + "type": "string", + "title": "Admin user", + "form": true + }, + "postgresqlPassword": { + "type": "string", + "title": "Password", + "form": true + }, + "persistence": { + "type": "object", + "properties": { + "size": { + "type": "string", + "title": "Persistent Volume Size", + "form": true, + "render": "slider", + "sliderMin": 1, + "sliderMax": 100, + "sliderUnit": "Gi" + } + } + }, + "resources": { + "type": "object", + "title": "Required Resources", + "description": "Configure resource requests", + "form": true, + "properties": { + "requests": { + "type": "object", + "properties": { + "memory": { + "type": "string", + "form": true, + "render": "slider", + "title": "Memory Request", + "sliderMin": 10, + "sliderMax": 2048, + "sliderUnit": "Mi" + }, + "cpu": { + "type": "string", + "form": true, + "render": "slider", + "title": "CPU Request", + "sliderMin": 10, + "sliderMax": 2000, + "sliderUnit": "m" + } + } + } + } + }, + "replication": { + "type": "object", + "form": true, + "title": "Replication Details", + "properties": { + "enabled": { + "type": "boolean", + "title": "Enable Replication", + "form": true + }, + "readReplicas": { + "type": "integer", + "title": "read Replicas", + "form": true, + "hidden": { + "value": false, + "path": "replication/enabled" + } + } + } + }, + "volumePermissions": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "form": true, + "title": "Enable Init Containers", + "description": "Change the owner of the persist volume mountpoint to RunAsUser:fsGroup" + } + } + }, + "metrics": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "title": "Configure metrics exporter", + "form": true + } + } + } + } +} diff --git a/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/values.yaml b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/values.yaml new file mode 100644 index 000000000..82ce09234 --- /dev/null +++ 
b/charts/jfrog/artifactory-ha/107.90.15/charts/postgresql/values.yaml @@ -0,0 +1,824 @@ +## Global Docker image parameters +## Please, note that this will override the image parameters, including dependencies, configured to use the global value +## Current available global Docker image parameters: imageRegistry and imagePullSecrets +## +global: + postgresql: {} +# imageRegistry: myRegistryName +# imagePullSecrets: +# - myRegistryKeySecretName +# storageClass: myStorageClass + +## Bitnami PostgreSQL image version +## ref: https://hub.docker.com/r/bitnami/postgresql/tags/ +## +image: + registry: docker.io + repository: bitnami/postgresql + tag: 11.11.0-debian-10-r71 + ## Specify a imagePullPolicy + ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + + ## Set to true if you would like to see extra information on logs + ## It turns BASH and/or NAMI debugging in the image + ## + debug: false + +## String to partially override common.names.fullname template (will maintain the release name) +## +# nameOverride: + +## String to fully override common.names.fullname template +## +# fullnameOverride: + +## +## Init containers parameters: +## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup +## +volumePermissions: + enabled: false + image: + registry: docker.io + repository: bitnami/bitnami-shell + tag: "10" + ## Specify a imagePullPolicy + ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: Always + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Init container Security Context + ## Note: the chown of the data folder is done to securityContext.runAsUser + ## and not the below volumePermissions.securityContext.runAsUser + ## When runAsUser is set to special value "auto", init container will try to chwon the + ## data folder to autodetermined user&group, using commands: `id -u`:`id -G | cut -d" " -f2` + ## "auto" is especially useful for OpenShift which has scc with dynamic userids (and 0 is not allowed). + ## You may want to use this volumePermissions.securityContext.runAsUser="auto" in combination with + ## pod securityContext.enabled=false and shmVolume.chmod.enabled=false + ## + securityContext: + runAsUser: 0 + +## Use an alternate scheduler, e.g. "stork". 
+## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ +## +# schedulerName: + +## Pod Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +securityContext: + enabled: true + fsGroup: 1001 + +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +containerSecurityContext: + enabled: true + runAsUser: 1001 + +## Pod Service Account +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ +## +serviceAccount: + enabled: false + ## Name of an already existing service account. Setting this value disables the automatic service account creation. + # name: + +## Pod Security Policy +## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +## +psp: + create: false + +## Creates role for ServiceAccount +## Required for PSP +## +rbac: + create: false + +replication: + enabled: false + user: repl_user + password: repl_password + readReplicas: 1 + ## Set synchronous commit mode: on, off, remote_apply, remote_write and local + ## ref: https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-WAL-LEVEL + synchronousCommit: 'off' + ## From the number of `readReplicas` defined above, set the number of those that will have synchronous replication + ## NOTE: It cannot be > readReplicas + numSynchronousReplicas: 0 + ## Replication Cluster application name. Useful for defining multiple replication policies + ## + applicationName: my_application + +## PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-user-on-first-run (see note!) +# postgresqlPostgresPassword: + +## PostgreSQL user (has superuser privileges if username is `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +postgresqlUsername: postgres + +## PostgreSQL password +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +# postgresqlPassword: + +## PostgreSQL password using existing secret +## existingSecret: secret +## + +## Mount PostgreSQL secret as a file instead of passing environment variable +# usePasswordFile: false + +## Create a database +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run +## +# postgresqlDatabase: + +## PostgreSQL data dir +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +postgresqlDataDir: /bitnami/postgresql/data + +## An array to add extra environment variables +## For example: +## extraEnv: +## - name: FOO +## value: "bar" +## +# extraEnv: +extraEnv: [] + +## Name of a ConfigMap containing extra env vars +## +# extraEnvVarsCM: + +## Specify extra initdb args +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbArgs: + +## Specify a custom location for the PostgreSQL transaction log +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbWalDir: + +## PostgreSQL configuration +## Specify runtime configuration parameters as a dict, using camelCase, e.g. 
+## {"sharedBuffers": "500MB"} +## Alternatively, you can put your postgresql.conf under the files/ directory +## ref: https://www.postgresql.org/docs/current/static/runtime-config.html +## +# postgresqlConfiguration: + +## PostgreSQL extended configuration +## As above, but _appended_ to the main configuration +## Alternatively, you can put your *.conf under the files/conf.d/ directory +## https://github.com/bitnami/bitnami-docker-postgresql#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf +## +# postgresqlExtendedConf: + +## Configure current cluster's primary server to be the standby server in other cluster. +## This will allow cross cluster replication and provide cross cluster high availability. +## You will need to configure pgHbaConfiguration if you want to enable this feature with local cluster replication enabled. +## +primaryAsStandBy: + enabled: false + # primaryHost: + # primaryPort: + +## PostgreSQL client authentication configuration +## Specify content for pg_hba.conf +## Default: do not create pg_hba.conf +## Alternatively, you can put your pg_hba.conf under the files/ directory +# pgHbaConfiguration: |- +# local all all trust +# host all all localhost trust +# host mydatabase mysuser 192.168.0.0/24 md5 + +## ConfigMap with PostgreSQL configuration +## NOTE: This will override postgresqlConfiguration and pgHbaConfiguration +# configurationConfigMap: + +## ConfigMap with PostgreSQL extended configuration +# extendedConfConfigMap: + +## initdb scripts +## Specify dictionary of scripts to be run at first boot +## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory +## +# initdbScripts: +# my_init_script.sh: | +# #!/bin/sh +# echo "Do something." + +## ConfigMap with scripts to be run at first boot +## NOTE: This will override initdbScripts +# initdbScriptsConfigMap: + +## Secret with scripts to be run at first boot (in case it contains sensitive information) +## NOTE: This can work along initdbScripts or initdbScriptsConfigMap +# initdbScriptsSecret: + +## Specify the PostgreSQL username and password to execute the initdb scripts +# initdbUser: +# initdbPassword: + +## Audit settings +## https://github.com/bitnami/bitnami-docker-postgresql#auditing +## +audit: + ## Log client hostnames + ## + logHostname: false + ## Log connections to the server + ## + logConnections: false + ## Log disconnections + ## + logDisconnections: false + ## Operation to audit using pgAudit (default if not set) + ## + pgAuditLog: "" + ## Log catalog using pgAudit + ## + pgAuditLogCatalog: "off" + ## Log level for clients + ## + clientMinMessages: error + ## Template for log line prefix (default if not set) + ## + logLinePrefix: "" + ## Log timezone + ## + logTimezone: "" + +## Shared preload libraries +## +postgresqlSharedPreloadLibraries: "pgaudit" + +## Maximum total connections +## +postgresqlMaxConnections: + +## Maximum connections for the postgres user +## +postgresqlPostgresConnectionLimit: + +## Maximum connections for the created user +## +postgresqlDbUserConnectionLimit: + +## TCP keepalives interval +## +postgresqlTcpKeepalivesInterval: + +## TCP keepalives idle +## +postgresqlTcpKeepalivesIdle: + +## TCP keepalives count +## +postgresqlTcpKeepalivesCount: + +## Statement timeout +## +postgresqlStatementTimeout: + +## Remove pg_hba.conf lines with the following comma-separated patterns +## (cannot be used with custom pg_hba.conf) +## +postgresqlPghbaRemoveFilters: + +## Optional duration in seconds the pod needs to 
terminate gracefully. +## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods +## +# terminationGracePeriodSeconds: 30 + +## LDAP configuration +## +ldap: + enabled: false + url: '' + server: '' + port: '' + prefix: '' + suffix: '' + baseDN: '' + bindDN: '' + bind_password: + search_attr: '' + search_filter: '' + scheme: '' + tls: {} + +## PostgreSQL service configuration +## +service: + ## PosgresSQL service type + ## + type: ClusterIP + # clusterIP: None + port: 5432 + + ## Specify the nodePort value for the LoadBalancer and NodePort service types. + ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + ## + # nodePort: + + ## Provide any additional annotations which may be required. Evaluated as a template. + ## + annotations: {} + ## Set the LoadBalancer service type to internal only. + ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer + ## + # loadBalancerIP: + ## Load Balancer sources. Evaluated as a template. + ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service + ## + # loadBalancerSourceRanges: + # - 10.10.10.0/24 + +## Start primary and read(s) pod(s) without limitations on shm memory. +## By default docker and containerd (and possibly other container runtimes) +## limit `/dev/shm` to `64M` (see e.g. the +## [docker issue](https://github.com/docker-library/postgres/issues/416) and the +## [containerd issue](https://github.com/containerd/containerd/issues/3654), +## which could be not enough if PostgreSQL uses parallel workers heavily. +## +shmVolume: + ## Set `shmVolume.enabled` to `true` to mount a new tmpfs volume to remove + ## this limitation. + ## + enabled: true + ## Set to `true` to `chmod 777 /dev/shm` on a initContainer. + ## This option is ignored if `volumePermissions.enabled` is `false` + ## + chmod: + enabled: true + +## PostgreSQL data Persistent Volume Storage Class +## If defined, storageClassName: +## If set to "-", storageClassName: "", which disables dynamic provisioning +## If undefined (the default) or set to null, no storageClassName spec is +## set, choosing the default provisioner. (gp2 on AWS, standard on +## GKE, AWS & OpenStack) +## +persistence: + enabled: true + ## A manually managed Persistent Volume and Claim + ## If defined, PVC must be created manually before volume will be bound + ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart + ## + # existingClaim: + + ## The path the volume will be mounted at, useful when using different + ## PostgreSQL images. + ## + mountPath: /bitnami/postgresql + + ## The subdirectory of the volume to mount to, useful in dev environments + ## and one PV for multiple services. 
+ ## + subPath: '' + + # storageClass: "-" + accessModes: + - ReadWriteOnce + size: 8Gi + annotations: {} + ## selector can be used to match an existing PersistentVolume + ## selector: + ## matchLabels: + ## app: my-app + selector: {} + +## updateStrategy for PostgreSQL StatefulSet and its reads StatefulSets +## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies +## +updateStrategy: + type: RollingUpdate + +## +## PostgreSQL Primary parameters +## +primary: + ## PostgreSQL Primary pod affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL Primary pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL Primary node affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: + ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. + ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: primary.podAffinityPreset, primary.podAntiAffinityPreset, and primary.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL primary Volume mounts + ## + extraVolumeMounts: [] + ## Additional PostgreSQL primary Volumes + ## + extraVolumes: [] + ## Add sidecars to the pod + ## + ## For example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for primary + ## + service: {} + # type: + # nodePort: + # clusterIP: + +## +## PostgreSQL read only replica parameters +## +readReplicas: + ## PostgreSQL read only pod affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL read only pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL read only node affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: 
+ ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. + ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: readReplicas.podAffinityPreset, readReplicas.podAntiAffinityPreset, and readReplicas.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL read replicas Volume mounts + ## + extraVolumeMounts: [] + + ## Additional PostgreSQL read replicas Volumes + ## + extraVolumes: [] + + ## Add sidecars to the pod + ## + ## For example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for read + ## + service: {} + # type: + # nodePort: + # clusterIP: + + ## Whether to enable PostgreSQL read replicas data Persistent + ## + persistence: + enabled: true + + # Override the resource configuration for read replicas + resources: {} + # requests: + # memory: 256Mi + # cpu: 250m + +## Configure resource requests and limits +## ref: http://kubernetes.io/docs/user-guide/compute-resources/ +## +resources: + requests: + memory: 256Mi + cpu: 250m + +## Add annotations to all the deployed resources +## +commonAnnotations: {} + +networkPolicy: + ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. + ## + enabled: false + + ## The Policy model to apply. When set to false, only pods with the correct + ## client label will have network access to the port PostgreSQL is listening + ## on. When true, PostgreSQL will accept connections from any source + ## (with the correct destination port). + ## + allowExternal: true + + ## if explicitNamespacesSelector is missing or set to {}, only client Pods that are in the networkPolicy's namespace + ## and that match other criteria, the ones that have the good label, can reach the DB. + ## But sometimes, we want the DB to be accessible to clients from other namespaces, in this case, we can use this + ## LabelSelector to select these namespaces, note that the networkPolicy's namespace should also be explicitly added. 
+ ## + ## Example: + ## explicitNamespacesSelector: + ## matchLabels: + ## role: frontend + ## matchExpressions: + ## - {key: role, operator: In, values: [frontend]} + ## + explicitNamespacesSelector: {} + +## Configure extra options for startup, liveness and readiness probes +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes +## +startupProbe: + enabled: false + initialDelaySeconds: 30 + periodSeconds: 15 + timeoutSeconds: 5 + failureThreshold: 10 + successThreshold: 1 + +livenessProbe: + enabled: true + initialDelaySeconds: 30 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Custom Startup probe +## +customStartupProbe: {} + +## Custom Liveness probe +## +customLivenessProbe: {} + +## Custom Rediness probe +## +customReadinessProbe: {} + +## +## TLS configuration +## +tls: + # Enable TLS traffic + enabled: false + # + # Whether to use the server's TLS cipher preferences rather than the client's. + preferServerCiphers: true + # + # Name of the Secret that contains the certificates + certificatesSecret: '' + # + # Certificate filename + certFilename: '' + # + # Certificate Key filename + certKeyFilename: '' + # + # CA Certificate filename + # If provided, PostgreSQL will authenticate TLS/SSL clients by requesting them a certificate + # ref: https://www.postgresql.org/docs/9.6/auth-methods.html + certCAFilename: + # + # File containing a Certificate Revocation List + crlFilename: + +## Configure metrics exporter +## +metrics: + enabled: false + # resources: {} + service: + type: ClusterIP + annotations: + prometheus.io/scrape: 'true' + prometheus.io/port: '9187' + loadBalancerIP: + serviceMonitor: + enabled: false + additionalLabels: {} + # namespace: monitoring + # interval: 30s + # scrapeTimeout: 10s + ## Custom PrometheusRule to be defined + ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart + ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions + ## + prometheusRule: + enabled: false + additionalLabels: {} + namespace: '' + ## These are just examples rules, please adapt them to your needs. + ## Make sure to constraint the rules to the current postgresql service. + ## rules: + ## - alert: HugeReplicationLag + ## expr: pg_replication_lag{service="{{ template "common.names.fullname" . }}-metrics"} / 3600 > 1 + ## for: 1m + ## labels: + ## severity: critical + ## annotations: + ## description: replication for {{ template "common.names.fullname" . }} PostgreSQL is lagging by {{ "{{ $value }}" }} hour(s). + ## summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s). + ## + rules: [] + + image: + registry: docker.io + repository: bitnami/postgres-exporter + tag: 0.9.0-debian-10-r43 + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. 
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Define additional custom metrics + ## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file + # customMetrics: + # pg_database: + # query: "SELECT d.datname AS name, CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') THEN pg_catalog.pg_database_size(d.datname) ELSE 0 END AS size_bytes FROM pg_catalog.pg_database d where datname not in ('template0', 'template1', 'postgres')" + # metrics: + # - name: + # usage: "LABEL" + # description: "Name of the database" + # - size_bytes: + # usage: "GAUGE" + # description: "Size of the database in bytes" + # + ## An array to add extra env vars to configure postgres-exporter + ## see: https://github.com/wrouesnel/postgres_exporter#environment-variables + ## For example: + # extraEnvVars: + # - name: PG_EXPORTER_DISABLE_DEFAULT_METRICS + # value: "true" + extraEnvVars: {} + + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## + securityContext: + enabled: false + runAsUser: 1001 + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) + ## Configure extra options for liveness and readiness probes + ## + livenessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + + readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Array with extra yaml to deploy with the chart. Evaluated as a template +## +extraDeploy: [] diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/access-tls-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/access-tls-values.yaml new file mode 100644 index 000000000..27e24d346 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/access-tls-values.yaml @@ -0,0 +1,34 @@ +databaseUpgradeReady: true +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/default-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/default-values.yaml new file mode 100644 index 000000000..020f52335 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/default-values.yaml @@ -0,0 +1,32 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true +## This is an exception here because HA needs masterKey to connect with other node members and it is commented in values to support 6.x to 7.x Migration +## Please refer https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/README.md#special-upgrade-notes-1 +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/global-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/global-values.yaml new file mode 100644 index 000000000..0987e17ca --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/global-values.yaml @@ -0,0 +1,255 @@ +databaseUpgradeReady: true +artifactory: + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + customInitContainersBegin: | + - name: "custom-init-begin-local" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + customInitContainers: | + - name: "custom-init-local" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-local + mountPath: "/scriptslocal" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-local" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +global: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + customInitContainersBegin: | + - name: "custom-init-begin-global" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + customInitContainers: | + - name: "custom-init-global" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + # Add custom volumes + customVolumes: | + - name: custom-script-global + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-global + mountPath: "/scripts" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-global" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in global' >> /scripts/sidecarglobal.txt; cat /scripts/sidecarglobal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scripts" + name: custom-script-global + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +nginx: + customInitContainers: | + - name: "custom-init-begin-nginx" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in nginx" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: custom-script-local + customSidecarContainers: | + - name: "sidecar-list-nginx" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + + artifactoryConf: | + {{- if .Values.nginx.https.enabled }} + ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; + ssl_certificate {{ .Values.nginx.persistence.mountPath }}/ssl/tls.crt; + ssl_certificate_key {{ .Values.nginx.persistence.mountPath }}/ssl/tls.key; + ssl_session_cache shared:SSL:1m; + ssl_prefer_server_ciphers on; + {{- end }} + ## server configuration + server { + listen 8088; + {{- if .Values.nginx.internalPortHttps }} + listen {{ .Values.nginx.internalPortHttps }} ssl; + {{- else -}} + {{- if .Values.nginx.https.enabled }} + listen {{ .Values.nginx.https.internalPort }} ssl; + {{- end }} + {{- end }} + {{- if .Values.nginx.internalPortHttp }} + listen {{ .Values.nginx.internalPortHttp }}; + {{- else -}} + {{- if .Values.nginx.http.enabled }} + listen {{ .Values.nginx.http.internalPort }}; + {{- end }} + {{- end }} + server_name ~(?.+)\.{{ include "artifactory-ha.fullname" . }} {{ include "artifactory-ha.fullname" . 
}} + {{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . }} + {{- end -}} + {{- end -}}; + if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; + } + ## Application specific logs + ## access_log /var/log/nginx/artifactory-access.log timing; + ## error_log /var/log/nginx/artifactory-error.log; + rewrite ^/artifactory/?$ / redirect; + if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; + } + chunked_transfer_encoding on; + client_max_body_size 0; + + location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + } + } + + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: + - containerPort: 8088 + name: http2 + service: + ## A list of custom ports to expose through the Ingress controller service. Follows the conventional Kubernetes yaml syntax for service ports. + customPorts: + - port: 8088 + targetPort: 8088 + protocol: TCP + name: http2 diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/large-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/large-values.yaml new file mode 100644 index 000000000..153307aa2 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/large-values.yaml @@ -0,0 +1,85 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 300 + primary: + replicaCount: 4 + resources: + requests: + memory: "6Gi" + cpu: "2" + limits: + memory: "10Gi" + cpu: "8" + javaOpts: + xms: "8g" + xmx: "10g" +access: + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 100 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 150 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/loggers-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/loggers-values.yaml new file mode 100644 index 000000000..03c94be95 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/loggers-values.yaml @@ -0,0 +1,43 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + + loggers: + - access-audit.log + - access-request.log + - access-security-audit.log + - access-service.log + - artifactory-access.log + - artifactory-event.log + - artifactory-import-export.log + - artifactory-request.log + - artifactory-service.log + - frontend-request.log + - frontend-service.log + - metadata-request.log + - metadata-service.log + - router-request.log + - router-service.log + - router-traefik.log + + catalinaLoggers: + - tomcat-catalina.log + - tomcat-localhost.log diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/medium-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/medium-values.yaml new file mode 100644 index 000000000..115e7d460 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/medium-values.yaml @@ -0,0 +1,85 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 200 + primary: + replicaCount: 3 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "8Gi" + cpu: "6" + javaOpts: + xms: "6g" + xmx: "8g" +access: + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 100 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/migration-disabled-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/migration-disabled-values.yaml new file mode 100644 index 000000000..44895a373 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/migration-disabled-values.yaml @@ -0,0 +1,31 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + migration: + enabled: false + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/nginx-autoreload-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/nginx-autoreload-values.yaml new file mode 100644 index 000000000..a6f4e8001 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/nginx-autoreload-values.yaml @@ -0,0 +1,53 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true +## This is an exception here because HA needs masterKey to connect with other node members and it is commented in values to support 6.x to 7.x Migration +## Please refer https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/README.md#special-upgrade-notes-1 +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false + +nginx: + customVolumes: | + - name: scripts + configMap: + name: {{ template "artifactory-ha.fullname" . }}-nginx-scripts + defaultMode: 0550 + customVolumeMounts: | + - name: scripts + mountPath: /var/opt/jfrog/nginx/scripts/ + customCommand: + - /bin/sh + - -c + - | + # watch for configmap changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/conf.d:n & + {{ if .Values.nginx.https.enabled -}} + # watch for tls secret changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/ssl:n & + {{ end -}} + nginx -g 'daemon off;' diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/rtsplit-access-tls-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/rtsplit-access-tls-values.yaml new file mode 100644 index 000000000..6f3b13cb1 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/rtsplit-access-tls-values.yaml @@ -0,0 +1,106 @@ +databaseUpgradeReady: true +artifactory: + replicaCount: 3 + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 102 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/rtsplit-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/rtsplit-values.yaml new file mode 100644 index 000000000..87832a505 --- 
/dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/rtsplit-values.yaml @@ -0,0 +1,155 @@ +databaseUpgradeReady: true +artifactory: + replicaCount: 3 + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + + # Add lifecycle hooks for artifactory container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 102 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for jfconect container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the jfconnect postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the jfconnect postStart handler >> /tmp/message"] + +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for router container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for frontend container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the observability postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the observability postStart handler >> /tmp/message"] diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/small-values.yaml 
b/charts/jfrog/artifactory-ha/107.90.15/ci/small-values.yaml new file mode 100644 index 000000000..b4557289e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/small-values.yaml @@ -0,0 +1,87 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 200 + primary: + replicaCount: 1 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "6g" + node: + replicaCount: 2 +access: + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 80 + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.15/ci/test-values.yaml b/charts/jfrog/artifactory-ha/107.90.15/ci/test-values.yaml new file mode 100644 index 000000000..8bbbb5b3e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/ci/test-values.yaml @@ -0,0 +1,85 @@ +databaseUpgradeReady: true +artifactory: + metrics: + enabled: true + podSecurityContext: + fsGroupChangePolicy: "OnRootMismatch" + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + unifiedSecretInstallation: false + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + statefulset: + annotations: + artifactory: test + +postgresql: + postgresqlPassword: "password" + postgresqlExtendedConf: + maxConnections: "102" + persistence: + enabled: false +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false + +jfconnect: + enabled: false + +## filebeat sidecar +filebeat: + enabled: true + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output.file: + path: "/tmp/filebeat" + filename: filebeat + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 diff --git a/charts/jfrog/artifactory-ha/107.90.15/files/binarystore.xml 
b/charts/jfrog/artifactory-ha/107.90.15/files/binarystore.xml new file mode 100644 index 000000000..0e7bc5af0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/files/binarystore.xml @@ -0,0 +1,439 @@ +{{- if and (eq .Values.artifactory.persistence.type "nfs") (.Values.artifactory.haDataDir.enabled) }} + + + + + + + +{{- end }} +{{- if and (eq .Values.artifactory.persistence.type "nfs") (not .Values.artifactory.haDataDir.enabled) }} + + {{- if (.Values.artifactory.persistence.maxCacheSize) }} + + + + + + {{- else }} + + + + {{- end }} + + {{- if .Values.artifactory.persistence.maxCacheSize }} + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + + {{ .Values.artifactory.persistence.nfs.dataDir }}/filestore + + + +{{- end }} + +{{- if eq .Values.artifactory.persistence.type "file-system" }} + +{{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + + + + + + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) -}} + + {{- end }} + + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + // Specify the read and write strategy and redundancy for the sharding binary provider + + roundRobin + percentageFreeSpace + 2 + + + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) -}} + //For each sub-provider (mount), specify the filestore location + + filestore{{ $sharedClaimNumber }} + + {{- end }} + +{{- else }} + + + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + 2 + 2 + + + + + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + + + shard-fs-1 + local + + + + + 30 + tester-remote1 + 10000 + remote + + + +{{- end }} +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") (eq .Values.artifactory.persistence.type "google-storage-v2-direct") }} + + + {{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") }} + + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + 2 + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "google-storage-v2-direct" }} + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if 
.Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") }} + + local + + + + + 30 + 10000 + remote + + {{- end }} + + + + {{- if .Values.artifactory.persistence.googleStorage.useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .Values.artifactory.persistence.googleStorage.enableSignedUrlRedirect }} + google-cloud-storage + {{ .Values.artifactory.persistence.googleStorage.endpoint }} + {{ .Values.artifactory.persistence.googleStorage.httpsOnly }} + {{ .Values.artifactory.persistence.googleStorage.bucketName }} + {{ .Values.artifactory.persistence.googleStorage.path }} + {{ .Values.artifactory.persistence.googleStorage.bucketExists }} + + +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "aws-s3-v3") (eq .Values.artifactory.persistence.type "s3-storage-v3-direct") (eq .Values.artifactory.persistence.type "s3-storage-v3-archive") }} + + + {{- if eq .Values.artifactory.persistence.type "aws-s3-v3" }} + + + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-direct" }} + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-archive" }} + + + + + + + {{- end }} + + {{- if eq .Values.artifactory.persistence.type "aws-s3-v3" }} + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + + + + + remote + + + + local + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + {{- if eq .Values.artifactory.persistence.type "s3-storage-v3-direct" }} + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + {{- with .Values.artifactory.persistence.awsS3V3 }} + + {{ .testConnection }} + {{- if .identity }} + {{ .identity }} + {{- end }} + {{- if .credential }} + {{ .credential }} + {{- end }} + {{ .region }} + {{ .bucketName }} + {{ .path }} + {{ .endpoint }} + {{- with .port }} + {{ . }} + {{- end }} + {{- with .useHttp }} + {{ . }} + {{- end }} + {{- with .maxConnections }} + {{ . }} + {{- end }} + {{- with .connectionTimeout }} + {{ . }} + {{- end }} + {{- with .socketTimeout }} + {{ . }} + {{- end }} + {{- with .kmsServerSideEncryptionKeyId }} + {{ . }} + {{- end }} + {{- with .kmsKeyRegion }} + {{ . }} + {{- end }} + {{- with .kmsCryptoMode }} + {{ . }} + {{- end }} + {{- if .useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .usePresigning }} + {{ .signatureExpirySeconds }} + {{ .signedUrlExpirySeconds }} + {{- with .cloudFrontDomainName }} + {{ . }} + {{- end }} + {{- with .cloudFrontKeyPairId }} + {{ . 
}} + {{- end }} + {{- with .cloudFrontPrivateKey }} + {{ . }} + {{- end }} + {{- with .enableSignedUrlRedirect }} + {{ . }} + {{- end }} + {{- with .enablePathStyleAccess }} + {{ . }} + {{- end }} + {{- with .multiPartLimit }} + {{ . | int64 }} + {{- end }} + {{- with .multipartElementSize }} + {{ . | int64 }} + {{- end }} + + {{- end }} + +{{- end }} + +{{- if or (eq .Values.artifactory.persistence.type "azure-blob") (eq .Values.artifactory.persistence.type "azure-blob-storage-direct") }} + + + {{- if eq .Values.artifactory.persistence.type "azure-blob" }} + + + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "azure-blob-storage-direct" }} + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if eq .Values.artifactory.persistence.type "azure-blob" }} + + + crossNetworkStrategy + crossNetworkStrategy + 2 + 1 + + + + + remote + + + + local + + {{- end }} + + + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ .Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "azure-blob-storage-v2-direct" -}} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ .Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/files/installer-info.json b/charts/jfrog/artifactory-ha/107.90.15/files/installer-info.json new file mode 100644 index 000000000..cf6b020fb --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/files/installer-info.json @@ -0,0 +1,32 @@ +{ + "productId": "Helm_artifactory-ha/{{ .Chart.Version }}", + "features": [ + { + "featureId": "Platform/{{ printf "%s-%s" "kubernetes" .Capabilities.KubeVersion.Version }}" + }, + { + "featureId": "Database/{{ .Values.database.type }}" + }, + { + "featureId": "PostgreSQL_Enabled/{{ .Values.postgresql.enabled }}" + }, + { + "featureId": "Nginx_Enabled/{{ .Values.nginx.enabled }}" + }, + { + "featureId": "ArtifactoryPersistence_Type/{{ .Values.artifactory.persistence.type }}" + }, + { + "featureId": 
"SplitServicesToContainers_Enabled/{{ .Values.splitServicesToContainers }}" + }, + { + "featureId": "UnifiedSecretInstallation_Enabled/{{ .Values.artifactory.unifiedSecretInstallation }}" + }, + { + "featureId": "Filebeat_Enabled/{{ .Values.filebeat.enabled }}" + }, + { + "featureId": "ReplicaCount/{{ add .Values.artifactory.primary.replicaCount .Values.artifactory.node.replicaCount }}" + } + ] +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/files/migrate.sh b/charts/jfrog/artifactory-ha/107.90.15/files/migrate.sh new file mode 100644 index 000000000..ba44160f4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/files/migrate.sh @@ -0,0 +1,4311 @@ +#!/bin/bash + +# Flags +FLAG_Y="y" +FLAG_N="n" +FLAGS_Y_N="$FLAG_Y $FLAG_N" +FLAG_NOT_APPLICABLE="_NA_" + +CURRENT_VERSION=$1 + +WRAPPER_SCRIPT_TYPE_RPMDEB="RPMDEB" +WRAPPER_SCRIPT_TYPE_DOCKER_COMPOSE="DOCKERCOMPOSE" + +SENSITIVE_KEY_VALUE="__sensitive_key_hidden___" + +# Shared system keys +SYS_KEY_SHARED_JFROGURL="shared.jfrogUrl" +SYS_KEY_SHARED_SECURITY_JOINKEY="shared.security.joinKey" +SYS_KEY_SHARED_SECURITY_MASTERKEY="shared.security.masterKey" + +SYS_KEY_SHARED_NODE_ID="shared.node.id" +SYS_KEY_SHARED_JAVAHOME="shared.javaHome" + +SYS_KEY_SHARED_DATABASE_TYPE="shared.database.type" +SYS_KEY_SHARED_DATABASE_TYPE_VALUE_POSTGRES="postgresql" +SYS_KEY_SHARED_DATABASE_DRIVER="shared.database.driver" +SYS_KEY_SHARED_DATABASE_URL="shared.database.url" +SYS_KEY_SHARED_DATABASE_USERNAME="shared.database.username" +SYS_KEY_SHARED_DATABASE_PASSWORD="shared.database.password" + +SYS_KEY_SHARED_ELASTICSEARCH_URL="shared.elasticsearch.url" +SYS_KEY_SHARED_ELASTICSEARCH_USERNAME="shared.elasticsearch.username" +SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD="shared.elasticsearch.password" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP="shared.elasticsearch.clusterSetup" +SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE="shared.elasticsearch.unicastFile" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP_VALUE="YES" + +# Define this in product specific script. Should contain the path to unitcast file +# File used by insight server to write cluster active nodes info. This will be read by elasticsearch +#SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE_VALUE="" + +SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME="shared.rabbitMq.active.node.name" +SYS_KEY_RABBITMQ_ACTIVE_NODE_IP="shared.rabbitMq.active.node.ip" + +# Filenames +FILE_NAME_SYSTEM_YAML="system.yaml" +FILE_NAME_JOIN_KEY="join.key" +FILE_NAME_MASTER_KEY="master.key" +FILE_NAME_INSTALLER_YAML="installer.yaml" + +# Global constants used in business logic +NODE_TYPE_STANDALONE="standalone" +NODE_TYPE_CLUSTER_NODE="node" +NODE_TYPE_DATABASE="database" + +# External(isable) databases +DATABASE_POSTGRES="POSTGRES" +DATABASE_ELASTICSEARCH="ELASTICSEARCH" +DATABASE_RABBITMQ="RABBITMQ" + +POSTGRES_LABEL="PostgreSQL" +ELASTICSEARCH_LABEL="Elasticsearch" +RABBITMQ_LABEL="Rabbitmq" + +ARTIFACTORY_LABEL="Artifactory" +JFMC_LABEL="Mission Control" +DISTRIBUTION_LABEL="Distribution" +XRAY_LABEL="Xray" + +POSTGRES_CONTAINER="postgres" +ELASTICSEARCH_CONTAINER="elasticsearch" +RABBITMQ_CONTAINER="rabbitmq" +REDIS_CONTAINER="redis" + +#Adding a small timeout before a read ensures it is positioned correctly in the screen +read_timeout=0.5 + +# Options related to data directory location +PROMPT_DATA_DIR_LOCATION="Installation Directory" +KEY_DATA_DIR_LOCATION="installer.data_dir" + +SYS_KEY_SHARED_NODE_HAENABLED="shared.node.haEnabled" +PROMPT_ADD_TO_CLUSTER="Are you adding an additional node to an existing product cluster?" 
+KEY_ADD_TO_CLUSTER="installer.ha" +VALID_VALUES_ADD_TO_CLUSTER="$FLAGS_Y_N" + +MESSAGE_POSTGRES_INSTALL="The installer can install a $POSTGRES_LABEL database, or you can connect to an existing compatible $POSTGRES_LABEL database\n(compatible databases: https://www.jfrog.com/confluence/display/JFROG/System+Requirements#SystemRequirements-RequirementsMatrix)" +PROMPT_POSTGRES_INSTALL="Do you want to install $POSTGRES_LABEL?" +KEY_POSTGRES_INSTALL="installer.install_postgresql" +VALID_VALUES_POSTGRES_INSTALL="$FLAGS_Y_N" + +# Postgres connection details +RPM_DEB_POSTGRES_HOME_DEFAULT="/var/opt/jfrog/postgres" +RPM_DEB_MESSAGE_STANDALONE_POSTGRES_DATA="$POSTGRES_LABEL home will have data and its configuration" +RPM_DEB_PROMPT_STANDALONE_POSTGRES_DATA="Type desired $POSTGRES_LABEL home location" +RPM_DEB_KEY_STANDALONE_POSTGRES_DATA="installer.postgresql.home" + +MESSAGE_DATABASE_URL="Provide the database connection details" +PROMPT_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://:/artifactory" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://:/mission_control?sslmode=disable" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://:/distribution?sslmode=disable" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://:/xraydb?sslmode=disable" + ;; + esac + if [ -z "$databaseURlExample" ]; then + echo -n "$POSTGRES_LABEL URL" # For consistency with username and password + return + fi + echo -n "$POSTGRES_LABEL url. Example: [$databaseURlExample]" +} +REGEX_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://.*/artifactory.*" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://.*/mission_control.*" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://.*/distribution.*" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://.*/xraydb.*" + ;; + esac + echo -n "^$databaseURlExample\$" +} +ERROR_MESSAGE_DATABASE_URL="Invalid $POSTGRES_LABEL URL" +KEY_DATABASE_URL="$SYS_KEY_SHARED_DATABASE_URL" +#NOTE: It is important to display the label. Since the message may be hidden if URL is known +PROMPT_DATABASE_USERNAME="$POSTGRES_LABEL username" +KEY_DATABASE_USERNAME="$SYS_KEY_SHARED_DATABASE_USERNAME" +#NOTE: It is important to display the label. Since the message may be hidden if URL is known +PROMPT_DATABASE_PASSWORD="$POSTGRES_LABEL password" +KEY_DATABASE_PASSWORD="$SYS_KEY_SHARED_DATABASE_PASSWORD" +IS_SENSITIVE_DATABASE_PASSWORD="$FLAG_Y" + +MESSAGE_STANDALONE_ELASTICSEARCH_INSTALL="The installer can install a $ELASTICSEARCH_LABEL database or you can connect to an existing compatible $ELASTICSEARCH_LABEL database" +PROMPT_STANDALONE_ELASTICSEARCH_INSTALL="Do you want to install $ELASTICSEARCH_LABEL?" 
+KEY_STANDALONE_ELASTICSEARCH_INSTALL="installer.install_elasticsearch" +VALID_VALUES_STANDALONE_ELASTICSEARCH_INSTALL="$FLAGS_Y_N" + +# Elasticsearch connection details +MESSAGE_ELASTICSEARCH_DETAILS="Provide the $ELASTICSEARCH_LABEL connection details" +PROMPT_ELASTICSEARCH_URL="$ELASTICSEARCH_LABEL URL" +KEY_ELASTICSEARCH_URL="$SYS_KEY_SHARED_ELASTICSEARCH_URL" + +PROMPT_ELASTICSEARCH_USERNAME="$ELASTICSEARCH_LABEL username" +KEY_ELASTICSEARCH_USERNAME="$SYS_KEY_SHARED_ELASTICSEARCH_USERNAME" + +PROMPT_ELASTICSEARCH_PASSWORD="$ELASTICSEARCH_LABEL password" +KEY_ELASTICSEARCH_PASSWORD="$SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD" +IS_SENSITIVE_ELASTICSEARCH_PASSWORD="$FLAG_Y" + +# Cluster related questions +MESSAGE_CLUSTER_MASTER_KEY="Provide the cluster's master key. It can be found in the data directory of the first node under /etc/security/master.key" +PROMPT_CLUSTER_MASTER_KEY="Master Key" +KEY_CLUSTER_MASTER_KEY="$SYS_KEY_SHARED_SECURITY_MASTERKEY" +IS_SENSITIVE_CLUSTER_MASTER_KEY="$FLAG_Y" + +MESSAGE_JOIN_KEY="The Join key is the secret key used to establish trust between services in the JFrog Platform.\n(You can copy the Join Key from Admin > User Management > Settings)" +PROMPT_JOIN_KEY="Join Key" +KEY_JOIN_KEY="$SYS_KEY_SHARED_SECURITY_JOINKEY" +IS_SENSITIVE_JOIN_KEY="$FLAG_Y" +REGEX_JOIN_KEY="^[a-zA-Z0-9]{16,}\$" +ERROR_MESSAGE_JOIN_KEY="Invalid Join Key" + +# Rabbitmq related cluster information +MESSAGE_RABBITMQ_ACTIVE_NODE_NAME="Provide an active ${RABBITMQ_LABEL} node name. Run the command [ hostname -s ] on any of the existing nodes in the product cluster to get this" +PROMPT_RABBITMQ_ACTIVE_NODE_NAME="${RABBITMQ_LABEL} active node name" +KEY_RABBITMQ_ACTIVE_NODE_NAME="$SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME" + +# Rabbitmq related cluster information (necessary only for docker-compose) +PROMPT_RABBITMQ_ACTIVE_NODE_IP="${RABBITMQ_LABEL} active node ip" +KEY_RABBITMQ_ACTIVE_NODE_IP="$SYS_KEY_RABBITMQ_ACTIVE_NODE_IP" + +MESSAGE_JFROGURL(){ + echo -e "The JFrog URL allows ${PRODUCT_NAME} to connect to a JFrog Platform Instance.\n(You can copy the JFrog URL from Administration > User Management > Settings > Connection details)" +} +PROMPT_JFROGURL="JFrog URL" +KEY_JFROGURL="$SYS_KEY_SHARED_JFROGURL" +REGEX_JFROGURL="^https?://.*:{0,}[0-9]{0,4}\$" +ERROR_MESSAGE_JFROGURL="Invalid JFrog URL" + + +# Set this to FLAG_Y on upgrade +IS_UPGRADE="${FLAG_N}" + +# This belongs in JFMC but is the ONLY one that needs it so keeping it here for now. Can be made into a method and overridden if necessary +MESSAGE_MULTIPLE_PG_SCHEME="Please setup $POSTGRES_LABEL with schema as described in https://www.jfrog.com/confluence/display/JFROG/Installing+Mission+Control" + +_getMethodOutputOrVariableValue() { + unset EFFECTIVE_MESSAGE + local keyToSearch=$1 + local effectiveMessage= + local result="0" + # logSilly "Searching for method: [$keyToSearch]" + LC_ALL=C type "$keyToSearch" > /dev/null 2>&1 || result="$?" + if [[ "$result" == "0" ]]; then + # logSilly "Found method for [$keyToSearch]" + EFFECTIVE_MESSAGE="$($keyToSearch)" + return + fi + eval EFFECTIVE_MESSAGE=\${$keyToSearch} + if [ ! 
-z "$EFFECTIVE_MESSAGE" ]; then + return + fi + # logSilly "Didn't find method or variable for [$keyToSearch]" +} + + +# REF https://misc.flogisoft.com/bash/tip_colors_and_formatting +cClear="\e[0m" +cBlue="\e[38;5;69m" +cRedDull="\e[1;31m" +cYellow="\e[1;33m" +cRedBright="\e[38;5;197m" +cBold="\e[1m" + + +_loggerGetModeRaw() { + local MODE="$1" + case $MODE in + INFO) + printf "" + ;; + DEBUG) + printf "%s" "[${MODE}] " + ;; + WARN) + printf "${cRedDull}%s%s${cClear}" "[" "${MODE}" "] " + ;; + ERROR) + printf "${cRedBright}%s%s${cClear}" "[" "${MODE}" "] " + ;; + esac +} + + +_loggerGetMode() { + local MODE="$1" + case $MODE in + INFO) + printf "${cBlue}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + DEBUG) + printf "%-7s" "[${MODE}]" + ;; + WARN) + printf "${cRedDull}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + ERROR) + printf "${cRedBright}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + esac +} + +# Capitalises the first letter of the message +_loggerGetMessage() { + local originalMessage="$*" + local firstChar=$(echo "${originalMessage:0:1}" | awk '{ print toupper($0) }') + local resetOfMessage="${originalMessage:1}" + echo "$firstChar$resetOfMessage" +} + +# The spec also says content should be left-trimmed but this is not necessary in our case. We don't reach the limit. +_loggerGetStackTrace() { + printf "%s%-30s%s" "[" "$1:$2" "]" +} + +_loggerGetThread() { + printf "%s" "[main]" +} + +_loggerGetServiceType() { + printf "%s%-5s%s" "[" "shell" "]" +} + +#Trace ID is not applicable to scripts +_loggerGetTraceID() { + printf "%s" "[]" +} + +logRaw() { + echo "" + printf "$1" + echo "" +} + +logBold(){ + echo "" + printf "${cBold}$1${cClear}" + echo "" +} + +# The date binary works differently based on whether it is GNU/BSD +is_date_supported=0 +date --version > /dev/null 2>&1 || is_date_supported=1 +IS_GNU=$(echo $is_date_supported) + +_loggerGetTimestamp() { + if [ "${IS_GNU}" == "0" ]; then + echo -n $(date -u +%FT%T.%3NZ) + else + echo -n $(date -u +%FT%T.000Z) + fi +} + +# https://www.shellscript.sh/tips/spinner/ +_spin() +{ + spinner="/|\\-/|\\-" + while : + do + for i in `seq 0 7` + do + echo -n "${spinner:$i:1}" + echo -en "\010" + sleep 1 + done + done +} + +showSpinner() { + # Start the Spinner: + _spin & + # Make a note of its Process ID (PID): + SPIN_PID=$! + # Kill the spinner on any signal, including our own exit. + trap "kill -9 $SPIN_PID" `seq 0 15` &> /dev/null || return 0 +} + +stopSpinner() { + local occurrences=$(ps -ef | grep -wc "${SPIN_PID}") + let "occurrences+=0" + # validate that it is present (2 since this search itself will show up in the results) + if [ $occurrences -gt 1 ]; then + kill -9 $SPIN_PID &>/dev/null || return 0 + wait $SPIN_ID &>/dev/null + fi +} + +_getEffectiveMessage(){ + local MESSAGE="$1" + local MODE=${2-"INFO"} + + if [ -z "$CONTEXT" ]; then + CONTEXT=$(caller) + fi + + _EFFECTIVE_MESSAGE= + if [ -z "$LOG_BEHAVIOR_ADD_META" ]; then + _EFFECTIVE_MESSAGE="$(_loggerGetModeRaw $MODE)$(_loggerGetMessage $MESSAGE)" + else + local SERVICE_TYPE="script" + local TRACE_ID="" + local THREAD="main" + + local CONTEXT_LINE=$(echo "$CONTEXT" | awk '{print $1}') + local CONTEXT_FILE=$(echo "$CONTEXT" | awk -F"/" '{print $NF}') + + _EFFECTIVE_MESSAGE="$(_loggerGetTimestamp) $(_loggerGetServiceType) $(_loggerGetMode $MODE) $(_loggerGetTraceID) $(_loggerGetStackTrace $CONTEXT_FILE $CONTEXT_LINE) $(_loggerGetThread) - $(_loggerGetMessage $MESSAGE)" + fi + CONTEXT= +} + +# Important - don't call any log method from this method. Will become an infinite loop. 
Use echo to debug +_logToFile() { + local MODE=${1-"INFO"} + local targetFile="$LOG_BEHAVIOR_ADD_REDIRECTION" + # IF the file isn't passed, abort + if [ -z "$targetFile" ]; then + return + fi + # IF this is not being run in verbose mode and mode is debug or lower, abort + if [ "${VERBOSE_MODE}" != "$FLAG_Y" ] && [ "${VERBOSE_MODE}" != "true" ] && [ "${VERBOSE_MODE}" != "debug" ]; then + if [ "$MODE" == "DEBUG" ] || [ "$MODE" == "SILLY" ]; then + return + fi + fi + + # Create the file if it doesn't exist + if [ ! -f "${targetFile}" ]; then + return + # touch $targetFile > /dev/null 2>&1 || true + fi + # # Make it readable + # chmod 640 $targetFile > /dev/null 2>&1 || true + + # Log contents + printf "%s\n" "$_EFFECTIVE_MESSAGE" >> "$targetFile" || true +} + +logger() { + if [ "$LOG_BEHAVIOR_ADD_NEW_LINE" == "$FLAG_Y" ]; then + echo "" + fi + _getEffectiveMessage "$@" + local MODE=${2-"INFO"} + printf "%s\n" "$_EFFECTIVE_MESSAGE" + _logToFile "$MODE" +} + +logDebug(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "$FLAG_Y" ] || [ "${VERBOSE_MODE}" == "true" ] || [ "${VERBOSE_MODE}" == "debug" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logSilly(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "silly" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logError() { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= +} + +errorExit () { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= + exit 1 +} + +warn () { + CONTEXT=$(caller) + logger "$1" "WARN" + CONTEXT= +} + +note () { + CONTEXT=$(caller) + logger "$1" "NOTE" + CONTEXT= +} + +bannerStart() { + title=$1 + echo + echo -e "\033[1m${title}\033[0m" + echo +} + +bannerSection() { + title=$1 + echo + echo -e "******************************** ${title} ********************************" + echo +} + +bannerSubSection() { + title=$1 + echo + echo -e "************** ${title} *******************" + echo +} + +bannerMessge() { + title=$1 + echo + echo -e "********************************" + echo -e "${title}" + echo -e "********************************" + echo +} + +setRed () { + local input="$1" + echo -e \\033[31m${input}\\033[0m +} +setGreen () { + local input="$1" + echo -e \\033[32m${input}\\033[0m +} +setYellow () { + local input="$1" + echo -e \\033[33m${input}\\033[0m +} + +logger_addLinebreak () { + echo -e "---\n" +} + +bannerImportant() { + title=$1 + local bold="\033[1m" + local noColour="\033[0m" + echo + echo -e "${bold}######################################## IMPORTANT ########################################${noColour}" + echo -e "${bold}${title}${noColour}" + echo -e "${bold}###########################################################################################${noColour}" + echo +} + +bannerEnd() { + #TODO pass a title and calculate length dynamically so that start and end look alike + echo + echo "*****************************************************************************" + echo +} + +banner() { + title=$1 + content=$2 + bannerStart "${title}" + echo -e "$content" +} + +# The logic below helps us redirect content we'd normally hide to the log file. + # + # We have several commands which clutter the console with output and so use + # `cmd > /dev/null` - this redirects the command's output to null. + # + # However, the information we just hid maybe useful for support. 
Using the code pattern + # `cmd >&6` (instead of `cmd> >/dev/null` ), the command's output is hidden from the console + # but redirected to the installation log file + # + +#Default value of 6 is just null +exec 6>>/dev/null +redirectLogsToFile() { + echo "" + # local file=$1 + + # [ ! -z "${file}" ] || return 0 + + # local logDir=$(dirname "$file") + + # if [ ! -f "${file}" ]; then + # [ -d "${logDir}" ] || mkdir -p ${logDir} || \ + # ( echo "WARNING : Could not create parent directory (${logDir}) to redirect console log : ${file}" ; return 0 ) + # fi + + # #6 now points to the log file + # exec 6>>${file} + # #reference https://unix.stackexchange.com/questions/145651/using-exec-and-tee-to-redirect-logs-to-stdout-and-a-log-file-in-the-same-time + # exec 2>&1 > >(tee -a "${file}") +} + +# Check if a give key contains any sensitive string as part of it +# Based on the result, the caller can decide its value can be displayed or not +# Sample usage : isKeySensitive "${key}" && displayValue="******" || displayValue=${value} +isKeySensitive(){ + local key=$1 + local sensitiveKeys="password|secret|key|token" + + if [ -z "${key}" ]; then + return 1 + else + local lowercaseKey=$(echo "${key}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + [[ "${lowercaseKey}" =~ ${sensitiveKeys} ]] && return 0 || return 1 + fi +} + +getPrintableValueOfKey(){ + local displayValue= + local key="$1" + if [ -z "$key" ]; then + # This is actually an incorrect usage of this method but any logging will cause unexpected content in the caller + echo -n "" + return + fi + + local value="$2" + isKeySensitive "${key}" && displayValue="$SENSITIVE_KEY_VALUE" || displayValue="${value}" + echo -n $displayValue +} + +_createConsoleLog(){ + if [ -z "${JF_PRODUCT_HOME}" ]; then + return + fi + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + mkdir -p "${JF_PRODUCT_HOME}/var/log" || true + if [ ! -f ${targetFile} ]; then + touch $targetFile > /dev/null 2>&1 || true + fi + chmod 640 $targetFile > /dev/null 2>&1 || true +} + +# Output from application's logs are piped to this method. It checks a configuration variable to determine if content should be logged to +# the common console.log file +redirectServiceLogsToFile() { + + local result="0" + # check if the function getSystemValue exists + LC_ALL=C type getSystemValue > /dev/null 2>&1 || result="$?" + if [[ "$result" != "0" ]]; then + warn "Couldn't find the systemYamlHelper. Skipping log redirection" + return 0 + fi + + getSystemValue "shared.consoleLog" "NOT_SET" + if [[ "${YAML_VALUE}" == "false" ]]; then + logger "Redirection is set to false. Skipping log redirection" + return 0; + fi + + if [ -z "${JF_PRODUCT_HOME}" ] || [ "${JF_PRODUCT_HOME}" == "" ]; then + warn "JF_PRODUCT_HOME is unavailable. 
Skipping log redirection" + return 0 + fi + + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + + _createConsoleLog + + while read -r line; do + printf '%s\n' "${line}" >> $targetFile || return 0 # Don't want to log anything - might clutter the screen + done +} + +## Display environment variables starting with JF_ along with its value +## Value of sensitive keys will be displayed as "******" +## +## Sample Display : +## +## ======================== +## JF Environment variables +## ======================== +## +## JF_SHARED_NODE_ID : locahost +## JF_SHARED_JOINKEY : ****** +## +## +displayEnv() { + local JFEnv=$(printenv | grep ^JF_ 2>/dev/null) + local key= + local value= + + if [ -z "${JFEnv}" ]; then + return + fi + + cat << ENV_START_MESSAGE + +======================== +JF Environment variables +======================== +ENV_START_MESSAGE + + for entry in ${JFEnv}; do + key=$(echo "${entry}" | awk -F'=' '{print $1}') + value=$(echo "${entry}" | awk -F'=' '{print $2}') + + isKeySensitive "${key}" && value="******" || value=${value} + + printf "\n%-35s%s" "${key}" " : ${value}" + done + echo; +} + +_addLogRotateConfiguration() { + logDebug "Method ${FUNCNAME[0]}" + # mandatory inputs + local confFile="$1" + local logFile="$2" + + # Method available in _ioOperations.sh + LC_ALL=C type io_setYQPath > /dev/null 2>&1 || return 1 + + io_setYQPath + + # Method available in _systemYamlHelper.sh + LC_ALL=C type getSystemValue > /dev/null 2>&1 || return 1 + + local frequency="daily" + local archiveFolder="archived" + + local compressLogFiles= + getSystemValue "shared.logging.rotation.compress" "true" + if [[ "${YAML_VALUE}" == "true" ]]; then + compressLogFiles="compress" + fi + + getSystemValue "shared.logging.rotation.maxFiles" "10" + local noOfBackupFiles="${YAML_VALUE}" + + getSystemValue "shared.logging.rotation.maxSizeMb" "25" + local sizeOfFile="${YAML_VALUE}M" + + logDebug "Adding logrotate configuration for [$logFile] to [$confFile]" + + # Add configuration to file + local confContent=$(cat << LOGROTATECONF +$logFile { + $frequency + missingok + rotate $noOfBackupFiles + $compressLogFiles + notifempty + olddir $archiveFolder + dateext + extension .log + dateformat -%Y-%m-%d + size ${sizeOfFile} +} +LOGROTATECONF +) + echo "${confContent}" > ${confFile} || return 1 +} + +_operationIsBySameUser() { + local targetUser="$1" + local currentUserID=$(id -u) + local currentUserName=$(id -un) + + if [ $currentUserID == $targetUser ] || [ $currentUserName == $targetUser ]; then + echo -n "yes" + else + echo -n "no" + fi +} + +_addCronJobForLogrotate() { + logDebug "Method ${FUNCNAME[0]}" + + # Abort if logrotate is not available + [ "$(io_commandExists 'crontab')" != "yes" ] && warn "cron is not available" && return 1 + + # mandatory inputs + local productHome="$1" + local confFile="$2" + local cronJobOwner="$3" + + # We want to use our binary if possible. It may be more recent than the one in the OS + local logrotateBinary="$productHome/app/third-party/logrotate/logrotate" + + if [ ! -f "$logrotateBinary" ]; then + logrotateBinary="logrotate" + [ "$(io_commandExists 'logrotate')" != "yes" ] && warn "logrotate is not available" && return 1 + fi + local cmd="$logrotateBinary ${confFile} --state $productHome/var/etc/logrotate/logrotate-state" #--verbose + + id -u $cronJobOwner > /dev/null 2>&1 || { warn "User $cronJobOwner does not exist. 
Aborting logrotate configuration" && return 1; } + + # Remove the existing line + removeLogRotation "$productHome" "$cronJobOwner" || true + + # Run logrotate daily at 23:55 hours + local cronInterval="55 23 * * * $cmd" + + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + # If this is standalone mode, we cannot use -u - the user running this process may not have the necessary privileges + if [ "$standaloneMode" == "no" ]; then + (crontab -l -u $cronJobOwner 2>/dev/null; echo "$cronInterval") | crontab -u $cronJobOwner - + else + (crontab -l 2>/dev/null; echo "$cronInterval") | crontab - + fi +} + +## Configure logrotate for a product +## Failure conditions: +## If logrotation could not be setup for some reason +## Parameters: +## $1: The product name +## $2: The product home +## Depends on global: none +## Updates global: none +## Returns: NA + +configureLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + + # mandatory inputs + local productName="$1" + if [ -z $productName ]; then + warn "Incorrect usage. A product name is necessary for configuring log rotation" && return 1 + fi + + local productHome="$2" + if [ -z $productHome ]; then + warn "Incorrect usage. A product home folder is necessary for configuring log rotation" && return 1 + fi + + local logFile="${productHome}/var/log/console.log" + if [[ $(uname) == "Darwin" ]]; then + logger "Log rotation for [$logFile] has not been configured. Please setup manually" + return 0 + fi + + local userID="$3" + if [ -z $userID ]; then + warn "Incorrect usage. A userID is necessary for configuring log rotation" && return 1 + fi + + local groupID=${4:-$userID} + local logConfigOwner=${5:-$userID} + + logDebug "Configuring log rotation as user [$userID], group [$groupID], effective cron User [$logConfigOwner]" + + local errorMessage="Could not configure logrotate. Please configure log rotation of the file: [$logFile] manually" + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + # TODO move to recursive method + createDir "${productHome}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log/archived" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + + # TODO move to recursive method + createDir "${productHome}/var/etc" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/etc/logrotate" "$logConfigOwner" || { warn "${errorMessage}" && return 1; } + + # conf file should be owned by the user running the script + createFile "${confFile}" "${logConfigOwner}" || { warn "Could not create configuration file [$confFile]" return 1; } + + _addLogRotateConfiguration "${confFile}" "${logFile}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + _addCronJobForLogrotate "${productHome}" "${confFile}" "${logConfigOwner}" || { warn "${errorMessage}" && return 1; } +} + +_pauseExecution() { + if [ "${VERBOSE_MODE}" == "debug" ]; then + + local breakPoint="$1" + if [ ! 
-z "$breakPoint" ]; then + printf "${cBlue}Breakpoint${cClear} [$breakPoint] " + echo "" + fi + printf "${cBlue}Press enter once you are ready to continue${cClear}" + read -s choice + echo "" + fi +} + +# removeLogRotation "$productHome" "$cronJobOwner" || true +removeLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + if [[ $(uname) == "Darwin" ]]; then + logDebug "Not implemented for Darwin." + return 0 + fi + local productHome="$1" + local cronJobOwner="$2" + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + if [ "$standaloneMode" == "no" ]; then + crontab -l -u $cronJobOwner 2>/dev/null | grep -v "$confFile" | crontab -u $cronJobOwner - + else + crontab -l 2>/dev/null | grep -v "$confFile" | crontab - + fi +} + +# NOTE: This method does not check the configuration to see if redirection is necessary. +# This is intentional. If we don't redirect, tomcat logs might get redirected to a folder/file +# that does not exist, causing the service itself to not start +setupTomcatRedirection() { + logDebug "Method ${FUNCNAME[0]}" + local consoleLog="${JF_PRODUCT_HOME}/var/log/console.log" + _createConsoleLog + export CATALINA_OUT="${consoleLog}" +} + +setupScriptLogsRedirection() { + logDebug "Method ${FUNCNAME[0]}" + if [ -z "${JF_PRODUCT_HOME}" ]; then + logDebug "No JF_PRODUCT_HOME. Returning" + return + fi + # Create the console.log file if it is not already present + # _createConsoleLog || true + # # Ensure any logs (logger/logError/warn) also get redirected to the console.log + # # Using installer.log as a temparory fix. Please change this to console.log once INST-291 is fixed + export LOG_BEHAVIOR_ADD_REDIRECTION="${JF_PRODUCT_HOME}/var/log/console.log" + export LOG_BEHAVIOR_ADD_META="$FLAG_Y" +} + +# Returns Y if this method is run inside a container +isRunningInsideAContainer() { + local check1=$(grep -sq 'docker\|kubepods' /proc/1/cgroup; echo $?) + local check2=$(grep -sq 'containers' /proc/self/mountinfo; echo $?) + if [[ $check1 == 0 || $check2 == 0 || -f "/.dockerenv" ]]; then + echo -n "$FLAG_Y" + else + echo -n "$FLAG_N" + fi +} + +POSTGRES_USER=999 +NGINX_USER=104 +NGINX_GROUP=107 +ES_USER=1000 +REDIS_USER=999 +MONGO_USER=999 +RABBITMQ_USER=999 +LOG_FILE_PERMISSION=640 +PID_FILE_PERMISSION=644 + +# Copy file +copyFile(){ + local source=$1 + local target=$2 + local mode=${3:-overwrite} + local enableVerbose=${4:-"${FLAG_N}"} + local verboseFlag="" + + if [ ! -z "${enableVerbose}" ] && [ "${enableVerbose}" == "${FLAG_Y}" ]; then + verboseFlag="-v" + fi + + if [[ ! 
( $source && $target ) ]]; then + warn "Source and target is mandatory to copy file" + return 1 + fi + + if [[ -f "${target}" ]]; then + [[ "$mode" = "overwrite" ]] && ( cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}") || true + else + cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}" + fi +} + +# Copy files recursively from given source directory to destination directory +# This method wil copy but will NOT overwrite +# Destination will be created if its not available +copyFilesNoOverwrite(){ + local src=$1 + local dest=$2 + local enableVerboseCopy="${3:-${FLAG_Y}}" + + if [[ -z "${src}" || -z "${dest}" ]]; then + return + fi + + if [ -d "${src}" ] && [ "$(ls -A ${src})" ]; then + local relativeFilePath="" + local targetFilePath="" + + for file in $(find ${src} -type f 2>/dev/null) ; do + # Derive relative path and attach it to destination + # Example : + # src=/extra_config + # dest=/var/opt/jfrog/artifactory/etc + # file=/extra_config/config.xml + # relativeFilePath=config.xml + # targetFilePath=/var/opt/jfrog/artifactory/etc/config.xml + relativeFilePath=${file/${src}/} + targetFilePath=${dest}${relativeFilePath} + + createDir "$(dirname "$targetFilePath")" + copyFile "${file}" "${targetFilePath}" "no_overwrite" "${enableVerboseCopy}" + done + fi +} + +# TODO : WINDOWS ? +# Check the max open files and open processes set on the system +checkULimits () { + local minMaxOpenFiles=${1:-32000} + local minMaxOpenProcesses=${2:-1024} + local setValue=${3:-true} + local warningMsgForFiles=${4} + local warningMsgForProcesses=${5} + + logger "Checking open files and processes limits" + + local currentMaxOpenFiles=$(ulimit -n) + logger "Current max open files is $currentMaxOpenFiles" + if [ ${currentMaxOpenFiles} != "unlimited" ] && [ "$currentMaxOpenFiles" -lt "$minMaxOpenFiles" ]; then + if [ "${setValue}" ]; then + ulimit -n "${minMaxOpenFiles}" >/dev/null 2>&1 || warn "Max number of open files $currentMaxOpenFiles is low!" + [ -z "${warningMsgForFiles}" ] || warn "${warningMsgForFiles}" + else + errorExit "Max number of open files $currentMaxOpenFiles, is too low. Cannot run the application!" + fi + fi + + local currentMaxOpenProcesses=$(ulimit -u) + logger "Current max open processes is $currentMaxOpenProcesses" + if [ "$currentMaxOpenProcesses" != "unlimited" ] && [ "$currentMaxOpenProcesses" -lt "$minMaxOpenProcesses" ]; then + if [ "${setValue}" ]; then + ulimit -u "${minMaxOpenProcesses}" >/dev/null 2>&1 || warn "Max number of open files $currentMaxOpenFiles is low!" + [ -z "${warningMsgForProcesses}" ] || warn "${warningMsgForProcesses}" + else + errorExit "Max number of open files $currentMaxOpenProcesses, is too low. Cannot run the application!" + fi + fi +} + +createDirs() { + local appDataDir=$1 + local serviceName=$2 + local folders="backup bootstrap data etc logs work" + + [ -z "${appDataDir}" ] && errorExit "An application directory is mandatory to create its data structure" || true + [ -z "${serviceName}" ] && errorExit "A service name is mandatory to create service data structure" || true + + for folder in ${folders} + do + folder=${appDataDir}/${folder}/${serviceName} + if [ ! 
-d "${folder}" ]; then + logger "Creating folder : ${folder}" + mkdir -p "${folder}" || errorExit "Failed to create ${folder}" + fi + done +} + + +testReadWritePermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local test_file=${dir_to_check}/test-permissions + + # Write file + if echo test > ${test_file} 1> /dev/null 2>&1; then + # Write succeeded. Testing read... + if cat ${test_file} > /dev/null; then + rm -f ${test_file} + else + error=true + fi + else + error=true + fi + + if [ ${error} == true ]; then + return 1 + else + return 0 + fi +} + +# Test directory has read/write permissions for current user +testDirectoryPermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local u_id=$(id -u) + local id_str="id ${u_id}" + + logger "Testing directory ${dir_to_check} has read/write permissions for user ${id_str}" + + if ! testReadWritePermissions ${dir_to_check}; then + error=true + fi + + if [ "${error}" == true ]; then + local stat_data=$(stat -Lc "Directory: %n, permissions: %a, owner: %U, group: %G" ${dir_to_check}) + logger "###########################################################" + logger "${dir_to_check} DOES NOT have proper permissions for user ${id_str}" + logger "${stat_data}" + logger "Mounted directory must have read/write permissions for user ${id_str}" + logger "###########################################################" + errorExit "Directory ${dir_to_check} has bad permissions for user ${id_str}" + fi + logger "Permissions for ${dir_to_check} are good" +} + +# Utility method to create a directory path recursively with chown feature as +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: Root directory from where the path can be created +## $2: List of recursive child directories separated by space +## $3: user who should own the directory. Optional +## $4: group who should own the directory. Optional +# Depends on global: none +# Updates global: none +# Returns: NA +# +# Usage: +# createRecursiveDir "/opt/jfrog/product/var" "bootstrap tomcat lib" "user_name" "group_name" +createRecursiveDir(){ + local rootDir=$1 + local pathDirs=$2 + local user=$3 + local group=${4:-${user}} + local fullPath= + + [ ! -z "${rootDir}" ] || return 0 + + createDir "${rootDir}" "${user}" "${group}" + + [ ! -z "${pathDirs}" ] || return 0 + + fullPath=${rootDir} + + for dir in ${pathDirs}; do + fullPath=${fullPath}/${dir} + createDir "${fullPath}" "${user}" "${group}" + done +} + +# Utility method to create a directory +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: directory to create +## $2: user who should own the directory. Optional +## $3: group who should own the directory. Optional +# Depends on global: none +# Updates global: none +# Returns: NA + +createDir(){ + local dirName="$1" + local printMessage=no + logSilly "Method ${FUNCNAME[0]} invoked with [$dirName]" + [ -z "${dirName}" ] && return + + logDebug "Attempting to create ${dirName}" + mkdir -p "${dirName}" || errorExit "Unable to create directory: [${dirName}]" + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the folder + if [ ! -z "$userID" ] && [ ! -z "$groupID" ]; then + # Earlier, this line would have returned 1 if it failed. Now it just warns. + # This is intentional. 
Earlier, this line would NOT be reached if the folder already existed. + # Since it will always come to this line and the script may be running as a non-root user, this method will just warn if + # setting permissions fails (so as to not affect any existing flows) + io_setOwnershipNonRecursive "$dirName" "$userID" "$groupID" || warn "Could not set owner of [$dirName] to [$userID:$groupID]" + fi + # logging message to print created dir with user and group + local logMessage=${4:-$printMessage} + if [[ "${logMessage}" == "yes" ]]; then + logger "Successfully created directory [${dirName}]. Owner: [${userID}:${groupID}]" + fi +} + +removeSoftLinkAndCreateDir () { + local dirName="$1" + local userID="$2" + local groupID="$3" + local logMessage="$4" + removeSoftLink "${dirName}" + createDir "${dirName}" "${userID}" "${groupID}" "${logMessage}" +} + +# Utility method to remove a soft link +removeSoftLink () { + local dirName="$1" + if [[ -L "${dirName}" ]]; then + targetLink=$(readlink -f "${dirName}") + logger "Removing the symlink [${dirName}] pointing to [${targetLink}]" + rm -f "${dirName}" + fi +} + +# Check Directory exist in the path +checkDirExists () { + local directoryPath="$1" + + [[ -d "${directoryPath}" ]] && echo -n "true" || echo -n "false" +} + + +# Utility method to create a file +# Failure conditions: +# Parameters: +## $1: file to create +# Depends on global: none +# Updates global: none +# Returns: NA + +createFile(){ + local fileName="$1" + logSilly "Method ${FUNCNAME[0]} [$fileName]" + [ -f "${fileName}" ] && return 0 + touch "${fileName}" || return 1 + + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the folder + if [ ! -z "$userID" ] && [ ! -z "$groupID" ]; then + io_setOwnership "$fileName" "$userID" "$groupID" || return 1 + fi +} + +# Check File exist in the filePath +# IMPORTANT- DON'T ADD LOGGING to this method +checkFileExists () { + local filePath="$1" + + [[ -f "${filePath}" ]] && echo -n "true" || echo -n "false" +} + +# Check for directories contains any (files or sub directories) +# IMPORTANT- DON'T ADD LOGGING to this method +checkDirContents () { + local directoryPath="$1" + if [[ "$(ls -1 "${directoryPath}" | wc -l)" -gt 0 ]]; then + echo -n "true" + else + echo -n "false" + fi +} + +# Check contents exist in directory +# IMPORTANT- DON'T ADD LOGGING to this method +checkContentExists () { + local source="$1" + + if [[ "$(checkDirContents "${source}")" != "true" ]]; then + echo -n "false" + else + echo -n "true" + fi +} + +# Resolve the variable +# IMPORTANT- DON'T ADD LOGGING to this method +evalVariable () { + local output="$1" + local input="$2" + + eval "${output}"=\${"${input}"} + eval echo \${"${output}"} +} + +# Usage: if [ "$(io_commandExists 'curl')" == "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_commandExists() { + local commandToExecute="$1" + hash "${commandToExecute}" 2>/dev/null + local rt=$? 
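+    # NOTE: 'hash' returns 0 only when the shell can resolve the command, so rt==0 maps to "yes" below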
+ if [ "$rt" == 0 ]; then echo -n "yes"; else echo -n "no"; fi +} + +# Usage: if [ "$(io_curlExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_curlExists() { + io_commandExists "curl" +} + + +io_hasMatch() { + logSilly "Method ${FUNCNAME[0]}" + local result=0 + logDebug "Executing [echo \"$1\" | grep \"$2\" >/dev/null 2>&1]" + echo "$1" | grep "$2" >/dev/null 2>&1 || result=1 + return $result +} + +# Utility method to check if the string passed (usually a connection url) corresponds to this machine itself +# Failure conditions: None +# Parameters: +## $1: string to check against +# Depends on global: none +# Updates global: IS_LOCALHOST with value "yes/no" +# Returns: NA + +io_getIsLocalhost() { + logSilly "Method ${FUNCNAME[0]}" + IS_LOCALHOST="$FLAG_N" + local inputString="$1" + logDebug "Parsing [$inputString] to check if we are dealing with this machine itself" + + io_hasMatch "$inputString" "localhost" && { + logDebug "Found localhost. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for localhost" + + local hostIP=$(io_getPublicHostIP) + io_hasMatch "$inputString" "$hostIP" && { + logDebug "Found $hostIP. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostIP" + + local hostID=$(io_getPublicHostID) + io_hasMatch "$inputString" "$hostID" && { + logDebug "Found $hostID. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostID" + + local hostName=$(io_getPublicHostName) + io_hasMatch "$inputString" "$hostName" && { + logDebug "Found $hostName. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostName" + +} + +# Usage: if [ "$(io_tarExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_tarExists() { + io_commandExists "tar" +} + +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostIP() { + local OS_TYPE=$(uname) + local publicHostIP= + if [ "${OS_TYPE}" == "Darwin" ]; then + ipStatus=$(ifconfig en0 | grep "status" | awk '{print$2}') + if [ "${ipStatus}" == "active" ]; then + publicHostIP=$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}') + else + errorExit "Host IP could not be resolved!" + fi + elif [ "${OS_TYPE}" == "Linux" ]; then + publicHostIP=$(hostname -i 2>/dev/null || echo "127.0.0.1") + fi + publicHostIP=$(echo "${publicHostIP}" | awk '{print $1}') + echo -n "${publicHostIP}" +} + +# Will return the short host name (up to the first dot) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostName() { + echo -n "$(hostname -s)" +} + +# Will return the full host name (use this as much as possible) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostID() { + echo -n "$(hostname)" +} + +# Utility method to backup a file +# Failure conditions: NA +# Parameters: filePath +# Depends on global: none, +# Updates global: none +# Returns: NA +io_backupFile() { + logSilly "Method ${FUNCNAME[0]}" + fileName="$1" + if [ ! 
-f "${filePath}" ]; then + logDebug "No file: [${filePath}] to backup" + return + fi + dateTime=$(date +"%Y-%m-%d-%H-%M-%S") + targetFileName="${fileName}.backup.${dateTime}" + yes | \cp -f "$fileName" "${targetFileName}" + logger "File [${fileName}] backedup as [${targetFileName}]" +} + +# Reference https://stackoverflow.com/questions/4023830/how-to-compare-two-strings-in-dot-separated-version-format-in-bash/4025065#4025065 +is_number() { + case "$BASH_VERSION" in + 3.1.*) + PATTERN='\^\[0-9\]+\$' + ;; + *) + PATTERN='^[0-9]+$' + ;; + esac + + [[ "$1" =~ $PATTERN ]] +} + +io_compareVersions() { + if [[ $# != 2 ]] + then + echo "Usage: min_version current minimum" + return + fi + + A="${1%%.*}" + B="${2%%.*}" + + if [[ "$A" != "$1" && "$B" != "$2" && "$A" == "$B" ]] + then + io_compareVersions "${1#*.}" "${2#*.}" + else + if is_number "$A" && is_number "$B" + then + if [[ "$A" -eq "$B" ]]; then + echo "0" + elif [[ "$A" -gt "$B" ]]; then + echo "1" + elif [[ "$A" -lt "$B" ]]; then + echo "-1" + fi + fi + fi +} + +# Reference https://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-a-bash-variable +# Strip all leading and trailing spaces +# IMPORTANT- DON'T ADD LOGGING to this method +io_trim() { + local var="$1" + # remove leading whitespace characters + var="${var#"${var%%[![:space:]]*}"}" + # remove trailing whitespace characters + var="${var%"${var##*[![:space:]]}"}" + echo -n "$var" +} + +# temporary function will be removing it ASAP +# search for string and replace text in file +replaceText_migration_hook () { + local regexString="$1" + local replaceText="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}" + else + sed -i -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}" + fi +} + +# search for string and replace text in file +replaceText () { + local regexString="$1" + local replaceText="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}" + else + sed -i -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}" + logDebug "Replaced [$regexString] with [$replaceText] in [$file]" + fi +} + +# search for string and prepend text in file +prependText () { + local regexString="$1" + local text="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}" + else + sed -i -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}" + fi +} + +# add text to beginning of the file +addText () { + local text="$1" + local file="$2" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}" + else + sed -i -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}" + fi +} + +io_replaceString () { + local value="$1" + local firstString="$2" + local secondString="$3" + local separator=${4:-"/"} + local updateValue= + if [[ 
$(uname) == "Darwin" ]]; then + updateValue=$(echo "${value}" | sed "s${separator}${firstString}${separator}${secondString}${separator}") + else + updateValue=$(echo "${value}" | sed "s${separator}${firstString}${separator}${secondString}${separator}") + fi + echo -n "${updateValue}" +} + +_findYQ() { + # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. Does not add value) + local parentDir="$1" + if [ -z "$parentDir" ]; then + return + fi + logDebug "Executing command [find "${parentDir}" -name third-party -type d]" + local yq=$(find "${parentDir}" -name third-party -type d) + if [ -d "${yq}/yq" ]; then + export YQ_PATH="${yq}/yq" + fi +} + + +io_setYQPath() { + # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. Does not add value) + if [ "$(io_commandExists 'yq')" == "yes" ]; then + return + fi + + if [ ! -z "${JF_PRODUCT_HOME}" ] && [ -d "${JF_PRODUCT_HOME}" ]; then + _findYQ "${JF_PRODUCT_HOME}" + fi + + if [ -z "${YQ_PATH}" ] && [ ! -z "${COMPOSE_HOME}" ] && [ -d "${COMPOSE_HOME}" ]; then + _findYQ "${COMPOSE_HOME}" + fi + # TODO We can remove this block after all the code is restructured. + if [ -z "${YQ_PATH}" ] && [ ! -z "${SCRIPT_HOME}" ] && [ -d "${SCRIPT_HOME}" ]; then + _findYQ "${SCRIPT_HOME}" + fi + +} + +io_getLinuxDistribution() { + LINUX_DISTRIBUTION= + + # Make sure running on Linux + [ $(uname -s) != "Linux" ] && return + + # Find out what Linux distribution we are on + + cat /etc/*-release | grep -i Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 6.x + cat /etc/issue.net | grep Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 7.x + cat /etc/*-release | grep -i centos >/dev/null 2>&1 && LINUX_DISTRIBUTION=CentOS && LINUX_DISTRIBUTION_VER="7" || true + + # OS 8.x + grep -q -i "release 8" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="8" || true + + # OS 7.x + grep -q -i "release 7" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="7" || true + + # OS 6.x + grep -q -i "release 6" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="6" || true + + cat /etc/*-release | grep -i Red | grep -i 'VERSION=7' >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat && LINUX_DISTRIBUTION_VER="7" || true + + cat /etc/*-release | grep -i debian >/dev/null 2>&1 && LINUX_DISTRIBUTION=Debian || true + + cat /etc/*-release | grep -i ubuntu >/dev/null 2>&1 && LINUX_DISTRIBUTION=Ubuntu || true +} + +## Utility method to check ownership of folders/files +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If file is not owned by the user & group +## Parameters: + ## user + ## group + ## folder to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac +io_checkOwner () { + logSilly "Method ${FUNCNAME[0]}" + local osType=$(uname) + + if [ "${osType}" != "Linux" ]; then + logDebug "Unsupported OS. Skipping check" + return 0 + fi + + local file_to_check=$1 + local user_id_to_check=$2 + + + if [ -z "$user_id_to_check" ] || [ -z "$file_to_check" ]; then + errorExit "Invalid invocation of method. 
Missing mandatory inputs" + fi + + local group_id_to_check=${3:-$user_id_to_check} + local check_user_name=${4:-"no"} + + logDebug "Checking permissions on [$file_to_check] for user [$user_id_to_check] & group [$group_id_to_check]" + + local stat= + + if [ "${check_user_name}" == "yes" ]; then + stat=( $(stat -Lc "%U %G" ${file_to_check}) ) + else + stat=( $(stat -Lc "%u %g" ${file_to_check}) ) + fi + + local user_id=${stat[0]} + local group_id=${stat[1]} + + if [[ "${user_id}" != "${user_id_to_check}" ]] || [[ "${group_id}" != "${group_id_to_check}" ]] ; then + logDebug "Ownership mismatch. [${file_to_check}] is not owned by [${user_id_to_check}:${group_id_to_check}]" + return 1 + else + return 0 + fi +} + +## Utility method to change ownership of a file/folder - NON recursive +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnershipNonRecursive() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. Executing [chown ${user}:${group} ${targetFile}]" + chown ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to change ownership of a file. +## IMPORTANT +## If being called on a folder, should ONLY be called for fresh folders or may cause performance issues +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnership() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. Executing [chown -R ${user}:${group} ${targetFile}]" + chown -R ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to create third party folder structure necessary for Postgres +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## POSTGRESQL_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createPostgresDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${POSTGRESQL_DATA_ROOT}" ] && return 0 + + logDebug "Property [${POSTGRESQL_DATA_ROOT}] exists. Proceeding" + + createDir "${POSTGRESQL_DATA_ROOT}/data" + io_setOwnership "${POSTGRESQL_DATA_ROOT}" "${POSTGRES_USER}" "${POSTGRES_USER}" || errorExit "Setting ownership of [${POSTGRESQL_DATA_ROOT}] to [${POSTGRES_USER}:${POSTGRES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Nginx +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## NGINX_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createNginxDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${NGINX_DATA_ROOT}" ] && return 0 + + logDebug "Property [${NGINX_DATA_ROOT}] exists. 
Proceeding" + + createDir "${NGINX_DATA_ROOT}" + io_setOwnership "${NGINX_DATA_ROOT}" "${NGINX_USER}" "${NGINX_GROUP}" || errorExit "Setting ownership of [${NGINX_DATA_ROOT}] to [${NGINX_USER}:${NGINX_GROUP}] failed" +} + +## Utility method to create third party folder structure necessary for ElasticSearch +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## ELASTIC_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createElasticSearchDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${ELASTIC_DATA_ROOT}" ] && return 0 + + logDebug "Property [${ELASTIC_DATA_ROOT}] exists. Proceeding" + + createDir "${ELASTIC_DATA_ROOT}/data" + io_setOwnership "${ELASTIC_DATA_ROOT}" "${ES_USER}" "${ES_USER}" || errorExit "Setting ownership of [${ELASTIC_DATA_ROOT}] to [${ES_USER}:${ES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Redis +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## REDIS_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRedisDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${REDIS_DATA_ROOT}" ] && return 0 + + logDebug "Property [${REDIS_DATA_ROOT}] exists. Proceeding" + + createDir "${REDIS_DATA_ROOT}" + io_setOwnership "${REDIS_DATA_ROOT}" "${REDIS_USER}" "${REDIS_USER}" || errorExit "Setting ownership of [${REDIS_DATA_ROOT}] to [${REDIS_USER}:${REDIS_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Mongo +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## MONGODB_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createMongoDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${MONGODB_DATA_ROOT}" ] && return 0 + + logDebug "Property [${MONGODB_DATA_ROOT}] exists. Proceeding" + + createDir "${MONGODB_DATA_ROOT}/logs" + createDir "${MONGODB_DATA_ROOT}/configdb" + createDir "${MONGODB_DATA_ROOT}/db" + io_setOwnership "${MONGODB_DATA_ROOT}" "${MONGO_USER}" "${MONGO_USER}" || errorExit "Setting ownership of [${MONGODB_DATA_ROOT}] to [${MONGO_USER}:${MONGO_USER}] failed" +} + +## Utility method to create third party folder structure necessary for RabbitMQ +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## RABBITMQ_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRabbitMQDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${RABBITMQ_DATA_ROOT}" ] && return 0 + + logDebug "Property [${RABBITMQ_DATA_ROOT}] exists. 
Proceeding" + + createDir "${RABBITMQ_DATA_ROOT}" + io_setOwnership "${RABBITMQ_DATA_ROOT}" "${RABBITMQ_USER}" "${RABBITMQ_USER}" || errorExit "Setting ownership of [${RABBITMQ_DATA_ROOT}] to [${RABBITMQ_USER}:${RABBITMQ_USER}] failed" +} + +# Add or replace a property in provided properties file +addOrReplaceProperty() { + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + local delimiter=${4:-"="} + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}\s*${delimiter}.*$" ${propertiesPath} > /dev/null 2>&1 + [ $? -ne 0 ] && echo -e "\n${propertyName}${delimiter}${propertyValue}" >> ${propertiesPath} + sed -i -e "s|^${propertyName}\s*${delimiter}.*$|${propertyName}${delimiter}${propertyValue}|g;" ${propertiesPath} +} + +# Set property only if its not set +io_setPropertyNoOverride(){ + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}:" ${propertiesPath} > /dev/null 2>&1 + if [ $? -ne 0 ]; then + echo -e "${propertyName}: ${propertyValue}" >> ${propertiesPath} || warn "Setting property ${propertyName}: ${propertyValue} in [ ${propertiesPath} ] failed" + else + logger "Skipping update of property : ${propertyName}" >&6 + fi +} + +# Add a line to a file if it doesn't already exist +addLine() { + local line_to_add=$1 + local target_file=$2 + logger "Trying to add line $1 to $2" >&6 2>&1 + cat "$target_file" | grep -F "$line_to_add" -wq >&6 2>&1 + if [ $? != 0 ]; then + logger "Line does not exist and will be added" >&6 2>&1 + echo $line_to_add >> $target_file || errorExit "Could not update $target_file" + fi +} + +# Utility method to check if a value (first parameter) exists in an array (2nd parameter) +# 1st parameter "value to find" +# 2nd parameter "The array to search in. Please pass a string with each value separated by space" +# Example: containsElement "y" "y Y n N" +containsElement () { + local searchElement=$1 + local searchArray=($2) + local found=1 + for elementInIndex in "${searchArray[@]}";do + if [[ $elementInIndex == $searchElement ]]; then + found=0 + fi + done + return $found +} + +# Utility method to get user's choice +# 1st parameter "what to ask the user" +# 2nd parameter "what choices to accept, separated by spaces" +# 3rd parameter "what is the default choice (to use if the user simply presses Enter)" +# Example 'getUserChoice "Are you feeling lucky? Punk!" "y n Y N" "y"' +getUserChoice(){ + configureLogOutput + read_timeout=${read_timeout:-0.5} + local choice="na" + local text_to_display=$1 + local choices=$2 + local default_choice=$3 + users_choice= + + until containsElement "$choice" "$choices"; do + echo "";echo ""; + sleep $read_timeout #This ensures correct placement of the question. 
+ read -p "$text_to_display :" choice + : ${choice:=$default_choice} + done + users_choice=$choice + echo -e "\n$text_to_display: $users_choice" >&6 + sleep $read_timeout #This ensures correct logging +} + +setFilePermission () { + local permission=$1 + local file=$2 + chmod "${permission}" "${file}" || warn "Setting permission ${permission} to file [ ${file} ] failed" +} + + +#setting required paths +setAppDir (){ + SCRIPT_DIR=$(dirname $0) + SCRIPT_HOME="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + APP_DIR="`cd "${SCRIPT_HOME}";pwd`" +} + +ZIP_TYPE="zip" +COMPOSE_TYPE="compose" +HELM_TYPE="helm" +RPM_TYPE="rpm" +DEB_TYPE="debian" + +sourceScript () { + local file="$1" + + [ ! -z "${file}" ] || errorExit "target file is not passed to source a file" + + if [ ! -f "${file}" ]; then + errorExit "${file} file is not found" + else + source "${file}" || errorExit "Unable to source ${file}, please check if the user ${USER} has permissions to perform this action" + fi +} +# Source required helpers +initHelpers () { + local systemYamlHelper="${APP_DIR}/systemYamlHelper.sh" + local thirdPartyDir=$(find ${APP_DIR}/.. -name third-party -type d) + export YQ_PATH="${thirdPartyDir}/yq" + LIBXML2_PATH="${thirdPartyDir}/libxml2/bin/xmllint" + export LD_LIBRARY_PATH="${thirdPartyDir}/libxml2/lib" + sourceScript "${systemYamlHelper}" +} +# Check migration info yaml file available in the path +checkMigrationInfoYaml () { + + if [[ -f "${APP_DIR}/migrationHelmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationHelmInfo.yaml" + INSTALLER="${HELM_TYPE}" + elif [[ -f "${APP_DIR}/migrationZipInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationZipInfo.yaml" + INSTALLER="${ZIP_TYPE}" + elif [[ -f "${APP_DIR}/migrationRpmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationRpmInfo.yaml" + INSTALLER="${RPM_TYPE}" + elif [[ -f "${APP_DIR}/migrationDebInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationDebInfo.yaml" + INSTALLER="${DEB_TYPE}" + elif [[ -f "${APP_DIR}/migrationComposeInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationComposeInfo.yaml" + INSTALLER="${COMPOSE_TYPE}" + else + errorExit "File migration Info yaml does not exist in [${APP_DIR}]" + fi +} + +retrieveYamlValue () { + local yamlPath="$1" + local value="$2" + local output="$3" + local message="$4" + + [[ -z "${yamlPath}" ]] && errorExit "yamlPath is mandatory to get value from ${MIGRATION_SYSTEM_YAML_INFO}" + + getYamlValue "${yamlPath}" "${MIGRATION_SYSTEM_YAML_INFO}" "false" + value="${YAML_VALUE}" + if [[ -z "${value}" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "Empty value for ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + elif [[ "${output}" == "Skip" ]]; then + return + else + errorExit "${message}" + fi + fi +} + +checkEnv () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + # check Environment JF_PRODUCT_HOME is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_PRODUCT_HOME")" + if [[ -z "${NEW_DATA_DIR}" ]]; then + errorExit "Environment variable JF_PRODUCT_HOME is not set, this is required to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + getCustomDataDir_hook + NEW_DATA_DIR="${OLD_DATA_DIR}" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + else + # check Environment JF_ROOT_DATA_DIR is 
set before migration + OLD_DATA_DIR="$(evalVariable "OLD_DATA_DIR" "JF_ROOT_DATA_DIR")" + # check Environment JF_ROOT_DATA_DIR is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_ROOT_DATA_DIR")" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi + +} + +getDataDir () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}"|| "${INSTALLER}" == "${HELM_TYPE}" ]]; then + checkEnv + else + getCustomDataDir_hook + NEW_DATA_DIR="`cd "${APP_DIR}"/../../;pwd`" + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi +} + +# Retrieve Product name from MIGRATION_SYSTEM_YAML_INFO +getProduct () { + retrieveYamlValue "migration.product" "${YAML_VALUE}" "Fail" "Empty value under ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + PRODUCT="${YAML_VALUE}" + PRODUCT=$(echo "${PRODUCT}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + if [[ "${PRODUCT}" != "artifactory" && "${PRODUCT}" != "distribution" && "${PRODUCT}" != "xray" ]]; then + errorExit "migration.product in [${MIGRATION_SYSTEM_YAML_INFO}] is not correct, please set based on product as ARTIFACTORY or DISTRIBUTION" + fi + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + JF_USER="${PRODUCT}" + fi +} +# Compare product version with minProductVersion and maxProductVersion +migrateCheckVersion () { + local productVersion="$1" + local minProductVersion="$2" + local maxProductVersion="$3" + local productVersion618="6.18.0" + local unSupportedProductVersions7=("7.2.0 7.2.1") + + if [[ "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 1 ]]; then + logger "Migration not necessary. ${PRODUCT} is already ${productVersion}" + exit 11 + elif [[ "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 1 ]]; then + if [[ ("$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 1) && " ${unSupportedProductVersions7[@]} " =~ " ${CURRENT_VERSION} " ]]; then + touch /tmp/error; + errorExit "Current ${PRODUCT} version (${productVersion}) does not support migration to ${CURRENT_VERSION}" + else + bannerStart "Detected ${PRODUCT} ${productVersion}, initiating migration" + fi + else + logger "Current ${PRODUCT} ${productVersion} version is not supported for migration" + exit 1 + fi +} + +getProductVersion () { + local minProductVersion="$1" + local maxProductVersion="$2" + local newfilePath="$3" + local oldfilePath="$4" + local propertyInDocker="$5" + local property="$6" + local productVersion= + local status= + + if [[ "$INSTALLER" == "${COMPOSE_TYPE}" ]]; then + if [[ -f "${oldfilePath}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + else + productVersion="$(cat "${oldfilePath}")" + fi + status="success" + elif [[ -f "${newfilePath}" ]]; then + productVersion="$(readKey "${propertyInDocker}" "${newfilePath}")" + status="fail" + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." 
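+            # No file to read a version from; exit with status 0 so this is not reported as a migration failure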
+ exit 0 + fi + elif [[ "$INSTALLER" == "${HELM_TYPE}" ]]; then + if [[ -f "${oldfilePath}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + else + productVersion="$(cat "${oldfilePath}")" + fi + status="success" + else + productVersion="${CURRENT_VERSION}" + [[ -z "${productVersion}" || "${productVersion}" == "" ]] && logger "${PRODUCT} CURRENT_VERSION is not set" && exit 0 + fi + else + if [[ -f "${newfilePath}" ]]; then + productVersion="$(readKey "${property}" "${newfilePath}")" + status="fail" + elif [[ -f "${oldfilePath}" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + status="success" + else + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + logger "File [${newfilePath}] not found to get current version." + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." + fi + exit 0 + fi + fi + if [[ -z "${productVersion}" || "${productVersion}" == "" ]]; then + [[ "${status}" == "success" ]] && logger "No version found in file [${oldfilePath}]." + [[ "${status}" == "fail" ]] && logger "No version found in file [${newfilePath}]." + exit 0 + fi + + migrateCheckVersion "${productVersion}" "${minProductVersion}" "${maxProductVersion}" +} + +readKey () { + local property="$1" + local file="$2" + local version= + + while IFS='=' read -r key value || [ -n "${key}" ]; + do + [[ ! "${key}" =~ \#.* && ! -z "${key}" && ! -z "${value}" ]] + key="$(io_trim "${key}")" + if [[ "${key}" == "${property}" ]]; then + version="${value}" && check=true && break + else + check=false + fi + done < "${file}" + if [[ "${check}" == "false" ]]; then + return + fi + echo "${version}" +} + +# create Log directory +createLogDir () { + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" + fi +} + +# Creating migration log file +creationMigrateLog () { + local LOG_FILE_NAME="migration.log" + createLogDir + local MIGRATION_LOG_FILE="${NEW_DATA_DIR}/log/${LOG_FILE_NAME}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + MIGRATION_LOG_FILE="${SCRIPT_HOME}/${LOG_FILE_NAME}" + fi + touch "${MIGRATION_LOG_FILE}" + setFilePermission "${LOG_FILE_PERMISSION}" "${MIGRATION_LOG_FILE}" + exec &> >(tee -a "${MIGRATION_LOG_FILE}") +} +# Set path where system.yaml should create +setSystemYamlPath () { + SYSTEM_YAML_PATH="${NEW_DATA_DIR}/etc/system.yaml" + if [[ "${INSTALLER}" != "${HELM_TYPE}" ]]; then + logger "system.yaml will be created in path [${SYSTEM_YAML_PATH}]" + fi +} +# Create directory +createDirectory () { + local directory="$1" + local output="$2" + local check=false + local message="Could not create directory ${directory}, please check if the user ${USER} has permissions to perform this action" + removeSoftLink "${directory}" + mkdir -p "${directory}" && check=true || check=false + if [[ "${check}" == "false" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "${message}" + else + errorExit "${message}" + fi + fi + setOwnershipBasedOnInstaller "${directory}" +} + +setOwnershipBasedOnInstaller () { + local directory="$1" + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + chown -R ${USER_TO_CHECK}:${GROUP_TO_CHECK} "${directory}" || warn "Setting ownership on $directory failed" + elif [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == 
"${HELM_TYPE}" ]]; then + io_setOwnership "${directory}" "${JF_USER}" "${JF_USER}" + fi +} + +getUserAndGroup () { + local file="$1" + read uid gid <<<$(stat -c '%U %G' ${file}) + USER_TO_CHECK="${uid}" + GROUP_TO_CHECK="${gid}" +} + +# set ownership +getUserAndGroupFromFile () { + case $PRODUCT in + artifactory) + getUserAndGroup "/etc/opt/jfrog/artifactory/artifactory.properties" + ;; + distribution) + getUserAndGroup "${OLD_DATA_DIR}/etc/versions.properties" + ;; + xray) + getUserAndGroup "${OLD_DATA_DIR}/security/master.key" + ;; + esac +} + +# creating required directories +createRequiredDirs () { + bannerSubSection "CREATING REQUIRED DIRECTORIES" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${JF_USER}" "${JF_USER}" "yes" + io_setOwnership "${NEW_DATA_DIR}" "${JF_USER}" "${JF_USER}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data/postgres" "${POSTGRES_USER}" "${POSTGRES_USER}" "yes" + fi + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + fi +} + +# Check entry in map is format +checkMapEntry () { + local entry="$1" + + [[ "${entry}" != *"="* ]] && echo -n "false" || echo -n "true" +} +# Check value Empty and warn +warnIfEmpty () { + local filePath="$1" + local yamlPath="$2" + local check= + + if [[ -z "${filePath}" ]]; then + warn "Empty value in yamlpath [${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + check=false + else + check=true + fi + echo "${check}" +} + +logCopyStatus () { + local status="$1" + local logMessage="$2" + local warnMessage="$3" + + [[ "${status}" == "success" ]] && logger "${logMessage}" + [[ "${status}" == "fail" ]] && warn "${warnMessage}" +} +# copy contents from source to destination +copyCmd () { + local source="$1" + local target="$2" + local mode="$3" + local status= + + case $mode in + unique) + cp -up "${source}"/* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to [${target}]" + ;; + specific) + cp -pf "${source}" "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied file [${source}] to [${target}]" "Failed to copy file [${source}] to [${target}]" + ;; + patternFiles) + cp -pf "${source}"* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied files matching 
[${source}*] to [${target}]" "Failed to copy files matching [${source}*] to [${target}]" + ;; + full) + cp -prf "${source}"/* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to [${target}]" + ;; + esac +} +# Check contents exist in source before copying +copyOnContentExist () { + local source="$1" + local target="$2" + local mode="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + copyCmd "${source}" "${target}" "${mode}" + else + logger "No contents to copy from [${source}]" + fi +} + +# move source to destination +moveCmd () { + local source="$1" + local target="$2" + local status= + + mv -f "${source}" "${target}" && status="success" || status="fail" + [[ "${status}" == "success" ]] && logger "Successfully moved directory [${source}] to [${target}]" + [[ "${status}" == "fail" ]] && warn "Failed to move directory [${source}] to [${target}]" +} + +# symlink target to source +symlinkCmd () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + local check=false + + if [[ "${symlinkSubDir}" == "subDir" ]]; then + ln -sf "${source}"/* "${target}" && check=true || check=false + else + ln -sf "${source}" "${target}" && check=true || check=false + fi + + [[ "${check}" == "true" ]] && logger "Successfully symlinked directory [${target}] to old [${source}]" + [[ "${check}" == "false" ]] && warn "Symlink operation failed" +} +# Check contents exist in source before symlinking +symlinkOnExist () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + if [[ "${symlinkSubDir}" == "subDir" ]]; then + symlinkCmd "${source}" "${target}" "subDir" + else + symlinkCmd "${source}" "${target}" + fi + else + logger "No contents to symlink from [${source}]" + fi +} + +prependDir () { + local absolutePath="$1" + local fullPath="$2" + local sourcePath= + + if [[ "${absolutePath}" = \/* ]]; then + sourcePath="${absolutePath}" + else + sourcePath="${fullPath}" + fi + echo "${sourcePath}" +} + +getFirstEntry (){ + local entry="$1" + + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $1}' +} + +getSecondEntry () { + local entry="$1" + + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $2}' +} +# To get absolutePath +pathResolver () { + local directoryPath="$1" + local dataDir= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Warning" + dataDir="${YAML_VALUE}" + cd "${dataDir}" + else + cd "${OLD_DATA_DIR}" + fi + absoluteDir="`cd "${directoryPath}";pwd`" + echo "${absoluteDir}" +} + +checkPathResolver () { + local value="$1" + + if [[ "${value}" == \/* ]]; then + value="${value}" + else + value="$(pathResolver "${value}")" + fi + echo "${value}" +} + +propertyMigrate () { + local entry="$1" + local filePath="$2" + local fileName="$3" + local check=false + + local yamlPath="$(getFirstEntry "${entry}")" + local property="$(getSecondEntry "${entry}")" + if [[ -z "${property}" ]]; then + warn "Property is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${property}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + local keyValues=$(cat "${NEW_DATA_DIR}/${filePath}/${fileName}" | grep "^[^#]" | grep "[*=*]") + for i in 
${keyValues}; do + key=$(echo "${i}" | awk -F"=" '{print $1}') + value=$(echo "${i}" | cut -f 2- -d '=') + [ -z "${key}" ] && continue + [ -z "${value}" ] && continue + if [[ "${key}" == "${property}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + value="$(migrateResolveDerbyPath "${key}" "${value}")" + value="$(migrateResolveHaDirPath "${key}" "${value}")" + if [[ "${INSTALLER}" != "${DOCKER_TYPE}" ]]; then + value="$(updatePostgresUrlString_Hook "${yamlPath}" "${value}")" + fi + fi + if [[ "${key}" == "context.url" ]]; then + local ip=$(echo "${value}" | awk -F/ '{print $3}' | sed 's/:.*//') + setSystemValue "shared.node.ip" "${ip}" "${SYSTEM_YAML_PATH}" + logger "Setting [shared.node.ip] with [${ip}] in system.yaml" + fi + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" && logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" && check=true && break || check=false + fi + done + [[ "${check}" == "false" ]] && logger "Property [${property}] not found in file [${fileName}]" +} + +setHaEnabled_hook () { + echo "" +} + +migratePropertiesFiles () { + local fileList= + local filePath= + local fileName= + local map= + + retrieveYamlValue "migration.propertyFiles.files" "fileList" "Skip" + fileList="${YAML_VALUE}" + if [[ -z "${fileList}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF PROPERTY FILES" + for file in ${fileList}; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.propertyFiles.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.propertyFiles.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + if [[ "$(checkFileExists "${NEW_DATA_DIR}/${filePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + # setting haEnabled with true only if ha-node.properties is present + setHaEnabled_hook "${filePath}" + retrieveYamlValue "migration.propertyFiles.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + propertyMigrate "${entry}" "${filePath}" "${fileName}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=property" + fi + done + else + logger "File [${fileName}] was not found in path [${NEW_DATA_DIR}/${filePath}] to migrate" + fi + done +} + +createTargetDir () { + local mountDir="$1" + local target="$2" + + logger "Target directory not found [${mountDir}/${target}], creating it" + createDirectoryRecursive "${mountDir}" "${target}" "Warning" +} + +createDirectoryRecursive () { + local mountDir="$1" + local target="$2" + local output="$3" + local check=false + local message="Could not create directory ${directory}, please check if the user ${USER} has permissions to perform this action" + removeSoftLink "${mountDir}/${target}" + local directory=$(echo "${target}" | tr '/' ' ' ) + local targetDir="${mountDir}" + for dir in ${directory}; + do + targetDir="${targetDir}/${dir}" + mkdir -p "${targetDir}" && check=true || check=false + setOwnershipBasedOnInstaller "${targetDir}" + done + if [[ "${check}" == "false" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "${message}" + else + errorExit "${message}" + fi + fi +} + +copyOperation () { + local source="$1" + local target="$2" + local mode="$3" + local check=false + local 
targetDataDir= + local targetLink= + local date= + + # prepend OLD_DATA_DIR only if source is relative path + source="$(prependDir "${source}" "${OLD_DATA_DIR}/${source}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + #remove source if it is a symlink + if [[ -L "${source}" ]]; then + targetLink=$(readlink -f "${source}") + logger "Removing the symlink [${source}] pointing to [${targetLink}]" + rm -f "${source}" + source=${targetLink} + fi + if [[ "$(checkDirExists "${source}")" != "true" ]]; then + logger "Source [${source}] directory not found in path" + return + fi + if [[ "$(checkDirContents "${source}")" != "true" ]]; then + logger "No contents to copy from [${source}]" + return + fi + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyOnContentExist "${source}" "${targetDataDir}/${target}" "${mode}" +} + +copySpecificFiles () { + local source="$1" + local target="$2" + local mode="$3" + + # prepend OLD_DATA_DIR only if source is relative path + source="$(prependDir "${source}" "${OLD_DATA_DIR}/${source}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkFileExists "${source}")" != "true" ]]; then + logger "Source file [${source}] does not exist in path" + return + fi + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${source}" "${targetDataDir}/${target}" "${mode}" +} + +copyPatternMatchingFiles () { + local source="$1" + local target="$2" + local mode="$3" + local sourcePath="${4}" + + # prepend OLD_DATA_DIR only if source is relative path + sourcePath="$(prependDir "${sourcePath}" "${OLD_DATA_DIR}/${sourcePath}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkDirExists "${sourcePath}")" != "true" ]]; then + logger "Source [${sourcePath}] directory not found in path" + return + fi + if ls "${sourcePath}/${source}"* 1> /dev/null 2>&1; then + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${sourcePath}/${source}" "${targetDataDir}/${target}" "${mode}" + else + logger "Source file [${sourcePath}/${source}*] does not exist in path" + fi +} + +copyLogMessage () { + local mode="$1" + case $mode in + specific) + logger "Copy file [${source}] to target [${targetDataDir}/${target}]" + ;; + patternFiles) + logger "Copy files matching [${sourcePath}/${source}*] to target [${targetDataDir}/${target}]" + ;; + full) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + unique) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + esac +} + +copyBannerMessages () { + local mode="$1" + local textMode="$2" + case $mode in + specific) + bannerSection "COPY ${textMode} FILES" + ;; + patternFiles) + bannerSection "COPY MATCHING ${textMode}" + ;; + full) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + unique) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + esac +} + +invokeCopyFunctions () { + local mode="$1" 
+ local source="$2" + local target="$3" + + case $mode in + specific) + copySpecificFiles "${source}" "${target}" "${mode}" + ;; + patternFiles) + retrieveYamlValue "migration.${copyFormat}.sourcePath" "map" "Warning" + local sourcePath="${YAML_VALUE}" + copyPatternMatchingFiles "${source}" "${target}" "${mode}" "${sourcePath}" + ;; + full) + copyOperation "${source}" "${target}" "${mode}" + ;; + unique) + copyOperation "${source}" "${target}" "${mode}" + ;; + esac +} +# Copies contents from source directory and target directory +copyDataDirectories () { + local copyFormat="$1" + local mode="$2" + local map= + local source= + local target= + local textMode= + local targetDataDir= + local copyFormatValue= + + retrieveYamlValue "migration.${copyFormat}" "${copyFormat}" "Skip" + copyFormatValue="${YAML_VALUE}" + if [[ -z "${copyFormatValue}" ]]; then + return + fi + textMode=$(echo "${mode}" | tr '[:lower:]' '[:upper:]' 2>/dev/null) + copyBannerMessages "${mode}" "${textMode}" + retrieveYamlValue "migration.${copyFormat}.map" "map" "Warning" + map="${YAML_VALUE}" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + invokeCopyFunctions "${mode}" "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +invokeMoveFunctions () { + local source="$1" + local target="$2" + local sourceDataDir= + local targetBasename= + # prepend OLD_DATA_DIR only if source is relative path + sourceDataDir=$(prependDir "${source}" "${OLD_DATA_DIR}/${source}") + targetBasename=$(dirname "${target}") + logger "Moving directory source [${sourceDataDir}] to target [${NEW_DATA_DIR}/${target}]" + if [[ "$(checkDirExists "${sourceDataDir}")" != "true" ]]; then + logger "Directory [${sourceDataDir}] not found in path to move" + return + fi + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${targetBasename}")" != "true" ]]; then + createTargetDir "${NEW_DATA_DIR}" "${targetBasename}" + moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/${target}" + else + moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/tempDir" + moveCmd "${NEW_DATA_DIR}/tempDir" "${NEW_DATA_DIR}/${target}" + fi +} + +# Move source directory and target directory +moveDirectories () { + local moveDataDirectories= + local map= + local source= + local target= + + retrieveYamlValue "migration.moveDirectories" "moveDirectories" "Skip" + moveDirectories="${YAML_VALUE}" + if [[ -z "${moveDirectories}" ]]; then + return + fi + bannerSection "MOVE DIRECTORIES" + retrieveYamlValue "migration.moveDirectories.map" "map" "Warning" + map="${YAML_VALUE}" + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + invokeMoveFunctions "${source}" 
"${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +# Trim masterKey if its generated using hex 32 +trimMasterKey () { + local masterKeyDir=/opt/jfrog/artifactory/var/etc/security + local oldMasterKey=$(<${masterKeyDir}/master.key) + local oldMasterKey_Length=$(echo ${#oldMasterKey}) + local newMasterKey= + if [[ ${oldMasterKey_Length} -gt 32 ]]; then + bannerSection "TRIM MASTERKEY" + newMasterKey=$(echo ${oldMasterKey:0:32}) + cp ${masterKeyDir}/master.key ${masterKeyDir}/backup_master.key + logger "Original masterKey is backed up : ${masterKeyDir}/backup_master.key" + rm -rf ${masterKeyDir}/master.key + echo ${newMasterKey} > ${masterKeyDir}/master.key + logger "masterKey is trimmed : ${masterKeyDir}/master.key" + fi +} + +copyDirectories () { + + copyDataDirectories "copyFiles" "full" + copyDataDirectories "copyUniqueFiles" "unique" + copyDataDirectories "copySpecificFiles" "specific" + copyDataDirectories "copyPatternMatchingFiles" "patternFiles" +} + +symlinkDir () { + local source="$1" + local target="$2" + local targetDir= + local basename= + local targetParentDir= + + targetDir="$(dirname "${target}")" + if [[ "${targetDir}" == "${source}" ]]; then + # symlink the sub directories + createDirectory "${NEW_DATA_DIR}/${target}" "Warning" + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" "subDir" + basename="$(basename "${target}")" + cd "${NEW_DATA_DIR}/${target}" && rm -f "${basename}" + fi + else + targetParentDir="$(dirname "${NEW_DATA_DIR}/${target}")" + createDirectory "${targetParentDir}" "Warning" + if [[ "$(checkDirExists "${targetParentDir}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" + fi + fi +} + +symlinkOperation () { + local source="$1" + local target="$2" + local check=false + local targetLink= + local date= + + # Check if source is a link and do symlink + if [[ -L "${OLD_DATA_DIR}/${source}" ]]; then + targetLink=$(readlink -f "${OLD_DATA_DIR}/${source}") + symlinkOnExist "${targetLink}" "${NEW_DATA_DIR}/${target}" + else + # check if source is directory and do symlink + if [[ "$(checkDirExists "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "Source [${source}] directory not found in path to symlink" + return + fi + if [[ "$(checkDirContents "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "No contents found in [${OLD_DATA_DIR}/${source}] to symlink" + return + fi + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" != "true" ]]; then + logger "Target directory [${NEW_DATA_DIR}/${target}] does not exist to create symlink, creating it" + symlinkDir "${source}" "${target}" + else + rm -rf "${NEW_DATA_DIR}/${target}" && check=true || check=false + [[ "${check}" == "false" ]] && warn "Failed to remove contents in [${NEW_DATA_DIR}/${target}/]" + symlinkDir "${source}" "${target}" + fi + fi +} +# Creates a symlink path - Source directory to which the symbolic link should point. 
+symlinkDirectories () { + local linkFiles= + local map= + local source= + local target= + + retrieveYamlValue "migration.linkFiles" "linkFiles" "Skip" + linkFiles="${YAML_VALUE}" + if [[ -z "${linkFiles}" ]]; then + return + fi + bannerSection "SYMLINK DIRECTORIES" + retrieveYamlValue "migration.linkFiles.map" "map" "Warning" + map="${YAML_VALUE}" + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + logger "Symlink directory [${NEW_DATA_DIR}/${target}] to old [${OLD_DATA_DIR}/${source}]" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + symlinkOperation "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +updateConnectionString () { + local yamlPath="$1" + local value="$2" + local mongoPath="shared.mongo.url" + local rabbitmqPath="shared.rabbitMq.url" + local postgresPath="shared.database.url" + local redisPath="shared.redis.connectionString" + local mongoConnectionString="mongo.connectionString" + local sourceKey= + local hostIp=$(io_getPublicHostIP) + local hostKey= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + # Replace @postgres:,@mongodb:,@rabbitmq:,@redis: to @{hostIp}: (Compose Installer) + hostKey="@${hostIp}:" + case $yamlPath in + ${postgresPath}) + sourceKey="@postgres:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoPath}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${rabbitmqPath}) + sourceKey="@rabbitmq:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${redisPath}) + sourceKey="@redis:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoConnectionString}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + esac + fi + echo -n "${value}" +} + +yamlMigrate () { + local entry="$1" + local sourceFile="$2" + local value= + local yamlPath= + local key= + yamlPath="$(getFirstEntry "${entry}")" + key="$(getSecondEntry "${entry}")" + if [[ -z "${key}" ]]; then + warn "key is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${key}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + getYamlValue "${key}" "${sourceFile}" "false" + value="${YAML_VALUE}" + if [[ ! 
-z "${value}" ]]; then + value=$(updateConnectionString "${yamlPath}" "${value}") + fi + if [[ -z "${value}" ]]; then + logger "No value for [${key}] in [${sourceFile}]" + else + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the key [${key}] in system.yaml" + fi +} + +migrateYamlFile () { + local files= + local filePath= + local fileName= + local sourceFile= + local map= + retrieveYamlValue "migration.yaml.files" "files" "Skip" + files="${YAML_VALUE}" + if [[ -z "${files}" ]]; then + return + fi + bannerSection "MIGRATION OF YAML FILES" + for file in $files; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.yaml.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.yaml.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + sourceFile="${NEW_DATA_DIR}/${filePath}/${fileName}" + if [[ "$(checkFileExists "${sourceFile}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + retrieveYamlValue "migration.yaml.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + yamlMigrate "${entry}" "${sourceFile}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=key" + fi + done + else + logger "File [${fileName}] is not found in path [${NEW_DATA_DIR}/${filePath}] to migrate" + fi + done +} +# updates the key and value in system.yaml +updateYamlKeyValue () { + local entry="$1" + local value= + local yamlPath= + local key= + + yamlPath="$(getFirstEntry "${entry}")" + value="$(getSecondEntry "${entry}")" + if [[ -z "${value}" ]]; then + warn "value is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${key}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value [${value}] in system.yaml" +} + +updateSystemYamlFile () { + local updateYaml= + local map= + + retrieveYamlValue "migration.updateSystemYaml" "updateYaml" "Skip" + updateSystemYaml="${YAML_VALUE}" + if [[ -z "${updateSystemYaml}" ]]; then + return + fi + bannerSection "UPDATE SYSTEM YAML FILE WITH KEY AND VALUES" + retrieveYamlValue "migration.updateSystemYaml.map" "map" "Warning" + map="${YAML_VALUE}" + if [[ -z "${map}" ]]; then + return + fi + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + updateYamlKeyValue "${entry}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=key" + fi + done +} + +backupFiles_hook () { + logSilly "Method ${FUNCNAME[0]}" +} + +backupDirectory () { + local backupDir="$1" + local dir="$2" + local targetDir="$3" + local effectiveUser= + local effectiveGroup= + + if [[ "${dir}" = \/* ]]; then + dir=$(echo "${dir/\//}") + fi + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + effectiveUser="${JF_USER}" + effectiveGroup="${JF_USER}" + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + effectiveUser="${USER_TO_CHECK}" + effectiveGroup="${GROUP_TO_CHECK}" + fi + + removeSoftLinkAndCreateDir "${backupDir}" 
"${effectiveUser}" "${effectiveGroup}" "yes" + local backupDirectory="${backupDir}/${PRODUCT}" + removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes" + removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")" + if [[ "${outputCheckDirExists}" == "true" ]]; then + copyOnContentExist "${targetDir}" "${backupDirectory}/${dir}" "full" + fi +} + +removeOldDirectory () { + local backupDir="$1" + local entry="$2" + local check=false + + # prepend OLD_DATA_DIR only if entry is relative path + local targetDir="$(prependDir "${entry}" "${OLD_DATA_DIR}/${entry}")" + local outputCheckDirExists="$(checkDirExists "${targetDir}")" + if [[ "${outputCheckDirExists}" != "true" ]]; then + logger "No [${targetDir}] directory found to delete" + echo ""; + return + fi + backupDirectory "${backupDir}" "${entry}" "${targetDir}" + rm -rf "${targetDir}" && check=true || check=false + [[ "${check}" == "true" ]] && logger "Successfully removed directory [${targetDir}]" + [[ "${check}" == "false" ]] && warn "Failed to remove directory [${targetDir}]" + echo ""; +} + +cleanUpOldDataDirectories () { + local cleanUpOldDataDir= + local map= + local entry= + + retrieveYamlValue "migration.cleanUpOldDataDir" "cleanUpOldDataDir" "Skip" + cleanUpOldDataDir="${YAML_VALUE}" + if [[ -z "${cleanUpOldDataDir}" ]]; then + return + fi + bannerSection "CLEAN UP OLD DATA DIRECTORIES" + retrieveYamlValue "migration.cleanUpOldDataDir.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + date="$(date +%Y%m%d%H%M)" + backupDir="${NEW_DATA_DIR}/backup/backup-${date}" + bannerImportant "****** Old data configurations are backedup in [${backupDir}] directory ******" + backupFiles_hook "${backupDir}/${PRODUCT}" + for entry in $map; + do + removeOldDirectory "${backupDir}" "${entry}" + done +} + +backupFiles () { + local backupDir="$1" + local dir="$2" + local targetDir="$3" + local fileName="$4" + local effectiveUser= + local effectiveGroup= + + if [[ "${dir}" = \/* ]]; then + dir=$(echo "${dir/\//}") + fi + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + effectiveUser="${JF_USER}" + effectiveGroup="${JF_USER}" + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + effectiveUser="${USER_TO_CHECK}" + effectiveGroup="${GROUP_TO_CHECK}" + fi + + removeSoftLinkAndCreateDir "${backupDir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local backupDirectory="${backupDir}/${PRODUCT}" + removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes" + removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")" + if [[ "${outputCheckDirExists}" == "true" ]]; then + copyCmd "${targetDir}/${fileName}" "${backupDirectory}/${dir}" "specific" + fi +} + +removeOldFiles () { + local backupDir="$1" + local directoryName="$2" + local fileName="$3" + local check=false + + # prepend OLD_DATA_DIR only if entry is relative path + local targetDir="$(prependDir "${directoryName}" "${OLD_DATA_DIR}/${directoryName}")" + local outputCheckFileExists="$(checkFileExists "${targetDir}/${fileName}")" + if [[ "${outputCheckFileExists}" != "true" ]]; then + logger "No [${targetDir}/${fileName}] file found to delete" + return + fi + backupFiles "${backupDir}" "${directoryName}" 
"${targetDir}" "${fileName}" + rm -f "${targetDir}/${fileName}" && check=true || check=false + [[ "${check}" == "true" ]] && logger "Successfully removed file [${targetDir}/${fileName}]" + [[ "${check}" == "false" ]] && warn "Failed to remove file [${targetDir}/${fileName}]" + echo ""; +} + +cleanUpOldFiles () { + local cleanUpFiles= + local map= + local entry= + + retrieveYamlValue "migration.cleanUpOldFiles" "cleanUpOldFiles" "Skip" + cleanUpOldFiles="${YAML_VALUE}" + if [[ -z "${cleanUpOldFiles}" ]]; then + return + fi + bannerSection "CLEAN UP OLD FILES" + retrieveYamlValue "migration.cleanUpOldFiles.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + date="$(date +%Y%m%d%H%M)" + backupDir="${NEW_DATA_DIR}/backup/backup-${date}" + bannerImportant "****** Old files are backedup in [${backupDir}] directory ******" + for entry in $map; + do + local outputCheckMapEntry="$(checkMapEntry "${entry}")" + if [[ "${outputCheckMapEntry}" != "true" ]]; then + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e directoryName=fileName" + fi + local fileName="$(getSecondEntry "${entry}")" + local directoryName="$(getFirstEntry "${entry}")" + [[ -z "${fileName}" ]] && warn "File name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${directoryName}" ]] && warn "Directory name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + removeOldFiles "${backupDir}" "${directoryName}" "${fileName}" + echo ""; + done +} + +startMigration () { + bannerSection "STARTING MIGRATION" +} + +endMigration () { + bannerSection "MIGRATION COMPLETED SUCCESSFULLY" +} + +initialize () { + setAppDir + _pauseExecution "setAppDir" + initHelpers + _pauseExecution "initHelpers" + checkMigrationInfoYaml + _pauseExecution "checkMigrationInfoYaml" + getProduct + _pauseExecution "getProduct" + getDataDir + _pauseExecution "getDataDir" +} + +main () { + case $PRODUCT in + artifactory) + migrateArtifactory + ;; + distribution) + migrateDistribution + ;; + xray) + migrationXray + ;; + esac + exit 0 +} + +# Ensures meta data is logged +LOG_BEHAVIOR_ADD_META="$FLAG_Y" + + +migrateResolveDerbyPath () { + local key="$1" + local value="$2" + + if [[ "${key}" == "url" && "${value}" == *"db.home"* ]]; then + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + derbyPath="/opt/jfrog/artifactory/var/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + else + derbyPath="${NEW_DATA_DIR}/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + fi + fi + echo "${value}" +} + +migrateResolveHaDirPath () { + local key="$1" + local value="$2" + + if [[ "${INSTALLER}" == "${RPM_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" || "${INSTALLER}" == "${DEB_TYPE}" ]]; then + if [[ "${key}" == "artifactory.ha.data.dir" || "${key}" == "artifactory.ha.backup.dir" ]]; then + value=$(checkPathResolver "${value}") + fi + fi + echo "${value}" +} +updatePostgresUrlString_Hook () { + local yamlPath="$1" + local value="$2" + local hostIp=$(io_getPublicHostIP) + local sourceKey="//postgresql:" + if [[ "${yamlPath}" == "shared.database.url" ]]; then + value=$(io_replaceString "${value}" "${sourceKey}" "//${hostIp}:" "#") + fi + echo "${value}" +} +# Check Artifactory product version +checkArtifactoryVersion () { + local minProductVersion="6.0.0" + local maxProductVersion="7.0.0" + local propertyInDocker="ARTIFACTORY_VERSION" + local 
property="artifactory.version" + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + local newfilePath="${APP_DIR}/../.env" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + local newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + else + local newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="/etc/opt/jfrog/artifactory/artifactory.properties" + fi + + getProductVersion "${minProductVersion}" "${maxProductVersion}" "${newfilePath}" "${oldfilePath}" "${propertyInDocker}" "${property}" +} + +getCustomDataDir_hook () { + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Fail" + OLD_DATA_DIR="${YAML_VALUE}" +} + +# Get protocol value of connector +getXmlConnectorProtocol () { + local i="$1" + local filePath="$2" + local fileName="$3" + local protocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@protocol' ${filePath}/${fileName} 2>/dev/null |awk -F"=" '{print $2}' | tr -d '"') + echo -e "${protocolValue}" +} + +# Get all attributes of connector +getXmlConnectorAttributes () { + local i="$1" + local filePath="$2" + local fileName="$3" + local connectorAttributes=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@*' ${filePath}/${fileName} 2>/dev/null) + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + echo "${connectorAttributes}" +} + +# Get port value of connector +getXmlConnectorPort () { + local i="$1" + local filePath="$2" + local fileName="$3" + local portValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@port' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${portValue}" +} + +# Get maxThreads value of connector +getXmlConnectorMaxThreads () { + local i="$1" + local filePath="$2" + local fileName="$3" + local maxThreadValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@maxThreads' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${maxThreadValue}" +} +# Get sendReasonPhrase value of connector +getXmlConnectorSendReasonPhrase () { + local i="$1" + local filePath="$2" + local fileName="$3" + local sendReasonPhraseValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@sendReasonPhrase' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${sendReasonPhraseValue}" +} +# Get relaxedPathChars value of connector +getXmlConnectorRelaxedPathChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedPathCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedPathChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedPathCharsValue=$(io_trim "${relaxedPathCharsValue}") + echo -e "${relaxedPathCharsValue}" +} +# Get relaxedQueryChars value of connector +getXmlConnectorRelaxedQueryChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedQueryCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedQueryChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedQueryCharsValue=$(io_trim "${relaxedQueryCharsValue}") + echo -e 
"${relaxedQueryCharsValue}" +} + +# Updating system.yaml with Connector port +setConnectorPort () { + local yamlPath="$1" + local valuePort="$2" + local portYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${valuePort}" ]]; then + warn "port value is empty, could not migrate to system.yaml" + return + fi + ## Getting port yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" portYamlPath "Warning" + portYamlPath="${YAML_VALUE}" + if [[ -z "${portYamlPath}" ]]; then + return + fi + setSystemValue "${portYamlPath}" "${valuePort}" "${SYSTEM_YAML_PATH}" + logger "Setting [${portYamlPath}] with value [${valuePort}] in system.yaml" +} + +# Updating system.yaml with Connector maxThreads +setConnectorMaxThread () { + local yamlPath="$1" + local threadValue="$2" + local maxThreadYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${threadValue}" ]]; then + return + fi + ## Getting max Threads yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" maxThreadYamlPath "Warning" + maxThreadYamlPath="${YAML_VALUE}" + if [[ -z "${maxThreadYamlPath}" ]]; then + return + fi + setSystemValue "${maxThreadYamlPath}" "${threadValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${maxThreadYamlPath}] with value [${threadValue}] in system.yaml" +} + +# Updating system.yaml with Connector sendReasonPhrase +setConnectorSendReasonPhrase () { + local yamlPath="$1" + local sendReasonPhraseValue="$2" + local sendReasonPhraseYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${sendReasonPhraseValue}" ]]; then + return + fi + ## Getting sendReasonPhrase yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" sendReasonPhraseYamlPath "Warning" + sendReasonPhraseYamlPath="${YAML_VALUE}" + if [[ -z "${sendReasonPhraseYamlPath}" ]]; then + return + fi + setSystemValue "${sendReasonPhraseYamlPath}" "${sendReasonPhraseValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${sendReasonPhraseYamlPath}] with value [${sendReasonPhraseValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedPathChars +setConnectorRelaxedPathChars () { + local yamlPath="$1" + local relaxedPathCharsValue="$2" + local relaxedPathCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedPathCharsValue}" ]]; then + return + fi + ## Getting relaxedPathChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedPathCharsYamlPath "Warning" + relaxedPathCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedPathCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedPathCharsYamlPath}" "${relaxedPathCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedPathCharsYamlPath}] with value [${relaxedPathCharsValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedQueryChars +setConnectorRelaxedQueryChars () { + local yamlPath="$1" + local relaxedQueryCharsValue="$2" + local relaxedQueryCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedQueryCharsValue}" ]]; then + return + fi + ## Getting relaxedQueryChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedQueryCharsYamlPath "Warning" + relaxedQueryCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedQueryCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedQueryCharsYamlPath}" "${relaxedQueryCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedQueryCharsYamlPath}] with value [${relaxedQueryCharsValue}] in system.yaml" +} + +# Updating system.yaml 
with Connectors configurations +setConnectorExtraConfig () { + local yamlPath="$1" + local connectorAttributes="$2" + local extraConfigPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${connectorAttributes}" ]]; then + return + fi + ## Getting extraConfig yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConfig "Warning" + extraConfigPath="${YAML_VALUE}" + if [[ -z "${extraConfigPath}" ]]; then + return + fi + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setSystemValue "${extraConfigPath}" "${connectorAttributes}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConfigPath}] with connector attributes in system.yaml" +} + +# Updating system.yaml with extra Connectors +setExtraConnector () { + local yamlPath="$1" + local extraConnector="$2" + local extraConnectorYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${extraConnector}" ]]; then + return + fi + ## Getting extraConnecotr yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConnectorYamlPath "Warning" + extraConnectorYamlPath="${YAML_VALUE}" + if [[ -z "${extraConnectorYamlPath}" ]]; then + return + fi + getYamlValue "${extraConnectorYamlPath}" "${SYSTEM_YAML_PATH}" "false" + local connectorExtra="${YAML_VALUE}" + if [[ -z "${connectorExtra}" ]]; then + setSystemValue "${extraConnectorYamlPath}" "${extraConnector}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + else + setSystemValue "${extraConnectorYamlPath}" "\"${connectorExtra} ${extraConnector}\"" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + fi +} + +# Migrate extra connectors to system.yaml +migrateExtraConnectors () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local excludeDefaultPort="$4" + local i="$5" + local extraConfig= + local extraConnector= + if [[ "${excludeDefaultPort}" == "yes" ]]; then + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + [[ "${portValue}" != "${DEFAULT_ACCESS_PORT}" && "${portValue}" != "${DEFAULT_RT_PORT}" ]] || continue + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + done + else + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + fi +} + +# Migrate connector configurations +migrateConnectorConfig () { + local i="$1" + local protocolType="$2" + local portValue="$3" + local connectorPortYamlPath="$4" + local connectorMaxThreadYamlPath="$5" + local connectorAttributesYamlPath="$6" + local filePath="$7" + local fileName="$8" + local connectorSendReasonPhraseYamlPath="$9" + local connectorRelaxedPathCharsYamlPath="${10}" + local connectorRelaxedQueryCharsYamlPath="${11}" + + # migrate port + setConnectorPort "${connectorPortYamlPath}" "${portValue}" + + # migrate maxThreads + local maxThreadValue=$(getXmlConnectorMaxThreads "$i" "${filePath}" "${fileName}") + setConnectorMaxThread "${connectorMaxThreadYamlPath}" "${maxThreadValue}" + + # migrate sendReasonPhrase + local sendReasonPhraseValue=$(getXmlConnectorSendReasonPhrase "$i" "${filePath}" "${fileName}") + setConnectorSendReasonPhrase "${connectorSendReasonPhraseYamlPath}" 
"${sendReasonPhraseValue}" + + # migrate relaxedPathChars + local relaxedPathCharsValue=$(getXmlConnectorRelaxedPathChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedPathChars "${connectorRelaxedPathCharsYamlPath}" "\"${relaxedPathCharsValue}\"" + # migrate relaxedQueryChars + local relaxedQueryCharsValue=$(getXmlConnectorRelaxedQueryChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedQueryChars "${connectorRelaxedQueryCharsYamlPath}" "\"${relaxedQueryCharsValue}\"" + + # migrate all attributes to extra config except port , maxThread , sendReasonPhrase ,relaxedPathChars and relaxedQueryChars + local connectorAttributes=$(getXmlConnectorAttributes "$i" "${filePath}" "${fileName}") + connectorAttributes=$(echo "${connectorAttributes}" | sed 's/port="'${portValue}'"//g' | sed 's/maxThreads="'${maxThreadValue}'"//g' | sed 's/sendReasonPhrase="'${sendReasonPhraseValue}'"//g' | sed 's/relaxedPathChars="\'${relaxedPathCharsValue}'\"//g' | sed 's/relaxedQueryChars="\'${relaxedQueryCharsValue}'\"//g') + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setConnectorExtraConfig "${connectorAttributesYamlPath}" "${connectorAttributes}" +} + +# Check for default port 8040 and 8081 in connectors and migrate +migrateConnectorPort () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + local connectorPortYamlPath="$5" + local connectorMaxThreadYamlPath="$6" + local connectorAttributesYamlPath="$7" + local connectorSendReasonPhraseYamlPath="$8" + local connectorRelaxedPathCharsYamlPath="$9" + local connectorRelaxedQueryCharsYamlPath="${10}" + local portYamlPath= + local maxThreadYamlPath= + local status= + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" == *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + RT_DEFAULTPORT_STATUS=success + else + AC_DEFAULTPORT_STATUS=success + fi + migrateConnectorConfig "${i}" "${protocolType}" "${portValue}" "${connectorPortYamlPath}" "${connectorMaxThreadYamlPath}" "${connectorAttributesYamlPath}" "${filePath}" "${fileName}" "${connectorSendReasonPhraseYamlPath}" "${connectorRelaxedPathCharsYamlPath}" "${connectorRelaxedQueryCharsYamlPath}" + done +} + +# migrate to extra, connector having default port and protocol is AJP +migrateDefaultPortIfAjp () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + done + +} + +# Comparing max threads in connectors +compareMaxThreads () { + local firstConnectorMaxThread="$1" + local firstConnectorNode="$2" + local secondConnectorMaxThread="$3" + local secondConnectorNode="$4" + local filePath="$5" + local fileName="$6" + + # choose higher maxThreads connector as Artifactory. 
+ if [[ "${firstConnectorMaxThread}" -gt ${secondConnectorMaxThread} || "${firstConnectorMaxThread}" -eq ${secondConnectorMaxThread} ]]; then + # maxThread is higher in firstConnector, + # Taking firstConnector as Artifactory and SecondConnector as Access + # maxThread is equal in both connector,considering firstConnector as Artifactory and SecondConnector as Access + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + else + # maxThread is higher in SecondConnector, + # Taking SecondConnector as Artifactory and firstConnector as Access + local rtPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +# Check max threads exist to compare +maxThreadsExistToCompare () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local firstConnectorMaxThread= + local secondConnectorMaxThread= + local firstConnectorNode= + local secondConnectorNode= + local status=success + local firstnode=fail + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ ${protocolType} == *AJP* ]]; then + # Migrate Connectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + fi + # store maxthreads value of each connector + if [[ ${firstnode} == "fail" ]]; then + firstConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + firstConnectorNode="${i}" + firstnode=success + else + secondConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + secondConnectorNode="${i}" + fi + done + [[ -z "${firstConnectorMaxThread}" ]] && status=fail + [[ -z "${secondConnectorMaxThread}" ]] && status=fail + # maxThreads is set, now compare MaxThreads + if [[ "${status}" == "success" ]]; then + compareMaxThreads "${firstConnectorMaxThread}" "${firstConnectorNode}" "${secondConnectorMaxThread}" "${secondConnectorNode}" "${filePath}" "${fileName}" + else + # Assume first connector is RT, maxThreads is not set in both connectors + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" 
"${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +migrateExtraBasedOnNonAjpCount () { + local nonAjpCount="$1" + local filePath="$2" + local fileName="$3" + local connectorCount="$4" + local i="$5" + + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ "${protocolType}" == *AJP* ]]; then + if [[ "${nonAjpCount}" -eq 1 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + else + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + continue + fi + fi +} + +# find RT and AC Connector +findRtAndAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local initialAjpCount=0 + local nonAjpCount=0 + + # get the count of non AJP + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] || continue + nonAjpCount=$((initialAjpCount+1)) + initialAjpCount="${nonAjpCount}" + done + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access and artifactory connectors + # Mark port as 8040 for access + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + done + elif [[ "${nonAjpCount}" -eq 2 ]]; then + # compare maxThreads in both connectors + maxThreadsExistToCompare "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${nonAjpCount}" -gt 2 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # setting with default port in system.yaml + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# get the count of non AJP +getCountOfNonAjp () { + local port="$1" + local connectorCount="$2" + local filePath=$3 + local fileName=$4 + local initialNonAjpCount=0 + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" 
"${filePath}" "${fileName}") + [[ "${portValue}" != "${port}" ]] || continue + [[ "${protocolType}" != *AJP* ]] || continue + local nonAjpCount=$((initialNonAjpCount+1)) + initialNonAjpCount="${nonAjpCount}" + done + echo -e "${nonAjpCount}" +} + +# Find for access connector +findAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_RT_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access connector and mark port as that of connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" != "${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take RT properties into access with 8040 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add RT connector details as access connector and mark port as 8040 + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# Find for artifactory connector +findRtConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_ACCESS_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as RT connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" != "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take access properties into artifactory with 8081 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + 
if [[ "${portValue}" == "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add access connector details as RT connector and mark as ${DEFAULT_RT_PORT} + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +checkForTlsConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + local sslProtocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@sslProtocol' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ "${sslProtocolValue}" == "TLS" ]]; then + bannerImportant "NOTE: Ignoring TLS connector during migration, modify the system yaml to enable TLS. Original server.xml is saved in path [${filePath}/${fileName}]" + TLS_CONNECTOR_EXISTS=${FLAG_Y} + continue + fi + done +} + +# set custom tomcat server Listeners to system.yaml +setListenerConnector () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + for ((i = 1 ; i <= "${listenerCount}" ; i++)) + do + local listenerConnector=$($LIBXML2_PATH --xpath '//Server/Listener['$i']' ${filePath}/${fileName} 2>/dev/null) + local listenerClassName=$($LIBXML2_PATH --xpath '//Server/Listener['$i']/@className' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ "${listenerClassName}" == *Apr* ]]; then + setExtraConnector "${EXTRA_LISTENER_CONFIG_YAMLPATH}" "${listenerConnector}" + fi + done +} +# add custom tomcat server Listeners +addTomcatServerListeners () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + if [[ "${listenerCount}" == "0" ]]; then + logger "No listener connectors found in the [${filePath}/${fileName}],skipping migration of listener connectors" + else + setListenerConnector "${filePath}" "${fileName}" "${listenerCount}" + setSystemValue "${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}" "true" "${SYSTEM_YAML_PATH}" + logger "Setting [${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}] with value [true] in system.yaml" + fi +} + +# server.xml migration operations +xmlMigrateOperation () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local listenerCount="$4" + RT_DEFAULTPORT_STATUS=fail + AC_DEFAULTPORT_STATUS=fail + TLS_CONNECTOR_EXISTS=${FLAG_N} + + # Check for connector with TLS , if found ignore migrating it + checkForTlsConnector "${filePath}" "${fileName}" "${connectorCount}" + if [[ "${TLS_CONNECTOR_EXISTS}" == "${FLAG_Y}" ]]; then + return + fi + addTomcatServerListeners "${filePath}" "${fileName}" "${listenerCount}" + # Migrate RT default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" 
"${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + # Migrate to extra if RT default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" + # Migrate AC default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + # Migrate to extra if access default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" + + if [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # RT and AC default port found + logger "Artifactory 8081 and Access 8040 default port are found" + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # Only AC default port found,find RT connector + logger "Found Access default 8040 port" + findRtConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # Only RT default port found,find AC connector + logger "Found Artifactory default 8081 port" + findAcConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # RT and AC default port not found, find connector + logger "Artifactory 8081 and Access 8040 default port are not found" + findRtAndAcConnector "${filePath}" "${fileName}" "${connectorCount}" + fi +} + +# get count of connectors +getXmlConnectorCount () { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Service/Connector)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# get count of listener connectors +getTomcatServerListenersCount () { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Listener)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# Migrate server.xml configuration to system.yaml +migrateXmlFile () { + local xmlFiles= + local fileName= + local filePath= + local sourceFilePath= + DEFAULT_ACCESS_PORT="8040" + DEFAULT_RT_PORT="8081" + AC_PORT_YAMLPATH="migration.xmlFiles.serverXml.access.port" + AC_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.access.maxThreads" + AC_SENDREASONPHRASE_YAMLPATH="migration.xmlFiles.serverXml.access.sendReasonPhrase" + AC_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.access.extraConfig" + RT_PORT_YAMLPATH="migration.xmlFiles.serverXml.artifactory.port" + RT_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.artifactory.maxThreads" + RT_SENDREASONPHRASE_YAMLPATH='migration.xmlFiles.serverXml.artifactory.sendReasonPhrase' + RT_RELAXEDPATHCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedPathChars' + RT_RELAXEDQUERYCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedQueryChars' + RT_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.artifactory.extraConfig" + ROUTER_PORT_YAMLPATH="migration.xmlFiles.serverXml.router.port" + EXTRA_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.config" + EXTRA_LISTENER_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.listener" + RT_TOMCAT_HTTPSCONNECTOR_ENABLED="artifactory.tomcat.httpsConnector.enabled" + + retrieveYamlValue "migration.xmlFiles" "xmlFiles" "Skip" + xmlFiles="${YAML_VALUE}" 
+ if [[ -z "${xmlFiles}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF XML FILES" + retrieveYamlValue "migration.xmlFiles.serverXml.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + if [[ -z "${fileName}" ]]; then + return + fi + bannerSubSection "Processing Migration of $fileName" + retrieveYamlValue "migration.xmlFiles.serverXml.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + if [[ -z "${filePath}" ]]; then + return + fi + # prepend NEW_DATA_DIR only if filePath is relative path + sourceFilePath=$(prependDir "${filePath}" "${NEW_DATA_DIR}/${filePath}") + if [[ "$(checkFileExists "${sourceFilePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] is found in path [${sourceFilePath}]" + local connectorCount=$(getXmlConnectorCount "${sourceFilePath}" "${fileName}") + if [[ "${connectorCount}" == "0" ]]; then + logger "No connectors found in the [${filePath}/${fileName}],skipping migration of xml configuration" + return + fi + local listenerCount=$(getTomcatServerListenersCount "${sourceFilePath}" "${fileName}") + xmlMigrateOperation "${sourceFilePath}" "${fileName}" "${connectorCount}" "${listenerCount}" + else + logger "File [${fileName}] is not found in path [${sourceFilePath}] to migrate" + fi +} + +compareArtifactoryUser () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + + if [[ "${oldPropertyValue}" != "${newPropertyValue}" ]]; then + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" + else + logger "No change in property [${property}] value in [${sourceFile}] to migrate" + fi +} + +migrateReplicator () { + local property="$1" + local oldPropertyValue="$2" + local yamlPath="$3" + + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" +} + +compareJavaOptions () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + local oldJavaOption= + local newJavaOption= + local extraJavaOption= + local check=false + local success=true + local status=true + + oldJavaOption=$(echo "${oldPropertyValue}" | awk 'BEGIN{FS=OFS="\""}{for(i=2;i.+)\.{{ include "artifactory-ha.fullname" . }} {{ include "artifactory-ha.fullname" . }} +{{ tpl (include "artifactory.nginx.hosts" .) . }}; + +if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; +} +set $host_port {{ .Values.nginx.https.externalPort }}; +if ( $scheme = "http" ) { + set $host_port {{ .Values.nginx.http.externalPort }}; +} +## Application specific logs +## access_log /var/log/nginx/artifactory-access.log timing; +## error_log /var/log/nginx/artifactory-error.log; +rewrite ^/artifactory/?$ / redirect; +if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; +} +chunked_transfer_encoding on; +client_max_body_size 0; + +location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . 
}}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$host_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + {{- if .Values.nginx.disableProxyBuffering}} + proxy_http_version 1.1; + proxy_request_buffering off; + proxy_buffering off; + {{- end }} + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass http://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass http://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + location /pipelines/ { + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $http_host; + {{- if .Values.router.tlsEnabled }} + proxy_pass https://{{ include "artifactory-ha.fullname" . }}:{{ .Values.router.internalPort }}; + {{- else }} + proxy_pass http://{{ include "artifactory-ha.fullname" . }}:{{ .Values.router.internalPort }}; + {{- end }} + } +} +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/files/nginx-main-conf.yaml b/charts/jfrog/artifactory-ha/107.90.15/files/nginx-main-conf.yaml new file mode 100644 index 000000000..78cecea6a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/files/nginx-main-conf.yaml @@ -0,0 +1,83 @@ +# Main Nginx configuration file +worker_processes 4; + +{{- if .Values.nginx.logs.stderr }} +error_log stderr {{ .Values.nginx.logs.level }}; +{{- else -}} +error_log {{ .Values.nginx.persistence.mountPath }}/logs/error.log {{ .Values.nginx.logs.level }}; +{{- end }} +pid /var/run/nginx.pid; + +{{- if .Values.artifactory.ssh.enabled }} +## SSH Server Configuration +stream { + server { + {{- if .Values.nginx.singleStackIPv6Cluster }} + listen [::]:{{ .Values.nginx.ssh.internalPort }}; + {{- else -}} + listen {{ .Values.nginx.ssh.internalPort }}; + {{- end }} + proxy_pass {{ include "artifactory-ha.fullname" . 
}}:{{ .Values.artifactory.ssh.externalPort }}; + } +} +{{- end }} + +events { + worker_connections 1024; +} + +http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + + variables_hash_max_size 1024; + variables_hash_bucket_size 64; + server_names_hash_max_size 4096; + server_names_hash_bucket_size 128; + types_hash_max_size 2048; + types_hash_bucket_size 64; + proxy_read_timeout 2400s; + client_header_timeout 2400s; + client_body_timeout 2400s; + proxy_connect_timeout 75s; + proxy_send_timeout 2400s; + proxy_buffer_size 128k; + proxy_buffers 40 128k; + proxy_busy_buffers_size 128k; + proxy_temp_file_write_size 250m; + proxy_http_version 1.1; + client_body_buffer_size 128k; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + log_format timing 'ip = $remote_addr ' + 'user = \"$remote_user\" ' + 'local_time = \"$time_local\" ' + 'host = $host ' + 'request = \"$request\" ' + 'status = $status ' + 'bytes = $body_bytes_sent ' + 'upstream = \"$upstream_addr\" ' + 'upstream_time = $upstream_response_time ' + 'request_time = $request_time ' + 'referer = \"$http_referer\" ' + 'UA = \"$http_user_agent\"'; + + {{- if .Values.nginx.logs.stdout }} + access_log /dev/stdout timing; + {{- else -}} + access_log {{ .Values.nginx.persistence.mountPath }}/logs/access.log timing; + {{- end }} + + sendfile on; + #tcp_nopush on; + + keepalive_timeout 65; + + #gzip on; + + include /etc/nginx/conf.d/*.conf; + +} diff --git a/charts/jfrog/artifactory-ha/107.90.15/files/system.yaml b/charts/jfrog/artifactory-ha/107.90.15/files/system.yaml new file mode 100644 index 000000000..3a1d93269 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/files/system.yaml @@ -0,0 +1,163 @@ +router: + serviceRegistry: + insecure: {{ .Values.router.serviceRegistry.insecure }} +shared: +{{- if .Values.artifactory.coldStorage.enabled }} + jfrogColdStorage: + coldInstanceEnabled: true +{{- end }} +{{ tpl (include "artifactory.metrics" .) . 
}} + logging: + consoleLog: + enabled: {{ .Values.artifactory.consoleLog }} + extraJavaOpts: > + -Dartifactory.graceful.shutdown.max.request.duration.millis={{ mul .Values.artifactory.terminationGracePeriodSeconds 1000 }} + -Dartifactory.access.client.max.connections={{ .Values.access.tomcat.connector.maxThreads }} + {{- with .Values.artifactory.primary.javaOpts }} + {{- if .corePoolSize }} + -Dartifactory.async.corePoolSize={{ .corePoolSize }} + {{- end }} + {{- if .xms }} + -Xms{{ .xms }} + {{- end }} + {{- if .xmx }} + -Xmx{{ .xmx }} + {{- end }} + {{- if .jmx.enabled }} + -Dcom.sun.management.jmxremote + -Dcom.sun.management.jmxremote.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.rmi.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.ssl={{ .jmx.ssl }} + {{- if .jmx.host }} + -Djava.rmi.server.hostname={{ tpl .jmx.host $ }} + {{- else }} + -Djava.rmi.server.hostname={{ template "artifactory-ha.fullname" $ }} + {{- end }} + {{- if .jmx.authenticate }} + -Dcom.sun.management.jmxremote.authenticate=true + -Dcom.sun.management.jmxremote.access.file={{ .jmx.accessFile }} + -Dcom.sun.management.jmxremote.password.file={{ .jmx.passwordFile }} + {{- else }} + -Dcom.sun.management.jmxremote.authenticate=false + {{- end }} + {{- end }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + database: + allowNonPostgresql: {{ .Values.database.allowNonPostgresql }} + {{- if .Values.postgresql.enabled }} + type: postgresql + url: "jdbc:postgresql://{{ .Release.Name }}-postgresql:{{ .Values.postgresql.service.port }}/{{ .Values.postgresql.postgresqlDatabase }}" + host: "" + driver: org.postgresql.Driver + username: "{{ .Values.postgresql.postgresqlUsername }}" + {{ else }} + type: "{{ .Values.database.type }}" + driver: "{{ .Values.database.driver }}" + {{- end }} +artifactory: +{{- if or .Values.artifactory.haDataDir.enabled .Values.artifactory.haBackupDir.enabled }} + node: + {{- if .Values.artifactory.haDataDir.path }} + haDataDir: {{ .Values.artifactory.haDataDir.path }} + {{- end }} + {{- if .Values.artifactory.haBackupDir.path }} + haBackupDir: {{ .Values.artifactory.haBackupDir.path }} + {{- end }} +{{- end }} + database: + maxOpenConnections: {{ .Values.artifactory.database.maxOpenConnections }} + tomcat: + maintenanceConnector: + port: {{ .Values.artifactory.tomcat.maintenanceConnector.port }} + connector: + maxThreads: {{ .Values.artifactory.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.artifactory.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.artifactory.tomcat.connector.extraConfig }} +frontend: + session: + timeMinutes: {{ .Values.frontend.session.timeoutMinutes | quote }} +access: + runOnArtifactoryTomcat: {{ .Values.access.runOnArtifactoryTomcat | default false }} + database: + maxOpenConnections: {{ .Values.access.database.maxOpenConnections }} + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + extraJavaOpts: > + {{- if .Values.splitServicesToContainers }} + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=70 + {{- end }} + {{- with .Values.access.javaOpts }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + {{- end }} + tomcat: + connector: + maxThreads: {{ .Values.access.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.access.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.access.tomcat.connector.extraConfig }} + {{- if .Values.access.database.enabled }} + type: "{{ .Values.access.database.type }}" + url: "{{ .Values.access.database.url }}" + driver: "{{ 
.Values.access.database.driver }}" + username: "{{ .Values.access.database.user }}" + password: "{{ .Values.access.database.password }}" + {{- end }} +{{- if .Values.mc.enabled }} +mc: + enabled: true + database: + maxOpenConnections: {{ .Values.mc.database.maxOpenConnections }} + idgenerator: + maxOpenConnections: {{ .Values.mc.idgenerator.maxOpenConnections }} + tomcat: + connector: + maxThreads: {{ .Values.mc.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.mc.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.mc.tomcat.connector.extraConfig }} +{{- end }} +metadata: + database: + maxOpenConnections: {{ .Values.metadata.database.maxOpenConnections }} +{{- if and .Values.jfconnect.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +jfconnect: + enabled: true +{{- else }} +jfconnect: + enabled: false +jfconnect_service: + enabled: false +{{- end }} + +{{- if and .Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +federation: + enabled: true + embedded: {{ .Values.federation.embedded }} + extraJavaOpts: {{ .Values.federation.extraJavaOpts }} + port: {{ .Values.federation.internalPort }} +rtfs: + database: + driver: org.postgresql.Driver + type: postgresql + username: {{ .Values.federation.database.username }} + password: {{ .Values.federation.database.password }} + url: "jdbc:postgresql://{{ .Values.federation.database.host }}:{{ .Values.federation.database.port }}/{{ .Values.federation.database.name }}" +{{- else }} +federation: + enabled: false +{{- end }} +{{- if .Values.event.webhooks }} +event: + webhooks: {{ toYaml .Values.event.webhooks | nindent 6 }} +{{- end }} +{{- if .Values.evidence.enabled }} +evidence: + enabled: true +{{- else }} +evidence: + enabled: false +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/logo/artifactory-logo.png b/charts/jfrog/artifactory-ha/107.90.15/logo/artifactory-logo.png new file mode 100644 index 000000000..fe6c23c5a Binary files /dev/null and b/charts/jfrog/artifactory-ha/107.90.15/logo/artifactory-logo.png differ diff --git a/charts/jfrog/artifactory-ha/107.90.15/questions.yml b/charts/jfrog/artifactory-ha/107.90.15/questions.yml new file mode 100644 index 000000000..14e9024e6 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/questions.yml @@ -0,0 +1,424 @@ +questions: +# Advance Settings +- variable: artifactory.masterKey + default: "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF" + description: "Artifactory master key. 
For security reasons, we strongly recommend you generate your own master key using this command: 'openssl rand -hex 32'" + type: string + label: Artifactory master key + group: "Security Settings" + +# Container Images +- variable: defaultImage + default: true + description: "Use default Docker image" + label: Use Default Image + type: boolean + show_subquestion_if: false + group: "Container Images" + subquestions: + - variable: initContainerImage + default: "docker.bintray.io/alpine:3.12" + description: "Init image name" + type: string + label: Init image name + - variable: artifactory.image.repository + default: "docker.bintray.io/jfrog/artifactory-pro" + description: "Artifactory image name" + type: string + label: Artifactory Image Name + - variable: artifactory.image.version + default: "7.6.3" + description: "Artifactory image tag" + type: string + label: Artifactory Image Tag + - variable: nginx.image.repository + default: "docker.bintray.io/jfrog/nginx-artifactory-pro" + description: "Nginx image name" + type: string + label: Nginx Image Name + - variable: nginx.image.version + default: "7.6.3" + description: "Nginx image tag" + type: string + label: Nginx Image Tag + - variable: imagePullSecrets + description: "Image Pull Secret" + type: string + label: Image Pull Secret + +# Services and LoadBalancing Settings +- variable: artifactory.node.replicaCount + default: "2" + description: "Number of Secondary Nodes" + type: string + label: Number of Secondary Nodes + show_subquestion_if: true + group: "Services and Load Balancing" +- variable: ingress.enabled + default: false + description: "Expose app using Layer 7 Load Balancer - ingress" + type: boolean + label: Expose app using Layer 7 Load Balancer + show_subquestion_if: true + group: "Services and Load Balancing" + required: true + subquestions: + - variable: ingress.hosts[0] + default: "xip.io" + description: "Hostname to your artifactory installation" + type: hostname + required: true + label: Hostname + +# Nginx Settings +- variable: nginx.enabled + default: true + description: "Enable nginx server" + type: boolean + label: Enable Nginx Server + group: "Services and Load Balancing" + required: true + show_if: "ingress.enabled=false" +- variable: nginx.service.type + default: "LoadBalancer" + description: "Nginx service type" + type: enum + required: true + label: Nginx Service Type + show_if: "nginx.enabled=true&&ingress.enabled=false" + group: "Services and Load Balancing" + options: + - "ClusterIP" + - "NodePort" + - "LoadBalancer" +- variable: nginx.service.loadBalancerIP + default: "" + description: "Provide Static IP to configure with Nginx" + type: string + label: Config Nginx LoadBalancer IP + show_if: "nginx.enabled=true&&nginx.service.type=LoadBalancer&&ingress.enabled=false" + group: "Services and Load Balancing" +- variable: nginx.tlsSecretName + default: "" + description: "Provide SSL Secret name to configure with Nginx" + type: string + label: Config Nginx SSL Secret + show_if: "nginx.enabled=true&&ingress.enabled=false" + group: "Services and Load Balancing" +- variable: nginx.customArtifactoryConfigMap + default: "" + description: "Provide configMap name to configure Nginx with custom `artifactory.conf`" + type: string + label: ConfigMap for Nginx Artifactory Config + show_if: "nginx.enabled=true&&ingress.enabled=false" + group: "Services and Load Balancing" + +# Artifactory Storage Settings +- variable: artifactory.persistence.size + default: "50Gi" + description: "Artifactory persistent volume size" + type: 
string + label: Artifactory Persistent Volume Size + required: true + group: "Artifactory Storage" +- variable: artifactory.persistence.type + default: "file-system" + description: "Artifactory persistent volume size" + type: enum + label: Artifactory Persistent Storage Type + required: true + options: + - "file-system" + - "nfs" + - "google-storage" + - "aws-s3" + group: "Artifactory Storage" + +#Storage Type Settings +- variable: artifactory.persistence.nfs.ip + default: "" + type: string + group: "Artifactory Storage" + label: NFS Server IP + description: "NFS server IP" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.haDataMount + default: "/data" + type: string + label: NFS Data Directory + description: "NFS data directory" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.haBackupMount + default: "/backup" + type: string + label: NFS Backup Directory + description: "NFS backup directory" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.dataDir + default: "/var/opt/jfrog/artifactory-ha" + type: string + label: HA Data Directory + description: "HA data directory" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.backupDir + default: "/var/opt/jfrog/artifactory-backup" + type: string + label: HA Backup Directory + description: "HA backup directory " + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.capacity + default: "200Gi" + type: string + label: NFS PVC Size + description: "NFS PVC size " + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" + +#Google storage settings +- variable: artifactory.persistence.googleStorage.bucketName + default: "artifactory-ha-gcp" + type: string + label: Google Storage Bucket Name + description: "Google storage bucket name" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +- variable: artifactory.persistence.googleStorage.identity + default: "" + type: string + label: Google Storage Service Account ID + description: "Google Storage service account id" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +- variable: artifactory.persistence.googleStorage.credential + default: "" + type: string + label: Google Storage Service Account Key + description: "Google Storage service account key" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +- variable: artifactory.persistence.googleStorage.path + default: "artifactory-ha/filestore" + type: string + label: Google Storage Path In Bucket + description: "Google Storage path in bucket" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +# awsS3 storage settings +- variable: artifactory.persistence.awsS3.bucketName + default: "artifactory-ha-aws" + type: string + label: AWS S3 Bucket Name + description: "AWS S3 bucket name" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.region + default: "" + type: string + label: AWS S3 Bucket Region + description: "AWS S3 bucket region" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.identity + default: "" + type: string + label: AWS S3 AWS_ACCESS_KEY_ID 
+ description: "AWS S3 AWS_ACCESS_KEY_ID" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.credential + default: "" + type: string + label: AWS S3 AWS_SECRET_ACCESS_KEY + description: "AWS S3 AWS_SECRET_ACCESS_KEY" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.path + default: "artifactory-ha/filestore" + type: string + label: AWS S3 Path In Bucket + description: "AWS S3 path in bucket" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" + +# Database Settings +- variable: postgresql.enabled + default: true + description: "Enable PostgreSQL" + type: boolean + required: true + label: Enable PostgreSQL + group: "Database Settings" + show_subquestion_if: true + subquestions: + - variable: postgresql.postgresqlPassword + default: "" + description: "PostgreSQL password" + type: password + required: true + label: PostgreSQL Password + group: "Database Settings" + show_if: "postgresql.enabled=true" + - variable: postgresql.persistence.size + default: 20Gi + description: "PostgreSQL persistent volume size" + type: string + label: PostgreSQL Persistent Volume Size + show_if: "postgresql.enabled=true" + - variable: postgresql.persistence.storageClass + default: "" + description: "If undefined or null, uses the default StorageClass. Default to null" + type: storageclass + label: Default StorageClass for PostgreSQL + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.requests.cpu + default: "200m" + description: "PostgreSQL initial cpu request" + type: string + label: PostgreSQL Initial CPU Request + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.requests.memory + default: "500Mi" + description: "PostgreSQL initial memory request" + type: string + label: PostgreSQL Initial Memory Request + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.limits.cpu + default: "1" + description: "PostgreSQL cpu limit" + type: string + label: PostgreSQL CPU Limit + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.limits.memory + default: "1Gi" + description: "PostgreSQL memory limit" + type: string + label: PostgreSQL Memory Limit + show_if: "postgresql.enabled=true" +- variable: database.type + default: "postgresql" + description: "xternal database type (postgresql, mysql, oracle or mssql)" + type: enum + required: true + label: External Database Type + group: "Database Settings" + show_if: "postgresql.enabled=false" + options: + - "postgresql" + - "mysql" + - "oracle" + - "mssql" +- variable: database.url + default: "" + description: "External database URL. 
If you set the url, leave host and port empty" + type: string + label: External Database URL + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.host + default: "" + description: "External database hostname" + type: string + label: External Database Hostname + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.port + default: "" + description: "External database port" + type: string + label: External Database Port + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.user + default: "" + description: "External database username" + type: string + label: External Database Username + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.password + default: "" + description: "External database password" + type: password + label: External Database Password + group: "Database Settings" + show_if: "postgresql.enabled=false" + +# Advance Settings +- variable: advancedOptions + default: false + description: "Show advanced configurations" + label: Show Advanced Configurations + type: boolean + show_subquestion_if: true + group: "Advanced Options" + subquestions: + - variable: artifactory.primary.resources.requests.cpu + default: "500m" + description: "Artifactory primary node initial cpu request" + type: string + label: Artifactory Primary Node Initial CPU Request + - variable: artifactory.primary.resources.requests.memory + default: "1Gi" + description: "Artifactory primary node initial memory request" + type: string + label: Artifactory Primary Node Initial Memory Request + - variable: artifactory.primary.javaOpts.xms + default: "1g" + description: "Artifactory primary node java Xms size" + type: string + label: Artifactory Primary Node Java Xms Size + - variable: artifactory.primary.resources.limits.cpu + default: "2" + description: "Artifactory primary node cpu limit" + type: string + label: Artifactory Primary Node CPU Limit + - variable: artifactory.primary.resources.limits.memory + default: "4Gi" + description: "Artifactory primary node memory limit" + type: string + label: Artifactory Primary Node Memory Limit + - variable: artifactory.primary.javaOpts.xmx + default: "4g" + description: "Artifactory primary node java Xmx size" + type: string + label: Artifactory Primary Node Java Xmx Size + - variable: artifactory.node.resources.requests.cpu + default: "500m" + description: "Artifactory member node initial cpu request" + type: string + label: Artifactory Member Node Initial CPU Request + - variable: artifactory.node.resources.requests.memory + default: "2Gi" + description: "Artifactory member node initial memory request" + type: string + label: Artifactory Member Node Initial Memory Request + - variable: artifactory.node.javaOpts.xms + default: "1g" + description: "Artifactory member node java Xms size" + type: string + label: Artifactory Member Node Java Xms Size + - variable: artifactory.node.resources.limits.cpu + default: "2" + description: "Artifactory member node cpu limit" + type: string + label: Artifactory Member Node CPU Limit + - variable: artifactory.node.resources.limits.memory + default: "4Gi" + description: "Artifactory member node memory limit" + type: string + label: Artifactory Member Node Memory Limit + - variable: artifactory.node.javaOpts.xmx + default: "4g" + description: "Artifactory member node java Xmx size" + type: string + label: Artifactory Member Node Java Xmx Size + +# Internal Settings +- variable: installerInfo + default: 
'\{\"productId\": \"RancherHelm_artifactory-ha/7.17.5\", \"features\": \[\{\"featureId\": \"Partner/ACC-007246\"\}\]\}' + type: string + group: "Internal Settings (Do not modify)" diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-2xlarge-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-2xlarge-extra-config.yaml new file mode 100644 index 000000000..6afc491dc --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-2xlarge-extra-config.yaml @@ -0,0 +1,44 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=200 + -Dartifactory.async.poolMaxQueueSize=100000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=200 + -Dartifactory.metadata.event.operator.threads=5 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + + tomcat: + connector: + maxThreads: 800 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 200 + +access: + tomcat: + connector: + maxThreads: 200 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + + database: + maxOpenConnections: 200 + +metadata: + database: + maxOpenConnections: 200 + diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-2xlarge.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-2xlarge.yaml new file mode 100644 index 000000000..02cf7f94e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-2xlarge.yaml @@ -0,0 +1,127 @@ +############################################################## +# The 2xlarge sizing +# This size is intended for very large organizations. It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. 
+ replicaCount: 6 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "4" + memory: 20Gi + limits: + # cpu: "20" + memory: 24Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +router: + resources: + requests: + cpu: "1" + memory: 1Gi + limits: + # cpu: "6" + memory: 2Gi + +frontend: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 1Gi + +metadata: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 2Gi + +event: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 4Gi + +observability: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +jfconnect: + resources: + requests: + cpu: 100m + memory: 100Mi + limits: + # cpu: "1" + memory: 250Mi + +nginx: + replicaCount: 3 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "6Gi" + limits: + # cpu: "14" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "5000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 256Gi + cpu: "64" + limits: + memory: 256Gi + # cpu: "128" diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-large-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-large-extra-config.yaml new file mode 100644 index 000000000..fac24ad68 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-large-extra-config.yaml @@ -0,0 +1,44 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + -Dartifactory.async.corePoolSize=80 + -Dartifactory.async.poolMaxQueueSize=20000 + -Dartifactory.http.client.max.total.connections=100 + -Dartifactory.http.client.max.connections.per.route=100 + -Dartifactory.access.client.max.connections=125 + -Dartifactory.metadata.event.operator.threads=4 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=524288 + -XX:MaxDirectMemorySize=512m + + tomcat: + connector: + maxThreads: 500 + extraConfig: 'acceptCount="800" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 100 + +access: + tomcat: + connector: + maxThreads: 125 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + + database: + maxOpenConnections: 100 + +metadata: + database: + maxOpenConnections: 100 + diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-large.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-large.yaml new file mode 100644 index 000000000..504edf1ed --- /dev/null +++ 
b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-large.yaml @@ -0,0 +1,127 @@ +############################################################## +# The large sizing +# This size is intended for large organizations. It can be increased with adding replicas or moving to the xlarge sizing +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 3 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 10Gi + limits: + # cpu: "14" + memory: 12Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "8" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 3Gi + +router: + resources: + requests: + cpu: 200m + memory: 400Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "1" + memory: "500Mi" + limits: + # cpu: "4" + memory: "1Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "600" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 64Gi + cpu: "16" + limits: + memory: 64Gi + # cpu: "32" diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-medium-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-medium-extra-config.yaml new file mode 100644 index 000000000..b2b20b198 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-medium-extra-config.yaml @@ -0,0 +1,45 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + + tomcat: + connector: + maxThreads: 300 + extraConfig: 
'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + + database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 + diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-medium.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-medium.yaml new file mode 100644 index 000000000..93b79788d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-medium.yaml @@ -0,0 +1,127 @@ +############################################################## +# The medium sizing +# This size is just 2 replicas of the small size. Vertical sizing of all services is not changed +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 2 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +access: + resources: + requests: + cpu: 1 + memory: 1.5Gi + limits: + # cpu: 1.5 + memory: 2Gi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "200" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 32Gi + cpu: "8" + limits: + memory: 32Gi + # cpu: "16" \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-small-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-small-extra-config.yaml new file mode 100644 index 000000000..e8329f1a3 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-small-extra-config.yaml @@ -0,0 +1,43 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride 
+#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + + tomcat: + connector: + maxThreads: 300 + extraConfig: 'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 + diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-small.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-small.yaml new file mode 100644 index 000000000..b75a22323 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-small.yaml @@ -0,0 +1,127 @@ +############################################################## +# The small sizing +# This is the size recommended for running Artifactory for small teams +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. 
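+    # The small profile runs a single primary node; for high availability, raise replicaCount (with matching licenses) or move up a sizing profile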
+ replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "100" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 16Gi + cpu: "4" + limits: + memory: 16Gi + # cpu: "10" diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xlarge-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xlarge-extra-config.yaml new file mode 100644 index 000000000..8d04850ad --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xlarge-extra-config.yaml @@ -0,0 +1,42 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + -Dartifactory.async.corePoolSize=160 + -Dartifactory.async.poolMaxQueueSize=50000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=150 + -Dartifactory.metadata.event.operator.threads=5 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + tomcat: + connector: + maxThreads: 600 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 150 + +access: + tomcat: + connector: + maxThreads: 150 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 150 + +metadata: + database: + maxOpenConnections: 150 + diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xlarge.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xlarge.yaml new file mode 100644 index 000000000..550bd051d 
--- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xlarge.yaml @@ -0,0 +1,127 @@ +############################################################## +# The xlarge sizing +# This size is intended for very large organizations. It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 4 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 14Gi + limits: + # cpu: "14" + memory: 16Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 4Gi + +router: + resources: + requests: + cpu: 200m + memory: 500Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "4Gi" + limits: + # cpu: "12" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "2000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 128Gi + cpu: "32" + limits: + memory: 128Gi + # cpu: "64" diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xsmall-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xsmall-extra-config.yaml new file mode 100644 index 000000000..1371e87b8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xsmall-extra-config.yaml @@ -0,0 +1,43 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=10 + -Dartifactory.async.poolMaxQueueSize=2000 + -Dartifactory.http.client.max.total.connections=20 + -Dartifactory.http.client.max.connections.per.route=20 + -Dartifactory.access.client.max.connections=15 + -Dartifactory.metadata.event.operator.threads=2 + -XX:MaxMetaspaceSize=400m + -XX:CompressedClassSpaceSize=96m + -Djdk.nio.maxCachedBufferSize=131072 + -XX:MaxDirectMemorySize=128m + tomcat: + connector: + maxThreads: 50 + 
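+        # acceptCount in extraConfig below caps how many connections Tomcat queues once all 50 connector threads are busy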
extraConfig: 'acceptCount="200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 15 + +access: + tomcat: + connector: + maxThreads: 15 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 15 + +metadata: + database: + maxOpenConnections: 15 + diff --git a/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xsmall.yaml b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xsmall.yaml new file mode 100644 index 000000000..3f7b07138 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/sizing/artifactory-xsmall.yaml @@ -0,0 +1,127 @@ +############################################################## +# The xsmall sizing +# This is the minimum size recommended for running Artifactory +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 3Gi + limits: + # cpu: "10" + memory: 4Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 50m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + disableProxyBuffering: true + resources: + requests: + cpu: "50m" + memory: "50Mi" + limits: + # cpu: "1" + memory: "250Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "50" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 8Gi + cpu: "2" + limits: + memory: 8Gi + # cpu: "8" \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/NOTES.txt b/charts/jfrog/artifactory-ha/107.90.15/templates/NOTES.txt new file mode 100644 index 000000000..30dfab8b8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/NOTES.txt @@ -0,0 +1,149 @@ +Congratulations. You have just deployed JFrog Artifactory HA! 
+ +{{- if .Values.artifactory.masterKey }} +{{- if and (not .Values.artifactory.masterKeySecretName) (eq .Values.artifactory.masterKey "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF") }} + + +***************************************** WARNING ****************************************** +* Your Artifactory master key is still set to the provided example: * +* artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF * +* * +* You should change this to your own generated key: * +* $ export MASTER_KEY=$(openssl rand -hex 32) * +* $ echo ${MASTER_KEY} * +* * +* Pass the created master key to helm with '--set artifactory.masterKey=${MASTER_KEY}' * +* * +* Alternatively, you can use a pre-existing secret with a key called master-key with * +* '--set artifactory.masterKeySecretName=${SECRET_NAME}' * +******************************************************************************************** +{{- end }} +{{- end }} + +{{- if .Values.artifactory.joinKey }} +{{- if eq .Values.artifactory.joinKey "EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE" }} + + +***************************************** WARNING ****************************************** +* Your Artifactory join key is still set to the provided example: * +* artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE * +* * +* You should change this to your own generated key: * +* $ export JOIN_KEY=$(openssl rand -hex 32) * +* $ echo ${JOIN_KEY} * +* * +* Pass the created master key to helm with '--set artifactory.joinKey=${JOIN_KEY}' * +* * +******************************************************************************************** +{{- end }} +{{- end }} + + +{{- if .Values.artifactory.setSecurityContext }} +****************************************** WARNING ********************************************** +* From chart version 107.84.x, `setSecurityContext` has been renamed to `podSecurityContext`, * + please change your values.yaml before upgrade , For more Info , refer to 107.84.x changelog * +************************************************************************************************* +{{- end }} + +{{- if and (or (or (or (or (or ( or ( or ( or (or (or ( or (or .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName) .Values.systemYamlOverride.existingSecret) (or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled)) .Values.aws.licenseConfigSecretName) .Values.artifactory.persistence.customBinarystoreXmlSecret) .Values.access.customCertificatesSecretName) .Values.systemYamlOverride.existingSecret) .Values.artifactory.license.secret) .Values.artifactory.userPluginSecrets) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey)) (and .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName)) (or .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName)) .Values.artifactory.unifiedSecretInstallation }} +****************************************** WARNING ************************************************************************************************** +* The unifiedSecretInstallation flag is currently enabled, which creates the unified secret. The existing secrets will continue as separate secrets.* +* Update the values.yaml with the existing secrets to add them to the unified secret. 
* +***************************************************************************************************************************************************** +{{- end }} + +{{- if .Values.postgresql.enabled }} + +DATABASE: +To extract the database password, run the following +export DB_PASSWORD=$(kubectl get --namespace {{ .Release.Namespace }} $(kubectl get secret --namespace {{ .Release.Namespace }} -o name | grep postgresql) -o jsonpath="{.data.postgresql-password}" | base64 --decode) +echo ${DB_PASSWORD} +{{- end }} + +SETUP: +1. Get the Artifactory IP and URL + + {{- if contains "NodePort" .Values.nginx.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "artifactory-ha.nginx.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT/ + + {{- else if contains "LoadBalancer" .Values.nginx.service.type }} + NOTE: It may take a few minutes for the LoadBalancer public IP to be available! + + You can watch the status of the service by running 'kubectl get svc -w {{ template "artifactory-ha.nginx.fullname" . }}' + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory-ha.nginx.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP/ + + {{- else if contains "ClusterIP" .Values.nginx.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component={{ .Values.nginx.name }}" -o jsonpath="{.items[0].metadata.name}") + kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 8080:80 + echo http://127.0.0.1:8080 + + {{- end }} + +2. Open Artifactory in your browser + Default credential for Artifactory: + user: admin + password: password + + {{- if .Values.artifactory.license.secret }} + +3. Artifactory license(s) is deployed as a Kubernetes secret. This method is relevant for initial deployment only! + Updating the license should be done via Artifactory UI or REST API. If you want to keep managing the artifactory license using the same method, you can use artifactory.copyOnEveryStartup in values.yaml. + + {{- else }} + +3. Add HA licenses to activate Artifactory HA through the Artifactory UI + NOTE: Each Artifactory node requires a valid license. See https://www.jfrog.com/confluence/display/RTF/HA+Installation+and+Setup for more details. + + {{- end }} + +{{ if or .Values.artifactory.primary.javaOpts.jmx.enabled .Values.artifactory.node.javaOpts.jmx.enabled }} +JMX configuration: +{{- if not (contains "LoadBalancer" .Values.artifactory.service.type) }} +If you want to access JMX from you computer with jconsole, you should set ".Values.artifactory.service.type=LoadBalancer" !!! +{{ end }} + +1. Get the Artifactory service IP: +{{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} +export PRIMARY_SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory-ha.primary.name" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +{{- end }} +{{- if .Values.artifactory.node.javaOpts.jmx.enabled }} +export MEMBER_SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory-ha.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +{{- end }} + +2. 
Map the service name to the service IP in /etc/hosts: +{{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} +sudo sh -c "echo \"${PRIMARY_SERVICE_IP} {{ template "artifactory-ha.primary.name" . }}\" >> /etc/hosts" +{{- end }} +{{- if .Values.artifactory.node.javaOpts.jmx.enabled }} +sudo sh -c "echo \"${MEMBER_SERVICE_IP} {{ template "artifactory-ha.fullname" . }}\" >> /etc/hosts" +{{- end }} + +3. Launch jconsole: +{{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} +jconsole {{ template "artifactory-ha.primary.name" . }}:{{ .Values.artifactory.primary.javaOpts.jmx.port }} +{{- end }} +{{- if .Values.artifactory.node.javaOpts.jmx.enabled }} +jconsole {{ template "artifactory-ha.fullname" . }}:{{ .Values.artifactory.node.javaOpts.jmx.port }} +{{- end }} +{{- end }} + + +{{- if ge (.Values.artifactory.node.replicaCount | int) 1 }} +***************************************** WARNING ***************************************************************************** +* Currently member node(s) are enabled, will be deprecated in upcoming releases * +* It is recommended to upgrade from primary-members to primary-only. * +* It can be done by deploying the chart ( >=107.59.x) with the new values. Also, please refer to changelog of 107.59.x chart * +* More Info: https://jfrog.com/help/r/jfrog-installation-setup-documentation/cloud-native-high-availability * +******************************************************************************************************************************* +{{- end }} + +{{- if and .Values.nginx.enabled .Values.ingress.hosts }} +***************************************** WARNING ***************************************************************************** +* when nginx is enabled , .Values.ingress.hosts will be deprecated in upcoming releases * +* It is recommended to use nginx.hosts instead ingress.hosts +******************************************************************************************************************************* +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/_helpers.tpl b/charts/jfrog/artifactory-ha/107.90.15/templates/_helpers.tpl new file mode 100644 index 000000000..d6fb229fe --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/_helpers.tpl @@ -0,0 +1,563 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "artifactory-ha.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +The primary node name +*/}} +{{- define "artifactory-ha.primary.name" -}} +{{- if .Values.nameOverride -}} +{{- printf "%s-primary" .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := .Release.Name | trunc 29 -}} +{{- printf "%s-%s-primary" $name .Chart.Name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +The member node name +*/}} +{{- define "artifactory-ha.node.name" -}} +{{- if .Values.nameOverride -}} +{{- printf "%s-member" .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := .Release.Name | trunc 29 -}} +{{- printf "%s-%s-member" $name .Chart.Name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +Expand the name nginx service. +*/}} +{{- define "artifactory-ha.nginx.name" -}} +{{- default .Values.nginx.name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. 
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "artifactory-ha.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "artifactory-ha.nginx.fullname" -}} +{{- if .Values.nginx.fullnameOverride -}} +{{- .Values.nginx.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nginx.name -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "artifactory-ha.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} +{{ default (include "artifactory-ha.fullname" .) .Values.serviceAccount.name }} +{{- else -}} +{{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "artifactory-ha.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Generate SSL certificates +*/}} +{{- define "artifactory-ha.gen-certs" -}} +{{- $altNames := list ( printf "%s.%s" (include "artifactory-ha.fullname" .) .Release.Namespace ) ( printf "%s.%s.svc" (include "artifactory-ha.fullname" .) .Release.Namespace ) -}} +{{- $ca := genCA "artifactory-ca" 365 -}} +{{- $cert := genSignedCert ( include "artifactory-ha.fullname" . 
) nil $altNames 365 $ca -}} +tls.crt: {{ $cert.Cert | b64enc }} +tls.key: {{ $cert.Key | b64enc }} +{{- end -}} + +{{/* +Scheme (http/https) based on Access or Router TLS enabled/disabled +*/}} +{{- define "artifactory-ha.scheme" -}} +{{- if or .Values.access.accessConfig.security.tls .Values.router.tlsEnabled -}} +{{- printf "%s" "https" -}} +{{- else -}} +{{- printf "%s" "http" -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKey value +*/}} +{{- define "artifactory-ha.joinKey" -}} +{{- if .Values.global.joinKey -}} +{{- .Values.global.joinKey -}} +{{- else if .Values.artifactory.joinKey -}} +{{- .Values.artifactory.joinKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectToken value +*/}} +{{- define "artifactory-ha.jfConnectToken" -}} +{{- .Values.artifactory.jfConnectToken -}} +{{- end -}} + +{{/* +Resolve masterKey value +*/}} +{{- define "artifactory-ha.masterKey" -}} +{{- if .Values.global.masterKey -}} +{{- .Values.global.masterKey -}} +{{- else if .Values.artifactory.masterKey -}} +{{- .Values.artifactory.masterKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKeySecretName value +*/}} +{{- define "artifactory-ha.joinKeySecretName" -}} +{{- if .Values.global.joinKeySecretName -}} +{{- .Values.global.joinKeySecretName -}} +{{- else if .Values.artifactory.joinKeySecretName -}} +{{- .Values.artifactory.joinKeySecretName -}} +{{- else -}} +{{ include "artifactory-ha.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectTokenSecretName value +*/}} +{{- define "artifactory-ha.jfConnectTokenSecretName" -}} +{{- if .Values.artifactory.jfConnectTokenSecretName -}} +{{- .Values.artifactory.jfConnectTokenSecretName -}} +{{- else -}} +{{ include "artifactory-ha.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve masterKeySecretName value +*/}} +{{- define "artifactory-ha.masterKeySecretName" -}} +{{- if .Values.global.masterKeySecretName -}} +{{- .Values.global.masterKeySecretName -}} +{{- else if .Values.artifactory.masterKeySecretName -}} +{{- .Values.artifactory.masterKeySecretName -}} +{{- else -}} +{{ include "artifactory-ha.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve imagePullSecrets value +*/}} +{{- define "artifactory-ha.imagePullSecrets" -}} +{{- if .Values.global.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.global.imagePullSecrets }} + - name: {{ . }} +{{- end }} +{{- else if .Values.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.imagePullSecrets }} + - name: {{ . 
}} +{{- end }} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainersBegin value +*/}} +{{- define "artifactory-ha.customInitContainersBegin" -}} +{{- if .Values.global.customInitContainersBegin -}} +{{- .Values.global.customInitContainersBegin -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainersBegin -}} +{{- .Values.artifactory.customInitContainersBegin -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory-ha.customInitContainers" -}} +{{- if .Values.global.customInitContainers -}} +{{- .Values.global.customInitContainers -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainers -}} +{{- .Values.artifactory.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define "artifactory-ha.customVolumes" -}} +{{- if .Values.global.customVolumes -}} +{{- .Values.global.customVolumes -}} +{{- end -}} +{{- if .Values.artifactory.customVolumes -}} +{{- .Values.artifactory.customVolumes -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve unifiedCustomSecretVolumeName value +*/}} +{{- define "artifactory-ha.unifiedCustomSecretVolumeName" -}} +{{- printf "%s-%s" (include "artifactory-ha.name" .) ("unified-secret-volume") | trunc 63 -}} +{{- end -}} + +{{/* +Check the Duplication of volume names for secrets. If unifiedSecretInstallation is enabled then the method is checking for volume names, +if the volume exists in customVolume then an extra volume with the same name will not be getting added in unifiedSecretInstallation case.*/}} +{{- define "artifactory-ha.checkDuplicateUnifiedCustomVolume" -}} +{{- if or .Values.global.customVolumes .Values.artifactory.customVolumes -}} +{{- $val := (tpl (include "artifactory-ha.customVolumes" .) .) | toJson -}} +{{- contains (include "artifactory-ha.unifiedCustomSecretVolumeName" .) $val | toString -}} +{{- else -}} +{{- printf "%s" "false" -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumeMounts value +*/}} +{{- define "artifactory-ha.customVolumeMounts" -}} +{{- if .Values.global.customVolumeMounts -}} +{{- .Values.global.customVolumeMounts -}} +{{- end -}} +{{- if .Values.artifactory.customVolumeMounts -}} +{{- .Values.artifactory.customVolumeMounts -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory-ha.customSidecarContainers" -}} +{{- if .Values.global.customSidecarContainers -}} +{{- .Values.global.customSidecarContainers -}} +{{- end -}} +{{- if .Values.artifactory.customSidecarContainers -}} +{{- .Values.artifactory.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory chart image names +*/}} +{{- define "artifactory-ha.getImageInfoByValue" -}} +{{- $dot := index . 0 }} +{{- $indexReference := index . 
1 }} +{{- $registryName := index $dot.Values $indexReference "image" "registry" -}} +{{- $repositoryName := index $dot.Values $indexReference "image" "repository" -}} +{{- $tag := "" -}} +{{- if and (eq $indexReference "artifactory") (hasKey $dot.Values "artifactoryService") }} + {{- if default false $dot.Values.artifactoryService.enabled }} + {{- $indexReference = "artifactoryService" -}} + {{- $tag = default $dot.Chart.Annotations.artifactoryServiceVersion (index $dot.Values $indexReference "image" "tag") | toString -}} + {{- $repositoryName = index $dot.Values $indexReference "image" "repository" -}} + {{- else -}} + {{- $tag = default $dot.Chart.AppVersion (index $dot.Values $indexReference "image" "tag") | toString -}} + {{- end -}} +{{- else -}} + {{- $tag = default $dot.Chart.AppVersion (index $dot.Values $indexReference "image" "tag") | toString -}} +{{- end -}} +{{- if $dot.Values.global }} + {{- if and $dot.Values.splitServicesToContainers $dot.Values.global.versions.router (eq $indexReference "router") }} + {{- $tag = $dot.Values.global.versions.router | toString -}} + {{- end -}} + {{- if and $dot.Values.global.versions.initContainers (eq $indexReference "initContainers") }} + {{- $tag = $dot.Values.global.versions.initContainers | toString -}} + {{- end -}} + {{- if $dot.Values.global.versions.artifactory }} + {{- if or (eq $indexReference "artifactory") (eq $indexReference "metadata") (eq $indexReference "nginx") (eq $indexReference "observability") }} + {{- $tag = $dot.Values.global.versions.artifactory | toString -}} + {{- end -}} + {{- end -}} + {{- if $dot.Values.global.imageRegistry }} + {{- printf "%s/%s:%s" $dot.Values.global.imageRegistry $repositoryName $tag -}} + {{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} + {{- end -}} +{{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory app version +*/}} +{{- define "artifactory-ha.app.version" -}} +{{- $tag := (splitList ":" ((include "artifactory-ha.getImageInfoByValue" (list . 
"artifactory" )))) | last | toString -}} +{{- printf "%s" $tag -}} +{{- end -}} + +{{/* +Custom certificate copy command +*/}} +{{- define "artifactory-ha.copyCustomCerts" -}} +echo "Copy custom certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; +for file in $(ls -1 /tmp/certs/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; fi done; +if [ -f {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt ]; then mv -v {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/ca.crt; fi; +{{- end -}} + +{{/* +Circle of trust certificates copy command +*/}} +{{- define "artifactory.copyCircleOfTrustCertsCerts" -}} +echo "Copy circle of trust certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; +for file in $(ls -1 /tmp/circleoftrustcerts/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; fi done; +{{- end -}} + +{{/* +Resolve requiredServiceTypes value +*/}} +{{- define "artifactory-ha.router.requiredServiceTypes" -}} +{{- $requiredTypes := "jfrt,jfac" -}} +{{- if not .Values.access.enabled -}} + {{- $requiredTypes = "jfrt" -}} +{{- end -}} +{{- if .Values.observability.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfob" -}} +{{- end -}} +{{- if .Values.metadata.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmd" -}} +{{- end -}} +{{- if .Values.event.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfevt" -}} +{{- end -}} +{{- if .Values.frontend.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jffe" -}} +{{- end -}} +{{- if .Values.jfconnect.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfcon" -}} +{{- end -}} +{{- if .Values.evidence.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfevd" -}} +{{- end -}} +{{- if .Values.mc.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmc" -}} +{{- end -}} +{{- $requiredTypes -}} +{{- end -}} + +{{/* +nginx scheme (http/https) +*/}} +{{- define "nginx.scheme" -}} +{{- if .Values.nginx.http.enabled -}} +{{- printf "%s" "http" -}} +{{- else -}} +{{- printf "%s" "https" -}} +{{- end -}} +{{- end -}} + + +{{/* +nginx command +*/}} +{{- define "nginx.command" -}} +{{- if .Values.nginx.customCommand }} +{{ toYaml .Values.nginx.customCommand }} +{{- end }} +{{- end -}} + +{{/* +nginx port (8080/8443) based on http/https enabled +*/}} +{{- define "nginx.port" -}} +{{- if .Values.nginx.http.enabled -}} +{{- .Values.nginx.http.internalPort -}} +{{- else -}} +{{- .Values.nginx.https.internalPort -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory.nginx.customInitContainers" -}} +{{- if .Values.nginx.customInitContainers -}} +{{- .Values.nginx.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define "artifactory.nginx.customVolumes" -}} +{{- if .Values.nginx.customVolumes -}} +{{- .Values.nginx.customVolumes -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve 
customVolumeMounts nginx value +*/}} +{{- define "artifactory.nginx.customVolumeMounts" -}} +{{- if .Values.nginx.customVolumeMounts -}} +{{- .Values.nginx.customVolumeMounts -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory.nginx.customSidecarContainers" -}} +{{- if .Values.nginx.customSidecarContainers -}} +{{- .Values.nginx.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve Artifactory pod primary node selector value +*/}} +{{- define "artifactory.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.artifactory.primary.nodeSelector }} +{{ toYaml .Values.artifactory.primary.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve Artifactory pod node nodeselector value +*/}} +{{- define "artifactory.node.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.artifactory.node.nodeSelector }} +{{ toYaml .Values.artifactory.node.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve Nginx pods node selector value +*/}} +{{- define "nginx.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.nginx.nodeSelector }} +{{ toYaml .Values.nginx.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Calculate the systemYaml from structured and unstructured text input +*/}} +{{- define "artifactory.finalSystemYaml" -}} +{{ tpl (mergeOverwrite (include "artifactory.systemYaml" . | fromYaml) .Values.artifactory.extraSystemYaml | toYaml) . }} +{{- end -}} + +{{/* +Calculate the systemYaml from the unstructured text input +*/}} +{{- define "artifactory.systemYaml" -}} +{{ include (print $.Template.BasePath "/_system-yaml-render.tpl") . }} +{{- end -}} + +{{/* +Metrics enabled +*/}} +{{- define "metrics.enabled" -}} +shared: + metrics: + enabled: true +{{- end }} + +{{/* +Resolve artifactory metrics +*/}} +{{- define "artifactory.metrics" -}} +{{- if .Values.artifactory.openMetrics -}} +{{- if .Values.artifactory.openMetrics.enabled -}} +{{ include "metrics.enabled" . }} +{{- if .Values.artifactory.openMetrics.filebeat }} +{{- if .Values.artifactory.openMetrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.openMetrics.filebeat | toYaml) . | indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- else if .Values.artifactory.metrics -}} +{{- if .Values.artifactory.metrics.enabled -}} +{{ include "metrics.enabled" . }} +{{- if .Values.artifactory.metrics.filebeat }} +{{- if .Values.artifactory.metrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.metrics.filebeat | toYaml) . | indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve unified secret prepend release name +*/}} +{{- define "artifactory.unifiedSecretPrependReleaseName" -}} +{{- if .Values.artifactory.unifiedSecretPrependReleaseName }} +{{- printf "%s" (include "artifactory-ha.fullname" .) -}} +{{- else }} +{{- printf "%s" (include "artifactory-ha.name" .) -}} +{{- end }} +{{- end }} + +{{/* +Resolve nginx hosts value +*/}} +{{- define "artifactory.nginx.hosts" -}} +{{- if .Values.ingress.hosts }} +{{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . 
}} + {{- end -}} +{{- end -}} +{{- else if .Values.nginx.hosts }} +{{- range .Values.nginx.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . }} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/_system-yaml-render.tpl b/charts/jfrog/artifactory-ha/107.90.15/templates/_system-yaml-render.tpl new file mode 100644 index 000000000..deaa773ea --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/_system-yaml-render.tpl @@ -0,0 +1,5 @@ +{{- if .Values.artifactory.systemYaml -}} +{{- tpl .Values.artifactory.systemYaml . -}} +{{- else -}} +{{ (tpl ( $.Files.Get "files/system.yaml" ) .) }} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/additional-resources.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/additional-resources.yaml new file mode 100644 index 000000000..c4d06f08a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/additional-resources.yaml @@ -0,0 +1,3 @@ +{{ if .Values.additionalResources }} +{{ tpl .Values.additionalResources . }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/admin-bootstrap-creds.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/admin-bootstrap-creds.yaml new file mode 100644 index 000000000..40d91f75e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/admin-bootstrap-creds.yaml @@ -0,0 +1,15 @@ +{{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} +{{- if and .Values.artifactory.admin.password (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-bootstrap-creds + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + bootstrap.creds: {{ (printf "%s@%s=%s" .Values.artifactory.admin.username .Values.artifactory.admin.ip .Values.artifactory.admin.password) | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-access-config.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-access-config.yaml new file mode 100644 index 000000000..0b96a337d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-access-config.yaml @@ -0,0 +1,15 @@ +{{- if and .Values.access.accessConfig (not .Values.artifactory.unifiedSecretInstallation) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" . }}-access-config + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +stringData: + access.config.patch.yml: | +{{ tpl (toYaml .Values.access.accessConfig) . 
| indent 4 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-binarystore-secret.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-binarystore-secret.yaml new file mode 100644 index 000000000..6824fe90f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-binarystore-secret.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.artifactory.persistence.customBinarystoreXmlSecret) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-binarystore + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + binarystore.xml: |- +{{- if .Values.artifactory.persistence.binarystoreXml }} +{{ tpl .Values.artifactory.persistence.binarystoreXml . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/binarystore.xml" ) . | indent 4 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-configmaps.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-configmaps.yaml new file mode 100644 index 000000000..1385bc578 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-configmaps.yaml @@ -0,0 +1,13 @@ +{{ if .Values.artifactory.configMaps }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-configmaps + labels: + app: {{ template "artifactory-ha.fullname" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ tpl .Values.artifactory.configMaps . | indent 2 }} +{{ end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-custom-secrets.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-custom-secrets.yaml new file mode 100644 index 000000000..8065fe686 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-custom-secrets.yaml @@ -0,0 +1,19 @@ +{{- if and .Values.artifactory.customSecrets (not .Values.artifactory.unifiedSecretInstallation) }} +{{- range .Values.artifactory.customSecrets }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" $ }}-{{ .name }} + labels: + app: "{{ template "artifactory-ha.name" $ }}" + chart: "{{ template "artifactory-ha.chart" $ }}" + component: "{{ $.Values.artifactory.name }}" + heritage: {{ $.Release.Service | quote }} + release: {{ $.Release.Name | quote }} +type: Opaque +stringData: + {{ .key }}: | +{{ .data | indent 4 -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-database-secrets.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-database-secrets.yaml new file mode 100644 index 000000000..6daf5db7b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-database-secrets.yaml @@ -0,0 +1,24 @@ +{{- if and (not .Values.database.secrets) (not .Values.postgresql.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +{{- if or .Values.database.url .Values.database.user .Values.database.password }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" . }}-database-creds + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +data: + {{- with .Values.database.url }} + db-url: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.user }} + db-user: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.password }} + db-password: {{ tpl . $ | b64enc | quote }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-gcp-credentials-secret.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-gcp-credentials-secret.yaml new file mode 100644 index 000000000..d90769595 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-gcp-credentials-secret.yaml @@ -0,0 +1,16 @@ +{{- if not .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} +{{- if and (.Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-gcpcreds + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + gcp.credentials.json: |- +{{ tpl .Values.artifactory.persistence.googleStorage.gcpServiceAccount.config . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-installer-info.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-installer-info.yaml new file mode 100644 index 000000000..0dff9dc86 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-installer-info.yaml @@ -0,0 +1,16 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-installer-info + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + installer-info.json: | +{{- if .Values.installerInfo -}} +{{- tpl .Values.installerInfo . | nindent 4 -}} +{{- else -}} +{{ (tpl ( .Files.Get "files/installer-info.json" | nindent 4 ) .) }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-license-secret.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-license-secret.yaml new file mode 100644 index 000000000..73f900863 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-license-secret.yaml @@ -0,0 +1,16 @@ +{{ if and (not .Values.artifactory.unifiedSecretInstallation) (not .Values.artifactory.license.secret) (not .Values.artifactory.license.licenseKey) }} +{{- with .Values.artifactory.license.licenseKey }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" $ }}-license + labels: + app: {{ template "artifactory-ha.name" $ }} + chart: {{ template "artifactory-ha.chart" $ }} + heritage: {{ $.Release.Service }} + release: {{ $.Release.Name }} +type: Opaque +data: + artifactory.lic: {{ . 
| b64enc | quote }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-migration-scripts.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-migration-scripts.yaml new file mode 100644 index 000000000..fe40f980f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-migration-scripts.yaml @@ -0,0 +1,18 @@ +{{- if .Values.artifactory.migration.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-migration-scripts + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + migrate.sh: | +{{ .Files.Get "files/migrate.sh" | indent 4 }} + migrationHelmInfo.yaml: | +{{ .Files.Get "files/migrationHelmInfo.yaml" | indent 4 }} + migrationStatus.sh: | +{{ .Files.Get "files/migrationStatus.sh" | indent 4 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-networkpolicy.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-networkpolicy.yaml new file mode 100644 index 000000000..9924448f0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-networkpolicy.yaml @@ -0,0 +1,34 @@ +{{- range .Values.networkpolicy }} +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ template "artifactory-ha.fullname" $ }}-{{ .name }}-networkpolicy + labels: + app: {{ template "artifactory-ha.name" $ }} + chart: {{ template "artifactory-ha.chart" $ }} + release: {{ $.Release.Name }} + heritage: {{ $.Release.Service }} +spec: +{{- if .podSelector }} + podSelector: +{{ .podSelector | toYaml | trimSuffix "\n" | indent 4 -}} +{{ else }} + podSelector: {} +{{- end }} + policyTypes: + {{- if .ingress }} + - Ingress + {{- end }} + {{- if .egress }} + - Egress + {{- end }} +{{- if .ingress }} + ingress: +{{ .ingress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +{{- if .egress }} + egress: +{{ .egress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +--- +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-nfs-pvc.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-nfs-pvc.yaml new file mode 100644 index 000000000..6ed7d82f6 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-nfs-pvc.yaml @@ -0,0 +1,101 @@ +{{- if eq .Values.artifactory.persistence.type "nfs" }} +### Artifactory HA data +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template "artifactory-ha.fullname" . }}-data-pv + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory-ha.name" . }}-data-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haDataMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-data-pvc + labels: + app: {{ template "artifactory-ha.name" . 
}} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory-ha.name" . }}-data-pv + app: {{ template "artifactory-ha.name" . }} + release: {{ .Release.Name }} +--- +### Artifactory HA backup +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template "artifactory-ha.fullname" . }}-backup-pv + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory-ha.name" . }}-backup-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haBackupMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-backup-pvc + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory-ha.name" . }}-backup-pv + app: {{ template "artifactory-ha.name" . }} + release: {{ .Release.Name }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-node-pdb.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-node-pdb.yaml new file mode 100644 index 000000000..46c6dac21 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/artifactory-node-pdb.yaml @@ -0,0 +1,26 @@ +{{- if gt (.Values.artifactory.node.replicaCount | int) 0 -}} +{{- if .Values.artifactory.node.minAvailable -}} +{{- if semverCompare " + mkdir -p {{ tpl .Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir . }}; + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- end }} + {{- if .Values.artifactory.deleteDBPropertiesOnStartup }} + - name: "delete-db-properties" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'bash' + - '-c' + - 'rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/db.properties' + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- end }} + {{- if and .Values.artifactory.node.waitForPrimaryStartup.enabled }} + - name: "wait-for-primary" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - 'bash' + - '-c' + - > + echo "Waiting for primary node to be ready..."; + {{- if and .Values.artifactory.node.waitForPrimaryStartup.enabled .Values.artifactory.node.waitForPrimaryStartup.time }} + echo "Sleeping to allow time for primary node to come up"; + sleep {{ .Values.artifactory.node.waitForPrimaryStartup.time }}; + {{- else }} + ready=false; + while ! $ready; do echo Primary not ready. Waiting...; + timeout 2s bash -c " + if [[ -e "{{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml" ]]; then chmod 644 {{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml; fi; + echo "Copy system.yaml to {{ .Values.artifactory.persistence.mountPath }}/etc"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; + {{- if .Values.systemYamlOverride.existingSecret }} + cp -fv /tmp/etc/{{ .Values.systemYamlOverride.dataKey }} {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- else }} + cp -fv /tmp/etc/system.yaml {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- end }} + echo "Copy binarystore.xml file"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory; + cp -fv /tmp/etc/artifactory/binarystore.xml {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/binarystore.xml; + echo "Removing join.key file"; + rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/security/join.key; + {{- if .Values.access.resetAccessCAKeys }} + echo "Resetting Access CA Keys - load from database"; + {{- end }} + {{- if .Values.access.customCertificatesSecretName }} + echo "Load custom certificates from database"; + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + echo "Copy masterKey to {{ .Values.artifactory.persistence.mountPath }}/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security; + echo -n ${ARTIFACTORY_MASTER_KEY} > {{ .Values.artifactory.persistence.mountPath }}/etc/security/master.key; + env: + - name: ARTIFACTORY_MASTER_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) (or .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName) }} + name: {{ include "artifactory-ha.masterKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: master-key + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## SystemYaml ######################### + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.systemYamlOverride.existingSecret }} + - name: systemyaml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + {{- if .Values.systemYamlOverride.existingSecret }} + mountPath: "/tmp/etc/{{.Values.systemYamlOverride.dataKey}}" + subPath: {{ .Values.systemYamlOverride.dataKey }} + {{- else }} + mountPath: "/tmp/etc/system.yaml" + subPath: system.yaml + {{- end }} + + ######################## Binarystore ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## CustomCertificates ########################## + {{- if or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled }} + - name: copy-custom-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory-ha.copyCustomCerts" . | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: ca-certs + mountPath: "/tmp/certs" + {{- end }} + + {{- if .Values.artifactory.circleOfTrustCertificatesSecret }} + - name: copy-circle-of-trust-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCircleOfTrustCertsCerts" . | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: circle-of-trust-certs + mountPath: "/tmp/circleoftrustcerts" + {{- end }} + + {{- if .Values.waitForDatabase }} + {{- if or .Values.postgresql.enabled }} + - name: "wait-for-db" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - /bin/bash + - -c + - | + echo "Waiting for postgresql to come up" + ready=false; + while ! $ready; do echo waiting; + timeout 2s bash -c " + {{- if .Values.artifactory.migration.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.migration.preStartCommand . }}; + {{- end }} + scriptsPath="/opt/jfrog/artifactory/app/bin"; + mkdir -p $scriptsPath; + echo "Copy migration scripts and Run migration"; + cp -fv /tmp/migrate.sh $scriptsPath/migrate.sh; + cp -fv /tmp/migrationHelmInfo.yaml $scriptsPath/migrationHelmInfo.yaml; + cp -fv /tmp/migrationStatus.sh $scriptsPath/migrationStatus.sh; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/log; + bash $scriptsPath/migrationStatus.sh {{ include "artifactory-ha.app.version" . 
}} {{ .Values.artifactory.migration.timeoutSeconds }} > >(tee {{ .Values.artifactory.persistence.mountPath }}/log/helm-migration.log) 2>&1; + resources: +{{ toYaml .Values.artifactory.node.resources | indent 10 }} + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: migration-scripts + mountPath: "/tmp/migrate.sh" + subPath: migrate.sh + - name: migration-scripts + mountPath: "/tmp/migrationHelmInfo.yaml" + subPath: migrationHelmInfo.yaml + - name: migration-scripts + mountPath: "/tmp/migrationStatus.sh" + subPath: migrationStatus.sh + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + {{- end }} + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + +{{- end }} + {{- if .Values.hostAliases }} + hostAliases: +{{ toYaml .Values.hostAliases | indent 6 }} + {{- end }} + containers: + {{- if .Values.splitServicesToContainers }} + - name: {{ .Values.router.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "router") }} + imagePullPolicy: {{ .Values.router.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/router/app/bin/entrypoint-router.sh + {{- with .Values.router.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES + value: {{ include "artifactory-ha.router.requiredServiceTypes" . }} +{{- with .Values.router.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + ports: + - name: http + containerPort: {{ .Values.router.internalPort }} + volumeMounts: + - name: volume + mountPath: {{ .Values.router.persistence.mountPath | quote }} +{{- with .Values.router.customVolumeMounts }} +{{ tpl . $ | indent 8 }} +{{- end }} + resources: +{{ toYaml .Values.router.resources | indent 10 }} + {{- if .Values.router.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.router.startupProbe.config . | indent 10 }} + {{- end }} +{{- if .Values.router.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.router.readinessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.router.livenessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.enabled }} + - name: {{ .Values.frontend.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/node/bin/node /opt/jfrog/artifactory/app/frontend/bin/server/dist/bundle.js /opt/jfrog/artifactory/app/frontend + {{- with .Values.frontend.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + - name : JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.frontend.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.frontend.resources | indent 10 }} + {{- if .Values.frontend.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.frontend.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.frontend.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.evidence.enabled }} + - name: {{ .Values.evidence.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/evidence/bin/jf-evidence start + {{- with .Values.evidence.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.evidence.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.evidence.internalPort }} + name: http-evidence + - containerPort: {{ .Values.evidence.externalPort }} + name: grpc-evidence + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.evidence.resources | indent 10 }} + {{- if .Values.evidence.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.evidence.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.evidence.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.evidence.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.metadata.enabled }} + - name: {{ .Values.metadata.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "metadata") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/metadata/bin/jf-metadata start + {{- with .Values.metadata.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.metadata.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.metadata.resources | indent 10 }} + {{- if .Values.metadata.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.metadata.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.metadata.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.metadata.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.event.enabled }} + - name: {{ .Values.event.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/event/bin/jf-event start + {{- with .Values.event.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.event.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.event.resources | indent 10 }} + {{- if .Values.event.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.event.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.event.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.event.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.jfconnect.enabled }} + - name: {{ .Values.jfconnect.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/jfconnect/bin/jf-connect start + {{- with .Values.jfconnect.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.jfconnect.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.jfconnect.resources | indent 10 }} + {{- if .Values.jfconnect.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.jfconnect.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.jfconnect.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.jfconnect.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.federation.enabled .Values.federation.embedded }} + - name: {{ .Values.federation.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/java/bin/java {{ .Values.federation.extraJavaOpts }} -jar /opt/jfrog/artifactory/app/rtfs/lib/jf-rtfs + {{- with .Values.federation.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_RTFS_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} +{{- with .Values.federation.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.federation.resources | indent 10 }} + {{- if .Values.federation.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.federation.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.federation.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.federation.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.observability.enabled }} + - name: {{ .Values.observability.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"observability") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/observability/bin/jf-observability start + {{- with .Values.observability.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.observability.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.observability.resources | indent 10 }} + {{- if .Values.observability.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.observability.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.observability.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.observability.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.access.enabled (not (.Values.access.runOnArtifactoryTomcat | default false)) }} + - name: {{ .Values.access.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.access.resources }} + resources: +{{ toYaml .Values.access.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + {{- if .Values.access.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.access.preStartCommand . }}; + {{- end }} + exec /opt/jfrog/artifactory/app/access/bin/entrypoint-access.sh + {{- with .Values.access.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.access.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + + ######################## Artifactory ConfigMap ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.access.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.access.startupProbe.config . | indent 10 }} + {{- end }} + {{- if semverCompare " + set -e; + {{- range .Values.artifactory.copyOnEveryStartup }} + {{- $targetPath := printf "%s/%s" $.Values.artifactory.persistence.mountPath .target }} + {{- $baseDirectory := regexFind ".*/" $targetPath }} + mkdir -p {{ $baseDirectory }}; + cp -Lrf {{ .source }} {{ $.Values.artifactory.persistence.mountPath }}/{{ .target }}; + {{- end }} + {{- if .Values.artifactory.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.preStartCommand . }}; + {{- end }} + {{- with .Values.artifactory.node.preStartCommand }} + echo "Running member node specific custom preStartCommand command"; + {{ tpl . $ }}; + {{- end }} + exec /entrypoint-artifactory.sh + {{- with .Values.artifactory.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + {{- if .Values.aws.license.enabled }} + - name: IS_AWS_LICENSE + value: "true" + - name: AWS_REGION + value: {{ .Values.aws.region | quote }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: AWS_WEB_IDENTITY_REFRESH_TOKEN_FILE + value: "/var/run/secrets/product-license/license_token" + - name: AWS_ROLE_ARN + valueFrom: + secretKeyRef: + name: {{ .Values.aws.licenseConfigSecretName }} + key: iam_role + {{- end }} + {{- end }} + {{- if .Values.splitServicesToContainers }} + - name : JF_ROUTER_ENABLED + value: "true" + - name : JF_ROUTER_SERVICE_ENABLED + value: "false" + - name : JF_EVENT_ENABLED + value: "false" + - name : JF_METADATA_ENABLED + value: "false" + - name : JF_FRONTEND_ENABLED + value: "false" + - name: JF_FEDERATION_ENABLED + value: "false" + - name : JF_OBSERVABILITY_ENABLED + value: "false" + - name : JF_JFCONNECT_SERVICE_ENABLED + value: "false" + - name : JF_EVIDENCE_ENABLED + value: "false" + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + - name : JF_ACCESS_ENABLED + value: "false" + {{- end}} + {{- end }} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.artifactory.internalPort }} + name: http + - containerPort: {{ .Values.artifactory.internalArtifactoryPort }} + name: http-internal + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + {{- if .Values.artifactory.node.javaOpts.jmx.enabled }} + - containerPort: {{ .Values.artifactory.node.javaOpts.jmx.port }} + name: tcp-jmx + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.artifactory.ssh.internalPort }} + name: tcp-ssh + {{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + + ######################## Artifactory ConfigMap ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + {{- end }} + - name: installer-info + mountPath: "/artifactory_bootstrap/info/installer-info.json" + subPath: installer-info.json + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.artifactory.node.resources | indent 10 }} + {{- if .Values.artifactory.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.artifactory.startupProbe.config . | indent 10 }} + {{- end }} + {{- if and (not .Values.splitServicesToContainers) (semverCompare "= 107.79.x), just set databaseUpgradeReady=true \n" .Values.databaseUpgradeReady | quote }} +{{- end }} +{{- if .Values.artifactory.postStartCommand }} + {{- fail ".Values.artifactory.postStartCommand is not supported and should be replaced with .Values.artifactory.lifecycle.postStart.exec.command" }} +{{- end }} +{{- if eq .Values.artifactory.persistence.type "aws-s3" }} + {{- fail "\nPersistence storage type 'aws-s3' is deprecated and is not supported and should be replaced with 'aws-s3-v3'" }} +{{- end }} +{{- if or .Values.artifactory.persistence.googleStorage.identity .Values.artifactory.persistence.googleStorage.credential }} + {{- fail "\nGCP Bucket Authentication with Identity and Credential is deprecated" }} +{{- end }} +{{- if (eq (.Values.artifactory.setSecurityContext | toString) "false" ) }} + {{- fail "\n You need to set security context at the pod level. .Values.artifactory.setSecurityContext is no longer supported. Replace it with .Values.artifactory.podSecurityContext" }} +{{- end }} +{{- if or .Values.artifactory.uid .Values.artifactory.gid }} +{{- if or (not (eq (.Values.artifactory.uid | toString) "1030" )) (not (eq (.Values.artifactory.gid | toString) "1030" )) }} + {{- fail "\n .Values.artifactory.uid and .Values.artifactory.gid are no longer supported. You need to set these values at the pod security context level. Replace them with .Values.artifactory.podSecurityContext.runAsUser, .Values.artifactory.podSecurityContext.runAsGroup and .Values.artifactory.podSecurityContext.fsGroup" }} +{{- end }} +{{- end }} +{{- if or .Values.artifactory.fsGroupChangePolicy .Values.artifactory.seLinuxOptions }} + {{- fail "\n .Values.artifactory.fsGroupChangePolicy and .Values.artifactory.seLinuxOptions are no longer supported. You need to set these values at the pod security context level. 
Replace them with .Values.artifactory.podSecurityContext.fsGroupChangePolicy and .Values.artifactory.podSecurityContext.seLinuxOptions" }} +{{- end }} +{{- if .Values.initContainerImage }} + {{- fail "\n .Values.initContainerImage is no longer supported. Replace it with .Values.initContainers.image.registry .Values.initContainers.image.repository and .Values.initContainers.image.tag" }} +{{- end }} +{{- with .Values.artifactory.statefulset.annotations }} + annotations: +{{ toYaml . | indent 4 }} +{{- end }} +spec: + serviceName: {{ template "artifactory-ha.primary.name" . }} + replicas: {{ .Values.artifactory.primary.replicaCount }} + updateStrategy: {{- toYaml .Values.artifactory.primary.updateStrategy | nindent 4}} + selector: + matchLabels: + app: {{ template "artifactory-ha.name" . }} + role: {{ template "artifactory-ha.primary.name" . }} + release: {{ .Release.Name }} + template: + metadata: + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + role: {{ template "artifactory-ha.primary.name" . }} + component: {{ .Values.artifactory.name }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + {{- with .Values.artifactory.primary.labels }} +{{ toYaml . | indent 8 }} + {{- end }} + annotations: + {{- if not .Values.artifactory.unifiedSecretInstallation }} + checksum/database-secrets: {{ include (print $.Template.BasePath "/artifactory-database-secrets.yaml") . | sha256sum }} + checksum/binarystore: {{ include (print $.Template.BasePath "/artifactory-binarystore-secret.yaml") . | sha256sum }} + checksum/systemyaml: {{ include (print $.Template.BasePath "/artifactory-system-yaml.yaml") . | sha256sum }} + {{- if .Values.access.accessConfig }} + checksum/access-config: {{ include (print $.Template.BasePath "/artifactory-access-config.yaml") . | sha256sum }} + {{- end }} + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + checksum/gcpcredentials: {{ include (print $.Template.BasePath "/artifactory-gcp-credentials-secret.yaml") . | sha256sum }} + {{- end }} + {{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + checksum/admin-creds: {{ include (print $.Template.BasePath "/admin-bootstrap-creds.yaml") . | sha256sum }} + {{- end }} + {{- else }} + checksum/artifactory-unified-secret: {{ include (print $.Template.BasePath "/artifactory-unified-secret.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.artifactory.annotations }} +{{ toYaml . | indent 8 }} + {{- end }} + spec: + {{- if .Values.artifactory.schedulerName }} + schedulerName: {{ .Values.artifactory.schedulerName | quote }} + {{- end }} + {{- if .Values.artifactory.priorityClass.existingPriorityClass }} + priorityClassName: {{ .Values.artifactory.priorityClass.existingPriorityClass }} + {{- else -}} + {{- if .Values.artifactory.priorityClass.create }} + priorityClassName: {{ default (include "artifactory-ha.fullname" .) .Values.artifactory.priorityClass.name }} + {{- end }} + {{- end }} + serviceAccountName: {{ template "artifactory-ha.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ add .Values.artifactory.terminationGracePeriodSeconds 10 }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory-ha.imagePullSecrets" . 
| indent 6 }} + {{- end }} + {{- if .Values.artifactory.podSecurityContext.enabled }} + securityContext: {{- omit .Values.artifactory.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.artifactory.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.artifactory.topologySpreadConstraints) . | indent 8 }} + {{- end }} + initContainers: + {{- if or .Values.artifactory.customInitContainersBegin .Values.global.customInitContainersBegin }} +{{ tpl (include "artifactory-ha.customInitContainersBegin" .) . | indent 6 }} + {{- end }} + {{- if .Values.artifactory.persistence.enabled }} + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + - name: "create-artifactory-data-dir" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > + mkdir -p {{ tpl .Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir . }}; + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- end }} + {{- if .Values.artifactory.deleteDBPropertiesOnStartup }} + - name: "delete-db-properties" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - 'rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/db.properties' + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- if or (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) .Values.artifactory.admin.password }} + - name: "access-bootstrap-creds" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > + echo "Preparing {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -Lrf /tmp/access/bootstrap.creds {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + chmod 600 {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + volumeMounts: + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- if or (not .Values.artifactory.unifiedSecretInstallation) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + - name: access-bootstrap-creds + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/tmp/access/bootstrap.creds" + {{- if and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey }} + subPath: {{ .Values.artifactory.admin.dataKey }} + {{- else }} + subPath: bootstrap.creds + {{- end }} + {{- end }} + {{- end }} + - name: 'copy-system-configurations' + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - '/bin/bash' + - '-c' + - > + if [[ -e "{{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml" ]]; then chmod 644 {{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml; fi; + echo "Copy system.yaml to {{ .Values.artifactory.persistence.mountPath }}/etc"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; + {{- if .Values.systemYamlOverride.existingSecret }} + cp -fv /tmp/etc/{{ .Values.systemYamlOverride.dataKey }} {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- else }} + cp -fv /tmp/etc/system.yaml {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- end }} + echo "Copy binarystore.xml file"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory; + cp -fv /tmp/etc/artifactory/binarystore.xml {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/binarystore.xml; + {{- if .Values.access.accessConfig }} + echo "Copy access.config.patch.yml to {{ .Values.artifactory.persistence.mountPath }}/etc/access"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -fv /tmp/etc/access.config.patch.yml {{ .Values.artifactory.persistence.mountPath }}/etc/access/access.config.patch.yml; + {{- end }} + {{- if .Values.access.resetAccessCAKeys }} + echo "Resetting Access CA Keys"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + touch {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/reset_ca_keys; + {{- end }} + {{- if .Values.access.customCertificatesSecretName }} + echo "Copying custom certificates to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + cp -fv /tmp/etc/tls.crt {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.crt; + cp -fv /tmp/etc/tls.key {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.private.key; + {{- end }} + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + echo "Copy joinKey to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security; + echo -n ${ARTIFACTORY_JOIN_KEY} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security/join.key; + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectTokenSecretName }} + echo "Copy jfConnectToken to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token"; + mkdir -p {{ .Values.artifactory.persistence.mountPath 
}}/bootstrap/jfconnect/; + echo -n ${ARTIFACTORY_JFCONNECT_TOKEN} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token; + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + echo "Copy masterKey to {{ .Values.artifactory.persistence.mountPath }}/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security; + echo -n ${ARTIFACTORY_MASTER_KEY} > {{ .Values.artifactory.persistence.mountPath }}/etc/security/master.key; + {{- end }} + env: + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + - name: ARTIFACTORY_JOIN_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + name: {{ include "artifactory-ha.joinKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: join-key + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectTokenSecretName }} + - name: ARTIFACTORY_JFCONNECT_TOKEN + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.jfConnectTokenSecretName }} + name: {{ include "artifactory-ha.jfConnectTokenSecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: jfconnect-token + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + - name: ARTIFACTORY_MASTER_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + name: {{ include "artifactory-ha.masterKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: master-key + {{- end }} + + ######################## Volume Mounts For copy-system-configurations ########################## + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## SystemYaml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.systemYamlOverride.existingSecret }} + - name: systemyaml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + {{- if .Values.systemYamlOverride.existingSecret }} + mountPath: "/tmp/etc/{{.Values.systemYamlOverride.dataKey}}" + subPath: {{ .Values.systemYamlOverride.dataKey }} + {{- else }} + mountPath: "/tmp/etc/system.yaml" + subPath: system.yaml + {{- end }} + + ######################## Binarystore ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Access config ########################## + {{- if .Values.access.accessConfig }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + - name: access-config + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/access.config.patch.yml" + subPath: access.config.patch.yml + {{- end }} + + ######################## Access certs external secret ########################## + {{- if .Values.access.customCertificatesSecretName }} + - name: access-certs + mountPath: "/tmp/etc/tls.crt" + subPath: tls.crt + - name: access-certs + mountPath: "/tmp/etc/tls.key" + subPath: tls.key + {{- end }} + + {{- if or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled }} + - name: copy-custom-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory-ha.copyCustomCerts" . | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: ca-certs + mountPath: "/tmp/certs" + {{- end }} + + {{- if .Values.artifactory.circleOfTrustCertificatesSecret }} + - name: copy-circle-of-trust-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCircleOfTrustCertsCerts" . | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: circle-of-trust-certs + mountPath: "/tmp/circleoftrustcerts" + {{- end }} + + {{- if .Values.waitForDatabase }} + {{- if or .Values.postgresql.enabled }} + - name: "wait-for-db" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - /bin/bash + - -c + - | + echo "Waiting for postgresql to come up" + ready=false; + while ! $ready; do echo waiting; + timeout 2s bash -c " + {{- if .Values.artifactory.migration.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.migration.preStartCommand . 
}}; + {{- end }} + scriptsPath="/opt/jfrog/artifactory/app/bin"; + mkdir -p $scriptsPath; + echo "Copy migration scripts and Run migration"; + cp -fv /tmp/migrate.sh $scriptsPath/migrate.sh; + cp -fv /tmp/migrationHelmInfo.yaml $scriptsPath/migrationHelmInfo.yaml; + cp -fv /tmp/migrationStatus.sh $scriptsPath/migrationStatus.sh; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/log; + bash $scriptsPath/migrationStatus.sh {{ include "artifactory-ha.app.version" . }} {{ .Values.artifactory.migration.timeoutSeconds }} > >(tee {{ .Values.artifactory.persistence.mountPath }}/log/helm-migration.log) 2>&1; + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: migration-scripts + mountPath: "/tmp/migrate.sh" + subPath: migrate.sh + - name: migration-scripts + mountPath: "/tmp/migrationHelmInfo.yaml" + subPath: migrationHelmInfo.yaml + - name: migration-scripts + mountPath: "/tmp/migrationStatus.sh" + subPath: migrationStatus.sh + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## CustomVolumeMounts ########################## + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + +{{- end }} + + {{- if .Values.hostAliases }} + hostAliases: +{{ toYaml .Values.hostAliases | indent 6 }} + {{- end }} + containers: + {{- if .Values.splitServicesToContainers }} + - name: {{ .Values.router.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "router") }} + imagePullPolicy: {{ .Values.router.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/router/app/bin/entrypoint-router.sh; + {{- with .Values.router.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + - name: JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES + value: {{ include "artifactory-ha.router.requiredServiceTypes" . }} +{{- with .Values.router.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - name: http + containerPort: {{ .Values.router.internalPort }} + volumeMounts: + - name: volume + mountPath: {{ .Values.router.persistence.mountPath | quote }} +{{- with .Values.router.customVolumeMounts }} +{{ tpl . $ | indent 8 }} +{{- end }} + resources: +{{ toYaml .Values.router.resources | indent 10 }} + {{- if .Values.router.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.router.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.router.readinessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.router.livenessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.enabled }} + - name: {{ .Values.frontend.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/node/bin/node /opt/jfrog/artifactory/app/frontend/bin/server/dist/bundle.js /opt/jfrog/artifactory/app/frontend + {{- with .Values.frontend.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + - name : JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.frontend.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.frontend.resources | indent 10 }} + {{- if .Values.frontend.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.frontend.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.frontend.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.evidence.enabled }} + - name: {{ .Values.evidence.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/evidence/bin/jf-evidence start + {{- with .Values.evidence.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . 
}}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.evidence.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.evidence.internalPort }} + name: http-evidence + - containerPort: {{ .Values.evidence.externalPort }} + name: grpc-evidence + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.evidence.resources | indent 10 }} + {{- if .Values.evidence.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.evidence.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.evidence.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.evidence.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.metadata.enabled }} + - name: {{ .Values.metadata.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "metadata") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/metadata/bin/jf-metadata start + {{- with .Values.metadata.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.metadata.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.metadata.resources | indent 10 }} + {{- if .Values.metadata.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.metadata.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.metadata.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.metadata.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.event.enabled }} + - name: {{ .Values.event.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/event/bin/jf-event start + {{- with .Values.event.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.event.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.event.resources | indent 10 }} + {{- if .Values.event.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.event.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.event.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.event.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.jfconnect.enabled }} + - name: {{ .Values.jfconnect.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/jfconnect/bin/jf-connect start + {{- with .Values.jfconnect.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.jfconnect.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.jfconnect.resources | indent 10 }} + {{- if .Values.jfconnect.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.jfconnect.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.jfconnect.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.jfconnect.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.federation.enabled .Values.federation.embedded }} + - name: {{ .Values.federation.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/java/bin/java {{ .Values.federation.extraJavaOpts }} -jar /opt/jfrog/artifactory/app/rtfs/lib/jf-rtfs + {{- with .Values.federation.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + # TODO - Password,Url,Username - should be derived from env variable +{{- with .Values.federation.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.federation.resources | indent 10 }} + {{- if .Values.federation.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.federation.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.federation.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.federation.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.observability.enabled }} + - name: {{ .Values.observability.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "observability") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/observability/bin/jf-observability start + {{- with .Values.observability.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.observability.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.observability.resources | indent 10 }} + {{- if .Values.observability.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.observability.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.observability.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.observability.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.access.enabled (not (.Values.access.runOnArtifactoryTomcat | default false)) }} + - name: {{ .Values.access.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.access.resources }} + resources: +{{ toYaml .Values.access.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + {{- if .Values.access.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.access.preStartCommand . }}; + {{- end }} + exec /opt/jfrog/artifactory/app/access/bin/entrypoint-access.sh + {{- with .Values.access.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.access.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.access.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.access.startupProbe.config . | indent 10 }} + {{- end }} + {{- if semverCompare " + set -e; + if [ -d /artifactory_extra_conf ] && [ -d /artifactory_bootstrap ]; then + echo "Copying bootstrap config from /artifactory_extra_conf to /artifactory_bootstrap"; + cp -Lrfv /artifactory_extra_conf/ /artifactory_bootstrap/; + fi; + {{- if .Values.artifactory.configMapName }} + echo "Copying bootstrap configs"; + cp -Lrf /bootstrap/* /artifactory_bootstrap/; + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + echo "Copying plugins"; + cp -Lrf /tmp/plugin/*/* /artifactory_bootstrap/plugins; + {{- end }} + {{- range .Values.artifactory.copyOnEveryStartup }} + {{- $targetPath := printf "%s/%s" $.Values.artifactory.persistence.mountPath .target }} + {{- $baseDirectory := regexFind ".*/" $targetPath }} + mkdir -p {{ $baseDirectory }}; + cp -Lrf {{ .source }} {{ $.Values.artifactory.persistence.mountPath }}/{{ .target }}; + {{- end }} + {{- with .Values.artifactory.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl . $ }}; + {{- end }} + {{- with .Values.artifactory.primary.preStartCommand }} + echo "Running primary specific custom preStartCommand command"; + {{ tpl . $ }}; + {{- end }} + exec /entrypoint-artifactory.sh + {{- with .Values.artifactory.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if .Values.aws.license.enabled }} + - name: IS_AWS_LICENSE + value: "true" + - name: AWS_REGION + value: {{ .Values.aws.region | quote }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: AWS_WEB_IDENTITY_REFRESH_TOKEN_FILE + value: "/var/run/secrets/product-license/license_token" + - name: AWS_ROLE_ARN + valueFrom: + secretKeyRef: + name: {{ .Values.aws.licenseConfigSecretName }} + key: iam_role + {{- end }} + {{- end }} + {{- if .Values.splitServicesToContainers }} + - name : JF_ROUTER_ENABLED + value: "true" + - name : JF_ROUTER_SERVICE_ENABLED + value: "false" + - name : JF_EVENT_ENABLED + value: "false" + - name : JF_METADATA_ENABLED + value: "false" + - name : JF_FRONTEND_ENABLED + value: "false" + - name: JF_FEDERATION_ENABLED + value: "false" + - name : JF_OBSERVABILITY_ENABLED + value: "false" + - name : JF_JFCONNECT_SERVICE_ENABLED + value: "false" + - name : JF_EVIDENCE_ENABLED + value: "false" + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + - name : JF_ACCESS_ENABLED + value: "false" + {{- end}} + {{- end }} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . 
}} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.artifactory.internalPort }} + name: http + - containerPort: {{ .Values.artifactory.internalArtifactoryPort }} + name: http-internal + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + {{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} + - containerPort: {{ .Values.artifactory.primary.javaOpts.jmx.port }} + name: tcp-jmx + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.artifactory.ssh.internalPort }} + name: tcp-ssh + {{- end }} + + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + - name: bootstrap-plugins + mountPath: "/artifactory_bootstrap/plugins/" + {{- range .Values.artifactory.userPluginSecrets }} + - name: {{ tpl . $ }} + mountPath: "/tmp/plugin/{{ tpl . 
$ }}" + {{- end }} + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystoreXml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence googleStorage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + + ######################## Artifactory configMapName ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + + - name: installer-info + mountPath: "/artifactory_bootstrap/info/installer-info.json" + subPath: installer-info.json + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.artifactory.primary.resources | indent 10 }} + {{- if .Values.artifactory.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.artifactory.startupProbe.config . 
| indent 10 }} + {{- end }} + {{- if and (not .Values.splitServicesToContainers) (semverCompare "=1.18.0-0" .Capabilities.KubeVersion.GitVersion) }} + ingressClassName: {{ .Values.ingress.className }} + {{- end }} + {{- if .Values.ingress.defaultBackend.enabled }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + defaultBackend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- else }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- end }} + rules: +{{- if .Values.ingress.hosts }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- if not $.Values.ingress.disableRouterBypass }} + - path: {{ $.Values.ingress.artifactoryPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $artifactoryServicePort }} + {{- end }} + {{- if and $.Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" $.Values.artifactory.image.repository)) }} + - path: {{ $.Values.ingress.rtfsPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $.Values.federation.internalPort }} + {{- end }} + {{- end }} + {{- else }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + - path: {{ $.Values.ingress.artifactoryPath }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $artifactoryServicePort }} + {{- end }} + {{- end }} +{{- end -}} + {{- with .Values.ingress.additionalRules }} +{{ tpl . $ | indent 2 }} + {{- end }} + {{- if .Values.ingress.tls }} + tls: +{{ toYaml .Values.ingress.tls | indent 4 }} + {{- end -}} + +{{- if .Values.customIngress }} +--- +{{ .Values.customIngress | toYaml | trimSuffix "\n" }} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/logger-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/logger-configmap.yaml new file mode 100644 index 000000000..d3597905d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/logger-configmap.yaml @@ -0,0 +1,63 @@ +{{- if or .Values.artifactory.loggers .Values.artifactory.catalinaLoggers }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-logger + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + tail-log.sh: | + #!/bin/sh + + LOG_DIR=$1 + LOG_NAME=$2 + PID= + + # Wait for log dir to appear + while [ ! -d ${LOG_DIR} ]; do + sleep 1 + done + + cd ${LOG_DIR} + + LOG_PREFIX=$(echo ${LOG_NAME} | sed 's/.log$//g') + + # Find the log to tail + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + + # Wait for the log file + while [ -z "${LOG_FILE}" ]; do + sleep 1 + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + done + + echo "Log file ${LOG_FILE} is ready!" + + # Get inode number + INODE_ID=$(ls -i ${LOG_FILE}) + + # echo "Tailing ${LOG_FILE}" + tail -F ${LOG_FILE} & + PID=$! 
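+    # Note: ${PID} holds the background tail's process id; the loop below re-checks the log
+    # file's inode once a second and restarts the tail whenever log rotation swaps the file out.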
+ + # Loop forever to see if a new log was created + while true; do + # Check inode number + NEW_INODE_ID=$(ls -i ${LOG_FILE}) + + # If inode number changed, this means log was rotated and need to start a new tail + if [ "${INODE_ID}" != "${NEW_INODE_ID}" ]; then + kill -9 ${PID} 2>/dev/null + INODE_ID="${NEW_INODE_ID}" + + # Start a new tail + tail -F ${LOG_FILE} & + PID=$! + fi + sleep 1 + done + +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-artifactory-conf.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-artifactory-conf.yaml new file mode 100644 index 000000000..97ae5f27b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-artifactory-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customArtifactoryConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-nginx-artifactory-conf + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + artifactory.conf: | +{{- if .Values.nginx.artifactoryConf }} +{{ tpl .Values.nginx.artifactoryConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-artifactory-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-certificate-secret.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-certificate-secret.yaml new file mode 100644 index 000000000..29c77ad5a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-certificate-secret.yaml @@ -0,0 +1,14 @@ +{{- if and (not .Values.nginx.tlsSecretName) .Values.nginx.enabled .Values.nginx.https.enabled }} +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: {{ template "artifactory-ha.fullname" . }}-nginx-certificate + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ ( include "artifactory-ha.gen-certs" . ) | indent 2 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-conf.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-conf.yaml new file mode 100644 index 000000000..4f0d65f25 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-nginx-conf + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + nginx.conf: | +{{- if .Values.nginx.mainConf }} +{{ tpl .Values.nginx.mainConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-main-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-deployment.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-deployment.yaml new file mode 100644 index 000000000..d43689b8c --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-deployment.yaml @@ -0,0 +1,221 @@ +{{- if .Values.nginx.enabled -}} +{{- $serviceName := include "artifactory-ha.fullname" . 
-}} +{{- $servicePort := .Values.artifactory.externalPort -}} +apiVersion: apps/v1 +kind: {{ .Values.nginx.kind }} +metadata: + name: {{ template "artifactory-ha.nginx.fullname" . }} + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 4 }} +{{- end }} +{{- with .Values.nginx.deployment.annotations }} + annotations: +{{ toYaml . | indent 4 }} +{{- end }} +spec: +{{- if ne .Values.nginx.kind "DaemonSet" }} + replicas: {{ .Values.nginx.replicaCount }} +{{- end }} + selector: + matchLabels: + app: {{ template "artifactory-ha.name" . }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} + template: + metadata: + annotations: + checksum/nginx-conf: {{ include (print $.Template.BasePath "/nginx-conf.yaml") . | sha256sum }} + checksum/nginx-artifactory-conf: {{ include (print $.Template.BasePath "/nginx-artifactory-conf.yaml") . | sha256sum }} + {{- range $key, $value := .Values.nginx.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + component: {{ .Values.nginx.name }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 8 }} +{{- end }} + spec: + {{- if .Values.nginx.podSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + serviceAccountName: {{ template "artifactory-ha.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ .Values.nginx.terminationGracePeriodSeconds }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory-ha.imagePullSecrets" . | indent 6 }} + {{- end }} + {{- if .Values.nginx.priorityClassName }} + priorityClassName: {{ .Values.nginx.priorityClassName | quote }} + {{- end }} + {{- if .Values.nginx.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.nginx.topologySpreadConstraints) . | indent 8 }} + {{- end }} + initContainers: + {{- if .Values.nginx.customInitContainers }} +{{ tpl (include "artifactory.nginx.customInitContainers" .) . | indent 6 }} + {{- end }} + - name: "setup" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.imagePullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/sh' + - '-c' + - > + rm -rfv {{ .Values.nginx.persistence.mountPath }}/lost+found; + mkdir -p {{ .Values.nginx.persistence.mountPath }}/logs; + resources: + {{- toYaml .Values.initContainers.resources | nindent 10 }} + volumeMounts: + - mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + name: nginx-volume + containers: + - name: {{ .Values.nginx.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "nginx") }} + imagePullPolicy: {{ .Values.nginx.image.pullPolicy }} + {{- if .Values.nginx.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.nginx.customCommand }} + command: +{{- tpl (include "nginx.command" .) . 
| indent 10 }} + {{- end }} + ports: +{{ if .Values.nginx.customPorts }} +{{ toYaml .Values.nginx.customPorts | indent 8 }} +{{ end }} + # DEPRECATION NOTE: The following is to maintain support for values pre 1.3.1 and + # will be cleaned up in a later version + {{- if .Values.nginx.http }} + {{- if .Values.nginx.http.enabled }} + - containerPort: {{ .Values.nginx.http.internalPort }} + name: http + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttp }} + name: http-internal + {{- end }} + {{- if .Values.nginx.https }} + {{- if .Values.nginx.https.enabled }} + - containerPort: {{ .Values.nginx.https.internalPort }} + name: https + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttps }} + name: https-internal + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.nginx.ssh.internalPort }} + name: tcp-ssh + {{- end }} + {{- with .Values.nginx.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + volumeMounts: + - name: nginx-conf + mountPath: /etc/nginx/nginx.conf + subPath: nginx.conf + - name: nginx-artifactory-conf + mountPath: "{{ .Values.nginx.persistence.mountPath }}/conf.d/" + - name: nginx-volume + mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + mountPath: "{{ .Values.nginx.persistence.mountPath }}/ssl" + {{- end }} + {{- if .Values.nginx.customVolumeMounts }} +{{ tpl (include "artifactory.nginx.customVolumeMounts" .) . | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.nginx.resources | indent 10 }} + {{- if .Values.nginx.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.nginx.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.nginx.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.nginx.readinessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.nginx.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.nginx.livenessProbe.config . | indent 10 }} + {{- end }} + {{- $mountPath := .Values.nginx.persistence.mountPath }} + {{- range .Values.nginx.loggers }} + - name: {{ . | replace "_" "-" | replace "." "-" }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list $ "initContainers") }} + imagePullPolicy: {{ $.Values.initContainers.image.pullPolicy }} + command: + - tail + args: + - '-F' + - '{{ $mountPath }}/logs/{{ . }}' + volumeMounts: + - name: nginx-volume + mountPath: {{ $mountPath }} + resources: +{{ toYaml $.Values.nginx.loggersResources | indent 10 }} + {{- end }} + {{- if .Values.nginx.customSidecarContainers }} +{{ tpl (include "artifactory.nginx.customSidecarContainers" .) . | indent 6 }} + {{- end }} + {{- if or .Values.nginx.nodeSelector .Values.global.nodeSelector }} +{{ tpl (include "nginx.nodeSelector" .) . | indent 6 }} + {{- end }} + {{- with .Values.nginx.affinity }} + affinity: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.nginx.tolerations }} + tolerations: +{{ toYaml . | indent 8 }} + {{- end }} + volumes: + {{- if .Values.nginx.customVolumes }} +{{ tpl (include "artifactory.nginx.customVolumes" .) . | indent 6 }} + {{- end }} + - name: nginx-conf + configMap: + {{- if .Values.nginx.customConfigMap }} + name: {{ .Values.nginx.customConfigMap }} + {{- else }} + name: {{ template "artifactory-ha.fullname" . 
}}-nginx-conf + {{- end }} + - name: nginx-artifactory-conf + configMap: + {{- if .Values.nginx.customArtifactoryConfigMap }} + name: {{ .Values.nginx.customArtifactoryConfigMap }} + {{- else }} + name: {{ template "artifactory-ha.fullname" . }}-nginx-artifactory-conf + {{- end }} + + - name: nginx-volume + {{- if .Values.nginx.persistence.enabled }} + persistentVolumeClaim: + claimName: {{ .Values.nginx.persistence.existingClaim | default (include "artifactory-ha.nginx.fullname" .) }} + {{- else }} + emptyDir: {} + {{- end }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + secret: + {{- if .Values.nginx.tlsSecretName }} + secretName: {{ .Values.nginx.tlsSecretName }} + {{- else }} + secretName: {{ template "artifactory-ha.fullname" . }}-nginx-certificate + {{- end }} + {{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-pdb.yaml b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-pdb.yaml new file mode 100644 index 000000000..0aed99368 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.15/templates/nginx-pdb.yaml @@ -0,0 +1,23 @@ +{{- if .Values.nginx.enabled -}} +{{- if semverCompare " --from-literal=license_token=${TOKEN} --from-literal=iam_role=${ROLE_ARN}` +aws: + license: + enabled: false + licenseConfigSecretName: + region: us-east-1 +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +## @param containerSecurityContext.enabled Enabled containers' Security Context +## @param containerSecurityContext.runAsNonRoot Set container's Security Context runAsNonRoot +## @param containerSecurityContext.privileged Set container's Security Context privileged +## @param containerSecurityContext.allowPrivilegeEscalation Set container's Security Context allowPrivilegeEscalation +## @param containerSecurityContext.capabilities.drop List of capabilities to be dropped +## @param containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile +## +containerSecurityContext: + enabled: true + runAsNonRoot: true + privileged: false + allowPrivilegeEscalation: false + seccompProfile: + type: RuntimeDefault + capabilities: + drop: + - ALL +## The following router settings are to configure only when splitServicesToContainers set to true +router: + name: router + image: + registry: releases-docker.jfrog.io + repository: jfrog/router + tag: 7.118.3 + pullPolicy: IfNotPresent + serviceRegistry: + ## Service registry (Access) TLS verification skipped if enabled + insecure: false + internalPort: 8082 + externalPort: 8082 + tlsEnabled: false + ## Extra environment variables that can be used to tune router to your needs. 
+ ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + + ## Add lifecycle hooks for router container + lifecycle: + ## From Artifactory versions 7.52.x, Wait for Artifactory to complete any open uploads or downloads before terminating + preStop: + exec: + command: ["sh", "-c", "while [[ $(curl --fail --silent --connect-timeout 2 http://localhost:8081/artifactory/api/v1/system/liveness) =~ OK ]]; do echo Artifactory is still alive; sleep 2; done"] + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + ## Add custom volumesMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: /scripts/script.sh + # subPath: script.sh + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "artifactory-ha.scheme" . }}://localhost:{{ .Values.router.internalPort }}/router/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " prepended. + unifiedSecretPrependReleaseName: true + image: + registry: releases-docker.jfrog.io + repository: jfrog/artifactory-pro + # tag: + pullPolicy: IfNotPresent + ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ + schedulerName: + ## Create a priority class for the Artifactory pods or use an existing one + ## NOTE - Maximum allowed value of a user defined priority is 1000000000 + priorityClass: + create: false + value: 1000000000 + ## Override default name + # name: + ## Use an existing priority class + # existingPriorityClass: + ## Delete the db.properties file in ARTIFACTORY_HOME/etc/db.properties + deleteDBPropertiesOnStartup: true + database: + maxOpenConnections: 80 + tomcat: + maintenanceConnector: + port: 8091 + connector: + maxThreads: 200 + sendReasonPhrase: false + extraConfig: 'acceptCount="400"' + ## certificates added to this secret will be copied to $JFROG_HOME/artifactory/var/etc/security/keys/trusted directory + customCertificates: + enabled: false + # certificateSecretName: + ## Support for metrics is only available for Artifactory 7.7.x (appVersions) and above. + ## To enable set `.Values.artifactory.metrics.enabled` to `true` + ## Note: Depricated `openMetrics` as part of 7.87.x and renamed to `metrics` + ## Refer - https://www.jfrog.com/confluence/display/JFROG/Open+Metrics + metrics: + enabled: false + ## Settings for pushing metrics to Insight - enable filebeat to true + filebeat: + enabled: false + log: + enabled: false + ## Log level for filebeat. Possible values: debug, info, warning, or error. + level: "info" + ## Elasticsearch details for filebeat to connect + elasticsearch: + url: "Elasticsearch url where JFrog Insight is installed For example, http://:8082" + username: "" + password: "" + ## Support for Cold Artifact Storage + ## set 'coldStorage.enabled' to 'true' only for Artifactory instance that you are designating as the Cold instance + ## Refer - https://jfrog.com/help/r/jfrog-platform-administration-documentation/setting-up-cold-artifact-storage + coldStorage: + enabled: false + ## This directory is intended for use with NFS eventual configuration for HA + ## When enabling this section, The system.yaml will include haDataDir section. + ## The location of Artifactory Data directory and Artifactory Filestore will be modified accordingly and will be shared among all nodes. 
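+  ## For reference, enabling it might look like the commented sketch below; the path shown simply
+  ## mirrors the artifactory.persistence.nfs.dataDir default further down and is an example, not a
+  ## recommendation:
+  ##   haDataDir:
+  ##     enabled: true
+  ##     path: /var/opt/jfrog/artifactory-ha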
+ ## It's recommended to leave haDataDir disabled, and the default BinarystoreXml will set the Filestore location as configured in artifactory.persistence.nfs.dataDir. + haDataDir: + enabled: false + path: + haBackupDir: + enabled: false + path: + ## Files to copy to ARTIFACTORY_HOME/ on each Artifactory startup + ## Note : From 107.46.x chart versions, copyOnEveryStartup is not needed for binarystore.xml, it is always copied via initContainers + copyOnEveryStartup: + ## Absolute path + # - source: /artifactory_bootstrap/artifactory.cluster.license + ## Relative to ARTIFACTORY_HOME/ + # target: etc/artifactory/ + + ## Sidecar containers for tailing Artifactory logs + loggers: [] + # - access-audit.log + # - access-request.log + # - access-security-audit.log + # - access-service.log + # - artifactory-access.log + # - artifactory-event.log + # - artifactory-import-export.log + # - artifactory-request.log + # - artifactory-service.log + # - frontend-request.log + # - frontend-service.log + # - metadata-request.log + # - metadata-service.log + # - router-request.log + # - router-service.log + # - router-traefik.log + # - derby.log + + ## Loggers containers resources + loggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Sidecar containers for tailing Tomcat (catalina) logs + catalinaLoggers: [] + # - tomcat-catalina.log + # - tomcat-localhost.log + + ## Tomcat (catalina) loggers resources + catalinaLoggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Migration support from 6.x to 7.x. + migration: + enabled: false + timeoutSeconds: 3600 + ## Extra pre-start command in migration Init Container to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + ## Add custom init containers execution before predefined init containers + customInitContainersBegin: | + # - name: "custom-setup" + # image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'touch {{ .Values.artifactory.persistence.mountPath }}/example-custom-setup' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: volume + ## Add custom init containers + ## Add custom init containers execution after predefined init containers + customInitContainers: | + # - name: "custom-systemyaml-setup" + # image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'curl -o {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml https:///systemyaml' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: volume + ## Add custom sidecar containers + ## - The provided example uses a custom volume (customVolumes) + ## - The provided example shows running container as root (id 0) + customSidecarContainers: | + # - name: "sidecar-list-etc" + # image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'sh /scripts/script.sh' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: volume + # - mountPath: "/scripts/script.sh" + # name: custom-script + # subPath: script.sh + # resources: + # requests: + # memory: "32Mi" + # cpu: "50m" + # limits: + # memory: "128Mi" + # cpu: "100m" + ## Add custom volumes + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret'. + customVolumes: | + # - name: custom-script + # configMap: + # name: custom-script + ## Add custom volumesMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: "/scripts/script.sh" + # subPath: script.sh + # - name: posthook-start + # mountPath: "/scripts/posthoook-start.sh" + # subPath: posthoook-start.sh + # - name: prehook-start + # mountPath: "/scripts/prehook-start.sh" + # subPath: prehook-start.sh + ## Add custom persistent volume mounts - Available to the entire namespace + customPersistentVolumeClaim: {} + # name: + # mountPath: + # accessModes: + # - "-" + # size: + # storageClassName: + + ## Artifactory HA requires a unique master key. Each Artifactory node must have the same master key! + ## You can generate one with the command: "openssl rand -hex 32" + ## Pass it to helm with '--set artifactory.masterKey=${MASTER_KEY}' + ## Alternatively, you can use a pre-existing secret with a key called master-key by specifying masterKeySecretName + ## IMPORTANT: You should NOT use the example masterKey for a production deployment! + ## IMPORTANT: This is a mandatory for fresh Install of 7.x (App version) + # masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + # masterKeySecretName: + + ## Join Key to connect to other services to Artifactory. + ## IMPORTANT: Setting this value overrides the existing joinKey + ## IMPORTANT: You should NOT use the example joinKey for a production deployment! 
+ # joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + ## Alternatively, you can use a pre-existing secret with a key called join-key by specifying joinKeySecretName + # joinKeySecretName: + + ## Registration Token for JFConnect + # jfConnectToken: + ## Alternatively, you can use a pre-existing secret with a key called jfconnect-token by specifying jfConnectTokenSecretName + # jfConnectTokenSecretName: + + ## Add custom secrets - secret per file + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret' common to all secrets + customSecrets: + # - name: custom-secret + # key: custom-secret.yaml + # data: > + # custom_secret_config: + # parameter1: value1 + # parameter2: value2 + # - name: custom-secret2 + # key: custom-secret2.config + # data: | + # here the custom secret 2 config + + ## If false, all service console logs will not redirect to a common console.log + consoleLog: false + ## admin allows to set the password for the default admin user. + ## See: https://www.jfrog.com/confluence/display/JFROG/Users+and+Groups#UsersandGroups-RecreatingtheDefaultAdminUserrecreate + admin: + ip: "127.0.0.1" + username: "admin" + password: + secret: + dataKey: + ## Artifactory license. + license: + ## licenseKey is the license key in plain text. Use either this or the license.secret setting + licenseKey: + ## If artifactory.license.secret is passed, it will be mounted as + ## ARTIFACTORY_HOME/etc/artifactory.cluster.license and loaded at run time. + secret: + ## The dataKey should be the name of the secret data key created. + dataKey: + ## Create configMap with artifactory.config.import.xml and security.import.xml and pass name of configMap in following parameter + configMapName: + ## Add any list of configmaps to Artifactory + configMaps: | + # posthook-start.sh: |- + # echo "This is a post start script" + # posthook-end.sh: |- + # echo "This is a post end script" + ## List of secrets for Artifactory user plugins. + ## One Secret per plugin's files. + userPluginSecrets: + # - archive-old-artifacts + # - build-cleanup + # - webhook + # - '{{ template "my-chart.fullname" . }}' + + ## Extra pre-start command to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + + ## Add lifecycle hooks for artifactory container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## Extra environment variables that can be used to tune Artifactory to your needs. 
+ ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: SERVER_XML_ARTIFACTORY_PORT + # value: "8081" + # - name: SERVER_XML_ARTIFACTORY_MAX_THREADS + # value: "200" + # - name: SERVER_XML_ACCESS_MAX_THREADS + # value: "50" + # - name: SERVER_XML_ARTIFACTORY_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_ACCESS_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_EXTRA_CONNECTOR + # value: "" + # - name: DB_POOL_MAX_ACTIVE + # value: "100" + # - name: DB_POOL_MAX_IDLE + # value: "10" + # - name: MY_SECRET_ENV_VAR + # valueFrom: + # secretKeyRef: + # name: my-secret-name + # key: my-secret-key + + ## System YAML entries now reside under files/system.yaml. + ## You can provide the specific values that you want to add or override under 'artifactory.extraSystemYaml'. + ## For example: + ## extraSystemYaml: + ## shared: + ## node: + ## id: my-instance + ## The entries provided under 'artifactory.extraSystemYaml' are merged with files/system.yaml to create the final system.yaml. + ## If you have already provided system.yaml under, 'artifactory.systemYaml', the values in that entry take precedence over files/system.yaml + ## You can modify specific entries with your own value under `artifactory.extraSystemYaml`, The values under extraSystemYaml overrides the values under 'artifactory.systemYaml' and files/system.yaml + extraSystemYaml: {} + ## systemYaml is intentionally commented and the previous content has been moved under files/system.yaml. + ## You have to add the all entries of the system.yaml file here, and it overrides the values in files/system.yaml. + # systemYaml: + + ## IMPORTANT: If overriding artifactory.internalPort: + ## DO NOT use port lower than 1024 as Artifactory runs as non-root and cannot bind to ports lower than 1024! + externalPort: 8082 + internalPort: 8082 + externalArtifactoryPort: 8081 + internalArtifactoryPort: 8081 + terminationGracePeriodSeconds: 30 + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## @param artifactory.podSecurityContext.enabled Enable security context + ## @param artifactory.podSecurityContext.runAsNonRoot Set pod's Security Context runAsNonRoot + ## @param artifactory.podSecurityContext.runAsUser User ID for the pod + ## @param artifactory.podSecurityContext.runASGroup Group ID for the pod + ## @param artifactory.podSecurityContext.fsGroup Group ID for the pod + ## + podSecurityContext: + enabled: true + runAsNonRoot: true + runAsUser: 1030 + runAsGroup: 1030 + fsGroup: 1030 + # fsGroupChangePolicy: "Always" + # seLinuxOptions: {} + ## The following settings are to configure the frequency of the liveness and startup probes. + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.artifactory.tomcat.maintenanceConnector.port }}/artifactory/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + + ## Set the persistence storage type. 
This will apply the matching binarystore.xml to Artifactory config + ## Supported types are: + ## file-system (default) + ## nfs + ## google-storage + ## google-storage-v2 + ## google-storage-v2-direct (Recommended for GCS - Google Cloud Storage) + ## aws-s3-v3 + ## s3-storage-v3-direct (Recommended for AWS S3) + ## s3-storage-v3-archive + ## azure-blob + ## azure-blob-storage-direct + ## azure-blob-storage-v2-direct (Recommended for Azure Blob Storage) + type: file-system + ## Use binarystoreXml to provide a custom binarystore.xml + ## This is intentionally commented and below previous content of binarystoreXml is moved under files/binarystore.xml + ## binarystoreXml: + + ## For artifactory.persistence.type file-system + fileSystem: + ## Need to have the following set + existingSharedClaim: + enabled: false + numberOfExistingClaims: 1 + ## Should be a child directory of {{ .Values.artifactory.persistence.mountPath }} + dataDir: "{{ .Values.artifactory.persistence.mountPath }}/artifactory-data" + backupDir: "/var/opt/jfrog/artifactory-backup" + ## You may also use existing shared claims for the data and backup storage. This allows storage (NAS for example) to be used for Data and Backup dirs which are safe to share across multiple artifactory nodes. + ## You may specify numberOfExistingClaims to indicate how many of these existing shared claims to mount. (Default = 1) + ## Create PVCs with ReadWriteMany that match the naming convetions: + ## {{ template "artifactory-ha.fullname" . }}-data-pvc- + ## {{ template "artifactory-ha.fullname" . }}-backup-pvc + ## Example (using numberOfExistingClaims: 2) + ## myexample-data-pvc-0 + ## myexample-data-pvc-1 + ## myexample-backup-pvc + ## Note: While you need two PVC fronting two PVs, multiple PVs can be attached to the same storage in many cases allowing you to share an underlying drive. + ## For artifactory.persistence.type nfs + ## If using NFS as the shared storage, you must have a running NFS server that is accessible by your Kubernetes + ## cluster nodes. + ## Need to have the following set + nfs: + ## Must pass actual IP of NFS server with '--set For artifactory.persistence.nfs.ip=${NFS_IP}' + ip: + haDataMount: "/data" + haBackupMount: "/backup" + dataDir: "/var/opt/jfrog/artifactory-ha" + backupDir: "/var/opt/jfrog/artifactory-backup" + capacity: 200Gi + mountOptions: [] + ## For artifactory.persistence.type google-storage, google-storage-v2, google-storage-v2-direct + googleStorage: + ## When using GCP buckets as your binary store (Available with enterprise license only) + gcpServiceAccount: + enabled: false + ## Use either an existing secret prepared in advance or put the config (replace the content) in the values + ## ref: https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/README.md#google-storage + # customSecretName: + # config: | + # { + # "type": "service_account", + # "project_id": "", + # "private_key_id": "?????", + # "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n", + # "client_email": "???@j.iam.gserviceaccount.com", + # "client_id": "???????", + # "auth_uri": "https://accounts.google.com/o/oauth2/auth", + # "token_uri": "https://oauth2.googleapis.com/token", + # "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + # "client_x509_cert_url": "https://www.googleapis.com/robot/v1....." 
+ # } + endpoint: commondatastorage.googleapis.com + httpsOnly: false + ## Set a unique bucket name + bucketName: "artifactory-ha-gcp" + ## GCP Bucket Authentication with Identity and Credential is deprecated. + ## identity: + ## credential: + path: "artifactory-ha/filestore" + bucketExists: false + useInstanceCredentials: false + enableSignedUrlRedirect: false + ## For artifactory.persistence.type aws-s3-v3, s3-storage-v3-direct, s3-storage-v3-archive + awsS3V3: + testConnection: false + identity: + credential: + region: + bucketName: artifactory-aws + path: artifactory/filestore + endpoint: + port: + useHttp: + maxConnections: 50 + connectionTimeout: + socketTimeout: + kmsServerSideEncryptionKeyId: + kmsKeyRegion: + kmsCryptoMode: + useInstanceCredentials: true + usePresigning: false + signatureExpirySeconds: 300 + signedUrlExpirySeconds: 30 + cloudFrontDomainName: + cloudFrontKeyPairId: + cloudFrontPrivateKey: + enableSignedUrlRedirect: false + enablePathStyleAccess: false + multiPartLimit: + multipartElementSize: + ## For artifactory.persistence.type azure-blob, azure-blob-storage-direct, azure-blob-storage-v2-direct + azureBlob: + accountName: + accountKey: + endpoint: + containerName: + multiPartLimit: 100000000 + multipartElementSize: 50000000 + testConnection: false + service: + name: artifactory + type: ClusterIP + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Artifactory service (useful if setting service.type=LoadBalancer) + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}' + loadBalancerSourceRanges: [] + annotations: {} + ## Which nodes in the cluster should be in the external load balancer pool (have external traffic routed to them) + ## Supported pool values + ## members + ## all + pool: members + ## If the type is NodePort you can set a fixed port + # nodePort: 32082 + statefulset: + annotations: {} + ssh: + enabled: false + internalPort: 1339 + externalPort: 1339 + annotations: {} + ## Spread Artifactory pods evenly across your nodes or some other topology + ## Note this applies to both the primary and replicas + topologySpreadConstraints: [] + # - maxSkew: 1 + # topologyKey: kubernetes.io/hostname + # whenUnsatisfiable: DoNotSchedule + # labelSelector: + # matchLabels: + # app: '{{ template "artifactory-ha.name" . }}' + # role: '{{ template "artifactory-ha.name" . }}' + # release: "{{ .Release.Name }}" + + ## Type specific configurations. + ## There is a difference between the primary and the member nodes. + ## Customising their resources and java parameters is done here. 
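+  ## A hedged sizing sketch only (the numbers below are assumptions, not recommendations): resources
+  ## and javaOpts should be set together, with the heap (xms/xmx) kept well inside the memory limit:
+  ##   primary:
+  ##     resources:
+  ##       requests:
+  ##         memory: "4Gi"
+  ##         cpu: "2"
+  ##       limits:
+  ##         memory: "6Gi"
+  ##         cpu: "4"
+  ##     javaOpts:
+  ##       xms: "2g"
+  ##       xmx: "4g"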
+ primary: + name: artifactory-ha-primary + ## preStartCommand specific to the primary node, to be run after artifactory.preStartCommand + # preStartCommand: + labels: {} + persistence: + ## Set existingClaim to true or false + ## If true, you must prepare a PVC with the name e.g `volume-myrelease-artifactory-ha-primary-0` + existingClaim: false + replicaCount: 3 + # minAvailable: 1 + + updateStrategy: + type: RollingUpdate + ## Resources for the primary node + resources: {} + # requests: + # memory: "1Gi" + # cpu: "500m" + # limits: + # memory: "2Gi" + # cpu: "1" + ## The following Java options are passed to the java process running Artifactory primary node. + ## You should set them according to the resources set above + javaOpts: + # xms: "1g" + # xmx: "2g" + # corePoolSize: 24 + jmx: + enabled: false + port: 9010 + host: + ssl: false + # When authenticate is true, accessFile and passwordFile are required + authenticate: false + accessFile: + passwordFile: + # other: "" + nodeSelector: {} + tolerations: [] + affinity: {} + ## Only used if "affinity" is empty + podAntiAffinity: + ## Valid values are "soft" or "hard"; any other value indicates no anti-affinity + type: "soft" + topologyKey: "kubernetes.io/hostname" + node: + name: artifactory-ha-member + ## preStartCommand specific to the member node, to be run after artifactory.preStartCommand + # preStartCommand: + labels: {} + persistence: + ## Set existingClaim to true or false + ## If true, you must prepare a PVC with the name e.g `volume-myrelease-artifactory-ha-member-0` + existingClaim: false + replicaCount: 0 + updateStrategy: + type: RollingUpdate + minAvailable: 1 + ## Resources for the member nodes + resources: {} + # requests: + # memory: "1Gi" + # cpu: "500m" + # limits: + # memory: "2Gi" + # cpu: "1" + ## The following Java options are passed to the java process running Artifactory member nodes. + ## You should set them according to the resources set above + javaOpts: + # xms: "1g" + # xmx: "2g" + # corePoolSize: 24 + jmx: + enabled: false + port: 9010 + host: + ssl: false + # When authenticate is true, accessFile and passwordFile are required + authenticate: false + accessFile: + passwordFile: + # other: "" + nodeSelector: {} + ## Wait for Artifactory primary + waitForPrimaryStartup: + enabled: true + ## Setting time will override the built in test and will just wait the set time + time: + tolerations: [] + ## Complete specification of the "affinity" of the member nodes; if this is non-empty, + ## "podAntiAffinity" values are not used. + affinity: {} + ## Only used if "affinity" is empty + podAntiAffinity: + ## Valid values are "soft" or "hard"; any other value indicates no anti-affinity + type: "soft" + topologyKey: "kubernetes.io/hostname" +frontend: + name: frontend + enabled: true + internalPort: 8070 + ## Extra environment variables that can be used to tune frontend to your needs. 
+ ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + ## Session settings + session: + ## Time in minutes after which the frontend token will need to be refreshed + timeoutMinutes: '30' + ## Add lifecycle hooks for frontend container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## The following settings are to configure the frequency of the liveness and startup probes when splitServicesToContainers set to true + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.frontend.internalPort }}/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " --cert=ca.crt --key=ca.private.key` + # customCertificatesSecretName: + + ## When resetAccessCAKeys is true, Access will regenerate the CA certificate and matching private key + # resetAccessCAKeys: false + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 + sendReasonPhrase: false + extraConfig: 'acceptCount="100"' + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:8040/access/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " /var/opt/jfrog/nginx/message"] + # preStop: + # exec: + # command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"] + + ## Sidecar containers for tailing Nginx logs + loggers: [] + # - access.log + # - error.log + + ## Loggers containers resources + loggersResources: {} + # requests: + # memory: "64Mi" + # cpu: "25m" + # limits: + # memory: "128Mi" + # cpu: "50m" + + ## Logs options + logs: + stderr: false + stdout: false + level: warn + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: [] + # - containerPort: 8066 + # name: docker + + ## The nginx main conf was moved to files/nginx-main-conf.yaml. This key is commented out to keep support for the old configuration + # mainConf: | + + ## The nginx artifactory conf was moved to files/nginx-artifactory-conf.yaml. This key is commented out to keep support for the old configuration + # artifactoryConf: | + customInitContainers: "" + customSidecarContainers: "" + customVolumes: "" + customVolumeMounts: "" + customCommand: + ## allows overwriting the command for the nginx container. 
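+  ## A hedged sketch only - the exact structure expected by the chart's nginx.command helper is an
+  ## assumption here, and the extra '-e' flag requires nginx >= 1.19.5; verify before use:
+  ##   customCommand: |
+  ##     - nginx
+  ##     - '-e'
+  ##     - '/dev/stderr'
+  ##     - '-g'
+  ##     - 'daemon off;'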
+ ## defaults to [ 'nginx', '-g', 'daemon off;' ] + + service: + ## For minikube, set this to NodePort, elsewhere use LoadBalancer + type: LoadBalancer + ssloffload: false + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Nginx LoadBalancer service + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}' + loadBalancerSourceRanges: [] + ## Provide static ip address + loadBalancerIP: + ## There are two available options: "Cluster" (default) and "Local". + externalTrafficPolicy: Cluster + labels: {} + # label-key: label-value + ## If the type is NodePort you can set a fixed port + # nodePort: 32082 + ## A list of custom ports to be exposed on nginx service. Follows the conventional Kubernetes yaml syntax for service ports. + customPorts: [] + # - port: 8066 + # targetPort: 8066 + # protocol: TCP + # name: docker + + annotations: {} + ## Renamed nginx internalPort 80,443 to 8080,8443 to support openshift + http: + enabled: true + externalPort: 80 + internalPort: 8080 + https: + enabled: true + externalPort: 443 + internalPort: 8443 + ## DEPRECATED: The following will be replaced by L1065-L1076 in a future release + # externalPortHttp: 80 + # internalPortHttp: 8080 + # externalPortHttps: 443 + # internalPortHttps: 8443 + + ssh: + internalPort: 1339 + externalPort: 1339 + ## The following settings are to configure the frequency of the liveness and readiness probes. + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "nginx.scheme" . }}://localhost:{{ include "nginx.port" . }}/ + initialDelaySeconds: {{ if semverCompare " + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + resources: {} + # requests: + # memory: "250Mi" + # cpu: "100m" + # limits: + # memory: "250Mi" + # cpu: "500m" + + nodeSelector: {} + tolerations: [] + affinity: {} +## Filebeat Sidecar container +## The provided filebeat configuration is for Artifactory logs. It assumes you have a logstash installed and configured properly. 
+filebeat: + enabled: false + name: artifactory-filebeat + image: + repository: "docker.elastic.co/beats/filebeat" + version: 7.16.2 + logstashUrl: "logstash:5044" + terminationGracePeriod: 10 + livenessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + filebeat test output + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "100Mi" + # cpu: "100m" + + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output: + logstash: + hosts: ["{{ .Values.filebeat.logstashUrl }}"] +## Allows to add additional kubernetes resources +## Use --- as a separator between multiple resources +## For an example, refer - https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-ha-values.yaml +additionalResources: "" +## Adding entries to a Pod's /etc/hosts file +## For an example, refer - https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases +hostAliases: [] +# - ip: "127.0.0.1" +# hostnames: +# - "foo.local" +# - "bar.local" +# - ip: "10.1.2.3" +# hostnames: +# - "foo.remote" +# - "bar.remote" + +## Toggling this feature is seamless and requires helm upgrade +## will enable all microservices to run in different containers in a single pod (by default it is true) +splitServicesToContainers: true +## Specify common probes parameters +probes: + timeoutSeconds: 5 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/CHANGELOG.md b/charts/jfrog/artifactory-jcr/107.90.15/CHANGELOG.md new file mode 100644 index 000000000..e078ddd06 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/CHANGELOG.md @@ -0,0 +1,206 @@ +# JFrog Container Registry Chart Changelog +All changes to this chart will be documented in this file. 
+ +## [107.90.15] - Feb 20, 2024 +* Updated `artifactory.installerInfo` content + +## [107.80.0] - Feb 1, 2024 +* Updated README.md to create a namespace using `--create-namespace` as part of helm install + +## [107.74.0] - Nov 23, 2023 +* **IMPORTANT** +* Added min kubeVersion ">= 1.19.0-0" in chart.yaml + +## [107.66.0] - Jul 20, 2023 +* Disabled federation services when splitServicesToContainers=true + +## [107.45.0] - Aug 25, 2022 +* Included event service as mandatory and remove the flag from values.yaml + +## [107.41.0] - Jul 22, 2022 +* Bumping chart version to align with app version +* Disabled jfconnect and event services when splitServicesToContainers=true + +## [107.19.4] - May 27, 2021 +* Bumping chart version to align with app version +* Update dependency Artifactory chart version to 107.19.4 + +## [4.0.0] - Apr 22, 2021 +* **Breaking change:** +* Increased default postgresql persistence size to `200Gi` +* Update postgresql tag version to `13.2.0-debian-10-r55` +* Update postgresql chart version to `10.3.18` in chart.yaml - [10.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#to-1000) +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true +* **IMPORTANT** +* This chart is only helm v3 compatible. +* Update dependency Artifactory chart version to 12.0.0 (Artifactory 7.18.3) + +## [3.8.0] - Apr 5, 2021 +* **IMPORTANT** +* Added `charts.jfrog.io` as default JFrog Helm repository +* Update dependency Artifactory chart version to 11.13.0 (Artifactory 7.17.5) + +## [3.7.0] - Mar 31, 2021 +* Update dependency Artifactory chart version to 11.12.2 (Artifactory 7.17.4) + +## [3.6.0] - Mar 15, 2021 +* Update dependency Artifactory chart version to 11.10.0 (Artifactory 7.16.3) + +## [3.5.1] - Mar 03, 2021 +* Update dependency Artifactory chart version to 11.9.3 (Artifactory 7.15.4) + +## [3.5.0] - Feb 18, 2021 +* Update dependency Artifactory chart version to 11.9.0 (Artifactory 7.15.3) + +## [3.4.1] - Feb 08, 2021 +* Update dependency Artifactory chart version to 11.8.0 (Artifactory 7.12.8) + +## [3.4.0] - Jan 4, 2020 +* Update dependency Artifactory chart version to 11.7.4 (Artifactory 7.12.5) + +## [3.3.1] - Dec 1, 2020 +* Update dependency Artifactory chart version to 11.5.4 (Artifactory 7.11.5) + +## [3.3.0] - Nov 23, 2020 +* Update dependency Artifactory chart version to 11.5.2 (Artifactory 7.11.2) + +## [3.2.2] - Nov 9, 2020 +* Update dependency Artifactory chart version to 11.4.5 (Artifactory 7.10.6) + +## [3.2.1] - Nov 2, 2020 +* Update dependency Artifactory chart version to 11.4.4 (Artifactory 7.10.5) + +## [3.2.0] - Oct 19, 2020 +* Update dependency Artifactory chart version to 11.4.0 (Artifactory 7.10.2) + +## [3.1.0] - Sep 30, 2020 +* Update dependency Artifactory chart version to 11.1.0 (Artifactory 7.9.0) + +## [3.0.2] - Sep 23, 2020 +* Updates readme + +## [3.0.1] - Sep 15, 2020 +* Update dependency Artifactory chart version to 11.0.1 (Artifactory 7.7.8) + +## [3.0.0] - Sep 14, 2020 +* **Breaking change:** Added `image.registry` and changed `image.version` to `image.tag` for docker images +* Update dependency Artifactory chart version to 11.0.0 (Artifactory 7.7.3) + +## [2.5.1] - Jul 29, 2020 +* Update dependency Artifactory chart 
version to 10.0.12 (Artifactory 7.6.3) + +## [2.5.0] - Jul 10, 2020 +* Update dependency Artifactory chart version to 10.0.3 (Artifactory 7.6.2) +* **IMPORTANT** +* Added ChartCenter Helm repository in README + +## [2.4.0] - Jun 30, 2020 +* Update dependency Artifactory chart version to 9.6.0 (Artifactory 7.6.1) + +## [2.3.1] - Jun 12, 2020 +* Update dependency Artifactory chart version to 9.5.2 (Artifactory 7.5.7) + +## [2.3.0] - Jun 1, 2020 +* Update dependency Artifactory chart version to 9.5.0 (Artifactory 7.5.5) + +## [2.2.5] - May 27, 2020 +* Update dependency Artifactory chart version to 9.4.9 (Artifactory 7.4.3) + +## [2.2.4] - May 20, 2020 +* Update dependency Artifactory chart version to 9.4.6 (Artifactory 7.4.3) + +## [2.2.3] - May 07, 2020 +* Update dependency Artifactory chart version to 9.4.5 (Artifactory 7.4.3) +* Add `installerInfo` string format + +## [2.2.2] - Apr 28, 2020 +* Update dependency Artifactory chart version to 9.4.4 (Artifactory 7.4.3) + +## [2.2.1] - Apr 27, 2020 +* Update dependency Artifactory chart version to 9.4.3 (Artifactory 7.4.1) + +## [2.2.0] - Apr 14, 2020 +* Update dependency Artifactory chart version to 9.4.0 (Artifactory 7.4.1) + +## [2.2.0] - Apr 14, 2020 +* Update dependency Artifactory chart version to 9.4.0 (Artifactory 7.4.1) + +## [2.1.6] - Apr 13, 2020 +* Update dependency Artifactory chart version to 9.3.1 (Artifactory 7.3.2) + +## [2.1.5] - Apr 8, 2020 +* Update dependency Artifactory chart version to 9.2.8 (Artifactory 7.3.2) + +## [2.1.4] - Mar 30, 2020 +* Update dependency Artifactory chart version to 9.2.3 (Artifactory 7.3.2) + +## [2.1.3] - Mar 30, 2020 +* Update dependency Artifactory chart version to 9.2.1 (Artifactory 7.3.2) + +## [2.1.2] - Mar 26, 2020 +* Update dependency Artifactory chart version to 9.1.5 (Artifactory 7.3.2) + +## [2.1.1] - Mar 25, 2020 +* Update dependency Artifactory chart version to 9.1.4 (Artifactory 7.3.2) + +## [2.1.0] - Mar 23, 2020 +* Update dependency Artifactory chart version to 9.1.3 (Artifactory 7.3.2) + +## [2.0.13] - Mar 19, 2020 +* Update dependency Artifactory chart version to 9.0.28 (Artifactory 7.2.1) + +## [2.0.12] - Mar 17, 2020 +* Update dependency Artifactory chart version to 9.0.26 (Artifactory 7.2.1) + +## [2.0.11] - Mar 11, 2020 +* Unified charts public release + +## [2.0.10] - Mar 8, 2020 +* Update dependency Artifactory chart version to 9.0.20 (Artifactory 7.2.1) + +## [2.0.9] - Feb 26, 2020 +* Update dependency Artifactory chart version to 9.0.15 (Artifactory 7.2.1) + +## [2.0.0] - Feb 12, 2020 +* Update dependency Artifactory chart version to 9.0.0 (Artifactory 7.0.0) + +## [1.1.0] - Jan 19, 2020 +* Update dependency Artifactory chart version to 8.4.1 (Artifactory 6.17.0) + +## [1.1.1] - Feb 3, 2020 +* Update dependency Artifactory chart version to 8.4.4 + +## [1.1.0] - Jan 19, 2020 +* Update dependency Artifactory chart version to 8.4.1 (Artifactory 6.17.0) + +## [1.0.1] - Dec 31, 2019 +* Update dependency Artifactory chart version to 8.3.5 + +## [1.0.0] - Dec 23, 2019 +* Update dependency Artifactory chart version to 8.3.3 + +## [0.2.1] - Dec 12, 2019 +* Update dependency Artifactory chart version to 8.3.1 + +## [0.2.0] - Dec 1, 2019 +* Updated Artifactory version to 6.16.0 + +## [0.1.5] - Nov 28, 2019 +* Update dependency Artifactory chart version to 8.2.6 + +## [0.1.4] - Nov 20, 2019 +* Update Readme + +## [0.1.3] - Nov 20, 2019 +* Fix JCR logo url +* Update dependency to Artifactory 8.2.2 chart + +## [0.1.2] - Nov 20, 2019 +* Update JCR logo + +## [0.1.1] - Nov 20, 2019 +* Add 
`appVersion` to Chart.yaml + +## [0.1.0] - Nov 20, 2019 +* Initial release of the JFrog Container Registry helm chart diff --git a/charts/jfrog/artifactory-jcr/107.90.15/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.15/Chart.yaml new file mode 100644 index 000000000..594791ce3 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/Chart.yaml @@ -0,0 +1,30 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: JFrog Container Registry + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-jcr +apiVersion: v2 +appVersion: 7.90.15 +dependencies: +- name: artifactory + repository: file://charts/artifactory + version: 107.90.15 +description: JFrog Container Registry +home: https://jfrog.com/container-registry/ +icon: file://assets/icons/artifactory-jcr.png +keywords: +- artifactory +- jfrog +- container +- registry +- devops +- jfrog-container-registry +kubeVersion: '>= 1.19.0-0' +maintainers: +- email: helm@jfrog.com + name: Chart Maintainers at JFrog +name: artifactory-jcr +sources: +- https://github.com/jfrog/charts +type: application +version: 107.90.15 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/LICENSE b/charts/jfrog/artifactory-jcr/107.90.15/LICENSE new file mode 100644 index 000000000..8dada3eda --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/charts/jfrog/artifactory-jcr/107.90.15/README.md b/charts/jfrog/artifactory-jcr/107.90.15/README.md new file mode 100644 index 000000000..c0051e61d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/README.md @@ -0,0 +1,125 @@ +# JFrog Container Registry Helm Chart + +JFrog Container Registry is a free Artifactory edition with Docker and Helm repositories support. + +**Heads up: Our Helm Chart docs are moving to our main documentation site. 
For Artifactory installers, see [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory).** + +## Prerequisites Details + +* Kubernetes 1.19+ + +## Chart Details +This chart will do the following: + +* Deploy JFrog Container Registry +* Deploy an optional Nginx server +* Deploy an optional PostgreSQL Database +* Optionally expose Artifactory with Ingress [Ingress documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) + +## Installing the Chart + +### Add JFrog Helm repository + +Before installing JFrog helm charts, you need to add the [JFrog helm repository](https://charts.jfrog.io) to your helm client. + +```bash +helm repo add jfrog https://charts.jfrog.io +helm repo update +``` + +### Install Chart +To install the chart with the release name `jfrog-container-registry`: +```bash +helm upgrade --install jfrog-container-registry --set artifactory.postgresql.postgresqlPassword= jfrog/artifactory-jcr --namespace artifactory-jcr --create-namespace +``` + +### Accessing JFrog Container Registry +**NOTE:** If using artifactory or nginx service type `LoadBalancer`, it might take a few minutes for JFrog Container Registry's public IP to become available. + +### Updating JFrog Container Registry +Once you have a new chart version, you can upgrade your deployment with +```bash +helm upgrade jfrog-container-registry jfrog/artifactory-jcr --namespace artifactory-jcr --create-namespace +``` + +### Special Upgrade Notes +#### Artifactory upgrade from 6.x to 7.x (App Version) +Arifactory 6.x to 7.x upgrade requires a one time migration process. This is done automatically on pod startup if needed. +It's possible to configure the migration timeout with the following configuration in extreme cases. The provided default should be more than enough for completion of the migration. +```yaml +artifactory: + artifactory: + # Migration support from 6.x to 7.x + migration: + enabled: true + timeoutSeconds: 3600 +``` +* Note: If you are upgrading from 1.x to 3.x and above chart versions, please delete the existing statefulset of postgresql before upgrading the chart due to breaking changes in postgresql subchart. +```bash +kubectl delete statefulsets -postgresql +``` +* For more details about artifactory chart upgrades refer [here](https://github.com/jfrog/charts/blob/master/stable/artifactory/UPGRADE_NOTES.md) + +### Deleting JFrog Container Registry + +```bash +helm delete jfrog-container-registry --namespace artifactory-jcr +``` + +This will delete your JFrog Container Registry deployment.
+**NOTE:** Persistent volumes may be left behind. You should explicitly delete them with
+```bash
+kubectl delete pvc ...
+kubectl delete pv ...
+```
+
+## Database
+The JFrog Container Registry chart comes with PostgreSQL deployed by default.
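+
+If you prefer an external PostgreSQL instead of the bundled one, the values below are a minimal sketch (assuming the standard Artifactory chart keys `postgresql.enabled` and `database.*`; the host name and credentials are placeholders, so verify the exact keys against the Artifactory chart values before use):
+
+```yaml
+artifactory:
+  # Disable the bundled PostgreSQL sub-chart
+  postgresql:
+    enabled: false
+  # Point Artifactory at an existing PostgreSQL instance (placeholder values)
+  database:
+    type: postgresql
+    driver: org.postgresql.Driver
+    url: jdbc:postgresql://my-postgres.example.com:5432/artifactory
+    user: artifactory
+    password: changeme
+```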
+For details on the PostgreSQL configuration or customising the database, Look at the options described in the [Artifactory helm chart](https://github.com/jfrog/charts/tree/master/stable/artifactory). + +### Ingress and TLS +To get Helm to create an ingress object with a hostname, add these two lines to your Helm command: +```bash +helm upgrade --install jfrog-container-registry \ + --set artifactory.nginx.enabled=false \ + --set artifactory.ingress.enabled=true \ + --set artifactory.ingress.hosts[0]="artifactory.company.com" \ + --set artifactory.artifactory.service.type=NodePort \ + jfrog/artifactory-jcr --namespace artifactory-jcr --create-namespace +``` + +To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace: + +```bash +kubectl create secret tls artifactory-tls --cert=path/to/tls.cert --key=path/to/tls.key +``` + +Include the secret's name, along with the desired hostnames, in the Artifactory Ingress TLS section of your custom `values.yaml` file: + +```yaml +artifactory: + artifactory: + ingress: + ## If true, Artifactory Ingress will be created + ## + enabled: true + + ## Artifactory Ingress hostnames + ## Must be provided if Ingress is enabled + ## + hosts: + - jfrog-container-registry.domain.com + annotations: + kubernetes.io/tls-acme: "true" + ## Artifactory Ingress TLS configuration + ## Secrets must be manually created in the namespace + ## + tls: + - secretName: artifactory-tls + hosts: + - jfrog-container-registry.domain.com +``` + +## Useful links +https://www.jfrog.com +https://www.jfrog.com/confluence/ diff --git a/charts/jfrog/artifactory-jcr/107.90.15/app-readme.md b/charts/jfrog/artifactory-jcr/107.90.15/app-readme.md new file mode 100644 index 000000000..9d9b7d85f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/app-readme.md @@ -0,0 +1,18 @@ +# JFrog Container Registry Helm Chart + +Universal Repository Manager supporting all major packaging formats, build tools and CI servers. + +## Chart Details +This chart will do the following: + +* Deploy JFrog Container Registry +* Deploy an optional Nginx server +* Optionally expose Artifactory with Ingress [Ingress documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) + + +## Useful links +Blog: [Herd Trust Into Your Rancher Labs Multi-Cloud Strategy with Artifactory](https://jfrog.com/blog/herd-trust-into-your-rancher-labs-multi-cloud-strategy-with-artifactory/) + +## Activate Your Artifactory Instance +Don't have a license? Please send an email to [rancher-jfrog-licenses@jfrog.com](mailto:rancher-jfrog-licenses@jfrog.com) to get it. + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/.helmignore b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/.helmignore new file mode 100644 index 000000000..b6e97f07f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/.helmignore @@ -0,0 +1,24 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +OWNERS + +tests/ \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/CHANGELOG.md b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/CHANGELOG.md new file mode 100644 index 000000000..aeba9bc88 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/CHANGELOG.md @@ -0,0 +1,1365 @@ +# JFrog Artifactory Chart Changelog +All changes to this chart will be documented in this file. + +## [107.90.15] - July 18, 2024 +* Fixed #adding colon in image registry which breaks deployment [GH-1892](https://github.com/jfrog/charts/pull/1892) +* Added new `nginx.hosts` to use Nginx server_name directive instead of `ingress.hosts` +* Added a deprecation notice of ingress.hosts when `ngnix.enabled` is true +* Added new evidence service +* Corrected database connection values based on sizing +* **IMPORTANT** +* Separate access from artifactory tomcat to run on its own dedicated tomcat + * With this change access will be running in its own dedicated container + * This will give the ability to control resources and java options specific to access + Can be done by passing the following, + `access.javaOpts.other` + `access.resources` + `access.extraEnvironmentVariables` +* Updating the example link for downloading the DB driver +* Added Binary Provider recommendations + +## [107.89.0] - June 7, 2024 +* Fix the indentation of the commented-out sections in the values.yaml file +* Fixed sizing values by removing `JF_SHARED_NODE_HAENABLED` in xsmall/small configurations + +## [107.88.0] - May 29, 2024 +* **IMPORTANT** +* Refactored `nginx.artifactoryConf` and `nginx.mainConf` configuration (moved to files/nginx-artifactory-conf.yaml and files/nginx-main-conf.yaml instead of keys in values.yaml) + +## [107.87.0] - May 29, 2024 +* Renamed `.Values.artifactory.openMetrics` to `.Values.artifactory.metrics` + +## [107.85.0] - May 29, 2024 +* Changed `migration.enabled` to false by default. 
For 6.x to 7.x migration, this flag needs to be set to `true`
+
+## [107.84.0] - May 29, 2024
+* Added image section for `initContainers` instead of `initContainerImage`
+* Renamed `router.image.imagePullPolicy` to `router.image.pullPolicy`
+* Removed image section for `loggers`
+* Added support for `global.versions.initContainers` to override `initContainers.image.tag`
+* Fixed an issue with extraSystemYaml merge
+* **IMPORTANT**
+* Renamed `artifactory.setSecurityContext` to `artifactory.podSecurityContext`
+* Renamed `artifactory.uid` to `artifactory.podSecurityContext.runAsUser`
+* Renamed `artifactory.gid` to `artifactory.podSecurityContext.runAsGroup` and `artifactory.podSecurityContext.fsGroup`
+* Renamed `artifactory.fsGroupChangePolicy` to `artifactory.podSecurityContext.fsGroupChangePolicy`
+* Renamed `artifactory.seLinuxOptions` to `artifactory.podSecurityContext.seLinuxOptions`
+* Added flag `allowNonPostgresql` (defaults to false)
+* Update postgresql tag version to `15.6.0-debian-12-r5`
+* Added a check if `initContainerImage` exists
+* Fixed an issue to generate unified secret to support artifactory fullname [GH-1882](https://github.com/jfrog/charts/issues/1882)
+* Fixed an issue template render on loggers [GH-1883](https://github.com/jfrog/charts/issues/1883)
+* Fixed resource constraints for "setup" initContainer of nginx deployment [GH-962](https://github.com/jfrog/charts/issues/962)
+* Added `.Values.artifactory.unifiedSecretPrependReleaseName` for unified secret to prepend release name
+* Fixed maxCacheSize and cacheProviderDir mix up under azure-blob-storage-v2-direct template in binarystore.xml
+
+## [107.82.0] - Mar 04, 2024
+* Added `disableRouterBypass` flag as experimental feature, to disable the artifactoryPath /artifactory/ and route all traffic through the Router.
+* Removed Replicator service
+
+## [107.81.0] - Feb 20, 2024
+* **IMPORTANT**
+* Refactored systemYaml configuration (moved to files/system.yaml instead of key in values.yaml)
+* Added ability to provide `extraSystemYaml` configuration in values.yaml which will merge with the existing system yaml when `systemYamlOverride` is not given [GH-1848](https://github.com/jfrog/charts/pull/1848)
+* Added option to modify the new cache configs, maxFileSizeLimit and skipDuringUpload
+* Added IPV4/IPV6 Dualstack flag support for Artifactory and nginx service
+* Added `singleStackIPv6Cluster` flag, which manages the Nginx configuration to enable listening on IPv6 and proxying.
+* Fixed broken link for creating additional kubernetes resources. Refer [here](https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-values.yaml)
+* Refactored installerInfo configuration (moved to files/installer-info.json instead of key in values.yaml)
+
+## [107.80.0] - Feb 20, 2024
+* Updated README.md to create a namespace using `--create-namespace` as part of helm install
+
+## [107.79.0] - Feb 20, 2024
+* **IMPORTANT**
+* Added `unifiedSecretInstallation` flag which enables single unified secret holding all internal (chart) secrets to `true` by default
+* Added support for azure-blob-storage-v2-direct config
+* Added option to set Nginx to write access_log to container STDOUT
+* **Important change:**
+* Update postgresql tag version to `15.2.0-debian-11-r23`
+* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**!
+* If this is an upgrade and you are using the default bundled PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x/13.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true
+
+## [107.77.0] - April 22, 2024
+* Removed integration service
+* Added recommended postgresql sizing configurations under sizing directory
+* Updated artifactory-federation (probes, port, embedded mode)
+* Fixed - Removed duplicate keys of the sizing yaml file
+* Fixing broken nginx port [GH-1860](https://github.com/jfrog/charts/issues/1860)
+* Added nginx.customCommand to use custom commands for the nginx container
+
+## [107.76.0] - Dec 13, 2023
+* Added connectionTimeout and socketTimeout parameters under AWSS3 binarystore section
+* Reduced nginx startupProbe initialDelaySeconds
+
+## [107.74.0] - Nov 30, 2023
+* Added recommended sizing configurations under sizing directory, please refer [here](README.md/#apply-sizing-configurations-to-the-chart)
+* **IMPORTANT**
+* Added min kubeVersion ">= 1.19.0-0" in chart.yaml
+
+## [107.70.0] - Nov 30, 2023
+* Fixed - StatefulSet pod annotations changed from range to toYaml [GH-1828](https://github.com/jfrog/charts/issues/1828)
+* Fixed - Invalid format for awsS3V3 `multiPartLimit,multipartElementSize` in binarystore.xml.
+* Fixed - SecurityContext with runAsGroup in artifactory [GH-1838](https://github.com/jfrog/charts/issues/1838)
+* Added support for custom labels in the Nginx pods [GH-1836](https://github.com/jfrog/charts/pull/1836)
+* Added podSecurityContext and containerSecurityContext for nginx
+* Added support for nginx on openshift, set `podSecurityContext` and `containerSecurityContext` to false
+* Renamed nginx internalPort 80,443 to 8080,8443 to support openshift
+
+## [107.69.0] - Sep 18, 2023
+* Adjust rtfs context
+* Fixed - Metadata service does not respect customVolumeMounts for DB CAs [GH-1815](https://github.com/jfrog/charts/issues/1815)
+
+## [107.68.8] - Sep 18, 2023
+* Reverted - Enabled `unifiedSecretInstallation` by default [GH-1819](https://github.com/jfrog/charts/issues/1819)
+* Removed openshift condition check from NOTES.txt
+
+## [107.68.7] - Aug 28, 2023
+* Enabled `unifiedSecretInstallation` by default
+
+## [107.67.0] - Aug 28, 2023
+* Add 'extraJavaOpts' and 'port' values to federation service
+
+## [107.66.0] - Aug 28, 2023
+* Added federation service container in artifactory
+* Add rtfs service to ingress in artifactory
+
+## [107.64.0] - Aug 28, 2023
+* Added support to configure event.webhooks within generated system.yaml
+* Fixed an issue to generate ssl certificate should support artifactory fullname
+* Added binarystore.xml template to persistence storage type `nfs`. The default Filestore location configured according to artifactory.persistence.nfs.dataDir.
+* Added 'multiPartLimit' and 'multipartElementSize' parameters to awsS3V3 binary providers.
+* Increased default Artifactory Tomcat acceptCount config to 400
+* Fixed Illegal Strict-Transport-Security header in nginx config
+
+## [107.63.0] - Aug 28, 2023
+* Added support for Openshift by adding the securityContext in container level.
+* **IMPORTANT**
+* Disable securityContext in container and pod level to deploy postgres on openshift.
+* Fixed support for fsGroup in non-openshift environment and runAsGroup in openshift environment.
+* Fixed - Helm Template Error when using artifactory.loggers [GH-1791](https://github.com/jfrog/charts/issues/1791)
+* Removed the nginx disable condition for openshift
+* Fixed jfconnect disabling as micro-service on splitcontainers [GH-1806](https://github.com/jfrog/charts/issues/1806)
+
+## [107.62.0] - Jun 5, 2023
+* Upgraded to autoscaling/v2
+* Added support for 'port' and 'useHttp' parameters for s3-storage-v3 binary provider [GH-1767](https://github.com/jfrog/charts/issues/1767)
+
+## [107.61.0] - May 31, 2023
+* Added new binary provider `google-storage-v2-direct`
+* Added missing parameter 'enableSignedUrlRedirect' to 'googleStorage'
+
+## [107.60.0] - May 31, 2023
+* Enabled `splitServicesToContainers` to true by default
+* Updated the recommended values for small, medium and large installations to support the 'splitServicesToContainers'
+
+## [107.59.0] - May 31, 2023
+* Fixed reference of `terminationGracePeriodSeconds`
+* Added Support for Cold Artifact Storage as part of the systemYaml configuration (disabled by default)
+* Added new binary provider `s3-storage-v3-archive`
+* Fixed jfconnect disabling as micro-service on non-splitcontainers
+* Fixed wrong cache-fs provider ID of cluster-s3-storage-v3 in the binarystore.xml [GH-1772](https://github.com/jfrog/charts/issues/1772)
+
+## [107.58.0] - Mar 23, 2023
+* Updated postgresql multi-arch tag version to `13.10.0-debian-11-r14`
+* Removed obsolete `remove-lost-found` initContainer
+* Added env JF_SHARED_NODE_HAENABLED under frontend when running in the container split mode
+
+## [107.57.0] - Mar 02, 2023
+* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1793`
+
+## [107.55.0] - Jan 31, 2023
+* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1760`
+* Adding a custom preStop to Artifactory router for allowing graceful termination to complete
+
+## [107.53.0] - Jan 20, 2023
+* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049`
+
+## [107.50.0] - Jan 20, 2023
+* Updated postgresql tag version to `13.9.0-debian-11-11`
+* Fixed an issue for capabilities check of ingress
+* Updated jfrogUrl text path in migrate.sh file
+* Added a note that from 107.46.x chart versions, `copyOnEveryStartup` is not needed for binarystore.xml, it is always copied via initContainers.
For more info, refer [GH-1723](https://github.com/jfrog/charts/issues/1723)
+
+## [107.49.0] - Jan 16, 2023
+* Added support for setting `seLinuxOptions` in `securityContext` [GH-1699](https://github.com/jfrog/charts/pull/1699)
+* Added option to enable/disable proxy_request_buffering and proxy_buffering_off [GH-1686](https://github.com/jfrog/charts/pull/1686)
+* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049`
+
+## [107.48.0] - Oct 27, 2022
+* Updated router version to `7.51.0`
+
+## [107.47.0] - Sep 29, 2022
+* Updated initContainerImage to `ubi8/ubi-minimal:8.6-941`
+* Added support for annotations for artifactory statefulset and nginx deployment [GH-1665](https://github.com/jfrog/charts/pull/1665)
+* Updated router version to `7.49.0`
+
+## [107.46.0] - Sep 14, 2022
+* **IMPORTANT**
+* Added support for lifecycle hooks for all containers, changed `artifactory.postStartCommand` to `.Values.artifactory.lifecycle.postStart.exec.command`
+* Updated initContainerImage to `ubi8/ubi-minimal:8.6-902`
+* Update nginx configuration to allow websocket requests when using pipelines
+* Fixed an issue to allow artifactory to make direct API calls to store instead via jfconnect service when `splitServicesToContainers=true`
+* Refactor binarystore.xml configuration (moved to `files/binarystore.xml` instead of key in values.yaml)
+* Added new binary providers `cluster-s3-storage-v3`, `s3-storage-v3-direct`, `azure-blob-storage-direct`, `google-storage-v2`
+* Deprecated (removed) `aws-s3` binary provider [JetS3t library](https://www.jfrog.com/confluence/display/JFROG/Configuring+the+Filestore#ConfiguringtheFilestore-BinaryProvider)
+* Deprecated (removed) `google-storage` binary provider and force persistence storage type `google-storage` to work with `google-storage-v2` only
+* Copy binarystore.xml in init Container to fix existing persistence on file system in clear text
+* Removed obsolete `.Values.artifactory.binarystore.enabled` key
+* Removed `newProbes.enabled`, default to new probes
+* Added nginx.customCommand using inotifyd to reload nginx's config upon ssl secret or configmap changes [GH-1640](https://github.com/jfrog/charts/pull/1640)
+
+## [107.43.0] - Aug 25, 2022
+* Added flag `artifactory.replicator.ingress.enabled` to enable/disable ingress for replicator
+* Updated initContainerImage to `ubi8/ubi-minimal:8.6-854`
+* Updated router version to `7.45.0`
+* Added flag `artifactory.schedulerName` to set for the pods the value of schedulerName field [GH-1606](https://github.com/jfrog/charts/issues/1606)
+* Enabled TLS based on access or router in values.yaml
+
+## [107.42.0] - Aug 25, 2022
+* Enabled database creds secret to use from unified secret
+* Updated router version to `7.42.0`
+* Fix duplicate volumes for userPluginSecrets [GH-1650](https://github.com/jfrog/charts/issues/1650)
+* Added support to truncate (> 63 chars) for unifiedCustomSecretVolumeName
+
+## [107.41.0] - June 27, 2022
+* Added support for nginx.terminationGracePeriodSeconds [GH-1645](https://github.com/jfrog/charts/issues/1645)
+* Use an alternate command for `find` to copy custom certificates
+* Added support for circle of trust using `circleOfTrustCertificatesSecret` secret name [GH-1623](https://github.com/jfrog/charts/pull/1623)
+
+## [107.40.0] - June 16, 2022
+* Added support for PodDisruptionBudget [GH-1618](https://github.com/jfrog/charts/issues/1618)
+* From artifactory 7.38.x, joinKey can be retrieved from Admin > User Management > Settings in UI
+* Allow templating for pod
annotations [GH-1634](https://github.com/jfrog/charts/pull/1634) +* Fixed `customPersistentPodVolumeClaim` name to `customPersistentVolumeClaim` +* Added flags to control enable/disable infra services in splitServicesToContainers + +## [107.39.0] - May 31, 2022 +* Fix default `artifactory.async.corePoolSize` [GH-1612](https://github.com/jfrog/charts/issues/1612) +* Added support of nginx annotations +* Reduce startupProbe `initialDelaySeconds` +* Align all liveness and readiness probes failureThreshold to `5` seconds +* Added new flag `unifiedSecretInstallation` to enables single unified secret holding all the artifactory secrets +* Updated router version to `7.38.0` +* Add support for NFS config with directories `haBackupDir` and `haDataDir` +* Fixed - disable jfconnect on oss/jcr/cpp flavours [GH-1630](https://github.com/jfrog/charts/issues/1630) + +## [107.38.0] - May 04, 2022 +* Added support for `global.nodeSelector` to artifactory and nginx pods +* Updated router version to `7.36.1` +* Added support for custom global probes timeout +* Updated frontend container command +* Added topologySpreadConstraints to artifactory and nginx, and add lifecycle hooks to nginx [GH-1596](https://github.com/jfrog/charts/pull/1596) +* Added support of extraEnvironmentVariables for all infra services containers +* Enabled the consumption (jfconnect) flag by default +* Fix jfconnect disabling on non-splitcontainers + +## [107.37.0] - Mar 08, 2022 +* Added support for customPorts in nginx deployment +* Bugfix - Wrong proxy_pass configurations for /artifactory/ in the default artifactory.conf +* Added signedUrlExpirySeconds option to artifactory.persistence.type aws-S3-V3 +* Updated router version to `7.35.0` +* Added useInstanceCredentials,enableSignedUrlRedirect option to google-storage-v2 +* Changed dependency charts repo to `charts.jfrog.io` + +## [107.36.0] - Mar 03, 2022 +* Remove pdn tracker which starts replicator service +* Added silent option for curl probes +* Added readiness health check for the artifactory container for k8s version < 1.20 +* Fix property file migration issue to system.yaml 6.x to 7.x + +## [107.35.0] - Feb 08, 2022 +* Updated router version to `7.32.1` + +## [107.33.0] - Jan 11, 2022 +* Add more user friendly support for anti-affinity +* Pod anti-affinity is now enabled by default (soft rule) +* Readme fixes +* Added support for setting `fsGroupChangePolicy` +* Added nginx customInitContainers, customVolumes, customSidecarContainers [GH-1565](https://github.com/jfrog/charts/pull/1565) +* Updated router version to `7.30.0` + +## [107.32.0] - Dec 22, 2021 +* Updated logger image to `jfrog/ubi-minimal:8.5-204` +* Added default `8091` as `artifactory.tomcat.maintenanceConnector.port` for probes check +* Refactored probes to replace httpGet probes with basic exec + curl +* Refactored `database-creds` secret to create only when database values are passed +* Added new endpoints for probes `/artifactory/api/v1/system/liveness` and `/artifactory/api/v1/system/readiness` +* Enabled `newProbes:true` by default to use these endpoints +* Fix filebeat sidecar spool file permissions +* Updated filebeat sidecar container to `7.16.2` + +## [107.31.0] - Dec 17, 2021 +* Added support for HorizontalPodAutoscaler apiVersion `autoscaling/v2beta2` +* Remove integration service feature flag to make it mandatory service +* Update postgresql tag version to `13.4.0-debian-10-r39` +* Fixed `artifactory.resources` indentation in `migration-artifactory` init container 
[GH-1562](https://github.com/jfrog/charts/issues/1562)
+* Refactored `router.requiredServiceTypes` to support platform chart
+
+## [107.30.0] - Nov 30, 2021
+* Fixed incorrect permission for filebeat.yaml
+* Updated healthcheck (liveness/readiness) api for integration service
+* Disable readiness health check for the artifactory container when running in the container split mode
+* Ability to start replicator on enabling pdn tracker
+
+## [107.29.0] - Nov 26, 2021
+* Added integration service container in artifactory
+* Add support for Ingress Class Name in Ingress Spec [GH-1516](https://github.com/jfrog/charts/pull/1516)
+* Fixed chart values to use curl instead of wget [GH-1529](https://github.com/jfrog/charts/issues/1529)
+* Updated nginx config to allow websockets when pipelines is enabled
+* Moved router.topology.local.requireqservicetypes from system.yaml to router as environment variable
+* Added jfconnect in system.yaml
+* Updated artifactory container’s health probes to use artifactory api on rt-split
+* Updated initContainerImage to `jfrog/ubi-minimal:8.5-204`
+* Updated router version to `7.28.2`
+* Set Jfconnect enabled to `false` in the artifactory container when running in the container split mode
+
+## [107.28.0] - Nov 11, 2021
+* Added default values cpu and memory in initContainers
+* Updated router version to `7.26.0`
+* Updated (`rbac.create` and `serviceAccount.create` to false by default) for least privileges
+* Fixed incorrect data type for `Values.router.serviceRegistry.insecure` in default values.yaml [GH-1514](https://github.com/jfrog/charts/pull/1514/files)
+* **IMPORTANT**
+* Changed init-container images from `alpine` to `ubi8/ubi-minimal`
+* Added support for AWS License Manager using `.Values.aws.licenseConfigSecretName`
+
+## [107.27.0] - Oct 6, 2021
+* **Breaking change**
+* Aligned probe structure (moved probes variables under config block)
+* Added support for new probes (set to false by default)
+* Bugfix - Invalid format for `multiPartLimit,multipartElementSize,maxCacheSize` in binarystore.xml [GH-1466](https://github.com/jfrog/charts/issues/1466)
+* Added missioncontrol container in artifactory
+* Dropped NET_RAW capability for the containers
+* Added resources to migration-artifactory init container
+* Added resources to all rt split containers
+* Updated router version to `7.25.1`
+* Added support for Ingress networking.k8s.io/v1/Ingress for k8s >=1.22 [GH-1487](https://github.com/jfrog/charts/pull/1487)
+* Added min kubeVersion ">= 1.14.0-0" in chart.yaml
+* Update alpine tag version to `3.14.2`
+* Update busybox tag version to `1.33.1`
+* Artifactory chart support for cluster license
+
+## [107.26.0] - Aug 23, 2021
+* Added Observability container (only when `splitServicesToContainers` is enabled)
+* Support for high availability (when replicaCount > 1)
+* Added min kubeVersion ">= 1.12.0-0" in chart.yaml
+
+## [107.25.0] - Aug 13, 2021
+* Updated readme of chart to point to wiki.
Refer [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory) +* Added startupProbe and livenessProbe for RT-split containers +* Updated router version to 7.24.1 +* Added security hardening fixes +* Enabled startup probes for k8s >= 1.20.x +* Changed network policy to allow all ingress and egress traffic +* Added Observability changes +* Added support for global.versions.router (only when `splitServicesToContainers` is enabled) + +## [107.24.0] - July 27, 2021 +* Support global and product specific tags at the same time +* Added support for artifactory containers split + +## [107.23.0] - July 8, 2021 +* Bug fix - logger sideCar picks up Wrong File in helm +* Allow filebeat metrics configuration in values.yaml + +## [107.22.0] - July 6, 2021 +* Update alpine tag version to `3.14.0` +* Added `nodePort` support to artifactory-service and nginx-service templates +* Removed redundant `terminationGracePeriodSeconds` in statefulset +* Increased `startupProbe.failureThreshold` time + +## [107.21.3] - July 2, 2021 +* Added ability to change sendreasonphrase value in server.xml via system yaml + +## [107.19.3] - May 20, 2021 +* Fix broken support for startupProbe for k8s < 1.18.x +* Added support for `nameOverride` and `fullnameOverride` in values.yaml + +## [107.18.6] - April 29, 2021 +* Bumping chart version to align with app version +* Add `securityContext` option on nginx container + +## [12.0.0] - April 22, 2021 +* **Breaking change:** +* Increased default postgresql persistence size to `200Gi` +* Update postgresql tag version to `13.2.0-debian-10-r55` +* Update postgresql chart version to `10.3.18` in chart.yaml - [10.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#to-1000) +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true +* **IMPORTANT** +* This chart is only helm v3 compatible. 
+* Fixed filebeat-configmap naming +* Explicitly set ServiceAccount `automountServiceAccountToken` to 'true' +* Update alpine tag version to `3.13.5` + +## [11.13.2] - April 15, 2021 +* Updated Artifactory version to 7.17.9 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.9) + +## [11.13.1] - April 6, 2021 +* Updated Artifactory version to 7.17.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.6) +* Update alpine tag version to `3.13.4` + +## [11.13.0] - April 5, 2021 +* **IMPORTANT** +* Added `charts.jfrog.io` as default JFrog Helm repository +* Updated Artifactory version to 7.17.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.5) + +## [11.12.2] - Mar 31, 2021 +* Updated Artifactory version to 7.17.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.4) + +## [11.12.1] - Mar 30, 2021 +* Updated Artifactory version to 7.17.3 +* Add `timeoutSeconds` to all exec probes - Please refer [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) + +## [11.12.0] - Mar 24, 2021 +* Updated Artifactory version to 7.17.2 +* Optimized startupProbe time + +## [11.11.0] - Mar 18, 2021 +* Add support to startupProbe + +## [11.10.0] - Mar 15, 2021 +* Updated Artifactory version to 7.16.3 + +## [11.9.5] - Mar 09, 2021 +* Added HSTS header to nginx conf + +## [11.9.4] - Mar 9, 2021 +* Removed bintray URL references in the chart + +## [11.9.3] - Mar 04, 2021 +* Updated Artifactory version to 7.15.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.4) + +## [11.9.2] - Mar 04, 2021 +* Fixed creation of nginx-certificate-secret when Nginx is disabled + +## [11.9.1] - Feb 19, 2021 +* Update busybox tag version to `1.32.1` + +## [11.9.0] - Feb 18, 2021 +* Updated Artifactory version to 7.15.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.3) +* Add option to specify update strategy for Artifactory statefulset + +## [11.8.1] - Feb 11, 2021 +* Exposed "multiPartLimit" and "multipartElementSize" for the Azure Blob Storage Binary Provider + +## [11.8.0] - Feb 08, 2021 +* Updated Artifactory version to 7.12.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.8) +* Support for custom certificates using secrets +* **Important:** Switched docker images download from `docker.bintray.io` to `releases-docker.jfrog.io` +* Update alpine tag version to `3.13.1` + +## [11.7.8] - Jan 25, 2021 +* Add support for hostAliases + +## [11.7.7] - Jan 11, 2021 +* Fix failures when using creds file for configurating google storage + +## [11.7.6] - Jan 11, 2021 +* Updated Artifactory version to 7.12.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.6) + +## [11.7.5] - Jan 07, 2021 +* Added support for optional tracker dedicated ingress `.Values.artifactory.replicator.trackerIngress.enabled` (defaults to false) + +## [11.7.4] - Jan 04, 2021 +* Fixed gid support for statefulset + +## [11.7.3] - Dec 31, 2020 +* Added gid 
support for statefulset +* Add setSecurityContext flag to allow securityContext block to be removed from artifactory statefulset + +## [11.7.2] - Dec 29, 2020 +* **Important:** Removed `.Values.metrics` and `.Values.fluentd` (Fluentd and Prometheus integrations) +* Add support for creating additional kubernetes resources - [refer here](https://github.com/jfrog/log-analytics-prometheus/blob/master/artifactory-values.yaml) +* Updated Artifactory version to 7.12.5 + +## [11.7.1] - Dec 21, 2020 +* Updated Artifactory version to 7.12.3 + +## [11.7.0] - Dec 18, 2020 +* Updated Artifactory version to 7.12.2 +* Added `.Values.artifactory.openMetrics.enabled` + +## [11.6.1] - Dec 11, 2020 +* Added configurable `.Values.global.versions.artifactory` in values.yaml + +## [11.6.0] - Dec 10, 2020 +* Update postgresql tag version to `12.5.0-debian-10-r25` +* Fixed `artifactory.persistence.googleStorage.endpoint` from `storage.googleapis.com` to `commondatastorage.googleapis.com` +* Updated chart maintainers email + +## [11.5.5] - Dec 4, 2020 +* **Important:** Renamed `.Values.systemYaml` to `.Values.systemYamlOverride` + +## [11.5.4] - Dec 1, 2020 +* Improve error message returned when attempting helm upgrade command + +## [11.5.3] - Nov 30, 2020 +* Updated Artifactory version to 7.11.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) + +## [11.5.2] - Nov 23, 2020 +* Updated Artifactory version to 7.11.2 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) +* Updated port namings on services and pods to allow for istio protocol discovery +* Change semverCompare checks to support hosted Kubernetes +* Add flag to disable creation of ServiceMonitor when enabling prometheus metrics +* Prevent the PostHook command to be executed if the user did not specify a command in the values file +* Fix issue with tls file generation when nginx.https.enabled is false + +## [11.5.1] - Nov 19, 2020 +* Updated Artifactory version to 7.11.2 +* Bugfix - access.config.import.xml override Access Federation configurations + +## [11.5.0] - Nov 17, 2020 +* Updated Artifactory version to 7.11.1 +* Update alpine tag version to `3.12.1` + +## [11.4.6] - Nov 10, 2020 +* Pass system.yaml via external secret for advanced usecases +* Added support for custom ingress +* Bugfix - stateful set not picking up changes to database secrets + +## [11.4.5] - Nov 9, 2020 +* Updated Artifactory version to 7.10.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.6) + +## [11.4.4] - Nov 2, 2020 +* Add enablePathStyleAccess property for aws-s3-v3 binary provider template + +## [11.4.3] - Nov 2, 2020 +* Updated Artifactory version to 7.10.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.5) + +## [11.4.2] - Oct 22, 2020 +* Chown bug fix where Linux capability cannot chown all files causing log line warnings +* Fix Frontend timeout linting issue + +## [11.4.1] - Oct 20, 2020 +* Add flag to disable prepare-custom-persistent-volume init container + +## [11.4.0] - Oct 19, 2020 +* Updated Artifactory version to 7.10.2 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.2) + +## [11.3.2] - Oct 15, 2020 +* Add support to specify priorityClassName for nginx 
deployment + +## [11.3.1] - Oct 9, 2020 +* Add support for customInitContainersBegin + +## [11.3.0] - Oct 7, 2020 +* Updated Artifactory version to 7.9.1 +* **Breaking change:** Fix `storageClass` to correct `storageClassName` in values.yaml + +## [11.2.0] - Oct 5, 2020 +* Expose Prometheus metrics via a ServiceMonitor +* Parse log files for metric data with Fluentd + +## [11.1.0] - Sep 30, 2020 +* Updated Artifactory version to 7.9.0 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.9) +* Added support for resources in init container + +## [11.0.11] - Sep 25, 2020 +* Update to use linux capability CAP_CHOWN instead of root base init container to avoid any use of root containers to pass Redhat security requirements + +## [11.0.10] - Sep 28, 2020 +* Setting chart coordinates in mitigation yaml + +## [11.0.9] - Sep 25, 2020 +* Update filebeat version to `7.9.2` + +## [11.0.8] - Sep 24, 2020 +* Fixed broken issue - when setting `waitForDatabase: false` container startup still waits for DB + +## [11.0.7] - Sep 22, 2020 +* Readme updates + +## [11.0.6] - Sep 22, 2020 +* Fix lint issue in mitigation yaml + +## [11.0.5] - Sep 22, 2020 +* Fix broken mitigation yaml + +## [11.0.4] - Sep 21, 2020 +* Added mitigation yaml for Artifactory - [More info](https://github.com/jfrog/chartcenter/blob/master/docs/securitymitigationspec.md) + +## [11.0.3] - Sep 17, 2020 +* Added configurable session(UI) timeout in frontend microservice + +## [11.0.2] - Sep 17, 2020 +* Added proper required text to be shown while postgres upgrades + +## [11.0.1] - Sep 14, 2020 +* Updated Artifactory version to 7.7.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7.8) + +## [11.0.0] - Sep 2, 2020 +* **Breaking change:** Changed `imagePullSecrets` values from string to list. +* **Breaking change:** Added `image.registry` and changed `image.version` to `image.tag` for docker images +* Added support for global values +* Updated maintainers in chart.yaml +* Update postgresql tag version to `12.3.0-debian-10-r71` +* Update postgresql chart version to `9.3.4` in requirements.yaml - [9.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#900) +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x's postgresql.image.tag and databaseUpgradeReady=true + +## [10.1.0] - Aug 13, 2020 +* Updated Artifactory version to 7.7.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7) + +## [10.0.15] - Aug 10, 2020 +* Added enableSignedUrlRedirect for persistent storage type aws-s3-v3. + +## [10.0.14] - Jul 31, 2020 +* Update the README section on Nginx SSL termination to reflect the actual YAML structure. + +## [10.0.13] - Jul 30, 2020 +* Added condition to disable the migration scripts. + +## [10.0.12] - Jul 28, 2020 +* Document Artifactory node affinity. + +## [10.0.11] - Jul 28, 2020 +* Added maxConnections for persistent storage type aws-s3-v3.
+ +## [10.0.10] - Jul 28, 2020 +* Bugfix / support for userPluginSecrets with Artifactory 7 + +## [10.0.9] - Jul 27, 2020 +* Add tpl to external database secrets +* Modified `scheme` to `artifactory.scheme` + +## [10.0.8] - Jul 23, 2020 +* Added condition to disable the migration init container. + +## [10.0.7] - Jul 21, 2020 +* Updated Artifactory Chart to add node and primary labels to pods and service objects. + +## [10.0.6] - Jul 20, 2020 +* Support custom CA and certificates + +## [10.0.5] - Jul 13, 2020 +* Updated Artifactory version to 7.6.3 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.3 +* Fixed Mysql database jar path in `preStartCommand` in README + +## [10.0.4] - Jul 10, 2020 +* Move some postgresql values to where they should be according to the subchart + +## [10.0.3] - Jul 8, 2020 +* Set Artifactory access client connections to the same value as the access threads + +## [10.0.2] - Jul 6, 2020 +* Updated Artifactory version to 7.6.2 +* **IMPORTANT** +* Added ChartCenter Helm repository in README + +## [10.0.1] - Jul 01, 2020 +* Add dedicated ingress object for Replicator service when enabled + +## [10.0.0] - Jun 30, 2020 +* Update postgresql tag version to `10.13.0-debian-10-r38` +* Update alpine tag version to `3.12` +* Update busybox tag version to `1.31.1` +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass postgresql.image.tag=9.6.18-debian-10-r7 and databaseUpgradeReady=true + +## [9.6.0] - Jun 29, 2020 +* Updated Artifactory version to 7.6.1 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.1 +* Add tpl for external database secrets + +## [9.5.5] - Jun 25, 2020 +* Stop loading the Nginx stream module because it is now a core module + +## [9.5.4] - Jun 25, 2020 +* Notes.txt update - add --namespace parameter + +## [9.5.3] - Jun 11, 2020 +* Support list of custom secrets + +## [9.5.2] - Jun 12, 2020 +* Updated Artifactory version to 7.5.7 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5.7 + +## [9.5.1] - Jun 8, 2020 +* Readme update - configuring Artifactory with oracledb + +## [9.5.0] - Jun 1, 2020 +* Updated Artifactory version to 7.5.5 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5 +* Fixes bootstrap configMap permission issue +* Update postgresql tag version to `9.6.18-debian-10-r7` + +## [9.4.9] - May 27, 2020 +* Added Tomcat maxThreads & acceptCount + +## [9.4.8] - May 25, 2020 +* Fixed postgresql README `image` Parameters + +## [9.4.7] - May 24, 2020 +* Fixed typo in README regarding migration timeout + +## [9.4.6] - May 19, 2020 +* Added metadata maxOpenConnections + +## [9.4.5] - May 07, 2020 +* Fix `installerInfo` string format + +## [9.4.4] - Apr 27, 2020 +* Updated Artifactory version to 7.4.3 + +## [9.4.3] - Apr 26, 2020 +* Change order of the customInitContainers to run before the "migration-artifactory" initContainer. + +## [9.4.2] - Apr 24, 2020 +* Fix `artifactory.persistence.awsS3V3.useInstanceCredentials` incorrect conditional logic +* Bump postgresql tag version to `9.6.17-debian-10-r72` in values.yaml + +## [9.4.1] - Apr 16, 2020 +* Custom volumes in migration init container. 
+ +## [9.4.0] - Apr 14, 2020 +* Updated Artifactory version to 7.4.1 + +## [9.3.1] - April 13, 2020 +* Update README with helm v3 commands + +## [9.3.0] - April 10, 2020 +* Use dependency charts from `https://charts.bitnami.com/bitnami` +* Bump postgresql chart version to `8.7.3` in requirements.yaml +* Bump postgresql tag version to `9.6.17-debian-10-r21` in values.yaml + +## [9.2.9] - Apr 8, 2020 +* Added recommended ingress annotation to avoid 413 errors + +## [9.2.8] - Apr 8, 2020 +* Moved migration scripts under `files` directory +* Support preStartCommand in migration Init container as `artifactory.migration.preStartCommand` + +## [9.2.7] - Apr 6, 2020 +* Fix cache size (should be 5gb instead of 50gb since volume claim is only 20gb). + +## [9.2.6] - Apr 1, 2020 +* Support masterKey and joinKey as secrets + +## [9.2.5] - Apr 1, 2020 +* Fix readme use to `-hex 32` instead of `-hex 16` + +## [9.2.4] - Mar 31, 2020 +* Change the way the artifactory `command:` is set so it will properly pass a SIGTERM to java + +## [9.2.3] - Mar 29, 2020 +* Add Nginx log options: stderr as logfile and log level + +## [9.2.2] - Mar 30, 2020 +* Use the same defaulting mechanism used for the artifactory version used elsewhere in the chart + +## [9.2.1] - Mar 29, 2020 +* Fix loggers sidecars configurations to support new file system layout and new log names + +## [9.2.0] - Mar 29, 2020 +* Fix broken admin user bootstrap configuration +* **Breaking change:** renamed `artifactory.accessAdmin` to `artifactory.admin` + +## [9.1.5] - Mar 26, 2020 +* Fix volumeClaimTemplate issue + +## [9.1.4] - Mar 25, 2020 +* Fix volume name used by filebeat container + +## [9.1.3] - Mar 24, 2020 +* Use `postgresqlExtendedConf` for setting custom PostgreSQL configuration (instead of `postgresqlConfiguration`) + +## [9.1.2] - Mar 22, 2020 +* Support for SSL offload in Nginx service(LoadBalancer) layer. Introduced `nginx.service.ssloffload` field with boolean type. + +## [9.1.1] - Mar 23, 2020 +* Moved installer info to values.yaml so it is fully customizable + +## [9.1.0] - Mar 23, 2020 +* Updated Artifactory version to 7.3.2 + +## [9.0.29] - Mar 20, 2020 +* Add support for masterKey trim during 6.x to 7.x migration if 6.x masterKey is 32 hex (64 characters) + +## [9.0.28] - Mar 18, 2020 +* Increased Nginx proxy_buffers size + +## [9.0.27] - Mar 17, 2020 +* Changed all single quotes to double quotes in values files +* useInstanceCredentials variable was declared in S3 settings but not used in chart. Now it is being used. 
+ +## [9.0.26] - Mar 17, 2020 +* Fix rendering of Service Account annotations + +## [9.0.25] - Mar 16, 2020 +* Update Artifactory readme with extra ingress annotations needed for Artifactory to be set as SSO provider + +## [9.0.24] - Mar 16, 2020 +* Add Unsupported message from 6.18 to 7.2.x (migration) + +## [9.0.23] - Mar 12, 2020 +* Fix README.md rendering issue + +## [9.0.22] - Mar 11, 2020 +* Upgrade Docs update + +## [9.0.21] - Mar 11, 2020 +* Unified charts public release + +## [9.0.20] - Mar 6, 2020 +* Fix path to `/artifactory_bootstrap` +* Add support for controlling the name of the ingress and allow to set more than one cname + +## [9.0.19] - Mar 4, 2020 +* Add support for disabling `consoleLog` in `system.yaml` file + +## [9.0.18] - Feb 28, 2020 +* Add support to process `valueFrom` for extraEnvironmentVariables + +## [9.0.17] - Feb 26, 2020 +* Fix join key secret naming + +## [9.0.16] - Feb 26, 2020 +* Store join key to secret + +## [9.0.15] - Feb 26, 2020 +* Updated Artifactory version to 7.2.1 + +## [9.0.10] - Feb 07, 2020 +* Remove protection flag `databaseUpgradeReady` which was added to check internal postgres upgrade + +## [9.0.0] - Feb 07, 2020 +* Updated Artifactory version to 7.0.0 + +## [8.4.8] - Feb 13, 2020 +* Add support for SSH authentication to Artifactory + +## [8.4.7] - Feb 11, 2020 +* Change Artifactory service port name to be hard-coded to `http` instead of using `{{ .Release.Name }}` + +## [8.4.6] - Feb 9, 2020 +* Add support for `tpl` in the `postStartCommand` + +## [8.4.5] - Feb 4, 2020 +* Support customisable Nginx kind + +## [8.4.4] - Feb 2, 2020 +* Add a comment stating that it is recommended to use an external PostgreSQL with a static password for production installations + +## [8.4.3] - Jan 30, 2020 +* Add the option to configure resources for the logger containers + +## [8.4.2] - Jan 26, 2020 +* Improve `database.user` and `database.password` logic in order to support more use cases and make the configuration less repetitive + +## [8.4.1] - Jan 19, 2020 +* Fix replicator port config in nginx replicator configmap + +## [8.4.0] - Jan 19, 2020 +* Updated Artifactory version to 6.17.0 + +## [8.3.6] - Jan 16, 2020 +* Added example for external nginx-ingress + +## [8.3.5] - Dec 30, 2019 +* Fix for nginx probes failing when launched with http disabled + +## [8.3.4] - Dec 24, 2019 +* Better support for custom `artifactory.internalPort` + +## [8.3.3] - Dec 23, 2019 +* Mark empty map values with `{}` + +## [8.3.2] - Dec 16, 2019 +* Fix for toggling nginx service ports + +## [8.3.1] - Dec 12, 2019 +* Add support for toggling nginx service ports + +## [8.3.0] - Dec 1, 2019 +* Updated Artifactory version to 6.16.0 + +## [8.2.6] - Nov 28, 2019 +* Add support for using existing PriorityClass + +## [8.2.5] - Nov 27, 2019 +* Add support for PriorityClass + +## [8.2.4] - Nov 21, 2019 +* Add an option to use a file system cache-fs with the file-system binarystore template + +## [8.2.3] - Nov 20, 2019 +* Update Artifactory Readme + +## [8.2.2] - Nov 20, 2019 +* Update Artifactory logo + +## [8.2.1] - Nov 18, 2019 +* Add the option to provide service account annotations (in order to support stuff like https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html) + +## [8.2.0] - Nov 18, 2019 +* Updated Artifactory version to 6.15.0 + +## [8.1.11] - Nov 17, 2019 +* Do not provide a default master key.
Allow it to be auto generated by Artifactory on first startup + +## [8.1.10] - Nov 17, 2019 +* Fix creation of double slash in nginx artifactory configuration + +## [8.1.9] - Nov 14, 2019 +* Set explicit `postgresql.postgresqlPassword=""` to avoid helm v3 error + +## [8.1.8] - Nov 12, 2019 +* Updated Artifactory version to 6.14.1 + +## [8.1.7] - Nov 9, 2019 +* Additional documentation for masterKey + +## [8.1.6] - Nov 10, 2019 +* Update PostgreSQL chart version to 7.0.1 +* Use formal PostgreSQL configuration format + +## [8.1.5] - Nov 8, 2019 +* Add support `artifactory.service.loadBalancerSourceRanges` for whitelisting when setting `artifactory.service.type=LoadBalancer` + +## [8.1.4] - Nov 6, 2019 +* Add support for any type of environment variable by using `extraEnvironmentVariables` as-is + +## [8.1.3] - Nov 6, 2019 +* Add nodeselector support for Postgresql + +## [8.1.2] - Nov 5, 2019 +* Add support for the aws-s3-v3 filestore, which adds support for pod IAM roles + +## [8.1.1] - Nov 4, 2019 +* When using `copyOnEveryStartup`, make sure that the target base directories are created before copying the files + +## [8.1.0] - Nov 3, 2019 +* Updated Artifactory version to 6.14.0 + +## [8.0.1] - Nov 3, 2019 +* Make sure the artifactory pod exits when one of the pre-start stages fail + +## [8.0.0] - Oct 27, 2019 +**IMPORTANT - BREAKING CHANGES!**
+**DOWNTIME MIGHT BE REQUIRED FOR AN UPGRADE!** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), must use the upgrade instructions in [UPGRADE_NOTES.md](UPGRADE_NOTES.md)! +* PostgreSQL sub chart was upgraded to version `6.5.x`. This version is **not backward compatible** with the old version (`0.9.5`)! +* Note the following **PostgreSQL** Helm chart changes + * The chart configuration has changed! See [values.yaml](values.yaml) for the new keys used + * **PostgreSQL** is deployed as a StatefulSet + * See [PostgreSQL helm chart](https://hub.helm.sh/charts/stable/postgresql) for all available configurations + +## [7.18.3] - Oct 24, 2019 +* Change the preStartCommand to support templating + +## [7.18.2] - Oct 21, 2019 +* Add support for setting `artifactory.labels` +* Add support for setting `nginx.labels` + +## [7.18.1] - Oct 10, 2019 +* Updated Artifactory version to 6.13.1 + +## [7.18.0] - Oct 7, 2019 +* Updated Artifactory version to 6.13.0 + +## [7.17.5] - Sep 24, 2019 +* Option to skip wait-for-db init container with '--set waitForDatabase=false' + +## [7.17.4] - Sep 11, 2019 +* Updated Artifactory version to 6.12.2 + +## [7.17.3] - Sep 9, 2019 +* Updated Artifactory version to 6.12.1 + +## [7.17.2] - Aug 22, 2019 +* Fix the nginx server_name directive used with ingress.hosts + +## [7.17.1] - Aug 21, 2019 +* Enable the Artifactory container's liveness and readiness probes + +## [7.17.0] - Aug 21, 2019 +* Updated Artifactory version to 6.12.0 + +## [7.16.11] - Aug 14, 2019 +* Updated Artifactory version to 6.11.6 + +## [7.16.10] - Aug 11, 2019 +* Fix Ingress routing and add an example + +## [7.16.9] - Aug 5, 2019 +* Do not mount `access/etc/bootstrap.creds` unless user specifies a custom password or secret (Access already generates a random password if not provided one) +* If custom `bootstrap.creds` is provided (using keys or custom secret), prepare it with an init container so the temp file does not persist + +## [7.16.8] - Aug 4, 2019 +* Improve binarystore config + 1. Convert to a secret + 2. Move config to values.yaml + 3. 
Support an external secret + +## [7.16.7] - Jul 29, 2019 +* Don't create the nginx configmaps when nginx.enabled is false + +## [7.16.6] - Jul 24, 2019 +* Simplify nginx setup and shorten initial wait for probes + +## [7.16.5] - Jul 22, 2019 +* Change Ingress API to be compatible with recent kubernetes versions + +## [7.16.4] - Jul 22, 2019 +* Updated Artifactory version to 6.11.3 + +## [7.16.3] - Jul 11, 2019 +* Add ingress.hosts to the Nginx server_name directive when ingress is enabled to help with Docker repository sub domain configuration + +## [7.16.2] - Jul 3, 2019 +* Fix values key in reverse proxy example + +## [7.16.1] - Jul 1, 2019 +* Updated Artifactory version to 6.11.1 + +## [7.16.0] - Jun 27, 2019 +* Update Artifactory version to 6.11 and add restart to Artifactory when bootstrap.creds file has been modified + +## [7.15.8] - Jun 27, 2019 +* Add the option for changing nginx config using values.yaml and remove outdated reverse proxy documentation + +## [7.15.6] - Jun 24, 2019 +* Update chart maintainers + +## [7.15.5] - Jun 24, 2019 +* Change Nginx to point to the artifactory externalPort + +## [7.15.4] - Jun 23, 2019 +* Add the option to provide an IP for the access-admin endpoints + +## [7.15.3] - Jun 23, 2019 +* Add values files for small, medium and large installations + +## [7.15.2] - Jun 20, 2019 +* Add missing terminationGracePeriodSeconds to values.yaml + +## [7.15.1] - Jun 19, 2019 +* Updated Artifactory version to 6.10.4 + +## [7.15.0] - Jun 17, 2019 +* Use configmaps for nginx configuration and remove nginx postStart command + +## [7.14.8] - Jun 18, 2019 +* Add the option to provide additional ingress rules + +## [7.14.7] - Jun 14, 2019 +* Updated readme with improved external database setup example + +## [7.14.6] - Jun 11, 2019 +* Updated Artifactory version to 6.10.3 +* Updated installer-info template + +## [7.14.5] - Jun 6, 2019 +* Updated Google Cloud Storage API URL and https settings + +## [7.14.4] - Jun 5, 2019 +* Delete the db.properties file on Artifactory startup + +## [7.14.3] - Jun 3, 2019 +* Updated Artifactory version to 6.10.2 + +## [7.14.2] - May 21, 2019 +* Updated Artifactory version to 6.10.1 + +## [7.14.1] - May 19, 2019 +* Fix missing logger image tag + +## [7.14.0] - May 7, 2019 +* Updated Artifactory version to 6.10.0 + +## [7.13.21] - May 5, 2019 +* Add support for setting `artifactory.async.corePoolSize` + +## [7.13.20] - May 2, 2019 +* Remove unused property `artifactory.releasebundle.feature.enabled` + +## [7.13.19] - May 1, 2019 +* Fix indentation issue with the replicator system property + +## [7.13.18] - Apr 30, 2019 +* Add support for JMX monitoring + +## [7.13.17] - Apr 25, 2019 +* Added support for `cacheProviderDir` + +## [7.13.16] - Apr 18, 2019 +* Changing API StatefulSet version to `v1` and permission fix for custom `artifactory.conf` for Nginx + +## [7.13.15] - Apr 16, 2019 +* Updated documentation for Reverse Proxy Configuration + +## [7.13.14] - Apr 15, 2019 +* Added support for `customVolumeMounts` + +## [7.13.13] - Apr 12, 2019 +* Added support for `bucketExists` flag for googleStorage + +## [7.13.12] - Apr 11, 2019 +* Replace `curl` examples with `wget` due to the new base image + +## [7.13.11] - Apr 07, 2019 +* Add support for providing the Artifactory license as a parameter + +## [7.13.10] - Apr 10, 2019 +* Updated Artifactory version to 6.9.1 + +## [7.13.9] - Apr 04, 2019 +* Add support for templated extraEnvironmentVariables + +## [7.13.8] - Apr 07, 2019 +* Change network policy API group + +## [7.13.7] - Apr 04,
2019 +* Bugfix for userPluginSecrets + +## [7.13.6] - Apr 4, 2019 +* Add information about upgrading Artifactory with auto-generated postgres password + +## [7.13.5] - Apr 03, 2019 +* Added installer info + +## [7.13.4] - Apr 03, 2019 +* Allow secret names for user plugins to contain template language + +## [7.13.3] - Apr 02, 2019 +* Allow NetworkPolicy configurations (defaults to allow all) + +## [7.13.2] - Apr 01, 2019 +* Add support for user plugin secret + +## [7.13.1] - Mar 27, 2019 +* Add the option to copy a list of files to ARTIFACTORY_HOME on startup + +## [7.13.0] - Mar 26, 2019 +* Updated Artifactory version to 6.9.0 + +## [7.12.18] - Mar 25, 2019 +* Add CI tests for persistence, ingress support and nginx + +## [7.12.17] - Mar 22, 2019 +* Add the option to change the default access-admin password + +## [7.12.16] - Mar 22, 2019 +* Added support for `.Probe.path` to customise the paths used for health probes + +## [7.12.15] - Mar 21, 2019 +* Added support for `artifactory.customSidecarContainers` to create custom sidecar containers +* Added support for `artifactory.customVolumes` to create custom volumes + +## [7.12.14] - Mar 21, 2019 +* Make ingress path configurable + +## [7.12.13] - Mar 19, 2019 +* Move the copy of bootstrap config from postStart to preStart + +## [7.12.12] - Mar 19, 2019 +* Fix existingClaim example + +## [7.12.11] - Mar 18, 2019 +* Add information about nginx persistence + +## [7.12.10] - Mar 15, 2019 +* Wait for nginx configuration file before using it + +## [7.12.9] - Mar 15, 2019 +* Revert securityContext changes since they were causing issues + +## [7.12.8] - Mar 15, 2019 +* Fix issue #247 (init container failing to run) + +## [7.12.7] - Mar 14, 2019 +* Updated Artifactory version to 6.8.7 +* Add support for Artifactory-CE for C++ + +## [7.12.6] - Mar 13, 2019 +* Move securityContext to container level + +## [7.12.5] - Mar 11, 2019 +* Updated Artifactory version to 6.8.6 + +## [7.12.4] - Mar 8, 2019 +* Fix existingClaim option + +## [7.12.3] - Mar 5, 2019 +* Updated Artifactory version to 6.8.4 + +## [7.12.2] - Mar 4, 2019 +* Add support for catalina logs sidecars + +## [7.12.1] - Feb 27, 2019 +* Updated Artifactory version to 6.8.3 + +## [7.12.0] - Feb 25, 2019 +* Add nginx support for tail sidecars + +## [7.11.1] - Feb 20, 2019 +* Added support for enterprise storage + +## [7.10.2] - Feb 19, 2019 +* Updated Artifactory version to 6.8.2 + +## [7.10.1] - Feb 17, 2019 +* Updated Artifactory version to 6.8.1 +* Add example of `SERVER_XML_EXTRA_CONNECTOR` usage + +## [7.10.0] - Feb 15, 2019 +* Updated Artifactory version to 6.8.0 + +## [7.9.6] - Feb 13, 2019 +* Updated Artifactory version to 6.7.3 + +## [7.9.5] - Feb 12, 2019 +* Add support for tail sidecars to view logs from k8s api + +## [7.9.4] - Feb 6, 2019 +* Fix support for customizing statefulset `terminationGracePeriodSeconds` + +## [7.9.3] - Feb 5, 2019 +* Add instructions on how to deploy Artifactory with embedded Derby database + +## [7.9.2] - Feb 5, 2019 +* Add support for customizing statefulset `terminationGracePeriodSeconds` + +## [7.9.1] - Feb 3, 2019 +* Updated Artifactory version to 6.7.2 + +## [7.9.0] - Jan 23, 2019 +* Updated Artifactory version to 6.7.0 + +## [7.8.9] - Jan 22, 2019 +* Added support for `artifactory.customInitContainers` to create custom init containers + +## [7.8.8] - Jan 17, 2019 +* Added support for values ingress.labels + +## [7.8.7] - Jan 16, 2019 +* Mount replicator.yaml (config) directly to /replicator_extra_conf + +## [7.8.6] - Jan 13, 2019 +* Fix documentation
about nginx group id + +## [7.8.5] - Jan 13, 2019 +* Updated Artifactory version to 6.6.5 + +## [7.8.4] - Jan 8, 2019 +* Make artifactory.replicator.publicUrl required when the replicator is enabled + +## [7.8.3] - Jan 1, 2019 +* Updated Artifactory version to 6.6.3 +* Add support for `artifactory.extraEnvironmentVariables` to pass more environment variables to Artifactory + +## [7.8.2] - Dec 28, 2018 +* Fix location `replicator.yaml` is copied to + +## [7.8.1] - Dec 27, 2018 +* Updated Artifactory version to 6.6.1 + +## [7.8.0] - Dec 20, 2018 +* Updated Artifactory version to 6.6.0 + +## [7.7.13] - Dec 17, 2018 +* Updated Artifactory version to 6.5.13 + +## [7.7.12] - Dec 12, 2018 +* Fix documentation about Artifactory license setup using secret + +## [7.7.11] - Dec 10, 2018 +* Fix issue when using existing claim + +## [7.7.10] - Dec 5, 2018 +* Remove Distribution certificates creation. + +## [7.7.9] - Nov 30, 2018 +* Updated Artifactory version to 6.5.9 + +## [7.7.8] - Nov 29, 2018 +* Updated postgresql version to 9.6.11 + +## [7.7.7] - Nov 27, 2018 +* Updated Artifactory version to 6.5.8 + +## [7.7.6] - Nov 19, 2018 +* Added support for configMap to use custom Reverse Proxy Configuration with Nginx + +## [7.7.5] - Nov 14, 2018 +* Fix location of `nodeSelector`, `affinity` and `tolerations` + +## [7.7.4] - Nov 14, 2018 +* Updated Artifactory version to 6.5.3 + +## [7.7.3] - Nov 12, 2018 +* Support artifactory.preStartCommand for running command before entrypoint starts + +## [7.7.2] - Nov 7, 2018 +* Support database.url parameter (DB_URL) + +## [7.7.1] - Oct 29, 2018 +* Change probes port to 8040 (so they will not be blocked when all tomcat threads on 8081 are exhausted) + +## [7.7.0] - Oct 28, 2018 +* Update postgresql chart to version 0.9.5 to be able and use `postgresConfig` options + +## [7.6.8] - Oct 23, 2018 +* Fix providing external secret for database credentials + +## [7.6.7] - Oct 23, 2018 +* Allow user to configure externalTrafficPolicy for Loadbalancer + +## [7.6.6] - Oct 22, 2018 +* Updated ingress annotation support (with examples) to support docker registry v2 + +## [7.6.5] - Oct 21, 2018 +* Updated Artifactory version to 6.5.2 + +## [7.6.4] - Oct 19, 2018 +* Allow providing pre-existing secret containing master key +* Allow arbitrary annotations on primary and member node pods +* Enforce size limits when using local storage with `emptyDir` +* Allow providing pre-existing secrets containing external database credentials + +## [7.6.3] - Oct 18, 2018 +* Updated Artifactory version to 6.5.1 + +## [7.6.2] - Oct 17, 2018 +* Add Apache 2.0 license + +## [7.6.1] - Oct 11, 2018 +* Supports master-key in the secrets and stateful-set +* Allows ingress default `backend` to be enabled or disabled (defaults to enabled) + +## [7.6.0] - Oct 11, 2018 +* Updated Artifactory version to 6.5.0 + +## [7.5.4] - Oct 9, 2018 +* Quote ingress hosts to support wildcard names + +## [7.5.3] - Oct 4, 2018 +* Add PostgreSQL resources template + +## [7.5.2] - Oct 2, 2018 +* Add `helm repo add jfrog https://charts.jfrog.io` to README + +## [7.5.1] - Oct 2, 2018 +* Set Artifactory to 6.4.1 + +## [7.5.0] - Sep 27, 2018 +* Set Artifactory to 6.4.0 + +## [7.4.3] - Sep 26, 2018 +* Add ci/test-values.yaml + +## [7.4.2] - Sep 2, 2018 +* Updated Artifactory version to 6.3.2 +* Removed unused PVC + +## [7.4.0] - Aug 22, 2018 +* Added support to run as non root +* Updated Artifactory version to 6.2.0 + +## [7.3.0] - Aug 22, 2018 +* Enabled RBAC Support +* Added support for PostStartCommand (To download Database 
JDBC connector) +* Increased postgresql max_connections +* Added support for `nginx.conf` ConfigMap +* Updated Artifactory version to 6.1.0 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/Chart.lock b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/Chart.lock new file mode 100644 index 000000000..8064c323b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 +digest: sha256:404ce007353baaf92a6c5f24b249d5b336c232e5fd2c29f8a0e4d0095a09fd53 +generated: "2022-03-08T08:53:16.293311+05:30" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/Chart.yaml new file mode 100644 index 000000000..2d73d0a6e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v2 +appVersion: 7.90.15 +dependencies: +- condition: postgresql.enabled + name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 +description: Universal Repository Manager supporting all major packaging formats, + build tools and CI servers. +home: https://www.jfrog.com/artifactory/ +icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory/logo/artifactory-logo.png +keywords: +- artifactory +- jfrog +- devops +kubeVersion: '>= 1.19.0-0' +maintainers: +- email: installers@jfrog.com + name: Chart Maintainers at JFrog +name: artifactory +sources: +- https://github.com/jfrog/charts +type: application +version: 107.90.15 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/LICENSE b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/LICENSE new file mode 100644 index 000000000..8dada3eda --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/README.md b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/README.md new file mode 100644 index 000000000..da3304ee5 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/README.md @@ -0,0 +1,59 @@ +# JFrog Artifactory Helm Chart + +**IMPORTANT!** Our Helm Chart docs have moved to our main documentation site. Below you will find the basic instructions for installing, uninstalling, and deleting Artifactory. For all other information, refer to [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-HelmInstallation). 
+ +## Prerequisites +* Kubernetes 1.19+ +* Artifactory Pro trial license [get one from here](https://www.jfrog.com/artifactory/free-trial/) + +## Chart Details +This chart will do the following: + +* Deploy Artifactory-Pro/Artifactory-Edge (or OSS/CE if custom image is set) +* Deploy a PostgreSQL database using the stable/postgresql chart (can be changed) **NOTE:** For production grade installations it is recommended to use an external PostgreSQL. +* Deploy an optional Nginx server +* Optionally expose Artifactory with Ingress [Ingress documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) + +## Installing the Chart + +### Add JFrog Helm repository + +Before installing JFrog helm charts, you need to add the [JFrog helm repository](https://charts.jfrog.io) to your helm client + +```bash +helm repo add jfrog https://charts.jfrog.io +helm repo update +``` + +### Install Chart +To install the chart with the release name `artifactory`: +```bash +helm upgrade --install artifactory jfrog/artifactory --namespace artifactory --create-namespace +``` + +### Apply Sizing configurations to the Chart +To apply the chart with recommended sizing configurations : +For small configurations : +```bash +helm upgrade --install artifactory jfrog/artifactory -f sizing/artifactory-small-extra-config.yaml -f sizing/artifactory-small.yaml --namespace artifactory --create-namespace +``` + +## Uninstalling Artifactory + +Uninstall is supported only on Helm v3 and on. + +Uninstall Artifactory using the following command. + +```bash +helm uninstall artifactory && sleep 90 && kubectl delete pvc -l app=artifactory +``` + +## Deleting Artifactory + +**IMPORTANT:** Deleting Artifactory will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion. You do not need to uninstall Artifactory before deleting it. + +To delete Artifactory use the following command. + +```bash +helm delete artifactory --namespace artifactory +``` diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/.helmignore b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/.helmignore new file mode 100644 index 000000000..f0c131944 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/Chart.lock b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/Chart.lock new file mode 100644 index 000000000..3687f52df --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.4.2 +digest: sha256:dce0349883107e3ff103f4f17d3af4ad1ea3c7993551b1c28865867d3e53d37c +generated: "2021-03-30T09:13:28.360322819Z" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/Chart.yaml new file mode 100644 index 000000000..4b197b207 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/Chart.yaml @@ -0,0 +1,29 @@ +annotations: + category: Database +apiVersion: v2 +appVersion: 11.11.0 +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.x.x +description: Chart for PostgreSQL, an object-relational database management system + (ORDBMS) with an emphasis on extensibility and on standards-compliance. +home: https://github.com/bitnami/charts/tree/master/bitnami/postgresql +icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-220x234.png +keywords: +- postgresql +- postgres +- database +- sql +- replication +- cluster +maintainers: +- email: containers@bitnami.com + name: Bitnami +- email: cedric@desaintmartin.fr + name: desaintmartin +name: postgresql +sources: +- https://github.com/bitnami/bitnami-docker-postgresql +- https://www.postgresql.org/ +version: 10.3.18 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/README.md b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/README.md new file mode 100644 index 000000000..63d3605bb --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/README.md @@ -0,0 +1,770 @@ +# PostgreSQL + +[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance. + +For HA, please see [this repo](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha) + +## TL;DR + +```console +$ helm repo add bitnami https://charts.bitnami.com/bitnami +$ helm install my-release bitnami/postgresql +``` + +## Introduction + +This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. + +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/). 
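+
+As a minimal illustration (assuming the `bitnami` repo added in the TL;DR above; the release name and the `my_*` values are placeholders), an install that overrides a few of the parameters documented below might look like:
+
+```console
+$ helm install my-release bitnami/postgresql \
+    --set postgresqlUsername=my_user \
+    --set postgresqlPassword=my_password \
+    --set postgresqlDatabase=my_database \
+    --set persistence.size=20Gi
+```
+
+The same overrides can also be collected in a YAML file and passed with `-f my-values.yaml` instead of repeating `--set` flags.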
+ +## Prerequisites + +- Kubernetes 1.12+ +- Helm 3.1.0 +- PV provisioner support in the underlying infrastructure + +## Installing the Chart +To install the chart with the release name `my-release`: + +```console +$ helm install my-release bitnami/postgresql +``` + +The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation. + +> **Tip**: List all releases using `helm list` + +## Uninstalling the Chart + +To uninstall/delete the `my-release` deployment: + +```console +$ helm delete my-release +``` + +The command removes all the Kubernetes components but PVC's associated with the chart and deletes the release. + +To delete the PVC's associated with `my-release`: + +```console +$ kubectl delete pvc -l release=my-release +``` + +> **Note**: Deleting the PVC's will delete postgresql data as well. Please be cautious before doing it. + +## Parameters + +The following tables lists the configurable parameters of the PostgreSQL chart and their default values. + +| Parameter | Description | Default | +|-----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------| +| `global.imageRegistry` | Global Docker Image registry | `nil` | +| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` | +| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` | +| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` | +| `global.postgresql.postgresqlPassword` | PostgreSQL admin password (overrides `postgresqlPassword`) | `nil` | +| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` | +| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` | +| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | +| `global.storageClass` | Global storage class for dynamic provisioning | `nil` | +| `image.registry` | PostgreSQL Image registry | `docker.io` | +| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` | +| `image.tag` | PostgreSQL Image tag | `{TAG_NAME}` | +| `image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` | +| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `image.debug` | Specify if debug values should be set | `false` | +| `nameOverride` | String to partially override common.names.fullname template with a string (will prepend the release name) | `nil` | +| `fullnameOverride` | String to fully override common.names.fullname template with a string | `nil` | +| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` | 
+| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` | +| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` | +| `volumePermissions.image.tag` | Init container volume-permissions image tag | `"10"` | +| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` | +| `volumePermissions.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `volumePermissions.securityContext.runAsUser` | User ID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | `0` | +| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` | +| `ldap.enabled` | Enable LDAP support | `false` | +| `ldap.existingSecret` | Name of existing secret to use for LDAP passwords | `nil` | +| `ldap.url` | LDAP URL beginning in the form `ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]]` | `nil` | +| `ldap.server` | IP address or name of the LDAP server. | `nil` | +| `ldap.port` | Port number on the LDAP server to connect to | `nil` | +| `ldap.scheme` | Set to `ldaps` to use LDAPS. | `nil` | +| `ldap.tls` | Set to `1` to use TLS encryption | `nil` | +| `ldap.prefix` | String to prepend to the user name when forming the DN to bind | `nil` | +| `ldap.suffix` | String to append to the user name when forming the DN to bind | `nil` | +| `ldap.search_attr` | Attribute to match against the user name in the search | `nil` | +| `ldap.search_filter` | The search filter to use when doing search+bind authentication | `nil` | +| `ldap.baseDN` | Root DN to begin the search for the user in | `nil` | +| `ldap.bindDN` | DN of user to bind to LDAP | `nil` | +| `ldap.bind_password` | Password for the user to bind to LDAP | `nil` | +| `replication.enabled` | Enable replication | `false` | +| `replication.user` | Replication user | `repl_user` | +| `replication.password` | Replication user password | `repl_password` | +| `replication.readReplicas` | Number of read replicas replicas | `1` | +| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` | +| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.readReplicas`. | `0` | +| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` | +| `existingSecret` | Name of existing secret to use for PostgreSQL passwords. The secret has to contain the keys `postgresql-password` which is the password for `postgresqlUsername` when it is different of `postgres`, `postgresql-postgres-password` which will override `postgresqlPassword`, `postgresql-replication-password` which will override `replication.password` and `postgresql-ldap-password` which will be used to authenticate on LDAP. The value is evaluated as a template. | `nil` | +| `postgresqlPostgresPassword` | PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`, in which case`postgres` is the admin username). 
| _random 10 character alphanumeric string_ | +| `postgresqlUsername` | PostgreSQL user (creates a non-admin user when `postgresqlUsername` is not `postgres`) | `postgres` | +| `postgresqlPassword` | PostgreSQL user password | _random 10 character alphanumeric string_ | +| `postgresqlDatabase` | PostgreSQL database | `nil` | +| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) | +| `extraEnv` | Any extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `[]` | +| `extraEnvVarsCM` | Name of a Config Map containing extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `nil` | +| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` | +| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` | +| `postgresqlConfiguration` | Runtime Config Parameters | `nil` | +| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` | +| `pgHbaConfiguration` | Content of pg_hba.conf | `nil (do not create pg_hba.conf)` | +| `postgresqlSharedPreloadLibraries` | Shared preload libraries (comma-separated list) | `pgaudit` | +| `postgresqlMaxConnections` | Maximum total connections | `nil` | +| `postgresqlPostgresConnectionLimit` | Maximum total connections for the postgres user | `nil` | +| `postgresqlDbUserConnectionLimit` | Maximum total connections for the non-admin user | `nil` | +| `postgresqlTcpKeepalivesInterval` | TCP keepalives interval | `nil` | +| `postgresqlTcpKeepalivesIdle` | TCP keepalives idle | `nil` | +| `postgresqlTcpKeepalivesCount` | TCP keepalives count | `nil` | +| `postgresqlStatementTimeout` | Statement timeout | `nil` | +| `postgresqlPghbaRemoveFilters` | Comma-separated list of patterns to remove from the pg_hba.conf file | `nil` | +| `customStartupProbe` | Override default startup probe | `nil` | +| `customLivenessProbe` | Override default liveness probe | `nil` | +| `customReadinessProbe` | Override default readiness probe | `nil` | +| `audit.logHostname` | Add client hostnames to the log file | `false` | +| `audit.logConnections` | Add client log-in operations to the log file | `false` | +| `audit.logDisconnections` | Add client log-outs operations to the log file | `false` | +| `audit.pgAuditLog` | Add operations to log using the pgAudit extension | `nil` | +| `audit.clientMinMessages` | Message log level to share with the user | `nil` | +| `audit.logLinePrefix` | Template string for the log line prefix | `nil` | +| `audit.logTimezone` | Timezone for the log timestamps | `nil` | +| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` | +| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` | +| `initdbScripts` | Dictionary of initdb scripts | `nil` | +| `initdbUser` | PostgreSQL user to execute the .sql and sql.gz scripts | `nil` | +| `initdbPassword` | Password for the user specified in `initdbUser` | `nil` | +| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` | +| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). 
The value is evaluated as a template. | `nil` | +| `service.type` | Kubernetes Service type | `ClusterIP` | +| `service.port` | PostgreSQL port | `5432` | +| `service.nodePort` | Kubernetes Service nodePort | `nil` | +| `service.annotations` | Annotations for PostgreSQL service | `{}` (evaluated as a template) | +| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` | +| `service.loadBalancerSourceRanges` | Address that are allowed when svc is LoadBalancer | `[]` (evaluated as a template) | +| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` | +| `shmVolume.enabled` | Enable emptyDir volume for /dev/shm for primary and read replica(s) Pod(s) | `true` | +| `shmVolume.chmod.enabled` | Run at init chmod 777 of the /dev/shm (ignored if `volumePermissions.enabled` is `false`) | `true` | +| `persistence.enabled` | Enable persistence using PVC | `true` | +| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` | +| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` | +| `persistence.subPath` | Subdirectory of the volume to mount at | `""` | +| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` | +| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` | +| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` | +| `persistence.annotations` | Annotations for the PVC | `{}` | +| `persistence.selector` | Selector to match an existing Persistent Volume (this value is evaluated as a template) | `{}` | +| `commonAnnotations` | Annotations to be added to all deployed resources (rendered as a template) | `{}` | +| `primary.podAffinityPreset` | PostgreSQL primary pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.podAntiAffinityPreset` | PostgreSQL primary pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `primary.nodeAffinityPreset.type` | PostgreSQL primary node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.nodeAffinityPreset.key` | PostgreSQL primary node label key to match Ignored if `primary.affinity` is set. | `""` | +| `primary.nodeAffinityPreset.values` | PostgreSQL primary node label values to match. Ignored if `primary.affinity` is set. 
| `[]` | +| `primary.affinity` | Affinity for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.nodeSelector` | Node labels for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.tolerations` | Tolerations for PostgreSQL primary pods assignment | `[]` (evaluated as a template) | +| `primary.anotations` | Map of annotations to add to the statefulset (postgresql primary) | `{}` | +| `primary.labels` | Map of labels to add to the statefulset (postgresql primary) | `{}` | +| `primary.podAnnotations` | Map of annotations to add to the pods (postgresql primary) | `{}` | +| `primary.podLabels` | Map of labels to add to the pods (postgresql primary) | `{}` | +| `primary.priorityClassName` | Priority Class to use for each pod (postgresql primary) | `nil` | +| `primary.extraInitContainers` | Additional init containers to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumes` | Additional volumes to add to the pods (postgresql primary) | `[]` | +| `primary.sidecars` | Add additional containers to the pod | `[]` | +| `primary.service.type` | Allows using a different service type for primary | `nil` | +| `primary.service.nodePort` | Allows using a different nodePort for primary | `nil` | +| `primary.service.clusterIP` | Allows using a different clusterIP for primary | `nil` | +| `primaryAsStandBy.enabled` | Whether to enable current cluster's primary as standby server of another cluster or not. | `false` | +| `primaryAsStandBy.primaryHost` | The Host of replication primary in the other cluster. | `nil` | +| `primaryAsStandBy.primaryPort ` | The Port of replication primary in the other cluster. | `nil` | +| `readReplicas.podAffinityPreset` | PostgreSQL read only pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.podAntiAffinityPreset` | PostgreSQL read only pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `readReplicas.nodeAffinityPreset.type` | PostgreSQL read only node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.nodeAffinityPreset.key` | PostgreSQL read only node label key to match Ignored if `primary.affinity` is set. | `""` | +| `readReplicas.nodeAffinityPreset.values` | PostgreSQL read only node label values to match. Ignored if `primary.affinity` is set. | `[]` | +| `readReplicas.affinity` | Affinity for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.nodeSelector` | Node labels for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.anotations` | Map of annotations to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.resources` | CPU/Memory resource requests/limits override for readReplicass. Will fallback to `values.resources` if not defined. 
| `{}` | +| `readReplicas.labels` | Map of labels to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.podAnnotations` | Map of annotations to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.podLabels` | Map of labels to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.priorityClassName` | Priority Class to use for each pod (postgresql readReplicas) | `nil` | +| `readReplicas.extraInitContainers` | Additional init containers to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumes` | Additional volumes to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.sidecars` | Add additional containers to the pod | `[]` | +| `readReplicas.service.type` | Allows using a different service type for readReplicas | `nil` | +| `readReplicas.service.nodePort` | Allows using a different nodePort for readReplicas | `nil` | +| `readReplicas.service.clusterIP` | Allows using a different clusterIP for readReplicas | `nil` | +| `readReplicas.persistence.enabled` | Whether to enable readReplicas replicas persistence | `true` | +| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` | +| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` | +| `securityContext.*` | Other pod security context to be included as-is in the pod spec | `{}` | +| `securityContext.enabled` | Enable security context | `true` | +| `securityContext.fsGroup` | Group ID for the pod | `1001` | +| `containerSecurityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `containerSecurityContext.enabled` | Enable container security context | `true` | +| `containerSecurityContext.runAsUser` | User ID for the container | `1001` | +| `serviceAccount.enabled` | Enable service account (Note: Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` | +| `serviceAccount.name` | Name of existing service account | `nil` | +| `networkPolicy.enabled` | Enable NetworkPolicy | `false` | +| `networkPolicy.allowExternal` | Don't require client label for connections | `true` | +| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which ingress traffic could be allowed | `{}` | +| `startupProbe.enabled` | Enable startupProbe | `false` | +| `startupProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `startupProbe.periodSeconds` | How often to perform the probe | 15 | +| `startupProbe.timeoutSeconds` | When the probe times | 5 | +| `startupProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 10 | +| `startupProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1 | +| `livenessProbe.enabled` | Enable livenessProbe | `true` | +| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. 
| 6 | +| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `readinessProbe.enabled` | Enable readinessProbe | `true` | +| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 | +| `readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `tls.enabled` | Enable TLS traffic support | `false` | +| `tls.preferServerCiphers` | Whether to use the server's TLS cipher preferences rather than the client's | `true` | +| `tls.certificatesSecret` | Name of an existing secret that contains the certificates | `nil` | +| `tls.certFilename` | Certificate filename | `""` | +| `tls.certKeyFilename` | Certificate key filename | `""` | +| `tls.certCAFilename` | CA Certificate filename. If provided, PostgreSQL will authenticate TLS/SSL clients by requesting them a certificate. | `nil` | +| `tls.crlFilename` | File containing a Certificate Revocation List | `nil` | +| `metrics.enabled` | Start a prometheus exporter | `false` | +| `metrics.service.type` | Kubernetes Service type | `ClusterIP` | +| `service.clusterIP` | Static clusterIP or None for headless services | `nil` | +| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` | +| `metrics.service.loadBalancerIP` | loadBalancerIP if redis metrics service type is `LoadBalancer` | `nil` | +| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` | +| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` | +| `metrics.serviceMonitor.namespace` | Optional namespace in which to create ServiceMonitor | `nil` | +| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` | +| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` | +| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` | +| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` | +| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as postgresql | +| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. 
| `[]` | +| `metrics.image.registry` | PostgreSQL Exporter Image registry | `docker.io` | +| `metrics.image.repository` | PostgreSQL Exporter Image name | `bitnami/postgres-exporter` | +| `metrics.image.tag` | PostgreSQL Exporter Image tag | `{TAG_NAME}` | +| `metrics.image.pullPolicy` | PostgreSQL Exporter Image pull policy | `IfNotPresent` | +| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `metrics.customMetrics` | Additional custom metrics | `nil` | +| `metrics.extraEnvVars` | Extra environment variables to add to exporter | `{}` (evaluated as a template) | +| `metrics.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `metrics.securityContext.enabled` | Enable security context for metrics | `false` | +| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` | +| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `metrics.readinessProbe.enabled` | would you like a readinessProbe to be enabled | `true` | +| `metrics.readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 | +| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` | +| `psp.create` | Create Pod Security Policy | `false` | +| `rbac.create` | Create Role and RoleBinding (required for PSP to work) | `false` | +| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template). | `nil` | + +Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, + +```console +$ helm install my-release \ + --set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \ + bitnami/postgresql +``` + +The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`. + +> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available. + +Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. 
For example, + +```console +$ helm install my-release -f values.yaml bitnami/postgresql +``` + +> **Tip**: You can use the default [values.yaml](values.yaml) + +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. + +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. + +### Customizing primary and read replica services in a replicated configuration + +At the top level, there is a service object which defines the services for both primary and readReplicas. For deeper customization, there are service objects for both the primary and read types individually. This allows you to override the values in the top level service object so that the primary and read can be of different service types and with different clusterIPs / nodePorts. Also in the case you want the primary and read to be of type nodePort, you will need to set the nodePorts to different values to prevent a collision. The values that are deeper in the primary.service or readReplicas.service objects will take precedence over the top level service object. + +### Change PostgreSQL version + +To modify the PostgreSQL version used in this chart you can specify a [valid image tag](https://hub.docker.com/r/bitnami/postgresql/tags/) using the `image.tag` parameter. For example, `image.tag=X.Y.Z`. This approach is also applicable to other images like exporters. + +### postgresql.conf / pg_hba.conf files as configMap + +This helm chart also supports to customize the whole configuration file. + +Add your custom file to "files/postgresql.conf" in your working directory. This file will be mounted as configMap to the containers and it will be used for configuring the PostgreSQL server. + +Alternatively, you can add additional PostgreSQL configuration parameters using the `postgresqlExtendedConf` parameter as a dict, using camelCase, e.g. {"sharedBuffers": "500MB"}. Alternatively, to replace the entire default configuration use `postgresqlConfiguration`. + +In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the `configurationConfigMap` parameter. Note that this will override the two previous options. + +### Allow settings to be loaded from files other than the default `postgresql.conf` + +If you don't want to provide the whole PostgreSQL configuration file and only specify certain parameters, you can add your extended `.conf` files to "files/conf.d/" in your working directory. +Those files will be mounted as configMap to the containers adding/overwriting the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`. + +Alternatively, you can also set an external ConfigMap with all the extra configuration files. This is done by setting the `extendedConfConfigMap` parameter. Note that this will override the previous option. + +### Initialize a fresh instance + +The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. 
In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
+
+Alternatively, you can specify custom scripts using the `initdbScripts` parameter as a dict.
+
+In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter.
+
+The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
+
+### Securing traffic using TLS
+
+TLS support can be enabled in the chart by specifying the `tls.` parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the chart:
+
+- `tls.enabled`: Enable TLS support. Defaults to `false`
+- `tls.certificatesSecret`: Name of an existing secret that contains the certificates. No defaults.
+- `tls.certFilename`: Certificate filename. No defaults.
+- `tls.certKeyFilename`: Certificate key filename. No defaults.
+
+For example:
+
+* First, create the secret with the certificate files:
+
+  ```console
+  kubectl create secret generic certificates-tls-secret --from-file=./cert.crt --from-file=./cert.key --from-file=./ca.crt
+  ```
+
+* Then, use the following parameters:
+
+  ```console
+  volumePermissions.enabled=true
+  tls.enabled=true
+  tls.certificatesSecret="certificates-tls-secret"
+  tls.certFilename="cert.crt"
+  tls.certKeyFilename="cert.key"
+  ```
+
+  > Note TLS and VolumePermissions: PostgreSQL requires certain permissions on sensitive files (such as certificate keys) to start up. Due to an ongoing [issue](https://github.com/kubernetes/kubernetes/issues/57923) regarding Kubernetes permissions and the use of `containerSecurityContext.runAsUser`, you must enable `volumePermissions` to ensure everything works as expected.
+
+### Sidecars
+
+If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
+
+```yaml
+# For the PostgreSQL primary
+primary:
+  sidecars:
+    - name: your-image-name
+      image: your-image
+      imagePullPolicy: Always
+      ports:
+        - name: portname
+          containerPort: 1234
+# For the PostgreSQL replicas
+readReplicas:
+  sidecars:
+    - name: your-image-name
+      image: your-image
+      imagePullPolicy: Always
+      ports:
+        - name: portname
+          containerPort: 1234
+```
+
+### Metrics
+
+The chart can optionally start a metrics exporter for [Prometheus](https://prometheus.io). The metrics endpoint (port 9187) is not exposed; it is expected that the metrics are collected from inside the Kubernetes cluster using something similar to the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml).
+
+The exporter allows you to create custom metrics from additional SQL queries. See the Chart's `values.yaml` for an example and consult the [exporters documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details.
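+
+As a rough illustration of that last point, the sketch below enables the exporter, a ServiceMonitor, and one custom metric through a values override. The `pg_database` query, the `release: prometheus` label, and the file name used at install time are assumptions for the example rather than values shipped with this chart; the query definition follows the postgres_exporter custom-queries format linked above.
+
+```yaml
+metrics:
+  enabled: true
+  serviceMonitor:
+    enabled: true
+    additionalLabels:
+      release: prometheus   # assumed label watched by your Prometheus operator
+  customMetrics:
+    pg_database:
+      # One row per database; "name" becomes a label, "size_bytes" a gauge.
+      query: "SELECT datname AS name, pg_database_size(datname) AS size_bytes FROM pg_database WHERE datname NOT IN ('template0', 'template1')"
+      metrics:
+        - name:
+            usage: "LABEL"
+            description: "Name of the database"
+        - size_bytes:
+            usage: "GAUGE"
+            description: "Size of the database in bytes"
+```
+
+Saving this as, say, `metrics-values.yaml` and passing it with `-f metrics-values.yaml` at install or upgrade time should be enough for the exporter to be scraped on port 9187, provided a Prometheus operator is configured to pick up the ServiceMonitor.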
+
+### Use of global variables
+
+In more complex scenarios, we may have the following tree of dependencies:
+
+```
+                      +--------------+
+                      |              |
+         +------------+   Chart 1    +-----------+
+         |            |              |           |
+         |            +------+-------+           |
+         |                   |                   |
+         |                   |                   |
+         |                   |                   |
+         v                   v                   v
++--------+-----+     +-------+-------+   +-------+-------+
+|              |     |               |   |               |
+|  PostgreSQL  |     |  Sub-chart 1  |   |  Sub-chart 2  |
+|              |     |               |   |               |
++--------------+     +---------------+   +---------------+
+```
+
+The three charts below depend on the parent chart Chart 1. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, subcharts 1 and 2 need to know the PostgreSQL credentials, so one option for deploying could be to deploy Chart 1 with the following parameters:
+
+```
+postgresql.postgresqlPassword=testtest
+subchart1.postgresql.postgresqlPassword=testtest
+subchart2.postgresql.postgresqlPassword=testtest
+postgresql.postgresqlDatabase=db1
+subchart1.postgresql.postgresqlDatabase=db1
+subchart2.postgresql.postgresqlDatabase=db1
+```
+
+If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows:
+
+```
+global.postgresql.postgresqlPassword=testtest
+global.postgresql.postgresqlDatabase=db1
+```
+
+This way, the credentials will be available in all of the subcharts.
+
+## Persistence
+
+The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container.
+
+Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
+See the [Parameters](#parameters) section to configure the PVC or to disable persistence.
+
+If the volume already contains data, replication to the standby nodes will fail for all commits; see the [code](https://github.com/bitnami/bitnami-docker-postgresql/blob/8725fe1d7d30ebe8d9a16e9175d05f7ad9260c93/9.6/debian-9/rootfs/libpostgresql.sh#L518-L556) for details. If you need to keep that data, convert it to a SQL dump and import it after `helm install` has finished.
+
+## NetworkPolicy
+
+To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`.
+
+For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace:
+
+```console
+$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
+```
+
+With NetworkPolicy enabled, traffic will be limited to just port 5432.
+
+For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL.
+This label will be displayed in the output of a successful install.
+
+## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image
+
+- The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, it will be ignored. The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA.
All the remaining environment variables are specific to the Bitnami PostgreSQL image.
+- The Bitnami PostgreSQL image is non-root by default. This requires running the pod with a `securityContext` and updating the permissions of the volume with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift.
+- For OpenShift, one may either define the runAsUser and fsGroup accordingly, or try this more dynamic option: volumePermissions.securityContext.runAsUser="auto",securityContext.enabled=false,containerSecurityContext.enabled=false,shmVolume.chmod.enabled=false
+
+### Deploy chart using Docker Official PostgreSQL Image
+
+From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image.
+Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. Basically, the PostgreSQL data dir cannot be the mount point directly; it has to be a subdirectory.
+
+```
+image.repository=postgres
+image.tag=10.6
+postgresqlDataDir=/data/pgdata
+persistence.mountPath=/data/
+```
+
+### Setting Pod's affinity
+
+This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod's affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
+
+As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
+
+## Troubleshooting
+
+Find more information about how to deal with common errors related to Bitnami’s Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
+
+## Upgrading
+
+It's necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid randomly generated passwords. Remember to specify the existing values of the `postgresqlPassword` and `replication.password` parameters when upgrading the chart:
+
+```bash
+$ helm upgrade my-release bitnami/postgresql \
+  --set postgresqlPassword=[POSTGRESQL_PASSWORD] \
+  --set replication.password=[REPLICATION_PASSWORD]
+```
+
+> Note: you need to substitute the placeholders _[POSTGRESQL_PASSWORD]_ and _[REPLICATION_PASSWORD]_ with the values obtained from instructions in the installation notes.
+
+### To 10.0.0
+
+[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
+
+**What changes were introduced in this major version?**
+
+- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3); this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
+- Move dependency information from the *requirements.yaml* to the *Chart.yaml*
+- After running `helm dependency update`, a *Chart.lock* file is generated containing the same structure used in the previous *requirements.lock*
+- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all Bitnami Helm Charts.
+
+**Considerations when upgrading to this version**
+
+- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
+- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
+
+**Useful links**
+
+- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
+- https://helm.sh/docs/topics/v2_v3_migration/
+- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
+
+#### Breaking changes
+
+- The term `master` has been replaced with `primary` and `slave` with `readReplicas` throughout the chart. Role names have changed from `master` and `slave` to `primary` and `read`.
+
+The upgrade to `10.0.0` should be done reusing the PVCs that hold the PostgreSQL data from your previous release. To do so, follow the instructions below (the following example assumes that the release name is `postgresql`):
+
+> NOTE: Please, create a backup of your database before running any of those actions.
+
+Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release:
+
+```console
+$ export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
+$ export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=postgresql,role=master -o jsonpath="{.items[0].metadata.name}")
+```
+
+Delete the PostgreSQL statefulset. Notice the option `--cascade=false`:
+
+```console
+$ kubectl delete statefulsets.apps postgresql-postgresql --cascade=false
+```
+
+Now the upgrade works:
+
+```console
+$ helm upgrade postgresql bitnami/postgresql --set postgresqlPassword=$POSTGRESQL_PASSWORD --set persistence.existingClaim=$POSTGRESQL_PVC
+```
+
+You will have to delete the existing PostgreSQL pod; the new statefulset will create a new one:
+
+```console
+$ kubectl delete pod postgresql-postgresql-0
+```
+
+Finally, you should see the lines below in PostgreSQL container logs:
+
+```console
+$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
+...
+postgresql 08:05:12.59 INFO ==> Deploying PostgreSQL with persisted data...
+...
+```
+
+### To 9.0.0
+
+In this version the chart was adapted to follow the Helm label best practices, see [PR 3021](https://github.com/bitnami/charts/pull/3021). That means backward compatibility is not guaranteed when upgrading the chart to this major version.
+
+As a workaround, you can delete the existing statefulset (using the `--cascade=false` flag, pods are not deleted) before upgrading the chart. 
For example, this can be a valid workflow: + +- Deploy an old version (8.X.X) + +```console +$ helm install postgresql bitnami/postgresql --version 8.10.14 +``` + +- Old version is up and running + +```console +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 1 2020-08-04 13:39:54.783480286 +0000 UTC deployed postgresql-8.10.14 11.8.0 + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 76s +``` + +- The upgrade to the latest one (9.X.X) is going to fail + +```console +$ helm upgrade postgresql bitnami/postgresql +Error: UPGRADE FAILED: cannot patch "postgresql-postgresql" with kind StatefulSet: StatefulSet.apps "postgresql-postgresql" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden +``` + +- Delete the statefulset + +```console +$ kubectl delete statefulsets.apps --cascade=false postgresql-postgresql +statefulset.apps "postgresql-postgresql" deleted +``` + +- Now the upgrade works + +```console +$ helm upgrade postgresql bitnami/postgresql +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 3 2020-08-04 13:42:08.020385884 +0000 UTC deployed postgresql-9.1.2 11.8.0 +``` + +- We can kill the existing pod and the new statefulset is going to create a new one: + +```console +$ kubectl delete pod postgresql-postgresql-0 +pod "postgresql-postgresql-0" deleted + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 19s +``` + +Please, note that without the `--cascade=false` both objects (statefulset and pod) are going to be removed and both objects will be deployed again with the `helm upgrade` command + +### To 8.0.0 + +Prefixes the port names with their protocols to comply with Istio conventions. + +If you depend on the port names in your setup, make sure to update them to reflect this change. + +### To 7.1.0 + +Adds support for LDAP configuration. + +### To 7.0.0 + +Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment). Also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly which has since been fixed to match the spec. + +In https://github.com/helm/charts/pull/17281 the `apiVersion` of the statefulset resources was updated to `apps/v1` in tune with the api's deprecated, resulting in compatibility breakage. + +This major version bump signifies this change. + +### To 6.5.7 + +In this version, the chart will use PostgreSQL with the Postgis extension included. The version used with Postgresql version 10, 11 and 12 is Postgis 2.5. It has been compiled with the following dependencies: + +- protobuf +- protobuf-c +- json-c +- geos +- proj + +### To 5.0.0 + +In this version, the **chart is using PostgreSQL 11 instead of PostgreSQL 10**. You can find the main difference and notable changes in the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/). 
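+
+Before attempting this upgrade, it can help to confirm which PostgreSQL image, and therefore which major version, your existing release is actually running. A minimal check, assuming the release is named `postgresql` as in the examples above (adjust the statefulset name to match your own release), could be:
+
+```console
+$ kubectl get statefulset postgresql-postgresql -o jsonpath='{.spec.template.spec.containers[0].image}'
+```
+
+If the reported image tag is still a `10.x` version, plan for a dump-and-restore as described below.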
+ +For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades, you can see some errors like the following one in the logs: + +```console +Welcome to the Bitnami postgresql container +Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql +Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues +Send us your feedback at containers@bitnami.com + +INFO ==> ** Starting PostgreSQL setup ** +NFO ==> Validating settings in POSTGRESQL_* env vars.. +INFO ==> Initializing PostgreSQL database... +INFO ==> postgresql.conf file not detected. Generating it... +INFO ==> pg_hba.conf file not detected. Generating it... +INFO ==> Deploying PostgreSQL with persisted data... +INFO ==> Configuring replication parameters +INFO ==> Loading custom scripts... +INFO ==> Enabling remote connections +INFO ==> Stopping PostgreSQL... +INFO ==> ** PostgreSQL setup finished! ** + +INFO ==> ** Starting PostgreSQL ** + [1] FATAL: database files are incompatible with server + [1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3. +``` + +In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) from the official documentation. Basically, create a database dump in the old chart, move and restore it in the new one. + +### To 4.0.0 + +This chart will use by default the Bitnami PostgreSQL container starting from version `10.7.0-r68`. This version moves the initialization logic from node.js to bash. This new version of the chart requires setting the `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users from previous versions of the chart are advised to upgrade immediately. + +IMPORTANT: If you do not want to upgrade the chart version then make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error + +``` +The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development +``` + +### To 3.0.0 + +This releases make it possible to specify different nodeSelector, affinity and tolerations for master and slave pods. +It also fixes an issue with `postgresql.master.fullname` helper template not obeying fullnameOverride. + +#### Breaking changes + +- `affinty` has been renamed to `master.affinity` and `slave.affinity`. +- `tolerations` has been renamed to `master.tolerations` and `slave.tolerations`. +- `nodeSelector` has been renamed to `master.nodeSelector` and `slave.nodeSelector`. + +### To 2.0.0 + +In order to upgrade from the `0.X.X` branch to `1.X.X`, you should follow the below steps: + +- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing postgresql chart. 
You can find the instructions to obtain the password in the NOTES.txt, the service name can be obtained by running + +```console +$ kubectl get svc +``` + +- Install (not upgrade) the new version + +```console +$ helm repo update +$ helm install my-release bitnami/postgresql +``` + +- Connect to the new pod (you can obtain the name by running `kubectl get pods`): + +```console +$ kubectl exec -it NAME bash +``` + +- Once logged in, create a dump file from the previous database using `pg_dump`, for that we should connect to the previous postgresql chart: + +```console +$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql +``` + +After run above command you should be prompted for a password, this password is the previous chart password (`OLD_PASSWORD`). +This operation could take some time depending on the database size. + +- Once you have the backup file, you can restore it with a command like the one below: + +```console +$ psql -U postgres DATABASE_NAME < /tmp/backup.sql +``` + +In this case, you are accessing to the local postgresql, so the password should be the new one (you can find it in NOTES.txt). + +If you want to restore the database and the database schema does not exist, it is necessary to first follow the steps described below. + +```console +$ psql -U postgres +postgres=# drop database DATABASE_NAME; +postgres=# create database DATABASE_NAME; +postgres=# create user USER_NAME; +postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD'; +postgres=# grant all privileges on database DATABASE_NAME to USER_NAME; +postgres=# alter database DATABASE_NAME owner to USER_NAME; +``` diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/.helmignore b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/.helmignore new file mode 100644 index 000000000..50af03172 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/.helmignore @@ -0,0 +1,22 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/Chart.yaml new file mode 100644 index 000000000..bcc3808d0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/Chart.yaml @@ -0,0 +1,23 @@ +annotations: + category: Infrastructure +apiVersion: v2 +appVersion: 1.4.2 +description: A Library Helm Chart for grouping common logic between bitnami charts. + This chart is not deployable by itself. 
+home: https://github.com/bitnami/charts/tree/master/bitnami/common +icon: https://bitnami.com/downloads/logos/bitnami-mark.png +keywords: +- common +- helper +- template +- function +- bitnami +maintainers: +- email: containers@bitnami.com + name: Bitnami +name: common +sources: +- https://github.com/bitnami/charts +- http://www.bitnami.com/ +type: library +version: 1.4.2 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/README.md b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/README.md new file mode 100644 index 000000000..7287cbb5f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/README.md @@ -0,0 +1,322 @@ +# Bitnami Common Library Chart + +A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between bitnami charts. + +## TL;DR + +```yaml +dependencies: + - name: common + version: 0.x.x + repository: https://charts.bitnami.com/bitnami +``` + +```bash +$ helm dependency update +``` + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "common.names.fullname" . }} +data: + myvalue: "Hello World" +``` + +## Introduction + +This chart provides a common template helpers which can be used to develop new charts using [Helm](https://helm.sh) package manager. + +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications. + +## Prerequisites + +- Kubernetes 1.12+ +- Helm 3.1.0 + +## Parameters + +The following table lists the helpers available in the library which are scoped in different sections. + +### Affinities + +| Helper identifier | Description | Expected Input | +|-------------------------------|------------------------------------------------------|------------------------------------------------| +| `common.affinities.node.soft` | Return a soft nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` | +| `common.affinities.node.hard` | Return a hard nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` | +| `common.affinities.pod.soft` | Return a soft podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` | +| `common.affinities.pod.hard` | Return a hard podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` | + +### Capabilities + +| Helper identifier | Description | Expected Input | +|----------------------------------------------|------------------------------------------------------------------------------------------------|-------------------| +| `common.capabilities.kubeVersion` | Return the target Kubernetes version (using client default if .Values.kubeVersion is not set). | `.` Chart context | +| `common.capabilities.deployment.apiVersion` | Return the appropriate apiVersion for deployment. | `.` Chart context | +| `common.capabilities.statefulset.apiVersion` | Return the appropriate apiVersion for statefulset. | `.` Chart context | +| `common.capabilities.ingress.apiVersion` | Return the appropriate apiVersion for ingress. | `.` Chart context | +| `common.capabilities.rbac.apiVersion` | Return the appropriate apiVersion for RBAC resources. 
| `.` Chart context | +| `common.capabilities.crd.apiVersion` | Return the appropriate apiVersion for CRDs. | `.` Chart context | +| `common.capabilities.supportsHelmVersion` | Returns true if the used Helm version is 3.3+ | `.` Chart context | + +### Errors + +| Helper identifier | Description | Expected Input | +|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| +| `common.errors.upgrade.passwords.empty` | It will ensure required passwords are given when we are upgrading a chart. If `validationErrors` is not empty it will throw an error and will stop the upgrade action. | `dict "validationErrors" (list $validationError00 $validationError01) "context" $` | + +### Images + +| Helper identifier | Description | Expected Input | +|-----------------------------|------------------------------------------------------|---------------------------------------------------------------------------------------------------------| +| `common.images.image` | Return the proper and full image name | `dict "imageRoot" .Values.path.to.the.image "global" $`, see [ImageRoot](#imageroot) for the structure. | +| `common.images.pullSecrets` | Return the proper Docker Image Registry Secret Names | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global` | + +### Ingress + +| Helper identifier | Description | Expected Input | +|--------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.ingress.backend` | Generate a proper Ingress backend entry depending on the API version | `dict "serviceName" "foo" "servicePort" "bar"`, see the [Ingress deprecation notice](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for the syntax differences | + +### Labels + +| Helper identifier | Description | Expected Input | +|-----------------------------|------------------------------------------------------|-------------------| +| `common.labels.standard` | Return Kubernetes standard labels | `.` Chart context | +| `common.labels.matchLabels` | Return the proper Docker Image Registry Secret Names | `.` Chart context | + +### Names + +| Helper identifier | Description | Expected Inpput | +|-------------------------|------------------------------------------------------------|-------------------| +| `common.names.name` | Expand the name of the chart or use `.Values.nameOverride` | `.` Chart context | +| `common.names.fullname` | Create a default fully qualified app name. | `.` Chart context | +| `common.names.chart` | Chart name plus version | `.` Chart context | + +### Secrets + +| Helper identifier | Description | Expected Input | +|---------------------------|--------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.secrets.name` | Generate the name of the secret. 
| `dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $` see [ExistingSecret](#existingsecret) for the structure. | +| `common.secrets.key` | Generate secret key. | `dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName"` see [ExistingSecret](#existingsecret) for the structure. | +| `common.passwords.manage` | Generate secret password or retrieve one if already created. | `dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $`, length, strong and chartNAme fields are optional. | +| `common.secrets.exists` | Returns whether a previous generated secret already exists. | `dict "secret" "secret-name" "context" $` | + +### Storage + +| Helper identifier | Description | Expected Input | +|-------------------------------|---------------------------------------|---------------------------------------------------------------------------------------------------------------------| +| `common.affinities.node.soft` | Return a soft nodeAffinity definition | `dict "persistence" .Values.path.to.the.persistence "global" $`, see [Persistence](#persistence) for the structure. | + +### TplValues + +| Helper identifier | Description | Expected Input | +|---------------------------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.tplvalues.render` | Renders a value that contains template | `dict "value" .Values.path.to.the.Value "context" $`, value is the value should rendered as template, context frequently is the chart context `$` or `.` | + +### Utils + +| Helper identifier | Description | Expected Input | +|--------------------------------|------------------------------------------------------------------------------------------|------------------------------------------------------------------------| +| `common.utils.fieldToEnvVar` | Build environment variable name given a field. | `dict "field" "my-password"` | +| `common.utils.secret.getvalue` | Print instructions to get a secret value. | `dict "secret" "secret-name" "field" "secret-value-field" "context" $` | +| `common.utils.getValueFromKey` | Gets a value from `.Values` object given its key path | `dict "key" "path.to.key" "context" $` | +| `common.utils.getKeyFromList` | Returns first `.Values` key with a defined value or first of the list if all non-defined | `dict "keys" (list "path.to.key1" "path.to.key2") "context" $` | + +### Validations + +| Helper identifier | Description | Expected Input | +|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.validations.values.single.empty` | Validate a value must not be empty. | `dict "valueKey" "path.to.value" "secret" "secret.name" "field" "my-password" "subchart" "subchart" "context" $` secret, field and subchart are optional. In case they are given, the helper will generate a how to get instruction. 
See [ValidateValue](#validatevalue) | +| `common.validations.values.multiple.empty` | Validate a multiple values must not be empty. It returns a shared error for all the values. | `dict "required" (list $validateValueConf00 $validateValueConf01) "context" $`. See [ValidateValue](#validatevalue) | +| `common.validations.values.mariadb.passwords` | This helper will ensure required password for MariaDB are not empty. It returns a shared error for all the values. | `dict "secret" "mariadb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mariadb chart and the helper. | +| `common.validations.values.postgresql.passwords` | This helper will ensure required password for PostgreSQL are not empty. It returns a shared error for all the values. | `dict "secret" "postgresql-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use postgresql chart and the helper. | +| `common.validations.values.redis.passwords` | This helper will ensure required password for RedisTM are not empty. It returns a shared error for all the values. | `dict "secret" "redis-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use redis chart and the helper. | +| `common.validations.values.cassandra.passwords` | This helper will ensure required password for Cassandra are not empty. It returns a shared error for all the values. | `dict "secret" "cassandra-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use cassandra chart and the helper. | +| `common.validations.values.mongodb.passwords` | This helper will ensure required password for MongoDB® are not empty. It returns a shared error for all the values. | `dict "secret" "mongodb-secret" "subchart" "true" "context" $` subchart field is optional and could be true or false it depends on where you will use mongodb chart and the helper. | + +### Warnings + +| Helper identifier | Description | Expected Input | +|------------------------------|----------------------------------|------------------------------------------------------------| +| `common.warnings.rollingTag` | Warning about using rolling tag. | `ImageRoot` see [ImageRoot](#imageroot) for the structure. | + +## Special input schemas + +### ImageRoot + +```yaml +registry: + type: string + description: Docker registry where the image is located + example: docker.io + +repository: + type: string + description: Repository and image name + example: bitnami/nginx + +tag: + type: string + description: image tag + example: 1.16.1-debian-10-r63 + +pullPolicy: + type: string + description: Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + +pullSecrets: + type: array + items: + type: string + description: Optionally specify an array of imagePullSecrets. + +debug: + type: boolean + description: Set to true if you would like to see extra information on logs + example: false + +## An instance would be: +# registry: docker.io +# repository: bitnami/nginx +# tag: 1.16.1-debian-10-r63 +# pullPolicy: IfNotPresent +# debug: false +``` + +### Persistence + +```yaml +enabled: + type: boolean + description: Whether enable persistence. + example: true + +storageClass: + type: string + description: Ghost data Persistent Volume Storage Class, If set to "-", storageClassName: "" which disables dynamic provisioning. 
+ example: "-" + +accessMode: + type: string + description: Access mode for the Persistent Volume Storage. + example: ReadWriteOnce + +size: + type: string + description: Size the Persistent Volume Storage. + example: 8Gi + +path: + type: string + description: Path to be persisted. + example: /bitnami + +## An instance would be: +# enabled: true +# storageClass: "-" +# accessMode: ReadWriteOnce +# size: 8Gi +# path: /bitnami +``` + +### ExistingSecret + +```yaml +name: + type: string + description: Name of the existing secret. + example: mySecret +keyMapping: + description: Mapping between the expected key name and the name of the key in the existing secret. + type: object + +## An instance would be: +# name: mySecret +# keyMapping: +# password: myPasswordKey +``` + +#### Example of use + +When we store sensitive data for a deployment in a secret, some times we want to give to users the possibility of using theirs existing secrets. + +```yaml +# templates/secret.yaml +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "common.names.fullname" . }} + labels: + app: {{ include "common.names.fullname" . }} +type: Opaque +data: + password: {{ .Values.password | b64enc | quote }} + +# templates/dpl.yaml +--- +... + env: + - name: PASSWORD + valueFrom: + secretKeyRef: + name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }} + key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }} +... + +# values.yaml +--- +name: mySecret +keyMapping: + password: myPasswordKey +``` + +### ValidateValue + +#### NOTES.txt + +```console +{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}} +{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}} + +{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }} +``` + +If we force those values to be empty we will see some alerts + +```console +$ helm install test mychart --set path.to.value00="",path.to.value01="" + 'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value: + + export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 --decode) + + 'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value: + + export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 --decode) +``` + +## Upgrading + +### To 1.0.0 + +[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project), this major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL. + +**What changes were introduced in this major version?** + +- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field. +- Use `type: library`. [Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information. 
+- The different fields present in the *Chart.yaml* file has been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts + +**Considerations when upgrading to this version** + +- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues +- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore +- If you installed the previous version with Helm v2 and wants to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3 + +**Useful links** + +- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/ +- https://helm.sh/docs/topics/v2_v3_migration/ +- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/ diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_affinities.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_affinities.tpl new file mode 100644 index 000000000..493a6dc7e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_affinities.tpl @@ -0,0 +1,94 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Return a soft nodeAffinity definition +{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes.soft" -}} +preferredDuringSchedulingIgnoredDuringExecution: + - preference: + matchExpressions: + - key: {{ .key }} + operator: In + values: + {{- range .values }} + - {{ . }} + {{- end }} + weight: 1 +{{- end -}} + +{{/* +Return a hard nodeAffinity definition +{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes.hard" -}} +requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: {{ .key }} + operator: In + values: + {{- range .values }} + - {{ . }} + {{- end }} +{{- end -}} + +{{/* +Return a nodeAffinity definition +{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.nodes.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.nodes.hard" . 
-}} + {{- end -}} +{{- end -}} + +{{/* +Return a soft podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.soft" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.soft" -}} +{{- $component := default "" .component -}} +preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ .context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname + weight: 1 +{{- end -}} + +{{/* +Return a hard podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.hard" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.hard" -}} +{{- $component := default "" .component -}} +requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ .context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname +{{- end -}} + +{{/* +Return a podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.pods" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.pods.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.pods.hard" . -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_capabilities.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_capabilities.tpl new file mode 100644 index 000000000..4dde56a38 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_capabilities.tpl @@ -0,0 +1,95 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Return the target Kubernetes version +*/}} +{{- define "common.capabilities.kubeVersion" -}} +{{- if .Values.global }} + {{- if .Values.global.kubeVersion }} + {{- .Values.global.kubeVersion -}} + {{- else }} + {{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} + {{- end -}} +{{- else }} +{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for deployment. +*/}} +{{- define "common.capabilities.deployment.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for statefulset. +*/}} +{{- define "common.capabilities.statefulset.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "apps/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for ingress. +*/}} +{{- define "common.capabilities.ingress.apiVersion" -}} +{{- if .Values.ingress -}} +{{- if .Values.ingress.apiVersion -}} +{{- .Values.ingress.apiVersion -}} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) 
-}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end }} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for RBAC resources. +*/}} +{{- define "common.capabilities.rbac.apiVersion" -}} +{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "rbac.authorization.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "rbac.authorization.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for CRDs. +*/}} +{{- define "common.capabilities.crd.apiVersion" -}} +{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "apiextensions.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "apiextensions.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Returns true if the used Helm version is 3.3+. +A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure. +This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in a "interface not found" error. +**To be removed when the catalog's minimun Helm version is 3.3** +*/}} +{{- define "common.capabilities.supportsHelmVersion" -}} +{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }} + {{- true -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_errors.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_errors.tpl new file mode 100644 index 000000000..a79cc2e32 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_errors.tpl @@ -0,0 +1,23 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Through error when upgrading using empty passwords values that must not be empty. + +Usage: +{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}} +{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}} +{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }} + +Required password params: + - validationErrors - String - Required. List of validation strings to be return, if it is empty it won't throw error. + - context - Context - Required. Parent context. +*/}} +{{- define "common.errors.upgrade.passwords.empty" -}} + {{- $validationErrors := join "" .validationErrors -}} + {{- if and $validationErrors .context.Release.IsUpgrade -}} + {{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release." -}} + {{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims." 
-}} + {{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}} + {{- $errorString = print $errorString "\n%s" -}} + {{- printf $errorString $validationErrors | fail -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_images.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_images.tpl new file mode 100644 index 000000000..60f04fd6e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_images.tpl @@ -0,0 +1,47 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper image name +{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }} +*/}} +{{- define "common.images.image" -}} +{{- $registryName := .imageRoot.registry -}} +{{- $repositoryName := .imageRoot.repository -}} +{{- $tag := .imageRoot.tag | toString -}} +{{- if .global }} + {{- if .global.imageRegistry }} + {{- $registryName = .global.imageRegistry -}} + {{- end -}} +{{- end -}} +{{- if $registryName }} +{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- else -}} +{{- printf "%s:%s" $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }} +*/}} +{{- define "common.images.pullSecrets" -}} + {{- $pullSecrets := list }} + + {{- if .global }} + {{- range .global.imagePullSecrets -}} + {{- $pullSecrets = append $pullSecrets . -}} + {{- end -}} + {{- end -}} + + {{- range .images -}} + {{- range .pullSecrets -}} + {{- $pullSecrets = append $pullSecrets . -}} + {{- end -}} + {{- end -}} + + {{- if (not (empty $pullSecrets)) }} +imagePullSecrets: + {{- range $pullSecrets }} + - name: {{ . }} + {{- end }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_ingress.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_ingress.tpl new file mode 100644 index 000000000..622ef50e3 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_ingress.tpl @@ -0,0 +1,42 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Generate backend entry that is compatible with all Kubernetes API versions. + +Usage: +{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }} + +Params: + - serviceName - String. Name of an existing service backend + - servicePort - String/Int. Port name (or number) of the service. It will be translated to different yaml depending if it is a string or an integer. + - context - Dict - Required. The context for the template evaluation. 
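+ +Example (illustrative only; "my-svc" and port 8080 are placeholder values, not defined by this chart): +{{ include "common.ingress.backend" (dict "serviceName" "my-svc" "servicePort" 8080 "context" $) }} +On extensions/v1beta1 and networking.k8s.io/v1beta1 clusters this renders roughly "serviceName: my-svc" and "servicePort: 8080"; +on networking.k8s.io/v1 clusters it renders a "service:" block with "name: my-svc" and "port: number: 8080", because 8080 is an integer.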
+*/}} +{{- define "common.ingress.backend" -}} +{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}} +{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}} +serviceName: {{ .serviceName }} +servicePort: {{ .servicePort }} +{{- else -}} +service: + name: {{ .serviceName }} + port: + {{- if typeIs "string" .servicePort }} + name: {{ .servicePort }} + {{- else if typeIs "int" .servicePort }} + number: {{ .servicePort }} + {{- end }} +{{- end -}} +{{- end -}} + +{{/* +Print "true" if the API pathType field is supported +Usage: +{{ include "common.ingress.supportsPathType" . }} +*/}} +{{- define "common.ingress.supportsPathType" -}} +{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}} +{{- print "false" -}} +{{- else -}} +{{- print "true" -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_labels.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_labels.tpl new file mode 100644 index 000000000..252066c7e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_labels.tpl @@ -0,0 +1,18 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Kubernetes standard labels +*/}} +{{- define "common.labels.standard" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +helm.sh/chart: {{ include "common.names.chart" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end -}} + +{{/* +Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector +*/}} +{{- define "common.labels.matchLabels" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_names.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_names.tpl new file mode 100644 index 000000000..adf2a74f4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_names.tpl @@ -0,0 +1,32 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "common.names.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "common.names.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
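+ +For example (illustrative names): with release "my-release" and chart "mychart" this resolves to "my-release-mychart", +while with release "mychart-prod" it resolves to "mychart-prod" because the release name already contains the chart name; a non-empty .Values.fullnameOverride always takes precedence.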
+*/}} +{{- define "common.names.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_secrets.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_secrets.tpl new file mode 100644 index 000000000..60b84a701 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_secrets.tpl @@ -0,0 +1,129 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Generate secret name. + +Usage: +{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment. + - context - Dict - Required. The context for the template evaluation. +*/}} +{{- define "common.secrets.name" -}} +{{- $name := (include "common.names.fullname" .context) -}} + +{{- if .defaultNameSuffix -}} +{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{- with .existingSecret -}} +{{- if not (typeIs "string" .) -}} +{{- with .name -}} +{{- $name = . -}} +{{- end -}} +{{- else -}} +{{- $name = . -}} +{{- end -}} +{{- end -}} + +{{- printf "%s" $name -}} +{{- end -}} + +{{/* +Generate secret key. + +Usage: +{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - key - String - Required. Name of the key in the secret. +*/}} +{{- define "common.secrets.key" -}} +{{- $key := .key -}} + +{{- if .existingSecret -}} + {{- if not (typeIs "string" .existingSecret) -}} + {{- if .existingSecret.keyMapping -}} + {{- $key = index .existingSecret.keyMapping $.key -}} + {{- end -}} + {{- end }} +{{- end -}} + +{{- printf "%s" $key -}} +{{- end -}} + +{{/* +Generate secret password or retrieve one if already created. + +Usage: +{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - key - String - Required - Name of the key in the secret. + - providedValues - List - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". 
Will pick first parameter with a defined value. + - length - int - Optional - Length of the generated random password. + - strong - Boolean - Optional - Whether to add symbols to the generated random password. + - chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart. + - context - Context - Required - Parent context. +*/}} +{{- define "common.secrets.passwords.manage" -}} + +{{- $password := "" }} +{{- $subchart := "" }} +{{- $chartName := default "" .chartName }} +{{- $passwordLength := default 10 .length }} +{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }} +{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- if index $secret.data .key }} + {{- $password = index $secret.data .key }} + {{- end -}} +{{- else if $providedPasswordValue }} + {{- $password = $providedPasswordValue | toString | b64enc | quote }} +{{- else }} + + {{- if .context.Values.enabled }} + {{- $subchart = $chartName }} + {{- end -}} + + {{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}} + {{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}} + {{- $passwordValidationErrors := list $requiredPasswordError -}} + {{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}} + + {{- if .strong }} + {{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }} + {{- $password = randAscii $passwordLength }} + {{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }} + {{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }} + {{- else }} + {{- $password = randAlphaNum $passwordLength | b64enc | quote }} + {{- end }} +{{- end -}} +{{- printf "%s" $password -}} +{{- end -}} + +{{/* +Returns whether a previous generated secret already exists + +Usage: +{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - context - Context - Required - Parent context. 
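+ +A minimal usage sketch (illustrative; "my-secret" is an assumed Secret name): +{{- if include "common.secrets.exists" (dict "secret" "my-secret" "context" $) }} +(reuse the credentials from the existing Secret instead of generating new ones) +{{- end }}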
+*/}} +{{- define "common.secrets.exists" -}} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- true -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_storage.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_storage.tpl new file mode 100644 index 000000000..60e2a844f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_storage.tpl @@ -0,0 +1,23 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper Storage Class +{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }} +*/}} +{{- define "common.storage.class" -}} + +{{- $storageClass := .persistence.storageClass -}} +{{- if .global -}} + {{- if .global.storageClass -}} + {{- $storageClass = .global.storageClass -}} + {{- end -}} +{{- end -}} + +{{- if $storageClass -}} + {{- if (eq "-" $storageClass) -}} + {{- printf "storageClassName: \"\"" -}} + {{- else }} + {{- printf "storageClassName: %s" $storageClass -}} + {{- end -}} +{{- end -}} + +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_tplvalues.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_tplvalues.tpl new file mode 100644 index 000000000..2db166851 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_tplvalues.tpl @@ -0,0 +1,13 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Renders a value that contains template. +Usage: +{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }} +*/}} +{{- define "common.tplvalues.render" -}} + {{- if typeIs "string" .value }} + {{- tpl .value .context }} + {{- else }} + {{- tpl (.value | toYaml) .context }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_utils.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_utils.tpl new file mode 100644 index 000000000..ea083a249 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_utils.tpl @@ -0,0 +1,62 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Print instructions to get a secret value. +Usage: +{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }} +*/}} +{{- define "common.utils.secret.getvalue" -}} +{{- $varname := include "common.utils.fieldToEnvVar" . -}} +export {{ $varname }}=$(kubectl get secret --namespace {{ .context.Release.Namespace | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 --decode) +{{- end -}} + +{{/* +Build env var name given a field +Usage: +{{ include "common.utils.fieldToEnvVar" dict "field" "my-password" }} +*/}} +{{- define "common.utils.fieldToEnvVar" -}} + {{- $fieldNameSplit := splitList "-" .field -}} + {{- $upperCaseFieldNameSplit := list -}} + + {{- range $fieldNameSplit -}} + {{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper . 
) -}} + {{- end -}} + + {{ join "_" $upperCaseFieldNameSplit }} +{{- end -}} + +{{/* +Gets a value from .Values given +Usage: +{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }} +*/}} +{{- define "common.utils.getValueFromKey" -}} +{{- $splitKey := splitList "." .key -}} +{{- $value := "" -}} +{{- $latestObj := $.context.Values -}} +{{- range $splitKey -}} + {{- if not $latestObj -}} + {{- printf "please review the entire path of '%s' exists in values" $.key | fail -}} + {{- end -}} + {{- $value = ( index $latestObj . ) -}} + {{- $latestObj = $value -}} +{{- end -}} +{{- printf "%v" (default "" $value) -}} +{{- end -}} + +{{/* +Returns first .Values key with a defined value or first of the list if all non-defined +Usage: +{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }} +*/}} +{{- define "common.utils.getKeyFromList" -}} +{{- $key := first .keys -}} +{{- $reverseKeys := reverse .keys }} +{{- range $reverseKeys }} + {{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }} + {{- if $value -}} + {{- $key = . }} + {{- end -}} +{{- end -}} +{{- printf "%s" $key -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_warnings.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_warnings.tpl new file mode 100644 index 000000000..ae10fa41e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/_warnings.tpl @@ -0,0 +1,14 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Warning about using rolling tag. +Usage: +{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }} +*/}} +{{- define "common.warnings.rollingTag" -}} + +{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }} +WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment. ++info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/ +{{- end }} + +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_cassandra.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_cassandra.tpl new file mode 100644 index 000000000..8679ddffb --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_cassandra.tpl @@ -0,0 +1,72 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Cassandra required passwords are not empty. + +Usage: +{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret" + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.cassandra.passwords" -}} + {{- $existingSecret := include "common.cassandra.values.existingSecret" . -}} + {{- $enabled := include "common.cassandra.values.enabled" . -}} + {{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . 
-}} + {{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.cassandra.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.cassandra.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.cassandra.dbUser.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.dbUser.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled cassandra. + +Usage: +{{ include "common.cassandra.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.cassandra.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.cassandra.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key dbUser + +Usage: +{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.cassandra.values.key.dbUser" -}} + {{- if .subchart -}} + cassandra.dbUser + {{- else -}} + dbUser + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mariadb.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mariadb.tpl new file mode 100644 index 000000000..bb5ed7253 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mariadb.tpl @@ -0,0 +1,103 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MariaDB required passwords are not empty. + +Usage: +{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret" + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.mariadb.passwords" -}} + {{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mariadb.values.enabled" . -}} + {{- $architecture := include "common.mariadb.values.architecture" . -}} + {{- $authPrefix := include "common.mariadb.values.key.auth" . 
-}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- if not (empty $valueUsername) -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replication") -}} + {{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mariadb. + +Usage: +{{ include "common.mariadb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mariadb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mariadb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. 
Default: false +*/}} +{{- define "common.mariadb.values.key.auth" -}} + {{- if .subchart -}} + mariadb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mongodb.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mongodb.tpl new file mode 100644 index 000000000..7d5ecbccb --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mongodb.tpl @@ -0,0 +1,108 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MongoDB(R) required passwords are not empty. + +Usage: +{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where MongoDB(R) values are stored, e.g: "mongodb-passwords-secret" + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.mongodb.passwords" -}} + {{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mongodb.values.enabled" . -}} + {{- $authPrefix := include "common.mongodb.values.key.auth" . -}} + {{- $architecture := include "common.mongodb.values.architecture" . -}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyDatabase := printf "%s.database" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}} + {{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}} + + {{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}} + + {{- if and (not $existingSecret) (eq $enabled "true") (eq $authEnabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }} + {{- if and $valueUsername $valueDatabase -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replicaset") -}} + {{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" .secret "field" "mongodb-replica-set-key" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDb is used as subchart or not. 
Default: false +*/}} +{{- define "common.mongodb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mongodb. + +Usage: +{{ include "common.mongodb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mongodb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mongodb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.mongodb.values.key.auth" -}} + {{- if .subchart -}} + mongodb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mongodb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_postgresql.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_postgresql.tpl new file mode 100644 index 000000000..992bcd390 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_postgresql.tpl @@ -0,0 +1,131 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate PostgreSQL required passwords are not empty. + +Usage: +{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret" + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.postgresql.passwords" -}} + {{- $existingSecret := include "common.postgresql.values.existingSecret" . -}} + {{- $enabled := include "common.postgresql.values.enabled" . -}} + {{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}} + {{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}} + + {{- $enabledReplication := include "common.postgresql.values.enabled.replication" . 
-}} + {{- if (eq $enabledReplication "true") -}} + {{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to decide whether evaluate global values. + +Usage: +{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }} +Params: + - key - String - Required. Field to be evaluated within global, e.g: "existingSecret" +*/}} +{{- define "common.postgresql.values.use.global" -}} + {{- if .context.Values.global -}} + {{- if .context.Values.global.postgresql -}} + {{- index .context.Values.global.postgresql .key | quote -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.postgresql.values.existingSecret" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.existingSecret" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}} + + {{- if .subchart -}} + {{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}} + {{- else -}} + {{- default (.context.Values.existingSecret | quote) $globalValue -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled postgresql. + +Usage: +{{ include "common.postgresql.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key postgressPassword. + +Usage: +{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.key.postgressPassword" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}} + + {{- if not $globalValue -}} + {{- if .subchart -}} + postgresql.postgresqlPassword + {{- else -}} + postgresqlPassword + {{- end -}} + {{- else -}} + global.postgresql.postgresqlPassword + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled.replication. + +Usage: +{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.enabled.replication" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.replication.enabled -}} + {{- else -}} + {{- printf "%v" .context.Values.replication.enabled -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key replication.password. + +Usage: +{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. 
Default: false +*/}} +{{- define "common.postgresql.values.key.replicationPassword" -}} + {{- if .subchart -}} + postgresql.replication.password + {{- else -}} + replication.password + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_redis.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_redis.tpl new file mode 100644 index 000000000..3e2a47c03 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_redis.tpl @@ -0,0 +1,72 @@ + +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Redis(TM) required passwords are not empty. + +Usage: +{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret" + - subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.redis.passwords" -}} + {{- $existingSecret := include "common.redis.values.existingSecret" . -}} + {{- $enabled := include "common.redis.values.enabled" . -}} + {{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . -}} + {{- $valueKeyRedisPassword := printf "%s%s" $valueKeyPrefix "password" -}} + {{- $valueKeyRedisUsePassword := printf "%s%s" $valueKeyPrefix "usePassword" -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $usePassword := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUsePassword "context" .context) -}} + {{- if eq $usePassword "true" -}} + {{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Redis Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.redis.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Redis(TM) is used as subchart or not. Default: false +*/}} +{{- define "common.redis.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.redis.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled redis. + +Usage: +{{ include "common.redis.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.redis.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.redis.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right prefix path for the values + +Usage: +{{ include "common.redis.values.key.prefix" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether redis is used as subchart or not. 
Default: false +*/}} +{{- define "common.redis.values.keys.prefix" -}} + {{- if .subchart -}}redis.{{- else -}}{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_validations.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_validations.tpl new file mode 100644 index 000000000..9a814cf40 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/templates/validations/_validations.tpl @@ -0,0 +1,46 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate values must not be empty. + +Usage: +{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}} +{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}} +{{ include "common.validations.values.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }} + +Validate value params: + - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" +*/}} +{{- define "common.validations.values.multiple.empty" -}} + {{- range .required -}} + {{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}} + {{- end -}} +{{- end -}} + +{{/* +Validate a value must not be empty. + +Usage: +{{ include "common.validations.value.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }} + +Validate value params: + - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" + - subchart - String - Optional - Name of the subchart that the validated password is part of. +*/}} +{{- define "common.validations.values.single.empty" -}} + {{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }} + {{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }} + + {{- if not $value -}} + {{- $varname := "my-value" -}} + {{- $getCurrentValue := "" -}} + {{- if and .secret .field -}} + {{- $varname = include "common.utils.fieldToEnvVar" . -}} + {{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}} + {{- end -}} + {{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/values.yaml new file mode 100644 index 000000000..9ecdc93f5 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/charts/common/values.yaml @@ -0,0 +1,3 @@ +## bitnami/common +## It is required by CI/CD tools and processes. 
+exampleValue: common-chart diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/commonAnnotations.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/commonAnnotations.yaml new file mode 100644 index 000000000..97e18a4cc --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/commonAnnotations.yaml @@ -0,0 +1,3 @@ +commonAnnotations: + helm.sh/hook: "\"pre-install, pre-upgrade\"" + helm.sh/hook-weight: "-1" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/default-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/default-values.yaml new file mode 100644 index 000000000..fc2ba605a --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/default-values.yaml @@ -0,0 +1 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/shmvolume-disabled-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/shmvolume-disabled-values.yaml new file mode 100644 index 000000000..347d3b40a --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/ci/shmvolume-disabled-values.yaml @@ -0,0 +1,2 @@ +shmVolume: + enabled: false diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/README.md b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/README.md new file mode 100644 index 000000000..1813a2fea --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/README.md @@ -0,0 +1 @@ +Copy here your postgresql.conf and/or pg_hba.conf files to use it as a config map. diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/conf.d/README.md b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/conf.d/README.md new file mode 100644 index 000000000..184c1875d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/conf.d/README.md @@ -0,0 +1,4 @@ +If you don't want to provide the whole configuration file and only specify certain parameters, you can copy here your extended `.conf` files. +These files will be injected as a config maps and add/overwrite the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`. + +More info in the [bitnami-docker-postgresql README](https://github.com/bitnami/bitnami-docker-postgresql#configuration-file). diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/docker-entrypoint-initdb.d/README.md b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/docker-entrypoint-initdb.d/README.md new file mode 100644 index 000000000..cba38091e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/files/docker-entrypoint-initdb.d/README.md @@ -0,0 +1,3 @@ +You can copy here your custom `.sh`, `.sql` or `.sql.gz` file so they are executed during the first boot of the image. + +More info in the [bitnami-docker-postgresql](https://github.com/bitnami/bitnami-docker-postgresql#initializing-a-new-instance) repository. 
\ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/NOTES.txt b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/NOTES.txt new file mode 100644 index 000000000..4e98958c1 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/NOTES.txt @@ -0,0 +1,59 @@ +** Please be patient while the chart is being deployed ** + +PostgreSQL can be accessed via port {{ template "postgresql.port" . }} on the following DNS name from within your cluster: + + {{ template "common.names.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - Read/Write connection +{{- if .Values.replication.enabled }} + {{ template "common.names.fullname" . }}-read.{{ .Release.Namespace }}.svc.cluster.local - Read only connection +{{- end }} + +{{- if not (eq (include "postgresql.username" .) "postgres") }} + +To get the password for "postgres" run: + + export POSTGRES_ADMIN_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-postgres-password}" | base64 --decode) +{{- end }} + +To get the password for "{{ template "postgresql.username" . }}" run: + + export POSTGRES_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-password}" | base64 --decode) + +To connect to your database run the following command: + + kubectl run {{ template "common.names.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image {{ template "postgresql.image" . }} --env="PGPASSWORD=$POSTGRES_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} + --labels="{{ template "common.names.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "common.names.fullname" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }} + +{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} +Note: Since NetworkPolicy is enabled, only pods with label {{ template "common.names.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster. +{{- end }} + +To connect to your database from outside the cluster execute the following commands: + +{{- if contains "NodePort" .Values.service.type }} + + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "common.names.fullname" . }}) + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "LoadBalancer" .Values.service.type }} + + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "common.names.fullname" . }}' + + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . 
}} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $SERVICE_IP --port {{ template "postgresql.port" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "ClusterIP" .Values.service.type }} + + kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "common.names.fullname" . }} {{ template "postgresql.port" . }}:{{ template "postgresql.port" . }} & + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }} + +{{- end }} + +{{- include "postgresql.validateValues" . -}} + +{{- include "common.warnings.rollingTag" .Values.image -}} + +{{- $passwordValidationErrors := include "common.validations.values.postgresql.passwords" (dict "secret" (include "common.names.fullname" .) "context" $) -}} + +{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $passwordValidationErrors) "context" $) -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/_helpers.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/_helpers.tpl new file mode 100644 index 000000000..1f98efe78 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/_helpers.tpl @@ -0,0 +1,337 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Expand the name of the chart. +*/}} +{{- define "postgresql.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+*/}} +{{- define "postgresql.primary.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- $fullname := default (printf "%s-%s" .Release.Name $name) .Values.fullnameOverride -}} +{{- if .Values.replication.enabled -}} +{{- printf "%s-%s" $fullname "primary" | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s" $fullname | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper PostgreSQL image name +*/}} +{{- define "postgresql.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper PostgreSQL metrics image name +*/}} +{{- define "postgresql.metrics.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.metrics.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper image name (for the init container volume-permissions image) +*/}} +{{- define "postgresql.volumePermissions.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.volumePermissions.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +*/}} +{{- define "postgresql.imagePullSecrets" -}} +{{ include "common.images.pullSecrets" (dict "images" (list .Values.image .Values.metrics.image .Values.volumePermissions.image) "global" .Values.global) }} +{{- end -}} + +{{/* +Return PostgreSQL postgres user password +*/}} +{{- define "postgresql.postgres.password" -}} +{{- if .Values.global.postgresql.postgresqlPostgresPassword }} + {{- .Values.global.postgresql.postgresqlPostgresPassword -}} +{{- else if .Values.postgresqlPostgresPassword -}} + {{- .Values.postgresqlPostgresPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL password +*/}} +{{- define "postgresql.password" -}} +{{- if .Values.global.postgresql.postgresqlPassword }} + {{- .Values.global.postgresql.postgresqlPassword -}} +{{- else if .Values.postgresqlPassword -}} + {{- .Values.postgresqlPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication password +*/}} +{{- define "postgresql.replication.password" -}} +{{- if .Values.global.postgresql.replicationPassword }} + {{- .Values.global.postgresql.replicationPassword -}} +{{- else if .Values.replication.password -}} + {{- .Values.replication.password -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL username +*/}} +{{- define "postgresql.username" -}} +{{- if .Values.global.postgresql.postgresqlUsername }} + {{- .Values.global.postgresql.postgresqlUsername -}} +{{- else -}} + {{- .Values.postgresqlUsername -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication username +*/}} +{{- define "postgresql.replication.username" -}} +{{- if .Values.global.postgresql.replicationUser }} + {{- .Values.global.postgresql.replicationUser -}} +{{- else -}} + {{- .Values.replication.user -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL port +*/}} +{{- define "postgresql.port" -}} +{{- if .Values.global.postgresql.servicePort }} + {{- .Values.global.postgresql.servicePort -}} +{{- else -}} + {{- .Values.service.port -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL created database +*/}} +{{- define "postgresql.database" -}} +{{- if .Values.global.postgresql.postgresqlDatabase }} + {{- .Values.global.postgresql.postgresqlDatabase -}} +{{- else if .Values.postgresqlDatabase -}} + {{- 
.Values.postgresqlDatabase -}} +{{- end -}} +{{- end -}} + +{{/* +Get the password secret. +*/}} +{{- define "postgresql.secretName" -}} +{{- if .Values.global.postgresql.existingSecret }} + {{- printf "%s" (tpl .Values.global.postgresql.existingSecret $) -}} +{{- else if .Values.existingSecret -}} + {{- printf "%s" (tpl .Values.existingSecret $) -}} +{{- else -}} + {{- printf "%s" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if we should use an existingSecret. +*/}} +{{- define "postgresql.useExistingSecret" -}} +{{- if or .Values.global.postgresql.existingSecret .Values.existingSecret -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a secret object should be created +*/}} +{{- define "postgresql.createSecret" -}} +{{- if not (include "postgresql.useExistingSecret" .) -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the configuration ConfigMap name. +*/}} +{{- define "postgresql.configurationCM" -}} +{{- if .Values.configurationConfigMap -}} +{{- printf "%s" (tpl .Values.configurationConfigMap $) -}} +{{- else -}} +{{- printf "%s-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the extended configuration ConfigMap name. +*/}} +{{- define "postgresql.extendedConfigurationCM" -}} +{{- if .Values.extendedConfConfigMap -}} +{{- printf "%s" (tpl .Values.extendedConfConfigMap $) -}} +{{- else -}} +{{- printf "%s-extended-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a configmap should be mounted with PostgreSQL configuration +*/}} +{{- define "postgresql.mountConfigurationCM" -}} +{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts ConfigMap name. +*/}} +{{- define "postgresql.initdbScriptsCM" -}} +{{- if .Values.initdbScriptsConfigMap -}} +{{- printf "%s" (tpl .Values.initdbScriptsConfigMap $) -}} +{{- else -}} +{{- printf "%s-init-scripts" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts Secret name. +*/}} +{{- define "postgresql.initdbScriptsSecret" -}} +{{- printf "%s" (tpl .Values.initdbScriptsSecret $) -}} +{{- end -}} + +{{/* +Get the metrics ConfigMap name. +*/}} +{{- define "postgresql.metricsCM" -}} +{{- printf "%s-metrics" (include "common.names.fullname" .) -}} +{{- end -}} + +{{/* +Get the readiness probe command +*/}} +{{- define "postgresql.readinessProbeCommand" -}} +- | +{{- if (include "postgresql.database" .) }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} +{{- else }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . 
}} +{{- end }} +{{- if contains "bitnami/" .Values.image.repository }} + [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] +{{- end -}} +{{- end -}} + +{{/* +Compile all warnings into a single message, and call fail. +*/}} +{{- define "postgresql.validateValues" -}} +{{- $messages := list -}} +{{- $messages := append $messages (include "postgresql.validateValues.ldapConfigurationMethod" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.psp" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.tls" .) -}} +{{- $messages := without $messages "" -}} +{{- $message := join "\n" $messages -}} + +{{- if $message -}} +{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}} +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If ldap.url is used then you don't need the other settings for ldap +*/}} +{{- define "postgresql.validateValues.ldapConfigurationMethod" -}} +{{- if and .Values.ldap.enabled (and (not (empty .Values.ldap.url)) (not (empty .Values.ldap.server))) }} +postgresql: ldap.url, ldap.server + You cannot set both `ldap.url` and `ldap.server` at the same time. + Please provide a unique way to configure LDAP. + More info at https://www.postgresql.org/docs/current/auth-ldap.html +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If PSP is enabled RBAC should be enabled too +*/}} +{{- define "postgresql.validateValues.psp" -}} +{{- if and .Values.psp.create (not .Values.rbac.create) }} +postgresql: psp.create, rbac.create + RBAC should be enabled if PSP is enabled in order for PSP to work. + More info at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for podsecuritypolicy. +*/}} +{{- define "podsecuritypolicy.apiVersion" -}} +{{- if semverCompare "<1.10-0" .Capabilities.KubeVersion.GitVersion -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "policy/v1beta1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for networkpolicy. +*/}} +{{- define "postgresql.networkPolicy.apiVersion" -}} +{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}} +"extensions/v1beta1" +{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion -}} +"networking.k8s.io/v1" +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql TLS - When TLS is enabled, so must be VolumePermissions +*/}} +{{- define "postgresql.validateValues.tls" -}} +{{- if and .Values.tls.enabled (not .Values.volumePermissions.enabled) }} +postgresql: tls.enabled, volumePermissions.enabled + When TLS is enabled you must enable volumePermissions as well to ensure certificate files have + the right permissions. +{{- end -}} +{{- end -}} + +{{/* +Return the path to the cert file. +*/}} +{{- define "postgresql.tlsCert" -}} +{{- required "Certificate filename is required when TLS is enabled" .Values.tls.certFilename | printf "/opt/bitnami/postgresql/certs/%s" -}} +{{- end -}} + +{{/* +Return the path to the cert key file. +*/}} +{{- define "postgresql.tlsCertKey" -}} +{{- required "Certificate Key filename is required when TLS is enabled" .Values.tls.certKeyFilename | printf "/opt/bitnami/postgresql/certs/%s" -}} +{{- end -}} + +{{/* +Return the path to the CA cert file.
+*/}} +{{- define "postgresql.tlsCACert" -}} +{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.certCAFilename -}} +{{- end -}} + +{{/* +Return the path to the CRL file. +*/}} +{{- define "postgresql.tlsCRL" -}} +{{- if .Values.tls.crlFilename -}} +{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.crlFilename -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/configmap.yaml new file mode 100644 index 000000000..3a5ea18ae --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/configmap.yaml @@ -0,0 +1,31 @@ +{{ if and (or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration) (not .Values.configurationConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- if (.Files.Glob "files/postgresql.conf") }} +{{ (.Files.Glob "files/postgresql.conf").AsConfig | indent 2 }} +{{- else if .Values.postgresqlConfiguration }} + postgresql.conf: | +{{- range $key, $value := default dict .Values.postgresqlConfiguration }} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- if (.Files.Glob "files/pg_hba.conf") }} +{{ (.Files.Glob "files/pg_hba.conf").AsConfig | indent 2 }} +{{- else if .Values.pgHbaConfiguration }} + pg_hba.conf: | +{{ .Values.pgHbaConfiguration | indent 4 }} +{{- end }} +{{ end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/extended-config-configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/extended-config-configmap.yaml new file mode 100644 index 000000000..b0dad253b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/extended-config-configmap.yaml @@ -0,0 +1,26 @@ +{{- if and (or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf) (not .Values.extendedConfConfigMap)}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-extended-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- with .Files.Glob "files/conf.d/*.conf" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{ with .Values.postgresqlExtendedConf }} + override.conf: | +{{- range $key, $value := . 
}} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/extra-list.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/extra-list.yaml new file mode 100644 index 000000000..9ac65f9e1 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/extra-list.yaml @@ -0,0 +1,4 @@ +{{- range .Values.extraDeploy }} +--- +{{ include "common.tplvalues.render" (dict "value" . "context" $) }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/initialization-configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/initialization-configmap.yaml new file mode 100644 index 000000000..7796c67a9 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/initialization-configmap.yaml @@ -0,0 +1,25 @@ +{{- if and (or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScripts) (not .Values.initdbScriptsConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-init-scripts + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz" }} +binaryData: +{{- range $path, $bytes := . }} + {{ base $path }}: {{ $.Files.Get $path | b64enc | quote }} +{{- end }} +{{- end }} +data: +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{- with .Values.initdbScripts }} +{{ toYaml . | indent 2 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/metrics-configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/metrics-configmap.yaml new file mode 100644 index 000000000..fa539582b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/metrics-configmap.yaml @@ -0,0 +1,14 @@ +{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "postgresql.metricsCM" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: + custom-metrics.yaml: {{ toYaml .Values.metrics.customMetrics | quote }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/metrics-svc.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/metrics-svc.yaml new file mode 100644 index 000000000..af8b67e2f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/metrics-svc.yaml @@ -0,0 +1,26 @@ +{{- if .Values.metrics.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-metrics + labels: + {{- include "common.labels.standard" . | nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- toYaml .Values.metrics.service.annotations | nindent 4 }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ .Values.metrics.service.type }} + {{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }} + loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }} + {{- end }} + ports: + - name: http-metrics + port: 9187 + targetPort: http-metrics + selector: + {{- include "common.labels.matchLabels" . | nindent 4 }} + role: primary +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/networkpolicy.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/networkpolicy.yaml new file mode 100644 index 000000000..4f2740ea0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/networkpolicy.yaml @@ -0,0 +1,39 @@ +{{- if .Values.networkPolicy.enabled }} +kind: NetworkPolicy +apiVersion: {{ template "postgresql.networkPolicy.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + ingress: + # Allow inbound connections + - ports: + - port: {{ template "postgresql.port" . }} + {{- if not .Values.networkPolicy.allowExternal }} + from: + - podSelector: + matchLabels: + {{ template "common.names.fullname" . }}-client: "true" + {{- if .Values.networkPolicy.explicitNamespacesSelector }} + namespaceSelector: +{{ toYaml .Values.networkPolicy.explicitNamespacesSelector | indent 12 }} + {{- end }} + - podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . 
| nindent 14 }} + role: read + {{- end }} + {{- if .Values.metrics.enabled }} + # Allow prometheus scrapes + - ports: + - port: 9187 + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/podsecuritypolicy.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/podsecuritypolicy.yaml new file mode 100644 index 000000000..0c49694fa --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/podsecuritypolicy.yaml @@ -0,0 +1,38 @@ +{{- if .Values.psp.create }} +apiVersion: {{ include "podsecuritypolicy.apiVersion" . }} +kind: PodSecurityPolicy +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + privileged: false + volumes: + - 'configMap' + - 'secret' + - 'persistentVolumeClaim' + - 'emptyDir' + - 'projected' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/prometheusrule.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/prometheusrule.yaml new file mode 100644 index 000000000..d0f408c78 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/prometheusrule.yaml @@ -0,0 +1,23 @@ +{{- if and .Values.metrics.enabled .Values.metrics.prometheusRule.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: PrometheusRule +metadata: + name: {{ template "common.names.fullname" . }} +{{- with .Values.metrics.prometheusRule.namespace }} + namespace: {{ . }} +{{- end }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- with .Values.metrics.prometheusRule.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} +spec: +{{- with .Values.metrics.prometheusRule.rules }} + groups: + - name: {{ template "postgresql.name" $ }} + rules: {{ tpl (toYaml .) $ | nindent 8 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/role.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/role.yaml new file mode 100644 index 000000000..017a5716b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/role.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: Role +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +rules: + {{- if .Values.psp.create }} + - apiGroups: ["extensions"] + resources: ["podsecuritypolicies"] + verbs: ["use"] + resourceNames: + - {{ template "common.names.fullname" . }} + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/rolebinding.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/rolebinding.yaml new file mode 100644 index 000000000..189775a15 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/rolebinding.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: RoleBinding +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: {{ template "common.names.fullname" . }} + apiGroup: rbac.authorization.k8s.io +subjects: + - kind: ServiceAccount + name: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/secrets.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/secrets.yaml new file mode 100644 index 000000000..d492cd593 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/secrets.yaml @@ -0,0 +1,24 @@ +{{- if (include "postgresql.createSecret" .) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +type: Opaque +data: + {{- if not (eq (include "postgresql.username" .) "postgres") }} + postgresql-postgres-password: {{ include "postgresql.postgres.password" . | b64enc | quote }} + {{- end }} + postgresql-password: {{ include "postgresql.password" . | b64enc | quote }} + {{- if .Values.replication.enabled }} + postgresql-replication-password: {{ include "postgresql.replication.password" . 
| b64enc | quote }} + {{- end }} + {{- if (and .Values.ldap.enabled .Values.ldap.bind_password)}} + postgresql-ldap-password: {{ .Values.ldap.bind_password | b64enc | quote }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/serviceaccount.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/serviceaccount.yaml new file mode 100644 index 000000000..03f0f50e7 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/serviceaccount.yaml @@ -0,0 +1,12 @@ +{{- if and (.Values.serviceAccount.enabled) (not .Values.serviceAccount.name) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "common.labels.standard" . | nindent 4 }} + name: {{ template "common.names.fullname" . }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/servicemonitor.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/servicemonitor.yaml new file mode 100644 index 000000000..587ce85b8 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/servicemonitor.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "common.names.fullname" . }} + {{- if .Values.metrics.serviceMonitor.namespace }} + namespace: {{ .Values.metrics.serviceMonitor.namespace }} + {{- end }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.metrics.serviceMonitor.additionalLabels }} + {{- toYaml .Values.metrics.serviceMonitor.additionalLabels | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + +spec: + endpoints: + - port: http-metrics + {{- if .Values.metrics.serviceMonitor.interval }} + interval: {{ .Values.metrics.serviceMonitor.interval }} + {{- end }} + {{- if .Values.metrics.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout }} + {{- end }} + namespaceSelector: + matchNames: + - {{ .Release.Namespace }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/statefulset-readreplicas.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/statefulset-readreplicas.yaml new file mode 100644 index 000000000..b038299bf --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/statefulset-readreplicas.yaml @@ -0,0 +1,411 @@ +{{- if .Values.replication.enabled }} +{{- $readReplicasResources := coalesce .Values.readReplicas.resources .Values.resources -}} +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: "{{ template "common.names.fullname" . }}-read" + labels: {{- include "common.labels.standard" . 
| nindent 4 }} + app.kubernetes.io/component: read +{{- with .Values.readReplicas.labels }} +{{ toYaml . | indent 4 }} +{{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.readReplicas.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . }}-headless + replicas: {{ .Values.replication.readReplicas }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: read + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 8 }} + app.kubernetes.io/component: read + role: read +{{- with .Values.readReplicas.podLabels }} +{{ toYaml . | indent 8 }} +{{- end }} +{{- with .Values.readReplicas.podAnnotations }} + annotations: +{{ toYaml . | indent 8 }} +{{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . | indent 6 }} + {{- if .Values.readReplicas.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAffinityPreset "component" "read" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAntiAffinityPreset "component" "read" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.readReplicas.nodeAffinityPreset.type "key" .Values.readReplicas.nodeAffinityPreset.key "values" .Values.readReplicas.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.readReplicas.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.readReplicas.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name}} + {{- end }} + {{- if or .Values.readReplicas.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . 
}} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) }} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{ if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.readReplicas.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.readReplicas.priorityClassName }} + priorityClassName: {{ .Values.readReplicas.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . 
}} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if $readReplicasResources }} + resources: {{- toYaml $readReplicasResources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . }}" + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + - name: POSTGRES_REPLICATION_MODE + value: "slave" + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + - name: POSTGRES_MASTER_HOST + value: {{ template "common.names.fullname" . }} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ include "postgresql.port" . | quote }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-postgres-password + {{- end }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . 
}} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ .Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . 
}} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . | nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{ end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.readReplicas.extraVolumeMounts }} + {{- toYaml .Values.readReplicas.extraVolumeMounts | nindent 12 }} + {{- end }} +{{- if .Values.readReplicas.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.sidecars "context" $ ) | nindent 8 }} +{{- end }} + volumes: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . }} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . 
}} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} + {{- if or (not .Values.persistence.enabled) (not .Values.readReplicas.persistence.enabled) }} + - name: data + emptyDir: {} + {{- end }} + {{- if .Values.readReplicas.extraVolumes }} + {{- toYaml .Values.readReplicas.extraVolumes | nindent 8 }} + {{- end }} + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} +{{- if and .Values.persistence.enabled .Values.readReplicas.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . | quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/statefulset.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/statefulset.yaml new file mode 100644 index 000000000..f8163fd99 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/statefulset.yaml @@ -0,0 +1,609 @@ +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: {{ template "postgresql.primary.fullname" . }} + labels: {{- include "common.labels.standard" . | nindent 4 }} + app.kubernetes.io/component: primary + {{- with .Values.primary.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.primary.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . }}-headless + replicas: 1 + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: primary + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 8 }} + role: primary + app.kubernetes.io/component: primary + {{- with .Values.primary.podLabels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.primary.podAnnotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . 
| indent 6 }} + {{- if .Values.primary.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.primary.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAffinityPreset "component" "primary" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAntiAffinityPreset "component" "primary" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.primary.nodeAffinityPreset.type "key" .Values.primary.nodeAffinityPreset.key "values" .Values.primary.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.primary.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.primary.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.primary.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.primary.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name }} + {{- end }} + {{- if or .Values.primary.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . }} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) 
}} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.primary.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.primary.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.primary.priorityClassName }} + priorityClassName: {{ .Values.primary.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . 
}}" + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + {{- if .Values.postgresqlInitdbArgs }} + - name: POSTGRES_INITDB_ARGS + value: {{ .Values.postgresqlInitdbArgs | quote }} + {{- end }} + {{- if .Values.postgresqlInitdbWalDir }} + - name: POSTGRES_INITDB_WALDIR + value: {{ .Values.postgresqlInitdbWalDir | quote }} + {{- end }} + {{- if .Values.initdbUser }} + - name: POSTGRESQL_INITSCRIPTS_USERNAME + value: {{ .Values.initdbUser }} + {{- end }} + {{- if .Values.initdbPassword }} + - name: POSTGRESQL_INITSCRIPTS_PASSWORD + value: {{ .Values.initdbPassword }} + {{- end }} + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + {{- if .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_MASTER_HOST + value: {{ .Values.primaryAsStandBy.primaryHost }} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ .Values.primaryAsStandBy.primaryPort | quote }} + {{- end }} + {{- if or .Values.replication.enabled .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_REPLICATION_MODE + {{- if .Values.primaryAsStandBy.enabled }} + value: "slave" + {{- else }} + value: "master" + {{- end }} + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + {{- if not (eq .Values.replication.synchronousCommit "off")}} + - name: POSTGRES_SYNCHRONOUS_COMMIT_MODE + value: {{ .Values.replication.synchronousCommit | quote }} + - name: POSTGRES_NUM_SYNCHRONOUS_REPLICAS + value: {{ .Values.replication.numSynchronousReplicas | quote }} + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + {{- end }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-postgres-password + {{- end }} + {{- end }} + - name: POSTGRES_USER + value: {{ include "postgresql.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + {{- if (include "postgresql.database" .) }} + - name: POSTGRES_DB + value: {{ (include "postgresql.database" .) 
| quote }} + {{- end }} + {{- if .Values.extraEnv }} + {{- include "common.tplvalues.render" (dict "value" .Values.extraEnv "context" $) | nindent 12 }} + {{- end }} + - name: POSTGRESQL_ENABLE_LDAP + value: {{ ternary "yes" "no" .Values.ldap.enabled | quote }} + {{- if .Values.ldap.enabled }} + - name: POSTGRESQL_LDAP_SERVER + value: {{ .Values.ldap.server }} + - name: POSTGRESQL_LDAP_PORT + value: {{ .Values.ldap.port | quote }} + - name: POSTGRESQL_LDAP_SCHEME + value: {{ .Values.ldap.scheme }} + {{- if .Values.ldap.tls }} + - name: POSTGRESQL_LDAP_TLS + value: "1" + {{- end }} + - name: POSTGRESQL_LDAP_PREFIX + value: {{ .Values.ldap.prefix | quote }} + - name: POSTGRESQL_LDAP_SUFFIX + value: {{ .Values.ldap.suffix | quote }} + - name: POSTGRESQL_LDAP_BASE_DN + value: {{ .Values.ldap.baseDN }} + - name: POSTGRESQL_LDAP_BIND_DN + value: {{ .Values.ldap.bindDN }} + {{- if (not (empty .Values.ldap.bind_password)) }} + - name: POSTGRESQL_LDAP_BIND_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-ldap-password + {{- end}} + - name: POSTGRESQL_LDAP_SEARCH_ATTR + value: {{ .Values.ldap.search_attr }} + - name: POSTGRESQL_LDAP_SEARCH_FILTER + value: {{ .Values.ldap.search_filter }} + - name: POSTGRESQL_LDAP_URL + value: {{ .Values.ldap.url }} + {{- end}} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . 
}} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ .Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + {{- if .Values.extraEnvVarsCM }} + envFrom: + - configMapRef: + name: {{ tpl .Values.extraEnvVarsCM . }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.startupProbe.enabled }} + startupProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . 
}} + {{- end }} + initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.startupProbe.periodSeconds }} + timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds }} + successThreshold: {{ .Values.startupProbe.successThreshold }} + failureThreshold: {{ .Values.startupProbe.failureThreshold }} + {{- else if .Values.customStartupProbe }} + startupProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customStartupProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . 
| nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + mountPath: /docker-entrypoint-initdb.d/ + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + mountPath: /docker-entrypoint-initdb.d/secret + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.primary.extraVolumeMounts }} + {{- toYaml .Values.primary.extraVolumeMounts | nindent 12 }} + {{- end }} +{{- if .Values.primary.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.primary.sidecars "context" $ ) | nindent 8 }} +{{- end }} +{{- if .Values.metrics.enabled }} + - name: metrics + image: {{ template "postgresql.metrics.image" . }} + imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }} + {{- if .Values.metrics.securityContext.enabled }} + securityContext: {{- omit .Values.metrics.securityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + {{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase)" (include "postgresql.database" .) }} + {{- $sslmode := ternary "require" "disable" .Values.tls.enabled }} + {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} + - name: DATA_SOURCE_NAME + value: {{ printf "host=127.0.0.1 port=%d user=%s sslmode=%s sslcert=%s sslkey=%s" (int (include "postgresql.port" .)) (include "postgresql.username" .) $sslmode (include "postgresql.tlsCert" .) (include "postgresql.tlsCertKey" .) }} + {{- else }} + - name: DATA_SOURCE_URI + value: {{ printf "127.0.0.1:%d/%s?sslmode=%s" (int (include "postgresql.port" .)) $database $sslmode }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: DATA_SOURCE_PASS_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: DATA_SOURCE_PASS + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . 
}} + key: postgresql-password + {{- end }} + - name: DATA_SOURCE_USER + value: {{ template "postgresql.username" . }} + {{- if .Values.metrics.extraEnvVars }} + {{- include "common.tplvalues.render" (dict "value" .Values.metrics.extraEnvVars "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.metrics.customMetrics }} + - name: custom-metrics + mountPath: /conf + readOnly: true + args: ["--extend.query-path", "/conf/custom-metrics.yaml"] + {{- end }} + ports: + - name: http-metrics + containerPort: 9187 + {{- if .Values.metrics.resources }} + resources: {{- toYaml .Values.metrics.resources | nindent 12 }} + {{- end }} +{{- end }} + volumes: + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . }} + {{- end }} + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + configMap: + name: {{ template "postgresql.initdbScriptsCM" . }} + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + secret: + secretName: {{ template "postgresql.initdbScriptsSecret" . }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.primary.extraVolumes }} + {{- toYaml .Values.primary.extraVolumes | nindent 8 }} + {{- end }} + {{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} + - name: custom-metrics + configMap: + name: {{ template "postgresql.metricsCM" . 
}} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} +{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }} + - name: data + persistentVolumeClaim: +{{- with .Values.persistence.existingClaim }} + claimName: {{ tpl . $ }} +{{- end }} +{{- else if not .Values.persistence.enabled }} + - name: data + emptyDir: {} +{{- else if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . | quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc-headless.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc-headless.yaml new file mode 100644 index 000000000..6f5f3b9ee --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc-headless.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-headless + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + # Use this annotation in addition to the actual publishNotReadyAddresses + # field below because the annotation will stop being respected soon but the + # field is broken in some versions of Kubernetes: + # https://github.com/kubernetes/kubernetes/issues/58662 + service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" + namespace: {{ .Release.Namespace }} +spec: + type: ClusterIP + clusterIP: None + # We want all pods in the StatefulSet to have their addresses published for + # the sake of the other Postgresql pods even before they're ready, since they + # have to be able to talk to each other in order to become ready. + publishNotReadyAddresses: true + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc-read.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc-read.yaml new file mode 100644 index 000000000..56195ea1e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc-read.yaml @@ -0,0 +1,43 @@ +{{- if .Values.replication.enabled }} +{{- $serviceAnnotations := coalesce .Values.readReplicas.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.readReplicas.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.readReplicas.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.readReplicas.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.readReplicas.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.readReplicas.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-read + labels: + {{- include "common.labels.standard" . | nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} + role: read +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc.yaml new file mode 100644 index 000000000..a29431b6a --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/templates/svc.yaml @@ -0,0 +1,41 @@ +{{- $serviceAnnotations := coalesce .Values.primary.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.primary.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.primary.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.primary.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.primary.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.primary.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} + role: primary diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/values.schema.json b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/values.schema.json new file mode 100644 index 000000000..66a2a9dd0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/values.schema.json @@ -0,0 +1,103 @@ +{ + "$schema": "http://json-schema.org/schema#", + "type": "object", + "properties": { + "postgresqlUsername": { + "type": "string", + "title": "Admin user", + "form": true + }, + "postgresqlPassword": { + "type": "string", + "title": "Password", + "form": true + }, + "persistence": { + "type": "object", + "properties": { + "size": { + "type": "string", + "title": "Persistent Volume Size", + "form": true, + "render": "slider", + "sliderMin": 1, + "sliderMax": 100, + "sliderUnit": "Gi" + } + } + }, + "resources": { + "type": "object", + "title": "Required Resources", + "description": "Configure resource requests", + "form": true, + "properties": { + "requests": { + "type": "object", + "properties": { + "memory": { + "type": "string", + "form": true, + "render": "slider", + "title": "Memory Request", + "sliderMin": 10, + "sliderMax": 2048, + "sliderUnit": "Mi" + }, + "cpu": { + "type": "string", + "form": true, + "render": "slider", + "title": "CPU Request", + "sliderMin": 10, + "sliderMax": 2000, + "sliderUnit": "m" + } + } + } + } + }, + "replication": { + "type": "object", + "form": true, + "title": "Replication Details", + "properties": { + "enabled": { + "type": "boolean", + "title": "Enable Replication", + "form": true + }, + "readReplicas": { + "type": "integer", + "title": "read Replicas", + "form": true, + "hidden": { + "value": false, + "path": "replication/enabled" + } + } + } + }, + "volumePermissions": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "form": true, + "title": "Enable Init Containers", + "description": "Change the owner of the persist volume mountpoint to RunAsUser:fsGroup" + } + } + }, + "metrics": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "title": "Configure metrics exporter", + "form": true + } + } + } + } +} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/values.yaml new file mode 100644 index 000000000..82ce09234 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/charts/postgresql/values.yaml @@ -0,0 +1,824 @@ +## Global Docker image parameters +## Please, note that this will override the image parameters, including dependencies, configured to use the global value +## Current available global Docker image parameters: imageRegistry and imagePullSecrets +## +global: + postgresql: {} +# imageRegistry: myRegistryName +# imagePullSecrets: +# - myRegistryKeySecretName +# storageClass: myStorageClass + +## Bitnami PostgreSQL image version +## ref: https://hub.docker.com/r/bitnami/postgresql/tags/ +## +image: + registry: docker.io + repository: bitnami/postgresql + tag: 11.11.0-debian-10-r71 + ## Specify a imagePullPolicy + ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. 
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + + ## Set to true if you would like to see extra information on logs + ## It turns BASH and/or NAMI debugging in the image + ## + debug: false + +## String to partially override common.names.fullname template (will maintain the release name) +## +# nameOverride: + +## String to fully override common.names.fullname template +## +# fullnameOverride: + +## +## Init containers parameters: +## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup +## +volumePermissions: + enabled: false + image: + registry: docker.io + repository: bitnami/bitnami-shell + tag: "10" + ## Specify a imagePullPolicy + ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: Always + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Init container Security Context + ## Note: the chown of the data folder is done to securityContext.runAsUser + ## and not the below volumePermissions.securityContext.runAsUser + ## When runAsUser is set to special value "auto", init container will try to chwon the + ## data folder to autodetermined user&group, using commands: `id -u`:`id -G | cut -d" " -f2` + ## "auto" is especially useful for OpenShift which has scc with dynamic userids (and 0 is not allowed). + ## You may want to use this volumePermissions.securityContext.runAsUser="auto" in combination with + ## pod securityContext.enabled=false and shmVolume.chmod.enabled=false + ## + securityContext: + runAsUser: 0 + +## Use an alternate scheduler, e.g. "stork". +## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ +## +# schedulerName: + +## Pod Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +securityContext: + enabled: true + fsGroup: 1001 + +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +containerSecurityContext: + enabled: true + runAsUser: 1001 + +## Pod Service Account +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ +## +serviceAccount: + enabled: false + ## Name of an already existing service account. Setting this value disables the automatic service account creation. + # name: + +## Pod Security Policy +## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +## +psp: + create: false + +## Creates role for ServiceAccount +## Required for PSP +## +rbac: + create: false + +replication: + enabled: false + user: repl_user + password: repl_password + readReplicas: 1 + ## Set synchronous commit mode: on, off, remote_apply, remote_write and local + ## ref: https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-WAL-LEVEL + synchronousCommit: 'off' + ## From the number of `readReplicas` defined above, set the number of those that will have synchronous replication + ## NOTE: It cannot be > readReplicas + numSynchronousReplicas: 0 + ## Replication Cluster application name. 
Useful for defining multiple replication policies + ## + applicationName: my_application + +## PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-user-on-first-run (see note!) +# postgresqlPostgresPassword: + +## PostgreSQL user (has superuser privileges if username is `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +postgresqlUsername: postgres + +## PostgreSQL password +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +# postgresqlPassword: + +## PostgreSQL password using existing secret +## existingSecret: secret +## + +## Mount PostgreSQL secret as a file instead of passing environment variable +# usePasswordFile: false + +## Create a database +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run +## +# postgresqlDatabase: + +## PostgreSQL data dir +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +postgresqlDataDir: /bitnami/postgresql/data + +## An array to add extra environment variables +## For example: +## extraEnv: +## - name: FOO +## value: "bar" +## +# extraEnv: +extraEnv: [] + +## Name of a ConfigMap containing extra env vars +## +# extraEnvVarsCM: + +## Specify extra initdb args +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbArgs: + +## Specify a custom location for the PostgreSQL transaction log +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbWalDir: + +## PostgreSQL configuration +## Specify runtime configuration parameters as a dict, using camelCase, e.g. +## {"sharedBuffers": "500MB"} +## Alternatively, you can put your postgresql.conf under the files/ directory +## ref: https://www.postgresql.org/docs/current/static/runtime-config.html +## +# postgresqlConfiguration: + +## PostgreSQL extended configuration +## As above, but _appended_ to the main configuration +## Alternatively, you can put your *.conf under the files/conf.d/ directory +## https://github.com/bitnami/bitnami-docker-postgresql#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf +## +# postgresqlExtendedConf: + +## Configure current cluster's primary server to be the standby server in other cluster. +## This will allow cross cluster replication and provide cross cluster high availability. +## You will need to configure pgHbaConfiguration if you want to enable this feature with local cluster replication enabled. 
+## +primaryAsStandBy: + enabled: false + # primaryHost: + # primaryPort: + +## PostgreSQL client authentication configuration +## Specify content for pg_hba.conf +## Default: do not create pg_hba.conf +## Alternatively, you can put your pg_hba.conf under the files/ directory +# pgHbaConfiguration: |- +# local all all trust +# host all all localhost trust +# host mydatabase mysuser 192.168.0.0/24 md5 + +## ConfigMap with PostgreSQL configuration +## NOTE: This will override postgresqlConfiguration and pgHbaConfiguration +# configurationConfigMap: + +## ConfigMap with PostgreSQL extended configuration +# extendedConfConfigMap: + +## initdb scripts +## Specify dictionary of scripts to be run at first boot +## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory +## +# initdbScripts: +# my_init_script.sh: | +# #!/bin/sh +# echo "Do something." + +## ConfigMap with scripts to be run at first boot +## NOTE: This will override initdbScripts +# initdbScriptsConfigMap: + +## Secret with scripts to be run at first boot (in case it contains sensitive information) +## NOTE: This can work along initdbScripts or initdbScriptsConfigMap +# initdbScriptsSecret: + +## Specify the PostgreSQL username and password to execute the initdb scripts +# initdbUser: +# initdbPassword: + +## Audit settings +## https://github.com/bitnami/bitnami-docker-postgresql#auditing +## +audit: + ## Log client hostnames + ## + logHostname: false + ## Log connections to the server + ## + logConnections: false + ## Log disconnections + ## + logDisconnections: false + ## Operation to audit using pgAudit (default if not set) + ## + pgAuditLog: "" + ## Log catalog using pgAudit + ## + pgAuditLogCatalog: "off" + ## Log level for clients + ## + clientMinMessages: error + ## Template for log line prefix (default if not set) + ## + logLinePrefix: "" + ## Log timezone + ## + logTimezone: "" + +## Shared preload libraries +## +postgresqlSharedPreloadLibraries: "pgaudit" + +## Maximum total connections +## +postgresqlMaxConnections: + +## Maximum connections for the postgres user +## +postgresqlPostgresConnectionLimit: + +## Maximum connections for the created user +## +postgresqlDbUserConnectionLimit: + +## TCP keepalives interval +## +postgresqlTcpKeepalivesInterval: + +## TCP keepalives idle +## +postgresqlTcpKeepalivesIdle: + +## TCP keepalives count +## +postgresqlTcpKeepalivesCount: + +## Statement timeout +## +postgresqlStatementTimeout: + +## Remove pg_hba.conf lines with the following comma-separated patterns +## (cannot be used with custom pg_hba.conf) +## +postgresqlPghbaRemoveFilters: + +## Optional duration in seconds the pod needs to terminate gracefully. +## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods +## +# terminationGracePeriodSeconds: 30 + +## LDAP configuration +## +ldap: + enabled: false + url: '' + server: '' + port: '' + prefix: '' + suffix: '' + baseDN: '' + bindDN: '' + bind_password: + search_attr: '' + search_filter: '' + scheme: '' + tls: {} + +## PostgreSQL service configuration +## +service: + ## PosgresSQL service type + ## + type: ClusterIP + # clusterIP: None + port: 5432 + + ## Specify the nodePort value for the LoadBalancer and NodePort service types. + ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + ## + # nodePort: + + ## Provide any additional annotations which may be required. Evaluated as a template. + ## + annotations: {} + ## Set the LoadBalancer service type to internal only. 
+ ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer + ## + # loadBalancerIP: + ## Load Balancer sources. Evaluated as a template. + ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service + ## + # loadBalancerSourceRanges: + # - 10.10.10.0/24 + +## Start primary and read(s) pod(s) without limitations on shm memory. +## By default docker and containerd (and possibly other container runtimes) +## limit `/dev/shm` to `64M` (see e.g. the +## [docker issue](https://github.com/docker-library/postgres/issues/416) and the +## [containerd issue](https://github.com/containerd/containerd/issues/3654), +## which could be not enough if PostgreSQL uses parallel workers heavily. +## +shmVolume: + ## Set `shmVolume.enabled` to `true` to mount a new tmpfs volume to remove + ## this limitation. + ## + enabled: true + ## Set to `true` to `chmod 777 /dev/shm` on a initContainer. + ## This option is ignored if `volumePermissions.enabled` is `false` + ## + chmod: + enabled: true + +## PostgreSQL data Persistent Volume Storage Class +## If defined, storageClassName: +## If set to "-", storageClassName: "", which disables dynamic provisioning +## If undefined (the default) or set to null, no storageClassName spec is +## set, choosing the default provisioner. (gp2 on AWS, standard on +## GKE, AWS & OpenStack) +## +persistence: + enabled: true + ## A manually managed Persistent Volume and Claim + ## If defined, PVC must be created manually before volume will be bound + ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart + ## + # existingClaim: + + ## The path the volume will be mounted at, useful when using different + ## PostgreSQL images. + ## + mountPath: /bitnami/postgresql + + ## The subdirectory of the volume to mount to, useful in dev environments + ## and one PV for multiple services. + ## + subPath: '' + + # storageClass: "-" + accessModes: + - ReadWriteOnce + size: 8Gi + annotations: {} + ## selector can be used to match an existing PersistentVolume + ## selector: + ## matchLabels: + ## app: my-app + selector: {} + +## updateStrategy for PostgreSQL StatefulSet and its reads StatefulSets +## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies +## +updateStrategy: + type: RollingUpdate + +## +## PostgreSQL Primary parameters +## +primary: + ## PostgreSQL Primary pod affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL Primary pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL Primary node affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: + ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. 
+ ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: primary.podAffinityPreset, primary.podAntiAffinityPreset, and primary.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL primary Volume mounts + ## + extraVolumeMounts: [] + ## Additional PostgreSQL primary Volumes + ## + extraVolumes: [] + ## Add sidecars to the pod + ## + ## For example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for primary + ## + service: {} + # type: + # nodePort: + # clusterIP: + +## +## PostgreSQL read only replica parameters +## +readReplicas: + ## PostgreSQL read only pod affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL read only pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL read only node affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: + ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. 
+ ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: readReplicas.podAffinityPreset, readReplicas.podAntiAffinityPreset, and readReplicas.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL read replicas Volume mounts + ## + extraVolumeMounts: [] + + ## Additional PostgreSQL read replicas Volumes + ## + extraVolumes: [] + + ## Add sidecars to the pod + ## + ## For example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for read + ## + service: {} + # type: + # nodePort: + # clusterIP: + + ## Whether to enable PostgreSQL read replicas data Persistent + ## + persistence: + enabled: true + + # Override the resource configuration for read replicas + resources: {} + # requests: + # memory: 256Mi + # cpu: 250m + +## Configure resource requests and limits +## ref: http://kubernetes.io/docs/user-guide/compute-resources/ +## +resources: + requests: + memory: 256Mi + cpu: 250m + +## Add annotations to all the deployed resources +## +commonAnnotations: {} + +networkPolicy: + ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. + ## + enabled: false + + ## The Policy model to apply. When set to false, only pods with the correct + ## client label will have network access to the port PostgreSQL is listening + ## on. When true, PostgreSQL will accept connections from any source + ## (with the correct destination port). + ## + allowExternal: true + + ## if explicitNamespacesSelector is missing or set to {}, only client Pods that are in the networkPolicy's namespace + ## and that match other criteria, the ones that have the good label, can reach the DB. + ## But sometimes, we want the DB to be accessible to clients from other namespaces, in this case, we can use this + ## LabelSelector to select these namespaces, note that the networkPolicy's namespace should also be explicitly added. 
+ ## + ## Example: + ## explicitNamespacesSelector: + ## matchLabels: + ## role: frontend + ## matchExpressions: + ## - {key: role, operator: In, values: [frontend]} + ## + explicitNamespacesSelector: {} + +## Configure extra options for startup, liveness and readiness probes +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes +## +startupProbe: + enabled: false + initialDelaySeconds: 30 + periodSeconds: 15 + timeoutSeconds: 5 + failureThreshold: 10 + successThreshold: 1 + +livenessProbe: + enabled: true + initialDelaySeconds: 30 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Custom Startup probe +## +customStartupProbe: {} + +## Custom Liveness probe +## +customLivenessProbe: {} + +## Custom Rediness probe +## +customReadinessProbe: {} + +## +## TLS configuration +## +tls: + # Enable TLS traffic + enabled: false + # + # Whether to use the server's TLS cipher preferences rather than the client's. + preferServerCiphers: true + # + # Name of the Secret that contains the certificates + certificatesSecret: '' + # + # Certificate filename + certFilename: '' + # + # Certificate Key filename + certKeyFilename: '' + # + # CA Certificate filename + # If provided, PostgreSQL will authenticate TLS/SSL clients by requesting them a certificate + # ref: https://www.postgresql.org/docs/9.6/auth-methods.html + certCAFilename: + # + # File containing a Certificate Revocation List + crlFilename: + +## Configure metrics exporter +## +metrics: + enabled: false + # resources: {} + service: + type: ClusterIP + annotations: + prometheus.io/scrape: 'true' + prometheus.io/port: '9187' + loadBalancerIP: + serviceMonitor: + enabled: false + additionalLabels: {} + # namespace: monitoring + # interval: 30s + # scrapeTimeout: 10s + ## Custom PrometheusRule to be defined + ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart + ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions + ## + prometheusRule: + enabled: false + additionalLabels: {} + namespace: '' + ## These are just examples rules, please adapt them to your needs. + ## Make sure to constraint the rules to the current postgresql service. + ## rules: + ## - alert: HugeReplicationLag + ## expr: pg_replication_lag{service="{{ template "common.names.fullname" . }}-metrics"} / 3600 > 1 + ## for: 1m + ## labels: + ## severity: critical + ## annotations: + ## description: replication for {{ template "common.names.fullname" . }} PostgreSQL is lagging by {{ "{{ $value }}" }} hour(s). + ## summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s). + ## + rules: [] + + image: + registry: docker.io + repository: bitnami/postgres-exporter + tag: 0.9.0-debian-10-r43 + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. 
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Define additional custom metrics + ## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file + # customMetrics: + # pg_database: + # query: "SELECT d.datname AS name, CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') THEN pg_catalog.pg_database_size(d.datname) ELSE 0 END AS size_bytes FROM pg_catalog.pg_database d where datname not in ('template0', 'template1', 'postgres')" + # metrics: + # - name: + # usage: "LABEL" + # description: "Name of the database" + # - size_bytes: + # usage: "GAUGE" + # description: "Size of the database in bytes" + # + ## An array to add extra env vars to configure postgres-exporter + ## see: https://github.com/wrouesnel/postgres_exporter#environment-variables + ## For example: + # extraEnvVars: + # - name: PG_EXPORTER_DISABLE_DEFAULT_METRICS + # value: "true" + extraEnvVars: {} + + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## + securityContext: + enabled: false + runAsUser: 1001 + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) + ## Configure extra options for liveness and readiness probes + ## + livenessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + + readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Array with extra yaml to deploy with the chart. Evaluated as a template +## +extraDeploy: [] diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/access-tls-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/access-tls-values.yaml new file mode 100644 index 000000000..1a8c4698d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/access-tls-values.yaml @@ -0,0 +1,24 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/default-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/default-values.yaml new file mode 100644 index 000000000..fc3469399 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/default-values.yaml @@ -0,0 +1,21 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/derby-test-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/derby-test-values.yaml new file mode 100644 index 000000000..82ff48545 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/derby-test-values.yaml @@ -0,0 +1,19 @@ +databaseUpgradeReady: true + +postgresql: + enabled: false +artifactory: + podSecurityContext: + fsGroupChangePolicy: "OnRootMismatch" + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/global-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/global-values.yaml new file mode 100644 index 000000000..33bbf04a2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/global-values.yaml @@ -0,0 +1,247 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + customInitContainersBegin: | + - name: "custom-init-begin-local" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + customInitContainers: | + - name: "custom-init-local" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-local + mountPath: "/scriptslocal" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-local" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +global: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + customInitContainersBegin: | + - name: "custom-init-begin-global" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + customInitContainers: | + - name: "custom-init-global" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + # Add custom volumes + customVolumes: | + - name: custom-script-global + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-global + mountPath: "/scripts" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-global" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in global' >> /scripts/sidecarglobal.txt; cat /scripts/sidecarglobal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scripts" + name: custom-script-global + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +nginx: + customInitContainers: | + - name: "custom-init-begin-nginx" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in nginx" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: custom-script-local + customSidecarContainers: | + - name: "sidecar-list-nginx" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + + artifactoryConf: | + {{- if .Values.nginx.https.enabled }} + ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; + ssl_certificate {{ .Values.nginx.persistence.mountPath }}/ssl/tls.crt; + ssl_certificate_key {{ .Values.nginx.persistence.mountPath }}/ssl/tls.key; + ssl_session_cache shared:SSL:1m; + ssl_prefer_server_ciphers on; + {{- end }} + ## server configuration + server { + listen 8088; + {{- if .Values.nginx.internalPortHttps }} + listen {{ .Values.nginx.internalPortHttps }} ssl; + {{- else -}} + {{- if .Values.nginx.https.enabled }} + listen {{ .Values.nginx.https.internalPort }} ssl; + {{- end }} + {{- end }} + {{- if .Values.nginx.internalPortHttp }} + listen {{ .Values.nginx.internalPortHttp }}; + {{- else -}} + {{- if .Values.nginx.http.enabled }} + listen {{ .Values.nginx.http.internalPort }}; + {{- end }} + {{- end }} + server_name ~(?.+)\.{{ include "artifactory.fullname" . }} {{ include "artifactory.fullname" . }} + {{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . }} + {{- end -}} + {{- end -}}; + + if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; + } + ## Application specific logs + ## access_log /var/log/nginx/artifactory-access.log timing; + ## error_log /var/log/nginx/artifactory-error.log; + rewrite ^/artifactory/?$ / redirect; + if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; + } + chunked_transfer_encoding on; + client_max_body_size 0; + + location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + } + } + + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: + - containerPort: 8088 + name: http2 + service: + ## A list of custom ports to expose through the Ingress controller service. 
Follows the conventional Kubernetes yaml syntax for service ports. + customPorts: + - port: 8088 + targetPort: 8088 + protocol: TCP + name: http2 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/large-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/large-values.yaml new file mode 100644 index 000000000..94a485d6f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/large-values.yaml @@ -0,0 +1,82 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 300 + resources: + requests: + memory: "6Gi" + cpu: "2" + limits: + memory: "10Gi" + cpu: "8" + javaOpts: + xms: "8g" + xmx: "10g" +access: + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 100 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 150 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/loggers-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/loggers-values.yaml new file mode 100644 index 000000000..03c94be95 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/loggers-values.yaml @@ -0,0 +1,43 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + + loggers: + - access-audit.log + - access-request.log + - access-security-audit.log + - access-service.log + - artifactory-access.log + - artifactory-event.log + - artifactory-import-export.log + - artifactory-request.log + - artifactory-service.log + - frontend-request.log + - frontend-service.log + - metadata-request.log + - metadata-service.log + - router-request.log + - router-service.log + - router-traefik.log + + catalinaLoggers: + - tomcat-catalina.log + - tomcat-localhost.log diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/medium-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/medium-values.yaml new file mode 100644 index 000000000..35044dc36 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/medium-values.yaml @@ -0,0 +1,82 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 200 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "8Gi" + cpu: "6" + javaOpts: + xms: "6g" + xmx: "8g" +access: + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 100 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/migration-disabled-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/migration-disabled-values.yaml new file mode 100644 index 000000000..092756fb6 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/migration-disabled-values.yaml @@ -0,0 +1,21 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + migration: + enabled: false + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/nginx-autoreload-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/nginx-autoreload-values.yaml 
new file mode 100644 index 000000000..09616c5bf --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/nginx-autoreload-values.yaml @@ -0,0 +1,42 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + +nginx: + customVolumes: | + - name: scripts + configMap: + name: {{ template "artifactory.fullname" . }}-nginx-scripts + defaultMode: 0550 + customVolumeMounts: | + - name: scripts + mountPath: /var/opt/jfrog/nginx/scripts/ + customCommand: + - /bin/sh + - -c + - | + # watch for configmap changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/conf.d:n & + {{ if .Values.nginx.https.enabled -}} + # watch for tls secret changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/ssl:n & + {{ end -}} + nginx -g 'daemon off;' diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/rtsplit-values-access-tls-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/rtsplit-values-access-tls-values.yaml new file mode 100644 index 000000000..a38969a8f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/rtsplit-values-access-tls-values.yaml @@ -0,0 +1,96 @@ +databaseUpgradeReady: true +artifactory: + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 102 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/rtsplit-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/rtsplit-values.yaml new file mode 100644 index 000000000..057ae9bf3 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/rtsplit-values.yaml @@ -0,0 +1,151 @@ +databaseUpgradeReady: true +artifactory: + replicaCount: 1 + joinKey: 
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + + # Add lifecycle hooks for artifactory container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 100 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for jfconect container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the jfconnect postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the jfconnect postStart handler >> /tmp/message"] + + +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for router container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] + +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for frontend container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] + +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] + +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] + +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the observability postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the observability postStart handler >> /tmp/message"] diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/small-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/small-values.yaml new file mode 100644 index 000000000..70d77790a --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/small-values.yaml @@ -0,0 +1,82 @@ +# Leave 
this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 200 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "6g" +access: + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 80 + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/test-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/test-values.yaml new file mode 100644 index 000000000..d2beb0eff --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/ci/test-values.yaml @@ -0,0 +1,84 @@ +databaseUpgradeReady: true +artifactory: + replicaCount: 3 + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + unifiedSecretInstallation: false + metrics: + enabled: true + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + statefulset: + annotations: + artifactory: test + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 100 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false + +jfconnect: + enabled: false + +autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 3 + targetCPUUtilizationPercentage: 70 + +## filebeat sidecar +filebeat: + enabled: true + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output.file: + path: "/tmp/filebeat" + filename: filebeat + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/binarystore.xml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/binarystore.xml new file mode 100644 index 000000000..e396e0a41 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/binarystore.xml @@ -0,0 +1,426 @@ +{{- if eq 
.Values.artifactory.persistence.type "nfs" -}} + + {{- if (.Values.artifactory.persistence.maxCacheSize) }} + + + + + + {{- else }} + + + + {{- end }} + + {{- if .Values.artifactory.persistence.maxCacheSize }} + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + + {{ .Values.artifactory.persistence.nfs.dataDir }}/filestore + + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "file-system" -}} + + + + {{- if .Values.artifactory.persistence.fileSystem.cache.enabled }} + + {{- end }} + + {{- if .Values.artifactory.persistence.fileSystem.cache.enabled }} + + {{- end }} + + + {{- if .Values.artifactory.persistence.fileSystem.cache.enabled }} + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "cluster-file-system" -}} + + + + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + 2 + + + + + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + + + shard-fs-1 + local + + + + + 30 + tester-remote1 + 10000 + remote + + + +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") (eq .Values.artifactory.persistence.type "cluster-google-storage-v2") (eq .Values.artifactory.persistence.type "google-storage-v2-direct") }} + + + {{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") }} + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "cluster-google-storage-v2" }} + + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + 2 + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "google-storage-v2-direct" }} + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if eq .Values.artifactory.persistence.type "cluster-google-storage-v2" }} + + local + + + + 30 + 10000 + remote + + {{- end }} + + + {{- if 
.Values.artifactory.persistence.googleStorage.useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .Values.artifactory.persistence.googleStorage.enableSignedUrlRedirect }} + google-cloud-storage + {{ .Values.artifactory.persistence.googleStorage.endpoint }} + {{ .Values.artifactory.persistence.googleStorage.httpsOnly }} + {{ .Values.artifactory.persistence.googleStorage.bucketName }} + {{ .Values.artifactory.persistence.googleStorage.path }} + {{ .Values.artifactory.persistence.googleStorage.bucketExists }} + + +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "aws-s3-v3") (eq .Values.artifactory.persistence.type "s3-storage-v3-direct") (eq .Values.artifactory.persistence.type "cluster-s3-storage-v3") (eq .Values.artifactory.persistence.type "s3-storage-v3-archive") }} + + + {{- if eq .Values.artifactory.persistence.type "aws-s3-v3" }} + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-direct" }} + + + + + + {{- else if eq .Values.artifactory.persistence.type "cluster-s3-storage-v3" }} + + + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-archive" }} + + + + + + + {{- end }} + + {{- if or (eq .Values.artifactory.persistence.type "aws-s3-v3") (eq .Values.artifactory.persistence.type "s3-storage-v3-direct") (eq .Values.artifactory.persistence.type "cluster-s3-storage-v3") }} + + + {{ .Values.artifactory.persistence.maxCacheSize | int64}} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + {{- if eq .Values.artifactory.persistence.type "cluster-s3-storage-v3" }} + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + + + + + remote + + + + local + + {{- end }} + + {{- with .Values.artifactory.persistence.awsS3V3 }} + + {{ .testConnection }} + {{- if .identity }} + {{ .identity }} + {{- end }} + {{- if .credential }} + {{ .credential }} + {{- end }} + {{ .region }} + {{ .bucketName }} + {{ .path }} + {{ .endpoint }} + {{- with .port }} + {{ . }} + {{- end }} + {{- with .useHttp }} + {{ . }} + {{- end }} + {{- with .maxConnections }} + {{ . }} + {{- end }} + {{- with .connectionTimeout }} + {{ . }} + {{- end }} + {{- with .socketTimeout }} + {{ . }} + {{- end }} + {{- with .kmsServerSideEncryptionKeyId }} + {{ . }} + {{- end }} + {{- with .kmsKeyRegion }} + {{ . }} + {{- end }} + {{- with .kmsCryptoMode }} + {{ . }} + {{- end }} + {{- if .useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .usePresigning }} + {{ .signatureExpirySeconds }} + {{ .signedUrlExpirySeconds }} + {{- with .cloudFrontDomainName }} + {{ . }} + {{- end }} + {{- with .cloudFrontKeyPairId }} + {{ . }} + {{- end }} + {{- with .cloudFrontPrivateKey }} + {{ . }} + {{- end }} + {{- with .enableSignedUrlRedirect }} + {{ . }} + {{- end }} + {{- with .enablePathStyleAccess }} + {{ . }} + {{- end }} + {{- with .multiPartLimit }} + {{ . | int64 }} + {{- end }} + {{- with .multipartElementSize }} + {{ . 
| int64 }} + {{- end }} + + {{- end }} + +{{- end }} + +{{- if or (eq .Values.artifactory.persistence.type "azure-blob") (eq .Values.artifactory.persistence.type "azure-blob-storage-direct") (eq .Values.artifactory.persistence.type "cluster-azure-blob-storage") }} + + + {{- if or (eq .Values.artifactory.persistence.type "azure-blob") }} + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "azure-blob-storage-direct" }} + + + + + + {{- else if eq .Values.artifactory.persistence.type "cluster-azure-blob-storage" }} + + + + + + + + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if eq .Values.artifactory.persistence.type "cluster-azure-blob-storage" }} + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + + + + remote + + + local + + {{- end }} + + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ .Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "azure-blob-storage-v2-direct" -}} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ .Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/installer-info.json b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/installer-info.json new file mode 100644 index 000000000..79f42ed16 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/installer-info.json @@ -0,0 +1,32 @@ +{ + "productId": "Helm_artifactory/{{ .Chart.Version }}", + "features": [ + { + "featureId": "Platform/{{ printf "%s-%s" "kubernetes" .Capabilities.KubeVersion.Version }}" + }, + { + "featureId": "Database/{{ .Values.database.type }}" + }, + { + "featureId": "PostgreSQL_Enabled/{{ .Values.postgresql.enabled }}" + }, + { + "featureId": "Nginx_Enabled/{{ .Values.nginx.enabled }}" + }, + { + "featureId": "ArtifactoryPersistence_Type/{{ .Values.artifactory.persistence.type }}" + }, + { + 
"featureId": "SplitServicesToContainers_Enabled/{{ .Values.splitServicesToContainers }}" + }, + { + "featureId": "UnifiedSecretInstallation_Enabled/{{ .Values.artifactory.unifiedSecretInstallation }}" + }, + { + "featureId": "Filebeat_Enabled/{{ .Values.filebeat.enabled }}" + }, + { + "featureId": "ReplicaCount/{{ .Values.artifactory.replicaCount }}" + } + ] +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/migrate.sh b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/migrate.sh new file mode 100644 index 000000000..ba44160f4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/migrate.sh @@ -0,0 +1,4311 @@ +#!/bin/bash + +# Flags +FLAG_Y="y" +FLAG_N="n" +FLAGS_Y_N="$FLAG_Y $FLAG_N" +FLAG_NOT_APPLICABLE="_NA_" + +CURRENT_VERSION=$1 + +WRAPPER_SCRIPT_TYPE_RPMDEB="RPMDEB" +WRAPPER_SCRIPT_TYPE_DOCKER_COMPOSE="DOCKERCOMPOSE" + +SENSITIVE_KEY_VALUE="__sensitive_key_hidden___" + +# Shared system keys +SYS_KEY_SHARED_JFROGURL="shared.jfrogUrl" +SYS_KEY_SHARED_SECURITY_JOINKEY="shared.security.joinKey" +SYS_KEY_SHARED_SECURITY_MASTERKEY="shared.security.masterKey" + +SYS_KEY_SHARED_NODE_ID="shared.node.id" +SYS_KEY_SHARED_JAVAHOME="shared.javaHome" + +SYS_KEY_SHARED_DATABASE_TYPE="shared.database.type" +SYS_KEY_SHARED_DATABASE_TYPE_VALUE_POSTGRES="postgresql" +SYS_KEY_SHARED_DATABASE_DRIVER="shared.database.driver" +SYS_KEY_SHARED_DATABASE_URL="shared.database.url" +SYS_KEY_SHARED_DATABASE_USERNAME="shared.database.username" +SYS_KEY_SHARED_DATABASE_PASSWORD="shared.database.password" + +SYS_KEY_SHARED_ELASTICSEARCH_URL="shared.elasticsearch.url" +SYS_KEY_SHARED_ELASTICSEARCH_USERNAME="shared.elasticsearch.username" +SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD="shared.elasticsearch.password" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP="shared.elasticsearch.clusterSetup" +SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE="shared.elasticsearch.unicastFile" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP_VALUE="YES" + +# Define this in product specific script. Should contain the path to unitcast file +# File used by insight server to write cluster active nodes info. 
This will be read by elasticsearch +#SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE_VALUE="" + +SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME="shared.rabbitMq.active.node.name" +SYS_KEY_RABBITMQ_ACTIVE_NODE_IP="shared.rabbitMq.active.node.ip" + +# Filenames +FILE_NAME_SYSTEM_YAML="system.yaml" +FILE_NAME_JOIN_KEY="join.key" +FILE_NAME_MASTER_KEY="master.key" +FILE_NAME_INSTALLER_YAML="installer.yaml" + +# Global constants used in business logic +NODE_TYPE_STANDALONE="standalone" +NODE_TYPE_CLUSTER_NODE="node" +NODE_TYPE_DATABASE="database" + +# External(isable) databases +DATABASE_POSTGRES="POSTGRES" +DATABASE_ELASTICSEARCH="ELASTICSEARCH" +DATABASE_RABBITMQ="RABBITMQ" + +POSTGRES_LABEL="PostgreSQL" +ELASTICSEARCH_LABEL="Elasticsearch" +RABBITMQ_LABEL="Rabbitmq" + +ARTIFACTORY_LABEL="Artifactory" +JFMC_LABEL="Mission Control" +DISTRIBUTION_LABEL="Distribution" +XRAY_LABEL="Xray" + +POSTGRES_CONTAINER="postgres" +ELASTICSEARCH_CONTAINER="elasticsearch" +RABBITMQ_CONTAINER="rabbitmq" +REDIS_CONTAINER="redis" + +#Adding a small timeout before a read ensures it is positioned correctly in the screen +read_timeout=0.5 + +# Options related to data directory location +PROMPT_DATA_DIR_LOCATION="Installation Directory" +KEY_DATA_DIR_LOCATION="installer.data_dir" + +SYS_KEY_SHARED_NODE_HAENABLED="shared.node.haEnabled" +PROMPT_ADD_TO_CLUSTER="Are you adding an additional node to an existing product cluster?" +KEY_ADD_TO_CLUSTER="installer.ha" +VALID_VALUES_ADD_TO_CLUSTER="$FLAGS_Y_N" + +MESSAGE_POSTGRES_INSTALL="The installer can install a $POSTGRES_LABEL database, or you can connect to an existing compatible $POSTGRES_LABEL database\n(compatible databases: https://www.jfrog.com/confluence/display/JFROG/System+Requirements#SystemRequirements-RequirementsMatrix)" +PROMPT_POSTGRES_INSTALL="Do you want to install $POSTGRES_LABEL?" +KEY_POSTGRES_INSTALL="installer.install_postgresql" +VALID_VALUES_POSTGRES_INSTALL="$FLAGS_Y_N" + +# Postgres connection details +RPM_DEB_POSTGRES_HOME_DEFAULT="/var/opt/jfrog/postgres" +RPM_DEB_MESSAGE_STANDALONE_POSTGRES_DATA="$POSTGRES_LABEL home will have data and its configuration" +RPM_DEB_PROMPT_STANDALONE_POSTGRES_DATA="Type desired $POSTGRES_LABEL home location" +RPM_DEB_KEY_STANDALONE_POSTGRES_DATA="installer.postgresql.home" + +MESSAGE_DATABASE_URL="Provide the database connection details" +PROMPT_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://:/artifactory" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://:/mission_control?sslmode=disable" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://:/distribution?sslmode=disable" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://:/xraydb?sslmode=disable" + ;; + esac + if [ -z "$databaseURlExample" ]; then + echo -n "$POSTGRES_LABEL URL" # For consistency with username and password + return + fi + echo -n "$POSTGRES_LABEL url. 
Example: [$databaseURlExample]" +} +REGEX_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://.*/artifactory.*" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://.*/mission_control.*" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://.*/distribution.*" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://.*/xraydb.*" + ;; + esac + echo -n "^$databaseURlExample\$" +} +ERROR_MESSAGE_DATABASE_URL="Invalid $POSTGRES_LABEL URL" +KEY_DATABASE_URL="$SYS_KEY_SHARED_DATABASE_URL" +#NOTE: It is important to display the label. Since the message may be hidden if URL is known +PROMPT_DATABASE_USERNAME="$POSTGRES_LABEL username" +KEY_DATABASE_USERNAME="$SYS_KEY_SHARED_DATABASE_USERNAME" +#NOTE: It is important to display the label. Since the message may be hidden if URL is known +PROMPT_DATABASE_PASSWORD="$POSTGRES_LABEL password" +KEY_DATABASE_PASSWORD="$SYS_KEY_SHARED_DATABASE_PASSWORD" +IS_SENSITIVE_DATABASE_PASSWORD="$FLAG_Y" + +MESSAGE_STANDALONE_ELASTICSEARCH_INSTALL="The installer can install a $ELASTICSEARCH_LABEL database or you can connect to an existing compatible $ELASTICSEARCH_LABEL database" +PROMPT_STANDALONE_ELASTICSEARCH_INSTALL="Do you want to install $ELASTICSEARCH_LABEL?" +KEY_STANDALONE_ELASTICSEARCH_INSTALL="installer.install_elasticsearch" +VALID_VALUES_STANDALONE_ELASTICSEARCH_INSTALL="$FLAGS_Y_N" + +# Elasticsearch connection details +MESSAGE_ELASTICSEARCH_DETAILS="Provide the $ELASTICSEARCH_LABEL connection details" +PROMPT_ELASTICSEARCH_URL="$ELASTICSEARCH_LABEL URL" +KEY_ELASTICSEARCH_URL="$SYS_KEY_SHARED_ELASTICSEARCH_URL" + +PROMPT_ELASTICSEARCH_USERNAME="$ELASTICSEARCH_LABEL username" +KEY_ELASTICSEARCH_USERNAME="$SYS_KEY_SHARED_ELASTICSEARCH_USERNAME" + +PROMPT_ELASTICSEARCH_PASSWORD="$ELASTICSEARCH_LABEL password" +KEY_ELASTICSEARCH_PASSWORD="$SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD" +IS_SENSITIVE_ELASTICSEARCH_PASSWORD="$FLAG_Y" + +# Cluster related questions +MESSAGE_CLUSTER_MASTER_KEY="Provide the cluster's master key. It can be found in the data directory of the first node under /etc/security/master.key" +PROMPT_CLUSTER_MASTER_KEY="Master Key" +KEY_CLUSTER_MASTER_KEY="$SYS_KEY_SHARED_SECURITY_MASTERKEY" +IS_SENSITIVE_CLUSTER_MASTER_KEY="$FLAG_Y" + +MESSAGE_JOIN_KEY="The Join key is the secret key used to establish trust between services in the JFrog Platform.\n(You can copy the Join Key from Admin > User Management > Settings)" +PROMPT_JOIN_KEY="Join Key" +KEY_JOIN_KEY="$SYS_KEY_SHARED_SECURITY_JOINKEY" +IS_SENSITIVE_JOIN_KEY="$FLAG_Y" +REGEX_JOIN_KEY="^[a-zA-Z0-9]{16,}\$" +ERROR_MESSAGE_JOIN_KEY="Invalid Join Key" + +# Rabbitmq related cluster information +MESSAGE_RABBITMQ_ACTIVE_NODE_NAME="Provide an active ${RABBITMQ_LABEL} node name. 
Run the command [ hostname -s ] on any of the existing nodes in the product cluster to get this" +PROMPT_RABBITMQ_ACTIVE_NODE_NAME="${RABBITMQ_LABEL} active node name" +KEY_RABBITMQ_ACTIVE_NODE_NAME="$SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME" + +# Rabbitmq related cluster information (necessary only for docker-compose) +PROMPT_RABBITMQ_ACTIVE_NODE_IP="${RABBITMQ_LABEL} active node ip" +KEY_RABBITMQ_ACTIVE_NODE_IP="$SYS_KEY_RABBITMQ_ACTIVE_NODE_IP" + +MESSAGE_JFROGURL(){ + echo -e "The JFrog URL allows ${PRODUCT_NAME} to connect to a JFrog Platform Instance.\n(You can copy the JFrog URL from Administration > User Management > Settings > Connection details)" +} +PROMPT_JFROGURL="JFrog URL" +KEY_JFROGURL="$SYS_KEY_SHARED_JFROGURL" +REGEX_JFROGURL="^https?://.*:{0,}[0-9]{0,4}\$" +ERROR_MESSAGE_JFROGURL="Invalid JFrog URL" + + +# Set this to FLAG_Y on upgrade +IS_UPGRADE="${FLAG_N}" + +# This belongs in JFMC but is the ONLY one that needs it so keeping it here for now. Can be made into a method and overridden if necessary +MESSAGE_MULTIPLE_PG_SCHEME="Please setup $POSTGRES_LABEL with schema as described in https://www.jfrog.com/confluence/display/JFROG/Installing+Mission+Control" + +_getMethodOutputOrVariableValue() { + unset EFFECTIVE_MESSAGE + local keyToSearch=$1 + local effectiveMessage= + local result="0" + # logSilly "Searching for method: [$keyToSearch]" + LC_ALL=C type "$keyToSearch" > /dev/null 2>&1 || result="$?" + if [[ "$result" == "0" ]]; then + # logSilly "Found method for [$keyToSearch]" + EFFECTIVE_MESSAGE="$($keyToSearch)" + return + fi + eval EFFECTIVE_MESSAGE=\${$keyToSearch} + if [ ! -z "$EFFECTIVE_MESSAGE" ]; then + return + fi + # logSilly "Didn't find method or variable for [$keyToSearch]" +} + + +# REF https://misc.flogisoft.com/bash/tip_colors_and_formatting +cClear="\e[0m" +cBlue="\e[38;5;69m" +cRedDull="\e[1;31m" +cYellow="\e[1;33m" +cRedBright="\e[38;5;197m" +cBold="\e[1m" + + +_loggerGetModeRaw() { + local MODE="$1" + case $MODE in + INFO) + printf "" + ;; + DEBUG) + printf "%s" "[${MODE}] " + ;; + WARN) + printf "${cRedDull}%s%s${cClear}" "[" "${MODE}" "] " + ;; + ERROR) + printf "${cRedBright}%s%s${cClear}" "[" "${MODE}" "] " + ;; + esac +} + + +_loggerGetMode() { + local MODE="$1" + case $MODE in + INFO) + printf "${cBlue}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + DEBUG) + printf "%-7s" "[${MODE}]" + ;; + WARN) + printf "${cRedDull}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + ERROR) + printf "${cRedBright}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + esac +} + +# Capitalises the first letter of the message +_loggerGetMessage() { + local originalMessage="$*" + local firstChar=$(echo "${originalMessage:0:1}" | awk '{ print toupper($0) }') + local resetOfMessage="${originalMessage:1}" + echo "$firstChar$resetOfMessage" +} + +# The spec also says content should be left-trimmed but this is not necessary in our case. We don't reach the limit. 
+_loggerGetStackTrace() { + printf "%s%-30s%s" "[" "$1:$2" "]" +} + +_loggerGetThread() { + printf "%s" "[main]" +} + +_loggerGetServiceType() { + printf "%s%-5s%s" "[" "shell" "]" +} + +#Trace ID is not applicable to scripts +_loggerGetTraceID() { + printf "%s" "[]" +} + +logRaw() { + echo "" + printf "$1" + echo "" +} + +logBold(){ + echo "" + printf "${cBold}$1${cClear}" + echo "" +} + +# The date binary works differently based on whether it is GNU/BSD +is_date_supported=0 +date --version > /dev/null 2>&1 || is_date_supported=1 +IS_GNU=$(echo $is_date_supported) + +_loggerGetTimestamp() { + if [ "${IS_GNU}" == "0" ]; then + echo -n $(date -u +%FT%T.%3NZ) + else + echo -n $(date -u +%FT%T.000Z) + fi +} + +# https://www.shellscript.sh/tips/spinner/ +_spin() +{ + spinner="/|\\-/|\\-" + while : + do + for i in `seq 0 7` + do + echo -n "${spinner:$i:1}" + echo -en "\010" + sleep 1 + done + done +} + +showSpinner() { + # Start the Spinner: + _spin & + # Make a note of its Process ID (PID): + SPIN_PID=$! + # Kill the spinner on any signal, including our own exit. + trap "kill -9 $SPIN_PID" `seq 0 15` &> /dev/null || return 0 +} + +stopSpinner() { + local occurrences=$(ps -ef | grep -wc "${SPIN_PID}") + let "occurrences+=0" + # validate that it is present (2 since this search itself will show up in the results) + if [ $occurrences -gt 1 ]; then + kill -9 $SPIN_PID &>/dev/null || return 0 + wait $SPIN_ID &>/dev/null + fi +} + +_getEffectiveMessage(){ + local MESSAGE="$1" + local MODE=${2-"INFO"} + + if [ -z "$CONTEXT" ]; then + CONTEXT=$(caller) + fi + + _EFFECTIVE_MESSAGE= + if [ -z "$LOG_BEHAVIOR_ADD_META" ]; then + _EFFECTIVE_MESSAGE="$(_loggerGetModeRaw $MODE)$(_loggerGetMessage $MESSAGE)" + else + local SERVICE_TYPE="script" + local TRACE_ID="" + local THREAD="main" + + local CONTEXT_LINE=$(echo "$CONTEXT" | awk '{print $1}') + local CONTEXT_FILE=$(echo "$CONTEXT" | awk -F"/" '{print $NF}') + + _EFFECTIVE_MESSAGE="$(_loggerGetTimestamp) $(_loggerGetServiceType) $(_loggerGetMode $MODE) $(_loggerGetTraceID) $(_loggerGetStackTrace $CONTEXT_FILE $CONTEXT_LINE) $(_loggerGetThread) - $(_loggerGetMessage $MESSAGE)" + fi + CONTEXT= +} + +# Important - don't call any log method from this method. Will become an infinite loop. Use echo to debug +_logToFile() { + local MODE=${1-"INFO"} + local targetFile="$LOG_BEHAVIOR_ADD_REDIRECTION" + # IF the file isn't passed, abort + if [ -z "$targetFile" ]; then + return + fi + # IF this is not being run in verbose mode and mode is debug or lower, abort + if [ "${VERBOSE_MODE}" != "$FLAG_Y" ] && [ "${VERBOSE_MODE}" != "true" ] && [ "${VERBOSE_MODE}" != "debug" ]; then + if [ "$MODE" == "DEBUG" ] || [ "$MODE" == "SILLY" ]; then + return + fi + fi + + # Create the file if it doesn't exist + if [ ! 
-f "${targetFile}" ]; then + return + # touch $targetFile > /dev/null 2>&1 || true + fi + # # Make it readable + # chmod 640 $targetFile > /dev/null 2>&1 || true + + # Log contents + printf "%s\n" "$_EFFECTIVE_MESSAGE" >> "$targetFile" || true +} + +logger() { + if [ "$LOG_BEHAVIOR_ADD_NEW_LINE" == "$FLAG_Y" ]; then + echo "" + fi + _getEffectiveMessage "$@" + local MODE=${2-"INFO"} + printf "%s\n" "$_EFFECTIVE_MESSAGE" + _logToFile "$MODE" +} + +logDebug(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "$FLAG_Y" ] || [ "${VERBOSE_MODE}" == "true" ] || [ "${VERBOSE_MODE}" == "debug" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logSilly(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "silly" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logError() { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= +} + +errorExit () { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= + exit 1 +} + +warn () { + CONTEXT=$(caller) + logger "$1" "WARN" + CONTEXT= +} + +note () { + CONTEXT=$(caller) + logger "$1" "NOTE" + CONTEXT= +} + +bannerStart() { + title=$1 + echo + echo -e "\033[1m${title}\033[0m" + echo +} + +bannerSection() { + title=$1 + echo + echo -e "******************************** ${title} ********************************" + echo +} + +bannerSubSection() { + title=$1 + echo + echo -e "************** ${title} *******************" + echo +} + +bannerMessge() { + title=$1 + echo + echo -e "********************************" + echo -e "${title}" + echo -e "********************************" + echo +} + +setRed () { + local input="$1" + echo -e \\033[31m${input}\\033[0m +} +setGreen () { + local input="$1" + echo -e \\033[32m${input}\\033[0m +} +setYellow () { + local input="$1" + echo -e \\033[33m${input}\\033[0m +} + +logger_addLinebreak () { + echo -e "---\n" +} + +bannerImportant() { + title=$1 + local bold="\033[1m" + local noColour="\033[0m" + echo + echo -e "${bold}######################################## IMPORTANT ########################################${noColour}" + echo -e "${bold}${title}${noColour}" + echo -e "${bold}###########################################################################################${noColour}" + echo +} + +bannerEnd() { + #TODO pass a title and calculate length dynamically so that start and end look alike + echo + echo "*****************************************************************************" + echo +} + +banner() { + title=$1 + content=$2 + bannerStart "${title}" + echo -e "$content" +} + +# The logic below helps us redirect content we'd normally hide to the log file. + # + # We have several commands which clutter the console with output and so use + # `cmd > /dev/null` - this redirects the command's output to null. + # + # However, the information we just hid maybe useful for support. Using the code pattern + # `cmd >&6` (instead of `cmd> >/dev/null` ), the command's output is hidden from the console + # but redirected to the installation log file + # + +#Default value of 6 is just null +exec 6>>/dev/null +redirectLogsToFile() { + echo "" + # local file=$1 + + # [ ! -z "${file}" ] || return 0 + + # local logDir=$(dirname "$file") + + # if [ ! 
-f "${file}" ]; then + # [ -d "${logDir}" ] || mkdir -p ${logDir} || \ + # ( echo "WARNING : Could not create parent directory (${logDir}) to redirect console log : ${file}" ; return 0 ) + # fi + + # #6 now points to the log file + # exec 6>>${file} + # #reference https://unix.stackexchange.com/questions/145651/using-exec-and-tee-to-redirect-logs-to-stdout-and-a-log-file-in-the-same-time + # exec 2>&1 > >(tee -a "${file}") +} + +# Check if a give key contains any sensitive string as part of it +# Based on the result, the caller can decide its value can be displayed or not +# Sample usage : isKeySensitive "${key}" && displayValue="******" || displayValue=${value} +isKeySensitive(){ + local key=$1 + local sensitiveKeys="password|secret|key|token" + + if [ -z "${key}" ]; then + return 1 + else + local lowercaseKey=$(echo "${key}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + [[ "${lowercaseKey}" =~ ${sensitiveKeys} ]] && return 0 || return 1 + fi +} + +getPrintableValueOfKey(){ + local displayValue= + local key="$1" + if [ -z "$key" ]; then + # This is actually an incorrect usage of this method but any logging will cause unexpected content in the caller + echo -n "" + return + fi + + local value="$2" + isKeySensitive "${key}" && displayValue="$SENSITIVE_KEY_VALUE" || displayValue="${value}" + echo -n $displayValue +} + +_createConsoleLog(){ + if [ -z "${JF_PRODUCT_HOME}" ]; then + return + fi + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + mkdir -p "${JF_PRODUCT_HOME}/var/log" || true + if [ ! -f ${targetFile} ]; then + touch $targetFile > /dev/null 2>&1 || true + fi + chmod 640 $targetFile > /dev/null 2>&1 || true +} + +# Output from application's logs are piped to this method. It checks a configuration variable to determine if content should be logged to +# the common console.log file +redirectServiceLogsToFile() { + + local result="0" + # check if the function getSystemValue exists + LC_ALL=C type getSystemValue > /dev/null 2>&1 || result="$?" + if [[ "$result" != "0" ]]; then + warn "Couldn't find the systemYamlHelper. Skipping log redirection" + return 0 + fi + + getSystemValue "shared.consoleLog" "NOT_SET" + if [[ "${YAML_VALUE}" == "false" ]]; then + logger "Redirection is set to false. Skipping log redirection" + return 0; + fi + + if [ -z "${JF_PRODUCT_HOME}" ] || [ "${JF_PRODUCT_HOME}" == "" ]; then + warn "JF_PRODUCT_HOME is unavailable. 
Skipping log redirection" + return 0 + fi + + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + + _createConsoleLog + + while read -r line; do + printf '%s\n' "${line}" >> $targetFile || return 0 # Don't want to log anything - might clutter the screen + done +} + +## Display environment variables starting with JF_ along with its value +## Value of sensitive keys will be displayed as "******" +## +## Sample Display : +## +## ======================== +## JF Environment variables +## ======================== +## +## JF_SHARED_NODE_ID : locahost +## JF_SHARED_JOINKEY : ****** +## +## +displayEnv() { + local JFEnv=$(printenv | grep ^JF_ 2>/dev/null) + local key= + local value= + + if [ -z "${JFEnv}" ]; then + return + fi + + cat << ENV_START_MESSAGE + +======================== +JF Environment variables +======================== +ENV_START_MESSAGE + + for entry in ${JFEnv}; do + key=$(echo "${entry}" | awk -F'=' '{print $1}') + value=$(echo "${entry}" | awk -F'=' '{print $2}') + + isKeySensitive "${key}" && value="******" || value=${value} + + printf "\n%-35s%s" "${key}" " : ${value}" + done + echo; +} + +_addLogRotateConfiguration() { + logDebug "Method ${FUNCNAME[0]}" + # mandatory inputs + local confFile="$1" + local logFile="$2" + + # Method available in _ioOperations.sh + LC_ALL=C type io_setYQPath > /dev/null 2>&1 || return 1 + + io_setYQPath + + # Method available in _systemYamlHelper.sh + LC_ALL=C type getSystemValue > /dev/null 2>&1 || return 1 + + local frequency="daily" + local archiveFolder="archived" + + local compressLogFiles= + getSystemValue "shared.logging.rotation.compress" "true" + if [[ "${YAML_VALUE}" == "true" ]]; then + compressLogFiles="compress" + fi + + getSystemValue "shared.logging.rotation.maxFiles" "10" + local noOfBackupFiles="${YAML_VALUE}" + + getSystemValue "shared.logging.rotation.maxSizeMb" "25" + local sizeOfFile="${YAML_VALUE}M" + + logDebug "Adding logrotate configuration for [$logFile] to [$confFile]" + + # Add configuration to file + local confContent=$(cat << LOGROTATECONF +$logFile { + $frequency + missingok + rotate $noOfBackupFiles + $compressLogFiles + notifempty + olddir $archiveFolder + dateext + extension .log + dateformat -%Y-%m-%d + size ${sizeOfFile} +} +LOGROTATECONF +) + echo "${confContent}" > ${confFile} || return 1 +} + +_operationIsBySameUser() { + local targetUser="$1" + local currentUserID=$(id -u) + local currentUserName=$(id -un) + + if [ $currentUserID == $targetUser ] || [ $currentUserName == $targetUser ]; then + echo -n "yes" + else + echo -n "no" + fi +} + +_addCronJobForLogrotate() { + logDebug "Method ${FUNCNAME[0]}" + + # Abort if logrotate is not available + [ "$(io_commandExists 'crontab')" != "yes" ] && warn "cron is not available" && return 1 + + # mandatory inputs + local productHome="$1" + local confFile="$2" + local cronJobOwner="$3" + + # We want to use our binary if possible. It may be more recent than the one in the OS + local logrotateBinary="$productHome/app/third-party/logrotate/logrotate" + + if [ ! -f "$logrotateBinary" ]; then + logrotateBinary="logrotate" + [ "$(io_commandExists 'logrotate')" != "yes" ] && warn "logrotate is not available" && return 1 + fi + local cmd="$logrotateBinary ${confFile} --state $productHome/var/etc/logrotate/logrotate-state" #--verbose + + id -u $cronJobOwner > /dev/null 2>&1 || { warn "User $cronJobOwner does not exist. 
Aborting logrotate configuration" && return 1; } + + # Remove the existing line + removeLogRotation "$productHome" "$cronJobOwner" || true + + # Run logrotate daily at 23:55 hours + local cronInterval="55 23 * * * $cmd" + + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + # If this is standalone mode, we cannot use -u - the user running this process may not have the necessary privileges + if [ "$standaloneMode" == "no" ]; then + (crontab -l -u $cronJobOwner 2>/dev/null; echo "$cronInterval") | crontab -u $cronJobOwner - + else + (crontab -l 2>/dev/null; echo "$cronInterval") | crontab - + fi +} + +## Configure logrotate for a product +## Failure conditions: +## If logrotation could not be setup for some reason +## Parameters: +## $1: The product name +## $2: The product home +## Depends on global: none +## Updates global: none +## Returns: NA + +configureLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + + # mandatory inputs + local productName="$1" + if [ -z $productName ]; then + warn "Incorrect usage. A product name is necessary for configuring log rotation" && return 1 + fi + + local productHome="$2" + if [ -z $productHome ]; then + warn "Incorrect usage. A product home folder is necessary for configuring log rotation" && return 1 + fi + + local logFile="${productHome}/var/log/console.log" + if [[ $(uname) == "Darwin" ]]; then + logger "Log rotation for [$logFile] has not been configured. Please setup manually" + return 0 + fi + + local userID="$3" + if [ -z $userID ]; then + warn "Incorrect usage. A userID is necessary for configuring log rotation" && return 1 + fi + + local groupID=${4:-$userID} + local logConfigOwner=${5:-$userID} + + logDebug "Configuring log rotation as user [$userID], group [$groupID], effective cron User [$logConfigOwner]" + + local errorMessage="Could not configure logrotate. Please configure log rotation of the file: [$logFile] manually" + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + # TODO move to recursive method + createDir "${productHome}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log/archived" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + + # TODO move to recursive method + createDir "${productHome}/var/etc" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/etc/logrotate" "$logConfigOwner" || { warn "${errorMessage}" && return 1; } + + # conf file should be owned by the user running the script + createFile "${confFile}" "${logConfigOwner}" || { warn "Could not create configuration file [$confFile]" return 1; } + + _addLogRotateConfiguration "${confFile}" "${logFile}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + _addCronJobForLogrotate "${productHome}" "${confFile}" "${logConfigOwner}" || { warn "${errorMessage}" && return 1; } +} + +_pauseExecution() { + if [ "${VERBOSE_MODE}" == "debug" ]; then + + local breakPoint="$1" + if [ ! 
-z "$breakPoint" ]; then + printf "${cBlue}Breakpoint${cClear} [$breakPoint] " + echo "" + fi + printf "${cBlue}Press enter once you are ready to continue${cClear}" + read -s choice + echo "" + fi +} + +# removeLogRotation "$productHome" "$cronJobOwner" || true +removeLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + if [[ $(uname) == "Darwin" ]]; then + logDebug "Not implemented for Darwin." + return 0 + fi + local productHome="$1" + local cronJobOwner="$2" + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + if [ "$standaloneMode" == "no" ]; then + crontab -l -u $cronJobOwner 2>/dev/null | grep -v "$confFile" | crontab -u $cronJobOwner - + else + crontab -l 2>/dev/null | grep -v "$confFile" | crontab - + fi +} + +# NOTE: This method does not check the configuration to see if redirection is necessary. +# This is intentional. If we don't redirect, tomcat logs might get redirected to a folder/file +# that does not exist, causing the service itself to not start +setupTomcatRedirection() { + logDebug "Method ${FUNCNAME[0]}" + local consoleLog="${JF_PRODUCT_HOME}/var/log/console.log" + _createConsoleLog + export CATALINA_OUT="${consoleLog}" +} + +setupScriptLogsRedirection() { + logDebug "Method ${FUNCNAME[0]}" + if [ -z "${JF_PRODUCT_HOME}" ]; then + logDebug "No JF_PRODUCT_HOME. Returning" + return + fi + # Create the console.log file if it is not already present + # _createConsoleLog || true + # # Ensure any logs (logger/logError/warn) also get redirected to the console.log + # # Using installer.log as a temparory fix. Please change this to console.log once INST-291 is fixed + export LOG_BEHAVIOR_ADD_REDIRECTION="${JF_PRODUCT_HOME}/var/log/console.log" + export LOG_BEHAVIOR_ADD_META="$FLAG_Y" +} + +# Returns Y if this method is run inside a container +isRunningInsideAContainer() { + local check1=$(grep -sq 'docker\|kubepods' /proc/1/cgroup; echo $?) + local check2=$(grep -sq 'containers' /proc/self/mountinfo; echo $?) + if [[ $check1 == 0 || $check2 == 0 || -f "/.dockerenv" ]]; then + echo -n "$FLAG_Y" + else + echo -n "$FLAG_N" + fi +} + +POSTGRES_USER=999 +NGINX_USER=104 +NGINX_GROUP=107 +ES_USER=1000 +REDIS_USER=999 +MONGO_USER=999 +RABBITMQ_USER=999 +LOG_FILE_PERMISSION=640 +PID_FILE_PERMISSION=644 + +# Copy file +copyFile(){ + local source=$1 + local target=$2 + local mode=${3:-overwrite} + local enableVerbose=${4:-"${FLAG_N}"} + local verboseFlag="" + + if [ ! -z "${enableVerbose}" ] && [ "${enableVerbose}" == "${FLAG_Y}" ]; then + verboseFlag="-v" + fi + + if [[ ! 
( $source && $target ) ]]; then + warn "Source and target is mandatory to copy file" + return 1 + fi + + if [[ -f "${target}" ]]; then + [[ "$mode" = "overwrite" ]] && ( cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}") || true + else + cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}" + fi +} + +# Copy files recursively from given source directory to destination directory +# This method wil copy but will NOT overwrite +# Destination will be created if its not available +copyFilesNoOverwrite(){ + local src=$1 + local dest=$2 + local enableVerboseCopy="${3:-${FLAG_Y}}" + + if [[ -z "${src}" || -z "${dest}" ]]; then + return + fi + + if [ -d "${src}" ] && [ "$(ls -A ${src})" ]; then + local relativeFilePath="" + local targetFilePath="" + + for file in $(find ${src} -type f 2>/dev/null) ; do + # Derive relative path and attach it to destination + # Example : + # src=/extra_config + # dest=/var/opt/jfrog/artifactory/etc + # file=/extra_config/config.xml + # relativeFilePath=config.xml + # targetFilePath=/var/opt/jfrog/artifactory/etc/config.xml + relativeFilePath=${file/${src}/} + targetFilePath=${dest}${relativeFilePath} + + createDir "$(dirname "$targetFilePath")" + copyFile "${file}" "${targetFilePath}" "no_overwrite" "${enableVerboseCopy}" + done + fi +} + +# TODO : WINDOWS ? +# Check the max open files and open processes set on the system +checkULimits () { + local minMaxOpenFiles=${1:-32000} + local minMaxOpenProcesses=${2:-1024} + local setValue=${3:-true} + local warningMsgForFiles=${4} + local warningMsgForProcesses=${5} + + logger "Checking open files and processes limits" + + local currentMaxOpenFiles=$(ulimit -n) + logger "Current max open files is $currentMaxOpenFiles" + if [ ${currentMaxOpenFiles} != "unlimited" ] && [ "$currentMaxOpenFiles" -lt "$minMaxOpenFiles" ]; then + if [ "${setValue}" ]; then + ulimit -n "${minMaxOpenFiles}" >/dev/null 2>&1 || warn "Max number of open files $currentMaxOpenFiles is low!" + [ -z "${warningMsgForFiles}" ] || warn "${warningMsgForFiles}" + else + errorExit "Max number of open files $currentMaxOpenFiles, is too low. Cannot run the application!" + fi + fi + + local currentMaxOpenProcesses=$(ulimit -u) + logger "Current max open processes is $currentMaxOpenProcesses" + if [ "$currentMaxOpenProcesses" != "unlimited" ] && [ "$currentMaxOpenProcesses" -lt "$minMaxOpenProcesses" ]; then + if [ "${setValue}" ]; then + ulimit -u "${minMaxOpenProcesses}" >/dev/null 2>&1 || warn "Max number of open files $currentMaxOpenFiles is low!" + [ -z "${warningMsgForProcesses}" ] || warn "${warningMsgForProcesses}" + else + errorExit "Max number of open files $currentMaxOpenProcesses, is too low. Cannot run the application!" + fi + fi +} + +createDirs() { + local appDataDir=$1 + local serviceName=$2 + local folders="backup bootstrap data etc logs work" + + [ -z "${appDataDir}" ] && errorExit "An application directory is mandatory to create its data structure" || true + [ -z "${serviceName}" ] && errorExit "A service name is mandatory to create service data structure" || true + + for folder in ${folders} + do + folder=${appDataDir}/${folder}/${serviceName} + if [ ! 
-d "${folder}" ]; then + logger "Creating folder : ${folder}" + mkdir -p "${folder}" || errorExit "Failed to create ${folder}" + fi + done +} + + +testReadWritePermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local test_file=${dir_to_check}/test-permissions + + # Write file + if echo test > ${test_file} 1> /dev/null 2>&1; then + # Write succeeded. Testing read... + if cat ${test_file} > /dev/null; then + rm -f ${test_file} + else + error=true + fi + else + error=true + fi + + if [ ${error} == true ]; then + return 1 + else + return 0 + fi +} + +# Test directory has read/write permissions for current user +testDirectoryPermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local u_id=$(id -u) + local id_str="id ${u_id}" + + logger "Testing directory ${dir_to_check} has read/write permissions for user ${id_str}" + + if ! testReadWritePermissions ${dir_to_check}; then + error=true + fi + + if [ "${error}" == true ]; then + local stat_data=$(stat -Lc "Directory: %n, permissions: %a, owner: %U, group: %G" ${dir_to_check}) + logger "###########################################################" + logger "${dir_to_check} DOES NOT have proper permissions for user ${id_str}" + logger "${stat_data}" + logger "Mounted directory must have read/write permissions for user ${id_str}" + logger "###########################################################" + errorExit "Directory ${dir_to_check} has bad permissions for user ${id_str}" + fi + logger "Permissions for ${dir_to_check} are good" +} + +# Utility method to create a directory path recursively with chown feature as +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: Root directory from where the path can be created +## $2: List of recursive child directories separated by space +## $3: user who should own the directory. Optional +## $4: group who should own the directory. Optional +# Depends on global: none +# Updates global: none +# Returns: NA +# +# Usage: +# createRecursiveDir "/opt/jfrog/product/var" "bootstrap tomcat lib" "user_name" "group_name" +createRecursiveDir(){ + local rootDir=$1 + local pathDirs=$2 + local user=$3 + local group=${4:-${user}} + local fullPath= + + [ ! -z "${rootDir}" ] || return 0 + + createDir "${rootDir}" "${user}" "${group}" + + [ ! -z "${pathDirs}" ] || return 0 + + fullPath=${rootDir} + + for dir in ${pathDirs}; do + fullPath=${fullPath}/${dir} + createDir "${fullPath}" "${user}" "${group}" + done +} + +# Utility method to create a directory +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: directory to create +## $2: user who should own the directory. Optional +## $3: group who should own the directory. Optional +# Depends on global: none +# Updates global: none +# Returns: NA + +createDir(){ + local dirName="$1" + local printMessage=no + logSilly "Method ${FUNCNAME[0]} invoked with [$dirName]" + [ -z "${dirName}" ] && return + + logDebug "Attempting to create ${dirName}" + mkdir -p "${dirName}" || errorExit "Unable to create directory: [${dirName}]" + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the folder + if [ ! -z "$userID" ] && [ ! -z "$groupID" ]; then + # Earlier, this line would have returned 1 if it failed. Now it just warns. + # This is intentional. 
Earlier, this line would NOT be reached if the folder already existed. + # Since it will always come to this line and the script may be running as a non-root user, this method will just warn if + # setting permissions fails (so as to not affect any existing flows) + io_setOwnershipNonRecursive "$dirName" "$userID" "$groupID" || warn "Could not set owner of [$dirName] to [$userID:$groupID]" + fi + # logging message to print created dir with user and group + local logMessage=${4:-$printMessage} + if [[ "${logMessage}" == "yes" ]]; then + logger "Successfully created directory [${dirName}]. Owner: [${userID}:${groupID}]" + fi +} + +removeSoftLinkAndCreateDir () { + local dirName="$1" + local userID="$2" + local groupID="$3" + local logMessage="$4" + removeSoftLink "${dirName}" + createDir "${dirName}" "${userID}" "${groupID}" "${logMessage}" +} + +# Utility method to remove a soft link +removeSoftLink () { + local dirName="$1" + if [[ -L "${dirName}" ]]; then + targetLink=$(readlink -f "${dirName}") + logger "Removing the symlink [${dirName}] pointing to [${targetLink}]" + rm -f "${dirName}" + fi +} + +# Check Directory exist in the path +checkDirExists () { + local directoryPath="$1" + + [[ -d "${directoryPath}" ]] && echo -n "true" || echo -n "false" +} + + +# Utility method to create a file +# Failure conditions: +# Parameters: +## $1: file to create +# Depends on global: none +# Updates global: none +# Returns: NA + +createFile(){ + local fileName="$1" + logSilly "Method ${FUNCNAME[0]} [$fileName]" + [ -f "${fileName}" ] && return 0 + touch "${fileName}" || return 1 + + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the folder + if [ ! -z "$userID" ] && [ ! -z "$groupID" ]; then + io_setOwnership "$fileName" "$userID" "$groupID" || return 1 + fi +} + +# Check File exist in the filePath +# IMPORTANT- DON'T ADD LOGGING to this method +checkFileExists () { + local filePath="$1" + + [[ -f "${filePath}" ]] && echo -n "true" || echo -n "false" +} + +# Check for directories contains any (files or sub directories) +# IMPORTANT- DON'T ADD LOGGING to this method +checkDirContents () { + local directoryPath="$1" + if [[ "$(ls -1 "${directoryPath}" | wc -l)" -gt 0 ]]; then + echo -n "true" + else + echo -n "false" + fi +} + +# Check contents exist in directory +# IMPORTANT- DON'T ADD LOGGING to this method +checkContentExists () { + local source="$1" + + if [[ "$(checkDirContents "${source}")" != "true" ]]; then + echo -n "false" + else + echo -n "true" + fi +} + +# Resolve the variable +# IMPORTANT- DON'T ADD LOGGING to this method +evalVariable () { + local output="$1" + local input="$2" + + eval "${output}"=\${"${input}"} + eval echo \${"${output}"} +} + +# Usage: if [ "$(io_commandExists 'curl')" == "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_commandExists() { + local commandToExecute="$1" + hash "${commandToExecute}" 2>/dev/null + local rt=$? 
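+    # 'hash' exits 0 only when the command is resolvable in PATH, so rt==0 maps to "yes".
+    # Callers compare the echoed string, e.g. if [ "$(io_commandExists 'curl')" == "yes" ]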
+ if [ "$rt" == 0 ]; then echo -n "yes"; else echo -n "no"; fi +} + +# Usage: if [ "$(io_curlExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_curlExists() { + io_commandExists "curl" +} + + +io_hasMatch() { + logSilly "Method ${FUNCNAME[0]}" + local result=0 + logDebug "Executing [echo \"$1\" | grep \"$2\" >/dev/null 2>&1]" + echo "$1" | grep "$2" >/dev/null 2>&1 || result=1 + return $result +} + +# Utility method to check if the string passed (usually a connection url) corresponds to this machine itself +# Failure conditions: None +# Parameters: +## $1: string to check against +# Depends on global: none +# Updates global: IS_LOCALHOST with value "yes/no" +# Returns: NA + +io_getIsLocalhost() { + logSilly "Method ${FUNCNAME[0]}" + IS_LOCALHOST="$FLAG_N" + local inputString="$1" + logDebug "Parsing [$inputString] to check if we are dealing with this machine itself" + + io_hasMatch "$inputString" "localhost" && { + logDebug "Found localhost. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for localhost" + + local hostIP=$(io_getPublicHostIP) + io_hasMatch "$inputString" "$hostIP" && { + logDebug "Found $hostIP. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostIP" + + local hostID=$(io_getPublicHostID) + io_hasMatch "$inputString" "$hostID" && { + logDebug "Found $hostID. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostID" + + local hostName=$(io_getPublicHostName) + io_hasMatch "$inputString" "$hostName" && { + logDebug "Found $hostName. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostName" + +} + +# Usage: if [ "$(io_tarExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_tarExists() { + io_commandExists "tar" +} + +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostIP() { + local OS_TYPE=$(uname) + local publicHostIP= + if [ "${OS_TYPE}" == "Darwin" ]; then + ipStatus=$(ifconfig en0 | grep "status" | awk '{print$2}') + if [ "${ipStatus}" == "active" ]; then + publicHostIP=$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}') + else + errorExit "Host IP could not be resolved!" + fi + elif [ "${OS_TYPE}" == "Linux" ]; then + publicHostIP=$(hostname -i 2>/dev/null || echo "127.0.0.1") + fi + publicHostIP=$(echo "${publicHostIP}" | awk '{print $1}') + echo -n "${publicHostIP}" +} + +# Will return the short host name (up to the first dot) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostName() { + echo -n "$(hostname -s)" +} + +# Will return the full host name (use this as much as possible) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostID() { + echo -n "$(hostname)" +} + +# Utility method to backup a file +# Failure conditions: NA +# Parameters: filePath +# Depends on global: none, +# Updates global: none +# Returns: NA +io_backupFile() { + logSilly "Method ${FUNCNAME[0]}" + fileName="$1" + if [ ! 
-f "${filePath}" ]; then + logDebug "No file: [${filePath}] to backup" + return + fi + dateTime=$(date +"%Y-%m-%d-%H-%M-%S") + targetFileName="${fileName}.backup.${dateTime}" + yes | \cp -f "$fileName" "${targetFileName}" + logger "File [${fileName}] backedup as [${targetFileName}]" +} + +# Reference https://stackoverflow.com/questions/4023830/how-to-compare-two-strings-in-dot-separated-version-format-in-bash/4025065#4025065 +is_number() { + case "$BASH_VERSION" in + 3.1.*) + PATTERN='\^\[0-9\]+\$' + ;; + *) + PATTERN='^[0-9]+$' + ;; + esac + + [[ "$1" =~ $PATTERN ]] +} + +io_compareVersions() { + if [[ $# != 2 ]] + then + echo "Usage: min_version current minimum" + return + fi + + A="${1%%.*}" + B="${2%%.*}" + + if [[ "$A" != "$1" && "$B" != "$2" && "$A" == "$B" ]] + then + io_compareVersions "${1#*.}" "${2#*.}" + else + if is_number "$A" && is_number "$B" + then + if [[ "$A" -eq "$B" ]]; then + echo "0" + elif [[ "$A" -gt "$B" ]]; then + echo "1" + elif [[ "$A" -lt "$B" ]]; then + echo "-1" + fi + fi + fi +} + +# Reference https://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-a-bash-variable +# Strip all leading and trailing spaces +# IMPORTANT- DON'T ADD LOGGING to this method +io_trim() { + local var="$1" + # remove leading whitespace characters + var="${var#"${var%%[![:space:]]*}"}" + # remove trailing whitespace characters + var="${var%"${var##*[![:space:]]}"}" + echo -n "$var" +} + +# temporary function will be removing it ASAP +# search for string and replace text in file +replaceText_migration_hook () { + local regexString="$1" + local replaceText="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}" + else + sed -i -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}" + fi +} + +# search for string and replace text in file +replaceText () { + local regexString="$1" + local replaceText="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}" + else + sed -i -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}" + logDebug "Replaced [$regexString] with [$replaceText] in [$file]" + fi +} + +# search for string and prepend text in file +prependText () { + local regexString="$1" + local text="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}" + else + sed -i -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}" + fi +} + +# add text to beginning of the file +addText () { + local text="$1" + local file="$2" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}" + else + sed -i -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}" + fi +} + +io_replaceString () { + local value="$1" + local firstString="$2" + local secondString="$3" + local separator=${4:-"/"} + local updateValue= + if [[ 
$(uname) == "Darwin" ]]; then + updateValue=$(echo "${value}" | sed "s${separator}${firstString}${separator}${secondString}${separator}") + else + updateValue=$(echo "${value}" | sed "s${separator}${firstString}${separator}${secondString}${separator}") + fi + echo -n "${updateValue}" +} + +_findYQ() { + # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. Does not add value) + local parentDir="$1" + if [ -z "$parentDir" ]; then + return + fi + logDebug "Executing command [find "${parentDir}" -name third-party -type d]" + local yq=$(find "${parentDir}" -name third-party -type d) + if [ -d "${yq}/yq" ]; then + export YQ_PATH="${yq}/yq" + fi +} + + +io_setYQPath() { + # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. Does not add value) + if [ "$(io_commandExists 'yq')" == "yes" ]; then + return + fi + + if [ ! -z "${JF_PRODUCT_HOME}" ] && [ -d "${JF_PRODUCT_HOME}" ]; then + _findYQ "${JF_PRODUCT_HOME}" + fi + + if [ -z "${YQ_PATH}" ] && [ ! -z "${COMPOSE_HOME}" ] && [ -d "${COMPOSE_HOME}" ]; then + _findYQ "${COMPOSE_HOME}" + fi + # TODO We can remove this block after all the code is restructured. + if [ -z "${YQ_PATH}" ] && [ ! -z "${SCRIPT_HOME}" ] && [ -d "${SCRIPT_HOME}" ]; then + _findYQ "${SCRIPT_HOME}" + fi + +} + +io_getLinuxDistribution() { + LINUX_DISTRIBUTION= + + # Make sure running on Linux + [ $(uname -s) != "Linux" ] && return + + # Find out what Linux distribution we are on + + cat /etc/*-release | grep -i Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 6.x + cat /etc/issue.net | grep Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 7.x + cat /etc/*-release | grep -i centos >/dev/null 2>&1 && LINUX_DISTRIBUTION=CentOS && LINUX_DISTRIBUTION_VER="7" || true + + # OS 8.x + grep -q -i "release 8" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="8" || true + + # OS 7.x + grep -q -i "release 7" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="7" || true + + # OS 6.x + grep -q -i "release 6" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="6" || true + + cat /etc/*-release | grep -i Red | grep -i 'VERSION=7' >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat && LINUX_DISTRIBUTION_VER="7" || true + + cat /etc/*-release | grep -i debian >/dev/null 2>&1 && LINUX_DISTRIBUTION=Debian || true + + cat /etc/*-release | grep -i ubuntu >/dev/null 2>&1 && LINUX_DISTRIBUTION=Ubuntu || true +} + +## Utility method to check ownership of folders/files +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If file is not owned by the user & group +## Parameters: + ## user + ## group + ## folder to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac +io_checkOwner () { + logSilly "Method ${FUNCNAME[0]}" + local osType=$(uname) + + if [ "${osType}" != "Linux" ]; then + logDebug "Unsupported OS. Skipping check" + return 0 + fi + + local file_to_check=$1 + local user_id_to_check=$2 + + + if [ -z "$user_id_to_check" ] || [ -z "$file_to_check" ]; then + errorExit "Invalid invocation of method. 
Missing mandatory inputs" + fi + + local group_id_to_check=${3:-$user_id_to_check} + local check_user_name=${4:-"no"} + + logDebug "Checking permissions on [$file_to_check] for user [$user_id_to_check] & group [$group_id_to_check]" + + local stat= + + if [ "${check_user_name}" == "yes" ]; then + stat=( $(stat -Lc "%U %G" ${file_to_check}) ) + else + stat=( $(stat -Lc "%u %g" ${file_to_check}) ) + fi + + local user_id=${stat[0]} + local group_id=${stat[1]} + + if [[ "${user_id}" != "${user_id_to_check}" ]] || [[ "${group_id}" != "${group_id_to_check}" ]] ; then + logDebug "Ownership mismatch. [${file_to_check}] is not owned by [${user_id_to_check}:${group_id_to_check}]" + return 1 + else + return 0 + fi +} + +## Utility method to change ownership of a file/folder - NON recursive +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnershipNonRecursive() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. Executing [chown ${user}:${group} ${targetFile}]" + chown ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to change ownership of a file. +## IMPORTANT +## If being called on a folder, should ONLY be called for fresh folders or may cause performance issues +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnership() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. Executing [chown -R ${user}:${group} ${targetFile}]" + chown -R ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to create third party folder structure necessary for Postgres +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## POSTGRESQL_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createPostgresDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${POSTGRESQL_DATA_ROOT}" ] && return 0 + + logDebug "Property [${POSTGRESQL_DATA_ROOT}] exists. Proceeding" + + createDir "${POSTGRESQL_DATA_ROOT}/data" + io_setOwnership "${POSTGRESQL_DATA_ROOT}" "${POSTGRES_USER}" "${POSTGRES_USER}" || errorExit "Setting ownership of [${POSTGRESQL_DATA_ROOT}] to [${POSTGRES_USER}:${POSTGRES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Nginx +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## NGINX_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createNginxDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${NGINX_DATA_ROOT}" ] && return 0 + + logDebug "Property [${NGINX_DATA_ROOT}] exists. 
Proceeding" + + createDir "${NGINX_DATA_ROOT}" + io_setOwnership "${NGINX_DATA_ROOT}" "${NGINX_USER}" "${NGINX_GROUP}" || errorExit "Setting ownership of [${NGINX_DATA_ROOT}] to [${NGINX_USER}:${NGINX_GROUP}] failed" +} + +## Utility method to create third party folder structure necessary for ElasticSearch +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## ELASTIC_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createElasticSearchDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${ELASTIC_DATA_ROOT}" ] && return 0 + + logDebug "Property [${ELASTIC_DATA_ROOT}] exists. Proceeding" + + createDir "${ELASTIC_DATA_ROOT}/data" + io_setOwnership "${ELASTIC_DATA_ROOT}" "${ES_USER}" "${ES_USER}" || errorExit "Setting ownership of [${ELASTIC_DATA_ROOT}] to [${ES_USER}:${ES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Redis +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## REDIS_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRedisDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${REDIS_DATA_ROOT}" ] && return 0 + + logDebug "Property [${REDIS_DATA_ROOT}] exists. Proceeding" + + createDir "${REDIS_DATA_ROOT}" + io_setOwnership "${REDIS_DATA_ROOT}" "${REDIS_USER}" "${REDIS_USER}" || errorExit "Setting ownership of [${REDIS_DATA_ROOT}] to [${REDIS_USER}:${REDIS_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Mongo +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## MONGODB_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createMongoDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${MONGODB_DATA_ROOT}" ] && return 0 + + logDebug "Property [${MONGODB_DATA_ROOT}] exists. Proceeding" + + createDir "${MONGODB_DATA_ROOT}/logs" + createDir "${MONGODB_DATA_ROOT}/configdb" + createDir "${MONGODB_DATA_ROOT}/db" + io_setOwnership "${MONGODB_DATA_ROOT}" "${MONGO_USER}" "${MONGO_USER}" || errorExit "Setting ownership of [${MONGODB_DATA_ROOT}] to [${MONGO_USER}:${MONGO_USER}] failed" +} + +## Utility method to create third party folder structure necessary for RabbitMQ +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## RABBITMQ_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRabbitMQDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${RABBITMQ_DATA_ROOT}" ] && return 0 + + logDebug "Property [${RABBITMQ_DATA_ROOT}] exists. 
Proceeding" + + createDir "${RABBITMQ_DATA_ROOT}" + io_setOwnership "${RABBITMQ_DATA_ROOT}" "${RABBITMQ_USER}" "${RABBITMQ_USER}" || errorExit "Setting ownership of [${RABBITMQ_DATA_ROOT}] to [${RABBITMQ_USER}:${RABBITMQ_USER}] failed" +} + +# Add or replace a property in provided properties file +addOrReplaceProperty() { + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + local delimiter=${4:-"="} + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}\s*${delimiter}.*$" ${propertiesPath} > /dev/null 2>&1 + [ $? -ne 0 ] && echo -e "\n${propertyName}${delimiter}${propertyValue}" >> ${propertiesPath} + sed -i -e "s|^${propertyName}\s*${delimiter}.*$|${propertyName}${delimiter}${propertyValue}|g;" ${propertiesPath} +} + +# Set property only if its not set +io_setPropertyNoOverride(){ + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}:" ${propertiesPath} > /dev/null 2>&1 + if [ $? -ne 0 ]; then + echo -e "${propertyName}: ${propertyValue}" >> ${propertiesPath} || warn "Setting property ${propertyName}: ${propertyValue} in [ ${propertiesPath} ] failed" + else + logger "Skipping update of property : ${propertyName}" >&6 + fi +} + +# Add a line to a file if it doesn't already exist +addLine() { + local line_to_add=$1 + local target_file=$2 + logger "Trying to add line $1 to $2" >&6 2>&1 + cat "$target_file" | grep -F "$line_to_add" -wq >&6 2>&1 + if [ $? != 0 ]; then + logger "Line does not exist and will be added" >&6 2>&1 + echo $line_to_add >> $target_file || errorExit "Could not update $target_file" + fi +} + +# Utility method to check if a value (first parameter) exists in an array (2nd parameter) +# 1st parameter "value to find" +# 2nd parameter "The array to search in. Please pass a string with each value separated by space" +# Example: containsElement "y" "y Y n N" +containsElement () { + local searchElement=$1 + local searchArray=($2) + local found=1 + for elementInIndex in "${searchArray[@]}";do + if [[ $elementInIndex == $searchElement ]]; then + found=0 + fi + done + return $found +} + +# Utility method to get user's choice +# 1st parameter "what to ask the user" +# 2nd parameter "what choices to accept, separated by spaces" +# 3rd parameter "what is the default choice (to use if the user simply presses Enter)" +# Example 'getUserChoice "Are you feeling lucky? Punk!" "y n Y N" "y"' +getUserChoice(){ + configureLogOutput + read_timeout=${read_timeout:-0.5} + local choice="na" + local text_to_display=$1 + local choices=$2 + local default_choice=$3 + users_choice= + + until containsElement "$choice" "$choices"; do + echo "";echo ""; + sleep $read_timeout #This ensures correct placement of the question. 
+ read -p "$text_to_display :" choice + : ${choice:=$default_choice} + done + users_choice=$choice + echo -e "\n$text_to_display: $users_choice" >&6 + sleep $read_timeout #This ensures correct logging +} + +setFilePermission () { + local permission=$1 + local file=$2 + chmod "${permission}" "${file}" || warn "Setting permission ${permission} to file [ ${file} ] failed" +} + + +#setting required paths +setAppDir (){ + SCRIPT_DIR=$(dirname $0) + SCRIPT_HOME="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + APP_DIR="`cd "${SCRIPT_HOME}";pwd`" +} + +ZIP_TYPE="zip" +COMPOSE_TYPE="compose" +HELM_TYPE="helm" +RPM_TYPE="rpm" +DEB_TYPE="debian" + +sourceScript () { + local file="$1" + + [ ! -z "${file}" ] || errorExit "target file is not passed to source a file" + + if [ ! -f "${file}" ]; then + errorExit "${file} file is not found" + else + source "${file}" || errorExit "Unable to source ${file}, please check if the user ${USER} has permissions to perform this action" + fi +} +# Source required helpers +initHelpers () { + local systemYamlHelper="${APP_DIR}/systemYamlHelper.sh" + local thirdPartyDir=$(find ${APP_DIR}/.. -name third-party -type d) + export YQ_PATH="${thirdPartyDir}/yq" + LIBXML2_PATH="${thirdPartyDir}/libxml2/bin/xmllint" + export LD_LIBRARY_PATH="${thirdPartyDir}/libxml2/lib" + sourceScript "${systemYamlHelper}" +} +# Check migration info yaml file available in the path +checkMigrationInfoYaml () { + + if [[ -f "${APP_DIR}/migrationHelmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationHelmInfo.yaml" + INSTALLER="${HELM_TYPE}" + elif [[ -f "${APP_DIR}/migrationZipInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationZipInfo.yaml" + INSTALLER="${ZIP_TYPE}" + elif [[ -f "${APP_DIR}/migrationRpmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationRpmInfo.yaml" + INSTALLER="${RPM_TYPE}" + elif [[ -f "${APP_DIR}/migrationDebInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationDebInfo.yaml" + INSTALLER="${DEB_TYPE}" + elif [[ -f "${APP_DIR}/migrationComposeInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationComposeInfo.yaml" + INSTALLER="${COMPOSE_TYPE}" + else + errorExit "File migration Info yaml does not exist in [${APP_DIR}]" + fi +} + +retrieveYamlValue () { + local yamlPath="$1" + local value="$2" + local output="$3" + local message="$4" + + [[ -z "${yamlPath}" ]] && errorExit "yamlPath is mandatory to get value from ${MIGRATION_SYSTEM_YAML_INFO}" + + getYamlValue "${yamlPath}" "${MIGRATION_SYSTEM_YAML_INFO}" "false" + value="${YAML_VALUE}" + if [[ -z "${value}" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "Empty value for ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + elif [[ "${output}" == "Skip" ]]; then + return + else + errorExit "${message}" + fi + fi +} + +checkEnv () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + # check Environment JF_PRODUCT_HOME is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_PRODUCT_HOME")" + if [[ -z "${NEW_DATA_DIR}" ]]; then + errorExit "Environment variable JF_PRODUCT_HOME is not set, this is required to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + getCustomDataDir_hook + NEW_DATA_DIR="${OLD_DATA_DIR}" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + else + # check Environment JF_ROOT_DATA_DIR is 
set before migration + OLD_DATA_DIR="$(evalVariable "OLD_DATA_DIR" "JF_ROOT_DATA_DIR")" + # check Environment JF_ROOT_DATA_DIR is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_ROOT_DATA_DIR")" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi + +} + +getDataDir () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}"|| "${INSTALLER}" == "${HELM_TYPE}" ]]; then + checkEnv + else + getCustomDataDir_hook + NEW_DATA_DIR="`cd "${APP_DIR}"/../../;pwd`" + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi +} + +# Retrieve Product name from MIGRATION_SYSTEM_YAML_INFO +getProduct () { + retrieveYamlValue "migration.product" "${YAML_VALUE}" "Fail" "Empty value under ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + PRODUCT="${YAML_VALUE}" + PRODUCT=$(echo "${PRODUCT}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + if [[ "${PRODUCT}" != "artifactory" && "${PRODUCT}" != "distribution" && "${PRODUCT}" != "xray" ]]; then + errorExit "migration.product in [${MIGRATION_SYSTEM_YAML_INFO}] is not correct, please set based on product as ARTIFACTORY or DISTRIBUTION" + fi + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + JF_USER="${PRODUCT}" + fi +} +# Compare product version with minProductVersion and maxProductVersion +migrateCheckVersion () { + local productVersion="$1" + local minProductVersion="$2" + local maxProductVersion="$3" + local productVersion618="6.18.0" + local unSupportedProductVersions7=("7.2.0 7.2.1") + + if [[ "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 1 ]]; then + logger "Migration not necessary. ${PRODUCT} is already ${productVersion}" + exit 11 + elif [[ "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 1 ]]; then + if [[ ("$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 1) && " ${unSupportedProductVersions7[@]} " =~ " ${CURRENT_VERSION} " ]]; then + touch /tmp/error; + errorExit "Current ${PRODUCT} version (${productVersion}) does not support migration to ${CURRENT_VERSION}" + else + bannerStart "Detected ${PRODUCT} ${productVersion}, initiating migration" + fi + else + logger "Current ${PRODUCT} ${productVersion} version is not supported for migration" + exit 1 + fi +} + +getProductVersion () { + local minProductVersion="$1" + local maxProductVersion="$2" + local newfilePath="$3" + local oldfilePath="$4" + local propertyInDocker="$5" + local property="$6" + local productVersion= + local status= + + if [[ "$INSTALLER" == "${COMPOSE_TYPE}" ]]; then + if [[ -f "${oldfilePath}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + else + productVersion="$(cat "${oldfilePath}")" + fi + status="success" + elif [[ -f "${newfilePath}" ]]; then + productVersion="$(readKey "${propertyInDocker}" "${newfilePath}")" + status="fail" + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." 
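+            # No version source found - treat this as nothing to migrate and exit without error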
+ exit 0 + fi + elif [[ "$INSTALLER" == "${HELM_TYPE}" ]]; then + if [[ -f "${oldfilePath}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + else + productVersion="$(cat "${oldfilePath}")" + fi + status="success" + else + productVersion="${CURRENT_VERSION}" + [[ -z "${productVersion}" || "${productVersion}" == "" ]] && logger "${PRODUCT} CURRENT_VERSION is not set" && exit 0 + fi + else + if [[ -f "${newfilePath}" ]]; then + productVersion="$(readKey "${property}" "${newfilePath}")" + status="fail" + elif [[ -f "${oldfilePath}" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + status="success" + else + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + logger "File [${newfilePath}] not found to get current version." + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." + fi + exit 0 + fi + fi + if [[ -z "${productVersion}" || "${productVersion}" == "" ]]; then + [[ "${status}" == "success" ]] && logger "No version found in file [${oldfilePath}]." + [[ "${status}" == "fail" ]] && logger "No version found in file [${newfilePath}]." + exit 0 + fi + + migrateCheckVersion "${productVersion}" "${minProductVersion}" "${maxProductVersion}" +} + +readKey () { + local property="$1" + local file="$2" + local version= + + while IFS='=' read -r key value || [ -n "${key}" ]; + do + [[ ! "${key}" =~ \#.* && ! -z "${key}" && ! -z "${value}" ]] + key="$(io_trim "${key}")" + if [[ "${key}" == "${property}" ]]; then + version="${value}" && check=true && break + else + check=false + fi + done < "${file}" + if [[ "${check}" == "false" ]]; then + return + fi + echo "${version}" +} + +# create Log directory +createLogDir () { + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" + fi +} + +# Creating migration log file +creationMigrateLog () { + local LOG_FILE_NAME="migration.log" + createLogDir + local MIGRATION_LOG_FILE="${NEW_DATA_DIR}/log/${LOG_FILE_NAME}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + MIGRATION_LOG_FILE="${SCRIPT_HOME}/${LOG_FILE_NAME}" + fi + touch "${MIGRATION_LOG_FILE}" + setFilePermission "${LOG_FILE_PERMISSION}" "${MIGRATION_LOG_FILE}" + exec &> >(tee -a "${MIGRATION_LOG_FILE}") +} +# Set path where system.yaml should create +setSystemYamlPath () { + SYSTEM_YAML_PATH="${NEW_DATA_DIR}/etc/system.yaml" + if [[ "${INSTALLER}" != "${HELM_TYPE}" ]]; then + logger "system.yaml will be created in path [${SYSTEM_YAML_PATH}]" + fi +} +# Create directory +createDirectory () { + local directory="$1" + local output="$2" + local check=false + local message="Could not create directory ${directory}, please check if the user ${USER} has permissions to perform this action" + removeSoftLink "${directory}" + mkdir -p "${directory}" && check=true || check=false + if [[ "${check}" == "false" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "${message}" + else + errorExit "${message}" + fi + fi + setOwnershipBasedOnInstaller "${directory}" +} + +setOwnershipBasedOnInstaller () { + local directory="$1" + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + chown -R ${USER_TO_CHECK}:${GROUP_TO_CHECK} "${directory}" || warn "Setting ownership on $directory failed" + elif [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == 
"${HELM_TYPE}" ]]; then + io_setOwnership "${directory}" "${JF_USER}" "${JF_USER}" + fi +} + +getUserAndGroup () { + local file="$1" + read uid gid <<<$(stat -c '%U %G' ${file}) + USER_TO_CHECK="${uid}" + GROUP_TO_CHECK="${gid}" +} + +# set ownership +getUserAndGroupFromFile () { + case $PRODUCT in + artifactory) + getUserAndGroup "/etc/opt/jfrog/artifactory/artifactory.properties" + ;; + distribution) + getUserAndGroup "${OLD_DATA_DIR}/etc/versions.properties" + ;; + xray) + getUserAndGroup "${OLD_DATA_DIR}/security/master.key" + ;; + esac +} + +# creating required directories +createRequiredDirs () { + bannerSubSection "CREATING REQUIRED DIRECTORIES" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${JF_USER}" "${JF_USER}" "yes" + io_setOwnership "${NEW_DATA_DIR}" "${JF_USER}" "${JF_USER}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data/postgres" "${POSTGRES_USER}" "${POSTGRES_USER}" "yes" + fi + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + fi +} + +# Check entry in map is format +checkMapEntry () { + local entry="$1" + + [[ "${entry}" != *"="* ]] && echo -n "false" || echo -n "true" +} +# Check value Empty and warn +warnIfEmpty () { + local filePath="$1" + local yamlPath="$2" + local check= + + if [[ -z "${filePath}" ]]; then + warn "Empty value in yamlpath [${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + check=false + else + check=true + fi + echo "${check}" +} + +logCopyStatus () { + local status="$1" + local logMessage="$2" + local warnMessage="$3" + + [[ "${status}" == "success" ]] && logger "${logMessage}" + [[ "${status}" == "fail" ]] && warn "${warnMessage}" +} +# copy contents from source to destination +copyCmd () { + local source="$1" + local target="$2" + local mode="$3" + local status= + + case $mode in + unique) + cp -up "${source}"/* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to [${target}]" + ;; + specific) + cp -pf "${source}" "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied file [${source}] to [${target}]" "Failed to copy file [${source}] to [${target}]" + ;; + patternFiles) + cp -pf "${source}"* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied files matching 
[${source}*] to [${target}]" "Failed to copy files matching [${source}*] to [${target}]" + ;; + full) + cp -prf "${source}"/* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to [${target}]" + ;; + esac +} +# Check contents exist in source before copying +copyOnContentExist () { + local source="$1" + local target="$2" + local mode="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + copyCmd "${source}" "${target}" "${mode}" + else + logger "No contents to copy from [${source}]" + fi +} + +# move source to destination +moveCmd () { + local source="$1" + local target="$2" + local status= + + mv -f "${source}" "${target}" && status="success" || status="fail" + [[ "${status}" == "success" ]] && logger "Successfully moved directory [${source}] to [${target}]" + [[ "${status}" == "fail" ]] && warn "Failed to move directory [${source}] to [${target}]" +} + +# symlink target to source +symlinkCmd () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + local check=false + + if [[ "${symlinkSubDir}" == "subDir" ]]; then + ln -sf "${source}"/* "${target}" && check=true || check=false + else + ln -sf "${source}" "${target}" && check=true || check=false + fi + + [[ "${check}" == "true" ]] && logger "Successfully symlinked directory [${target}] to old [${source}]" + [[ "${check}" == "false" ]] && warn "Symlink operation failed" +} +# Check contents exist in source before symlinking +symlinkOnExist () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + if [[ "${symlinkSubDir}" == "subDir" ]]; then + symlinkCmd "${source}" "${target}" "subDir" + else + symlinkCmd "${source}" "${target}" + fi + else + logger "No contents to symlink from [${source}]" + fi +} + +prependDir () { + local absolutePath="$1" + local fullPath="$2" + local sourcePath= + + if [[ "${absolutePath}" = \/* ]]; then + sourcePath="${absolutePath}" + else + sourcePath="${fullPath}" + fi + echo "${sourcePath}" +} + +getFirstEntry (){ + local entry="$1" + + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $1}' +} + +getSecondEntry () { + local entry="$1" + + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $2}' +} +# To get absolutePath +pathResolver () { + local directoryPath="$1" + local dataDir= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Warning" + dataDir="${YAML_VALUE}" + cd "${dataDir}" + else + cd "${OLD_DATA_DIR}" + fi + absoluteDir="`cd "${directoryPath}";pwd`" + echo "${absoluteDir}" +} + +checkPathResolver () { + local value="$1" + + if [[ "${value}" == \/* ]]; then + value="${value}" + else + value="$(pathResolver "${value}")" + fi + echo "${value}" +} + +propertyMigrate () { + local entry="$1" + local filePath="$2" + local fileName="$3" + local check=false + + local yamlPath="$(getFirstEntry "${entry}")" + local property="$(getSecondEntry "${entry}")" + if [[ -z "${property}" ]]; then + warn "Property is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${property}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + local keyValues=$(cat "${NEW_DATA_DIR}/${filePath}/${fileName}" | grep "^[^#]" | grep "[*=*]") + for i in 
${keyValues}; do + key=$(echo "${i}" | awk -F"=" '{print $1}') + value=$(echo "${i}" | cut -f 2- -d '=') + [ -z "${key}" ] && continue + [ -z "${value}" ] && continue + if [[ "${key}" == "${property}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + value="$(migrateResolveDerbyPath "${key}" "${value}")" + value="$(migrateResolveHaDirPath "${key}" "${value}")" + if [[ "${INSTALLER}" != "${DOCKER_TYPE}" ]]; then + value="$(updatePostgresUrlString_Hook "${yamlPath}" "${value}")" + fi + fi + if [[ "${key}" == "context.url" ]]; then + local ip=$(echo "${value}" | awk -F/ '{print $3}' | sed 's/:.*//') + setSystemValue "shared.node.ip" "${ip}" "${SYSTEM_YAML_PATH}" + logger "Setting [shared.node.ip] with [${ip}] in system.yaml" + fi + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" && logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" && check=true && break || check=false + fi + done + [[ "${check}" == "false" ]] && logger "Property [${property}] not found in file [${fileName}]" +} + +setHaEnabled_hook () { + echo "" +} + +migratePropertiesFiles () { + local fileList= + local filePath= + local fileName= + local map= + + retrieveYamlValue "migration.propertyFiles.files" "fileList" "Skip" + fileList="${YAML_VALUE}" + if [[ -z "${fileList}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF PROPERTY FILES" + for file in ${fileList}; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.propertyFiles.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.propertyFiles.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + if [[ "$(checkFileExists "${NEW_DATA_DIR}/${filePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + # setting haEnabled with true only if ha-node.properties is present + setHaEnabled_hook "${filePath}" + retrieveYamlValue "migration.propertyFiles.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + propertyMigrate "${entry}" "${filePath}" "${fileName}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=property" + fi + done + else + logger "File [${fileName}] was not found in path [${NEW_DATA_DIR}/${filePath}] to migrate" + fi + done +} + +createTargetDir () { + local mountDir="$1" + local target="$2" + + logger "Target directory not found [${mountDir}/${target}], creating it" + createDirectoryRecursive "${mountDir}" "${target}" "Warning" +} + +createDirectoryRecursive () { + local mountDir="$1" + local target="$2" + local output="$3" + local check=false + local message="Could not create directory ${directory}, please check if the user ${USER} has permissions to perform this action" + removeSoftLink "${mountDir}/${target}" + local directory=$(echo "${target}" | tr '/' ' ' ) + local targetDir="${mountDir}" + for dir in ${directory}; + do + targetDir="${targetDir}/${dir}" + mkdir -p "${targetDir}" && check=true || check=false + setOwnershipBasedOnInstaller "${targetDir}" + done + if [[ "${check}" == "false" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "${message}" + else + errorExit "${message}" + fi + fi +} + +copyOperation () { + local source="$1" + local target="$2" + local mode="$3" + local check=false + local 
targetDataDir= + local targetLink= + local date= + + # prepend OLD_DATA_DIR only if source is relative path + source="$(prependDir "${source}" "${OLD_DATA_DIR}/${source}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + #remove source if it is a symlink + if [[ -L "${source}" ]]; then + targetLink=$(readlink -f "${source}") + logger "Removing the symlink [${source}] pointing to [${targetLink}]" + rm -f "${source}" + source=${targetLink} + fi + if [[ "$(checkDirExists "${source}")" != "true" ]]; then + logger "Source [${source}] directory not found in path" + return + fi + if [[ "$(checkDirContents "${source}")" != "true" ]]; then + logger "No contents to copy from [${source}]" + return + fi + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyOnContentExist "${source}" "${targetDataDir}/${target}" "${mode}" +} + +copySpecificFiles () { + local source="$1" + local target="$2" + local mode="$3" + + # prepend OLD_DATA_DIR only if source is relative path + source="$(prependDir "${source}" "${OLD_DATA_DIR}/${source}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkFileExists "${source}")" != "true" ]]; then + logger "Source file [${source}] does not exist in path" + return + fi + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${source}" "${targetDataDir}/${target}" "${mode}" +} + +copyPatternMatchingFiles () { + local source="$1" + local target="$2" + local mode="$3" + local sourcePath="${4}" + + # prepend OLD_DATA_DIR only if source is relative path + sourcePath="$(prependDir "${sourcePath}" "${OLD_DATA_DIR}/${sourcePath}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkDirExists "${sourcePath}")" != "true" ]]; then + logger "Source [${sourcePath}] directory not found in path" + return + fi + if ls "${sourcePath}/${source}"* 1> /dev/null 2>&1; then + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${sourcePath}/${source}" "${targetDataDir}/${target}" "${mode}" + else + logger "Source file [${sourcePath}/${source}*] does not exist in path" + fi +} + +copyLogMessage () { + local mode="$1" + case $mode in + specific) + logger "Copy file [${source}] to target [${targetDataDir}/${target}]" + ;; + patternFiles) + logger "Copy files matching [${sourcePath}/${source}*] to target [${targetDataDir}/${target}]" + ;; + full) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + unique) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + esac +} + +copyBannerMessages () { + local mode="$1" + local textMode="$2" + case $mode in + specific) + bannerSection "COPY ${textMode} FILES" + ;; + patternFiles) + bannerSection "COPY MATCHING ${textMode}" + ;; + full) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + unique) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + esac +} + +invokeCopyFunctions () { + local mode="$1" 
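+    # mode is one of: specific | patternFiles | full | unique (dispatched in the case below)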
+ local source="$2" + local target="$3" + + case $mode in + specific) + copySpecificFiles "${source}" "${target}" "${mode}" + ;; + patternFiles) + retrieveYamlValue "migration.${copyFormat}.sourcePath" "map" "Warning" + local sourcePath="${YAML_VALUE}" + copyPatternMatchingFiles "${source}" "${target}" "${mode}" "${sourcePath}" + ;; + full) + copyOperation "${source}" "${target}" "${mode}" + ;; + unique) + copyOperation "${source}" "${target}" "${mode}" + ;; + esac +} +# Copies contents from source directory and target directory +copyDataDirectories () { + local copyFormat="$1" + local mode="$2" + local map= + local source= + local target= + local textMode= + local targetDataDir= + local copyFormatValue= + + retrieveYamlValue "migration.${copyFormat}" "${copyFormat}" "Skip" + copyFormatValue="${YAML_VALUE}" + if [[ -z "${copyFormatValue}" ]]; then + return + fi + textMode=$(echo "${mode}" | tr '[:lower:]' '[:upper:]' 2>/dev/null) + copyBannerMessages "${mode}" "${textMode}" + retrieveYamlValue "migration.${copyFormat}.map" "map" "Warning" + map="${YAML_VALUE}" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + invokeCopyFunctions "${mode}" "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +invokeMoveFunctions () { + local source="$1" + local target="$2" + local sourceDataDir= + local targetBasename= + # prepend OLD_DATA_DIR only if source is relative path + sourceDataDir=$(prependDir "${source}" "${OLD_DATA_DIR}/${source}") + targetBasename=$(dirname "${target}") + logger "Moving directory source [${sourceDataDir}] to target [${NEW_DATA_DIR}/${target}]" + if [[ "$(checkDirExists "${sourceDataDir}")" != "true" ]]; then + logger "Directory [${sourceDataDir}] not found in path to move" + return + fi + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${targetBasename}")" != "true" ]]; then + createTargetDir "${NEW_DATA_DIR}" "${targetBasename}" + moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/${target}" + else + moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/tempDir" + moveCmd "${NEW_DATA_DIR}/tempDir" "${NEW_DATA_DIR}/${target}" + fi +} + +# Move source directory and target directory +moveDirectories () { + local moveDataDirectories= + local map= + local source= + local target= + + retrieveYamlValue "migration.moveDirectories" "moveDirectories" "Skip" + moveDirectories="${YAML_VALUE}" + if [[ -z "${moveDirectories}" ]]; then + return + fi + bannerSection "MOVE DIRECTORIES" + retrieveYamlValue "migration.moveDirectories.map" "map" "Warning" + map="${YAML_VALUE}" + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + invokeMoveFunctions "${source}" 
"${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +# Trim masterKey if its generated using hex 32 +trimMasterKey () { + local masterKeyDir=/opt/jfrog/artifactory/var/etc/security + local oldMasterKey=$(<${masterKeyDir}/master.key) + local oldMasterKey_Length=$(echo ${#oldMasterKey}) + local newMasterKey= + if [[ ${oldMasterKey_Length} -gt 32 ]]; then + bannerSection "TRIM MASTERKEY" + newMasterKey=$(echo ${oldMasterKey:0:32}) + cp ${masterKeyDir}/master.key ${masterKeyDir}/backup_master.key + logger "Original masterKey is backed up : ${masterKeyDir}/backup_master.key" + rm -rf ${masterKeyDir}/master.key + echo ${newMasterKey} > ${masterKeyDir}/master.key + logger "masterKey is trimmed : ${masterKeyDir}/master.key" + fi +} + +copyDirectories () { + + copyDataDirectories "copyFiles" "full" + copyDataDirectories "copyUniqueFiles" "unique" + copyDataDirectories "copySpecificFiles" "specific" + copyDataDirectories "copyPatternMatchingFiles" "patternFiles" +} + +symlinkDir () { + local source="$1" + local target="$2" + local targetDir= + local basename= + local targetParentDir= + + targetDir="$(dirname "${target}")" + if [[ "${targetDir}" == "${source}" ]]; then + # symlink the sub directories + createDirectory "${NEW_DATA_DIR}/${target}" "Warning" + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" "subDir" + basename="$(basename "${target}")" + cd "${NEW_DATA_DIR}/${target}" && rm -f "${basename}" + fi + else + targetParentDir="$(dirname "${NEW_DATA_DIR}/${target}")" + createDirectory "${targetParentDir}" "Warning" + if [[ "$(checkDirExists "${targetParentDir}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" + fi + fi +} + +symlinkOperation () { + local source="$1" + local target="$2" + local check=false + local targetLink= + local date= + + # Check if source is a link and do symlink + if [[ -L "${OLD_DATA_DIR}/${source}" ]]; then + targetLink=$(readlink -f "${OLD_DATA_DIR}/${source}") + symlinkOnExist "${targetLink}" "${NEW_DATA_DIR}/${target}" + else + # check if source is directory and do symlink + if [[ "$(checkDirExists "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "Source [${source}] directory not found in path to symlink" + return + fi + if [[ "$(checkDirContents "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "No contents found in [${OLD_DATA_DIR}/${source}] to symlink" + return + fi + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" != "true" ]]; then + logger "Target directory [${NEW_DATA_DIR}/${target}] does not exist to create symlink, creating it" + symlinkDir "${source}" "${target}" + else + rm -rf "${NEW_DATA_DIR}/${target}" && check=true || check=false + [[ "${check}" == "false" ]] && warn "Failed to remove contents in [${NEW_DATA_DIR}/${target}/]" + symlinkDir "${source}" "${target}" + fi + fi +} +# Creates a symlink path - Source directory to which the symbolic link should point. 
+symlinkDirectories () { + local linkFiles= + local map= + local source= + local target= + + retrieveYamlValue "migration.linkFiles" "linkFiles" "Skip" + linkFiles="${YAML_VALUE}" + if [[ -z "${linkFiles}" ]]; then + return + fi + bannerSection "SYMLINK DIRECTORIES" + retrieveYamlValue "migration.linkFiles.map" "map" "Warning" + map="${YAML_VALUE}" + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + logger "Symlink directory [${NEW_DATA_DIR}/${target}] to old [${OLD_DATA_DIR}/${source}]" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + symlinkOperation "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +updateConnectionString () { + local yamlPath="$1" + local value="$2" + local mongoPath="shared.mongo.url" + local rabbitmqPath="shared.rabbitMq.url" + local postgresPath="shared.database.url" + local redisPath="shared.redis.connectionString" + local mongoConnectionString="mongo.connectionString" + local sourceKey= + local hostIp=$(io_getPublicHostIP) + local hostKey= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + # Replace @postgres:,@mongodb:,@rabbitmq:,@redis: to @{hostIp}: (Compose Installer) + hostKey="@${hostIp}:" + case $yamlPath in + ${postgresPath}) + sourceKey="@postgres:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoPath}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${rabbitmqPath}) + sourceKey="@rabbitmq:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${redisPath}) + sourceKey="@redis:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoConnectionString}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + esac + fi + echo -n "${value}" +} + +yamlMigrate () { + local entry="$1" + local sourceFile="$2" + local value= + local yamlPath= + local key= + yamlPath="$(getFirstEntry "${entry}")" + key="$(getSecondEntry "${entry}")" + if [[ -z "${key}" ]]; then + warn "key is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${key}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + getYamlValue "${key}" "${sourceFile}" "false" + value="${YAML_VALUE}" + if [[ ! 
-z "${value}" ]]; then + value=$(updateConnectionString "${yamlPath}" "${value}") + fi + if [[ -z "${value}" ]]; then + logger "No value for [${key}] in [${sourceFile}]" + else + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the key [${key}] in system.yaml" + fi +} + +migrateYamlFile () { + local files= + local filePath= + local fileName= + local sourceFile= + local map= + retrieveYamlValue "migration.yaml.files" "files" "Skip" + files="${YAML_VALUE}" + if [[ -z "${files}" ]]; then + return + fi + bannerSection "MIGRATION OF YAML FILES" + for file in $files; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.yaml.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.yaml.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + sourceFile="${NEW_DATA_DIR}/${filePath}/${fileName}" + if [[ "$(checkFileExists "${sourceFile}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + retrieveYamlValue "migration.yaml.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + yamlMigrate "${entry}" "${sourceFile}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=key" + fi + done + else + logger "File [${fileName}] is not found in path [${NEW_DATA_DIR}/${filePath}] to migrate" + fi + done +} +# updates the key and value in system.yaml +updateYamlKeyValue () { + local entry="$1" + local value= + local yamlPath= + local key= + + yamlPath="$(getFirstEntry "${entry}")" + value="$(getSecondEntry "${entry}")" + if [[ -z "${value}" ]]; then + warn "value is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${key}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value [${value}] in system.yaml" +} + +updateSystemYamlFile () { + local updateYaml= + local map= + + retrieveYamlValue "migration.updateSystemYaml" "updateYaml" "Skip" + updateSystemYaml="${YAML_VALUE}" + if [[ -z "${updateSystemYaml}" ]]; then + return + fi + bannerSection "UPDATE SYSTEM YAML FILE WITH KEY AND VALUES" + retrieveYamlValue "migration.updateSystemYaml.map" "map" "Warning" + map="${YAML_VALUE}" + if [[ -z "${map}" ]]; then + return + fi + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + updateYamlKeyValue "${entry}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=key" + fi + done +} + +backupFiles_hook () { + logSilly "Method ${FUNCNAME[0]}" +} + +backupDirectory () { + local backupDir="$1" + local dir="$2" + local targetDir="$3" + local effectiveUser= + local effectiveGroup= + + if [[ "${dir}" = \/* ]]; then + dir=$(echo "${dir/\//}") + fi + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + effectiveUser="${JF_USER}" + effectiveGroup="${JF_USER}" + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + effectiveUser="${USER_TO_CHECK}" + effectiveGroup="${GROUP_TO_CHECK}" + fi + + removeSoftLinkAndCreateDir "${backupDir}" 
"${effectiveUser}" "${effectiveGroup}" "yes" + local backupDirectory="${backupDir}/${PRODUCT}" + removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes" + removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")" + if [[ "${outputCheckDirExists}" == "true" ]]; then + copyOnContentExist "${targetDir}" "${backupDirectory}/${dir}" "full" + fi +} + +removeOldDirectory () { + local backupDir="$1" + local entry="$2" + local check=false + + # prepend OLD_DATA_DIR only if entry is relative path + local targetDir="$(prependDir "${entry}" "${OLD_DATA_DIR}/${entry}")" + local outputCheckDirExists="$(checkDirExists "${targetDir}")" + if [[ "${outputCheckDirExists}" != "true" ]]; then + logger "No [${targetDir}] directory found to delete" + echo ""; + return + fi + backupDirectory "${backupDir}" "${entry}" "${targetDir}" + rm -rf "${targetDir}" && check=true || check=false + [[ "${check}" == "true" ]] && logger "Successfully removed directory [${targetDir}]" + [[ "${check}" == "false" ]] && warn "Failed to remove directory [${targetDir}]" + echo ""; +} + +cleanUpOldDataDirectories () { + local cleanUpOldDataDir= + local map= + local entry= + + retrieveYamlValue "migration.cleanUpOldDataDir" "cleanUpOldDataDir" "Skip" + cleanUpOldDataDir="${YAML_VALUE}" + if [[ -z "${cleanUpOldDataDir}" ]]; then + return + fi + bannerSection "CLEAN UP OLD DATA DIRECTORIES" + retrieveYamlValue "migration.cleanUpOldDataDir.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + date="$(date +%Y%m%d%H%M)" + backupDir="${NEW_DATA_DIR}/backup/backup-${date}" + bannerImportant "****** Old data configurations are backedup in [${backupDir}] directory ******" + backupFiles_hook "${backupDir}/${PRODUCT}" + for entry in $map; + do + removeOldDirectory "${backupDir}" "${entry}" + done +} + +backupFiles () { + local backupDir="$1" + local dir="$2" + local targetDir="$3" + local fileName="$4" + local effectiveUser= + local effectiveGroup= + + if [[ "${dir}" = \/* ]]; then + dir=$(echo "${dir/\//}") + fi + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + effectiveUser="${JF_USER}" + effectiveGroup="${JF_USER}" + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + effectiveUser="${USER_TO_CHECK}" + effectiveGroup="${GROUP_TO_CHECK}" + fi + + removeSoftLinkAndCreateDir "${backupDir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local backupDirectory="${backupDir}/${PRODUCT}" + removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes" + removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")" + if [[ "${outputCheckDirExists}" == "true" ]]; then + copyCmd "${targetDir}/${fileName}" "${backupDirectory}/${dir}" "specific" + fi +} + +removeOldFiles () { + local backupDir="$1" + local directoryName="$2" + local fileName="$3" + local check=false + + # prepend OLD_DATA_DIR only if entry is relative path + local targetDir="$(prependDir "${directoryName}" "${OLD_DATA_DIR}/${directoryName}")" + local outputCheckFileExists="$(checkFileExists "${targetDir}/${fileName}")" + if [[ "${outputCheckFileExists}" != "true" ]]; then + logger "No [${targetDir}/${fileName}] file found to delete" + return + fi + backupFiles "${backupDir}" "${directoryName}" 
"${targetDir}" "${fileName}" + rm -f "${targetDir}/${fileName}" && check=true || check=false + [[ "${check}" == "true" ]] && logger "Successfully removed file [${targetDir}/${fileName}]" + [[ "${check}" == "false" ]] && warn "Failed to remove file [${targetDir}/${fileName}]" + echo ""; +} + +cleanUpOldFiles () { + local cleanUpFiles= + local map= + local entry= + + retrieveYamlValue "migration.cleanUpOldFiles" "cleanUpOldFiles" "Skip" + cleanUpOldFiles="${YAML_VALUE}" + if [[ -z "${cleanUpOldFiles}" ]]; then + return + fi + bannerSection "CLEAN UP OLD FILES" + retrieveYamlValue "migration.cleanUpOldFiles.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + date="$(date +%Y%m%d%H%M)" + backupDir="${NEW_DATA_DIR}/backup/backup-${date}" + bannerImportant "****** Old files are backedup in [${backupDir}] directory ******" + for entry in $map; + do + local outputCheckMapEntry="$(checkMapEntry "${entry}")" + if [[ "${outputCheckMapEntry}" != "true" ]]; then + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e directoryName=fileName" + fi + local fileName="$(getSecondEntry "${entry}")" + local directoryName="$(getFirstEntry "${entry}")" + [[ -z "${fileName}" ]] && warn "File name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${directoryName}" ]] && warn "Directory name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + removeOldFiles "${backupDir}" "${directoryName}" "${fileName}" + echo ""; + done +} + +startMigration () { + bannerSection "STARTING MIGRATION" +} + +endMigration () { + bannerSection "MIGRATION COMPLETED SUCCESSFULLY" +} + +initialize () { + setAppDir + _pauseExecution "setAppDir" + initHelpers + _pauseExecution "initHelpers" + checkMigrationInfoYaml + _pauseExecution "checkMigrationInfoYaml" + getProduct + _pauseExecution "getProduct" + getDataDir + _pauseExecution "getDataDir" +} + +main () { + case $PRODUCT in + artifactory) + migrateArtifactory + ;; + distribution) + migrateDistribution + ;; + xray) + migrationXray + ;; + esac + exit 0 +} + +# Ensures meta data is logged +LOG_BEHAVIOR_ADD_META="$FLAG_Y" + + +migrateResolveDerbyPath () { + local key="$1" + local value="$2" + + if [[ "${key}" == "url" && "${value}" == *"db.home"* ]]; then + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + derbyPath="/opt/jfrog/artifactory/var/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + else + derbyPath="${NEW_DATA_DIR}/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + fi + fi + echo "${value}" +} + +migrateResolveHaDirPath () { + local key="$1" + local value="$2" + + if [[ "${INSTALLER}" == "${RPM_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" || "${INSTALLER}" == "${DEB_TYPE}" ]]; then + if [[ "${key}" == "artifactory.ha.data.dir" || "${key}" == "artifactory.ha.backup.dir" ]]; then + value=$(checkPathResolver "${value}") + fi + fi + echo "${value}" +} +updatePostgresUrlString_Hook () { + local yamlPath="$1" + local value="$2" + local hostIp=$(io_getPublicHostIP) + local sourceKey="//postgresql:" + if [[ "${yamlPath}" == "shared.database.url" ]]; then + value=$(io_replaceString "${value}" "${sourceKey}" "//${hostIp}:" "#") + fi + echo "${value}" +} +# Check Artifactory product version +checkArtifactoryVersion () { + local minProductVersion="6.0.0" + local maxProductVersion="7.0.0" + local propertyInDocker="ARTIFACTORY_VERSION" + local 
property="artifactory.version" + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + local newfilePath="${APP_DIR}/../.env" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + local newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + else + local newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="/etc/opt/jfrog/artifactory/artifactory.properties" + fi + + getProductVersion "${minProductVersion}" "${maxProductVersion}" "${newfilePath}" "${oldfilePath}" "${propertyInDocker}" "${property}" +} + +getCustomDataDir_hook () { + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Fail" + OLD_DATA_DIR="${YAML_VALUE}" +} + +# Get protocol value of connector +getXmlConnectorProtocol () { + local i="$1" + local filePath="$2" + local fileName="$3" + local protocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@protocol' ${filePath}/${fileName} 2>/dev/null |awk -F"=" '{print $2}' | tr -d '"') + echo -e "${protocolValue}" +} + +# Get all attributes of connector +getXmlConnectorAttributes () { + local i="$1" + local filePath="$2" + local fileName="$3" + local connectorAttributes=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@*' ${filePath}/${fileName} 2>/dev/null) + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + echo "${connectorAttributes}" +} + +# Get port value of connector +getXmlConnectorPort () { + local i="$1" + local filePath="$2" + local fileName="$3" + local portValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@port' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${portValue}" +} + +# Get maxThreads value of connector +getXmlConnectorMaxThreads () { + local i="$1" + local filePath="$2" + local fileName="$3" + local maxThreadValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@maxThreads' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${maxThreadValue}" +} +# Get sendReasonPhrase value of connector +getXmlConnectorSendReasonPhrase () { + local i="$1" + local filePath="$2" + local fileName="$3" + local sendReasonPhraseValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@sendReasonPhrase' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${sendReasonPhraseValue}" +} +# Get relaxedPathChars value of connector +getXmlConnectorRelaxedPathChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedPathCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedPathChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedPathCharsValue=$(io_trim "${relaxedPathCharsValue}") + echo -e "${relaxedPathCharsValue}" +} +# Get relaxedQueryChars value of connector +getXmlConnectorRelaxedQueryChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedQueryCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedQueryChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedQueryCharsValue=$(io_trim "${relaxedQueryCharsValue}") + echo -e 
"${relaxedQueryCharsValue}" +} + +# Updating system.yaml with Connector port +setConnectorPort () { + local yamlPath="$1" + local valuePort="$2" + local portYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${valuePort}" ]]; then + warn "port value is empty, could not migrate to system.yaml" + return + fi + ## Getting port yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" portYamlPath "Warning" + portYamlPath="${YAML_VALUE}" + if [[ -z "${portYamlPath}" ]]; then + return + fi + setSystemValue "${portYamlPath}" "${valuePort}" "${SYSTEM_YAML_PATH}" + logger "Setting [${portYamlPath}] with value [${valuePort}] in system.yaml" +} + +# Updating system.yaml with Connector maxThreads +setConnectorMaxThread () { + local yamlPath="$1" + local threadValue="$2" + local maxThreadYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${threadValue}" ]]; then + return + fi + ## Getting max Threads yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" maxThreadYamlPath "Warning" + maxThreadYamlPath="${YAML_VALUE}" + if [[ -z "${maxThreadYamlPath}" ]]; then + return + fi + setSystemValue "${maxThreadYamlPath}" "${threadValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${maxThreadYamlPath}] with value [${threadValue}] in system.yaml" +} + +# Updating system.yaml with Connector sendReasonPhrase +setConnectorSendReasonPhrase () { + local yamlPath="$1" + local sendReasonPhraseValue="$2" + local sendReasonPhraseYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${sendReasonPhraseValue}" ]]; then + return + fi + ## Getting sendReasonPhrase yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" sendReasonPhraseYamlPath "Warning" + sendReasonPhraseYamlPath="${YAML_VALUE}" + if [[ -z "${sendReasonPhraseYamlPath}" ]]; then + return + fi + setSystemValue "${sendReasonPhraseYamlPath}" "${sendReasonPhraseValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${sendReasonPhraseYamlPath}] with value [${sendReasonPhraseValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedPathChars +setConnectorRelaxedPathChars () { + local yamlPath="$1" + local relaxedPathCharsValue="$2" + local relaxedPathCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedPathCharsValue}" ]]; then + return + fi + ## Getting relaxedPathChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedPathCharsYamlPath "Warning" + relaxedPathCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedPathCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedPathCharsYamlPath}" "${relaxedPathCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedPathCharsYamlPath}] with value [${relaxedPathCharsValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedQueryChars +setConnectorRelaxedQueryChars () { + local yamlPath="$1" + local relaxedQueryCharsValue="$2" + local relaxedQueryCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedQueryCharsValue}" ]]; then + return + fi + ## Getting relaxedQueryChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedQueryCharsYamlPath "Warning" + relaxedQueryCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedQueryCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedQueryCharsYamlPath}" "${relaxedQueryCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedQueryCharsYamlPath}] with value [${relaxedQueryCharsValue}] in system.yaml" +} + +# Updating system.yaml 
with Connectors configurations +setConnectorExtraConfig () { + local yamlPath="$1" + local connectorAttributes="$2" + local extraConfigPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${connectorAttributes}" ]]; then + return + fi + ## Getting extraConfig yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConfig "Warning" + extraConfigPath="${YAML_VALUE}" + if [[ -z "${extraConfigPath}" ]]; then + return + fi + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setSystemValue "${extraConfigPath}" "${connectorAttributes}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConfigPath}] with connector attributes in system.yaml" +} + +# Updating system.yaml with extra Connectors +setExtraConnector () { + local yamlPath="$1" + local extraConnector="$2" + local extraConnectorYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${extraConnector}" ]]; then + return + fi + ## Getting extraConnecotr yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConnectorYamlPath "Warning" + extraConnectorYamlPath="${YAML_VALUE}" + if [[ -z "${extraConnectorYamlPath}" ]]; then + return + fi + getYamlValue "${extraConnectorYamlPath}" "${SYSTEM_YAML_PATH}" "false" + local connectorExtra="${YAML_VALUE}" + if [[ -z "${connectorExtra}" ]]; then + setSystemValue "${extraConnectorYamlPath}" "${extraConnector}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + else + setSystemValue "${extraConnectorYamlPath}" "\"${connectorExtra} ${extraConnector}\"" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + fi +} + +# Migrate extra connectors to system.yaml +migrateExtraConnectors () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local excludeDefaultPort="$4" + local i="$5" + local extraConfig= + local extraConnector= + if [[ "${excludeDefaultPort}" == "yes" ]]; then + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + [[ "${portValue}" != "${DEFAULT_ACCESS_PORT}" && "${portValue}" != "${DEFAULT_RT_PORT}" ]] || continue + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + done + else + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + fi +} + +# Migrate connector configurations +migrateConnectorConfig () { + local i="$1" + local protocolType="$2" + local portValue="$3" + local connectorPortYamlPath="$4" + local connectorMaxThreadYamlPath="$5" + local connectorAttributesYamlPath="$6" + local filePath="$7" + local fileName="$8" + local connectorSendReasonPhraseYamlPath="$9" + local connectorRelaxedPathCharsYamlPath="${10}" + local connectorRelaxedQueryCharsYamlPath="${11}" + + # migrate port + setConnectorPort "${connectorPortYamlPath}" "${portValue}" + + # migrate maxThreads + local maxThreadValue=$(getXmlConnectorMaxThreads "$i" "${filePath}" "${fileName}") + setConnectorMaxThread "${connectorMaxThreadYamlPath}" "${maxThreadValue}" + + # migrate sendReasonPhrase + local sendReasonPhraseValue=$(getXmlConnectorSendReasonPhrase "$i" "${filePath}" "${fileName}") + setConnectorSendReasonPhrase "${connectorSendReasonPhraseYamlPath}" 
"${sendReasonPhraseValue}" + + # migrate relaxedPathChars + local relaxedPathCharsValue=$(getXmlConnectorRelaxedPathChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedPathChars "${connectorRelaxedPathCharsYamlPath}" "\"${relaxedPathCharsValue}\"" + # migrate relaxedQueryChars + local relaxedQueryCharsValue=$(getXmlConnectorRelaxedQueryChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedQueryChars "${connectorRelaxedQueryCharsYamlPath}" "\"${relaxedQueryCharsValue}\"" + + # migrate all attributes to extra config except port , maxThread , sendReasonPhrase ,relaxedPathChars and relaxedQueryChars + local connectorAttributes=$(getXmlConnectorAttributes "$i" "${filePath}" "${fileName}") + connectorAttributes=$(echo "${connectorAttributes}" | sed 's/port="'${portValue}'"//g' | sed 's/maxThreads="'${maxThreadValue}'"//g' | sed 's/sendReasonPhrase="'${sendReasonPhraseValue}'"//g' | sed 's/relaxedPathChars="\'${relaxedPathCharsValue}'\"//g' | sed 's/relaxedQueryChars="\'${relaxedQueryCharsValue}'\"//g') + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setConnectorExtraConfig "${connectorAttributesYamlPath}" "${connectorAttributes}" +} + +# Check for default port 8040 and 8081 in connectors and migrate +migrateConnectorPort () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + local connectorPortYamlPath="$5" + local connectorMaxThreadYamlPath="$6" + local connectorAttributesYamlPath="$7" + local connectorSendReasonPhraseYamlPath="$8" + local connectorRelaxedPathCharsYamlPath="$9" + local connectorRelaxedQueryCharsYamlPath="${10}" + local portYamlPath= + local maxThreadYamlPath= + local status= + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" == *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + RT_DEFAULTPORT_STATUS=success + else + AC_DEFAULTPORT_STATUS=success + fi + migrateConnectorConfig "${i}" "${protocolType}" "${portValue}" "${connectorPortYamlPath}" "${connectorMaxThreadYamlPath}" "${connectorAttributesYamlPath}" "${filePath}" "${fileName}" "${connectorSendReasonPhraseYamlPath}" "${connectorRelaxedPathCharsYamlPath}" "${connectorRelaxedQueryCharsYamlPath}" + done +} + +# migrate to extra, connector having default port and protocol is AJP +migrateDefaultPortIfAjp () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + done + +} + +# Comparing max threads in connectors +compareMaxThreads () { + local firstConnectorMaxThread="$1" + local firstConnectorNode="$2" + local secondConnectorMaxThread="$3" + local secondConnectorNode="$4" + local filePath="$5" + local fileName="$6" + + # choose higher maxThreads connector as Artifactory. 
+ if [[ "${firstConnectorMaxThread}" -gt ${secondConnectorMaxThread} || "${firstConnectorMaxThread}" -eq ${secondConnectorMaxThread} ]]; then + # maxThread is higher in firstConnector, + # Taking firstConnector as Artifactory and SecondConnector as Access + # maxThread is equal in both connector,considering firstConnector as Artifactory and SecondConnector as Access + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + else + # maxThread is higher in SecondConnector, + # Taking SecondConnector as Artifactory and firstConnector as Access + local rtPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +# Check max threads exist to compare +maxThreadsExistToCompare () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local firstConnectorMaxThread= + local secondConnectorMaxThread= + local firstConnectorNode= + local secondConnectorNode= + local status=success + local firstnode=fail + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ ${protocolType} == *AJP* ]]; then + # Migrate Connectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + fi + # store maxthreads value of each connector + if [[ ${firstnode} == "fail" ]]; then + firstConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + firstConnectorNode="${i}" + firstnode=success + else + secondConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + secondConnectorNode="${i}" + fi + done + [[ -z "${firstConnectorMaxThread}" ]] && status=fail + [[ -z "${secondConnectorMaxThread}" ]] && status=fail + # maxThreads is set, now compare MaxThreads + if [[ "${status}" == "success" ]]; then + compareMaxThreads "${firstConnectorMaxThread}" "${firstConnectorNode}" "${secondConnectorMaxThread}" "${secondConnectorNode}" "${filePath}" "${fileName}" + else + # Assume first connector is RT, maxThreads is not set in both connectors + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" 
"${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +migrateExtraBasedOnNonAjpCount () { + local nonAjpCount="$1" + local filePath="$2" + local fileName="$3" + local connectorCount="$4" + local i="$5" + + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ "${protocolType}" == *AJP* ]]; then + if [[ "${nonAjpCount}" -eq 1 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + else + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + continue + fi + fi +} + +# find RT and AC Connector +findRtAndAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local initialAjpCount=0 + local nonAjpCount=0 + + # get the count of non AJP + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] || continue + nonAjpCount=$((initialAjpCount+1)) + initialAjpCount="${nonAjpCount}" + done + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access and artifactory connectors + # Mark port as 8040 for access + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + done + elif [[ "${nonAjpCount}" -eq 2 ]]; then + # compare maxThreads in both connectors + maxThreadsExistToCompare "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${nonAjpCount}" -gt 2 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # setting with default port in system.yaml + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# get the count of non AJP +getCountOfNonAjp () { + local port="$1" + local connectorCount="$2" + local filePath=$3 + local fileName=$4 + local initialNonAjpCount=0 + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" 
"${filePath}" "${fileName}") + [[ "${portValue}" != "${port}" ]] || continue + [[ "${protocolType}" != *AJP* ]] || continue + local nonAjpCount=$((initialNonAjpCount+1)) + initialNonAjpCount="${nonAjpCount}" + done + echo -e "${nonAjpCount}" +} + +# Find for access connector +findAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_RT_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access connector and mark port as that of connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" != "${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take RT properties into access with 8040 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add RT connector details as access connector and mark port as 8040 + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# Find for artifactory connector +findRtConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_ACCESS_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as RT connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" != "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take access properties into artifactory with 8081 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + 
if [[ "${portValue}" == "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add access connector details as RT connector and mark as ${DEFAULT_RT_PORT} + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +checkForTlsConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + local sslProtocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@sslProtocol' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ "${sslProtocolValue}" == "TLS" ]]; then + bannerImportant "NOTE: Ignoring TLS connector during migration, modify the system yaml to enable TLS. Original server.xml is saved in path [${filePath}/${fileName}]" + TLS_CONNECTOR_EXISTS=${FLAG_Y} + continue + fi + done +} + +# set custom tomcat server Listeners to system.yaml +setListenerConnector () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + for ((i = 1 ; i <= "${listenerCount}" ; i++)) + do + local listenerConnector=$($LIBXML2_PATH --xpath '//Server/Listener['$i']' ${filePath}/${fileName} 2>/dev/null) + local listenerClassName=$($LIBXML2_PATH --xpath '//Server/Listener['$i']/@className' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ "${listenerClassName}" == *Apr* ]]; then + setExtraConnector "${EXTRA_LISTENER_CONFIG_YAMLPATH}" "${listenerConnector}" + fi + done +} +# add custom tomcat server Listeners +addTomcatServerListeners () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + if [[ "${listenerCount}" == "0" ]]; then + logger "No listener connectors found in the [${filePath}/${fileName}],skipping migration of listener connectors" + else + setListenerConnector "${filePath}" "${fileName}" "${listenerCount}" + setSystemValue "${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}" "true" "${SYSTEM_YAML_PATH}" + logger "Setting [${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}] with value [true] in system.yaml" + fi +} + +# server.xml migration operations +xmlMigrateOperation () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local listenerCount="$4" + RT_DEFAULTPORT_STATUS=fail + AC_DEFAULTPORT_STATUS=fail + TLS_CONNECTOR_EXISTS=${FLAG_N} + + # Check for connector with TLS , if found ignore migrating it + checkForTlsConnector "${filePath}" "${fileName}" "${connectorCount}" + if [[ "${TLS_CONNECTOR_EXISTS}" == "${FLAG_Y}" ]]; then + return + fi + addTomcatServerListeners "${filePath}" "${fileName}" "${listenerCount}" + # Migrate RT default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" 
"${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + # Migrate to extra if RT default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" + # Migrate AC default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + # Migrate to extra if access default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" + + if [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # RT and AC default port found + logger "Artifactory 8081 and Access 8040 default port are found" + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # Only AC default port found,find RT connector + logger "Found Access default 8040 port" + findRtConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # Only RT default port found,find AC connector + logger "Found Artifactory default 8081 port" + findAcConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # RT and AC default port not found, find connector + logger "Artifactory 8081 and Access 8040 default port are not found" + findRtAndAcConnector "${filePath}" "${fileName}" "${connectorCount}" + fi +} + +# get count of connectors +getXmlConnectorCount () { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Service/Connector)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# get count of listener connectors +getTomcatServerListenersCount () { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Listener)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# Migrate server.xml configuration to system.yaml +migrateXmlFile () { + local xmlFiles= + local fileName= + local filePath= + local sourceFilePath= + DEFAULT_ACCESS_PORT="8040" + DEFAULT_RT_PORT="8081" + AC_PORT_YAMLPATH="migration.xmlFiles.serverXml.access.port" + AC_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.access.maxThreads" + AC_SENDREASONPHRASE_YAMLPATH="migration.xmlFiles.serverXml.access.sendReasonPhrase" + AC_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.access.extraConfig" + RT_PORT_YAMLPATH="migration.xmlFiles.serverXml.artifactory.port" + RT_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.artifactory.maxThreads" + RT_SENDREASONPHRASE_YAMLPATH='migration.xmlFiles.serverXml.artifactory.sendReasonPhrase' + RT_RELAXEDPATHCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedPathChars' + RT_RELAXEDQUERYCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedQueryChars' + RT_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.artifactory.extraConfig" + ROUTER_PORT_YAMLPATH="migration.xmlFiles.serverXml.router.port" + EXTRA_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.config" + EXTRA_LISTENER_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.listener" + RT_TOMCAT_HTTPSCONNECTOR_ENABLED="artifactory.tomcat.httpsConnector.enabled" + + retrieveYamlValue "migration.xmlFiles" "xmlFiles" "Skip" + xmlFiles="${YAML_VALUE}" 
+ if [[ -z "${xmlFiles}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF XML FILES" + retrieveYamlValue "migration.xmlFiles.serverXml.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + if [[ -z "${fileName}" ]]; then + return + fi + bannerSubSection "Processing Migration of $fileName" + retrieveYamlValue "migration.xmlFiles.serverXml.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + if [[ -z "${filePath}" ]]; then + return + fi + # prepend NEW_DATA_DIR only if filePath is relative path + sourceFilePath=$(prependDir "${filePath}" "${NEW_DATA_DIR}/${filePath}") + if [[ "$(checkFileExists "${sourceFilePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] is found in path [${sourceFilePath}]" + local connectorCount=$(getXmlConnectorCount "${sourceFilePath}" "${fileName}") + if [[ "${connectorCount}" == "0" ]]; then + logger "No connectors found in the [${filePath}/${fileName}],skipping migration of xml configuration" + return + fi + local listenerCount=$(getTomcatServerListenersCount "${sourceFilePath}" "${fileName}") + xmlMigrateOperation "${sourceFilePath}" "${fileName}" "${connectorCount}" "${listenerCount}" + else + logger "File [${fileName}] is not found in path [${sourceFilePath}] to migrate" + fi +} + +compareArtifactoryUser () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + + if [[ "${oldPropertyValue}" != "${newPropertyValue}" ]]; then + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" + else + logger "No change in property [${property}] value in [${sourceFile}] to migrate" + fi +} + +migrateReplicator () { + local property="$1" + local oldPropertyValue="$2" + local yamlPath="$3" + + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" +} + +compareJavaOptions () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + local oldJavaOption= + local newJavaOption= + local extraJavaOption= + local check=false + local success=true + local status=true + + oldJavaOption=$(echo "${oldPropertyValue}" | awk 'BEGIN{FS=OFS="\""}{for(i=2;i.+)\.{{ include "artifactory.fullname" . }} {{ include "artifactory.fullname" . }} +{{ tpl (include "artifactory.nginx.hosts" .) . }}; + +if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; +} +set $host_port {{ .Values.nginx.https.externalPort }}; +if ( $scheme = "http" ) { + set $host_port {{ .Values.nginx.http.externalPort }}; +} +## Application specific logs +## access_log /var/log/nginx/artifactory-access.log timing; +## error_log /var/log/nginx/artifactory-error.log; +rewrite ^/artifactory/?$ / redirect; +if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; +} +chunked_transfer_encoding on; +client_max_body_size 0; + +location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . 
}}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$host_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + {{- if .Values.nginx.disableProxyBuffering}} + proxy_http_version 1.1; + proxy_request_buffering off; + proxy_buffering off; + {{- end }} + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass http://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass http://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + location /pipelines/ { + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $http_host; + {{- if .Values.router.tlsEnabled }} + proxy_pass https://{{ include "artifactory.fullname" . }}:{{ .Values.router.internalPort }}; + {{- else }} + proxy_pass http://{{ include "artifactory.fullname" . }}:{{ .Values.router.internalPort }}; + {{- end }} + } +} +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/nginx-main-conf.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/nginx-main-conf.yaml new file mode 100644 index 000000000..6ee7f98f9 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/nginx-main-conf.yaml @@ -0,0 +1,83 @@ +# Main Nginx configuration file +worker_processes 4; + +{{- if .Values.nginx.logs.stderr }} +error_log stderr {{ .Values.nginx.logs.level }}; +{{- else -}} +error_log {{ .Values.nginx.persistence.mountPath }}/logs/error.log {{ .Values.nginx.logs.level }}; +{{- end }} +pid /var/run/nginx.pid; + +{{- if .Values.artifactory.ssh.enabled }} +## SSH Server Configuration +stream { + server { + {{- if .Values.nginx.singleStackIPv6Cluster }} + listen [::]:{{ .Values.nginx.ssh.internalPort }}; + {{- else -}} + listen {{ .Values.nginx.ssh.internalPort }}; + {{- end }} + proxy_pass {{ include "artifactory.fullname" . 
}}:{{ .Values.artifactory.ssh.externalPort }}; + } +} +{{- end }} + +events { + worker_connections 1024; +} + +http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + + variables_hash_max_size 1024; + variables_hash_bucket_size 64; + server_names_hash_max_size 4096; + server_names_hash_bucket_size 128; + types_hash_max_size 2048; + types_hash_bucket_size 64; + proxy_read_timeout 2400s; + client_header_timeout 2400s; + client_body_timeout 2400s; + proxy_connect_timeout 75s; + proxy_send_timeout 2400s; + proxy_buffer_size 128k; + proxy_buffers 40 128k; + proxy_busy_buffers_size 128k; + proxy_temp_file_write_size 250m; + proxy_http_version 1.1; + client_body_buffer_size 128k; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + log_format timing 'ip = $remote_addr ' + 'user = \"$remote_user\" ' + 'local_time = \"$time_local\" ' + 'host = $host ' + 'request = \"$request\" ' + 'status = $status ' + 'bytes = $body_bytes_sent ' + 'upstream = \"$upstream_addr\" ' + 'upstream_time = $upstream_response_time ' + 'request_time = $request_time ' + 'referer = \"$http_referer\" ' + 'UA = \"$http_user_agent\"'; + + {{- if .Values.nginx.logs.stdout }} + access_log /dev/stdout timing; + {{- else -}} + access_log {{ .Values.nginx.persistence.mountPath }}/logs/access.log timing; + {{- end }} + + sendfile on; + #tcp_nopush on; + + keepalive_timeout 65; + + #gzip on; + + include /etc/nginx/conf.d/*.conf; + +} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/system.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/system.yaml new file mode 100644 index 000000000..053207fd0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/files/system.yaml @@ -0,0 +1,156 @@ +router: + serviceRegistry: + insecure: {{ .Values.router.serviceRegistry.insecure }} +shared: +{{- if .Values.artifactory.coldStorage.enabled }} + jfrogColdStorage: + coldInstanceEnabled: true +{{- end }} +{{ tpl (include "artifactory.metrics" .) . 
}} + logging: + consoleLog: + enabled: {{ .Values.artifactory.consoleLog }} + extraJavaOpts: > + -Dartifactory.graceful.shutdown.max.request.duration.millis={{ mul .Values.artifactory.terminationGracePeriodSeconds 1000 }} + -Dartifactory.access.client.max.connections={{ .Values.access.tomcat.connector.maxThreads }} + {{- with .Values.artifactory.javaOpts }} + {{- if .corePoolSize }} + -Dartifactory.async.corePoolSize={{ .corePoolSize }} + {{- end }} + {{- if .xms }} + -Xms{{ .xms }} + {{- end }} + {{- if .xmx }} + -Xmx{{ .xmx }} + {{- end }} + {{- if .jmx.enabled }} + -Dcom.sun.management.jmxremote + -Dcom.sun.management.jmxremote.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.rmi.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.ssl={{ .jmx.ssl }} + {{- if .jmx.host }} + -Djava.rmi.server.hostname={{ tpl .jmx.host $ }} + {{- else }} + -Djava.rmi.server.hostname={{ template "artifactory.fullname" $ }} + {{- end }} + {{- if .jmx.authenticate }} + -Dcom.sun.management.jmxremote.authenticate=true + -Dcom.sun.management.jmxremote.access.file={{ .jmx.accessFile }} + -Dcom.sun.management.jmxremote.password.file={{ .jmx.passwordFile }} + {{- else }} + -Dcom.sun.management.jmxremote.authenticate=false + {{- end }} + {{- end }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + {{- if or .Values.database.type .Values.postgresql.enabled }} + database: + allowNonPostgresql: {{ .Values.database.allowNonPostgresql }} + {{- if .Values.postgresql.enabled }} + type: postgresql + url: "jdbc:postgresql://{{ .Release.Name }}-postgresql:{{ .Values.postgresql.service.port }}/{{ .Values.postgresql.postgresqlDatabase }}" + driver: org.postgresql.Driver + username: "{{ .Values.postgresql.postgresqlUsername }}" + {{- else }} + type: "{{ .Values.database.type }}" + driver: "{{ .Values.database.driver }}" + {{- end }} + {{- end }} +artifactory: +{{- if or .Values.artifactory.haDataDir.enabled .Values.artifactory.haBackupDir.enabled }} + node: + {{- if .Values.artifactory.haDataDir.path }} + haDataDir: {{ .Values.artifactory.haDataDir.path }} + {{- end }} + {{- if .Values.artifactory.haBackupDir.path }} + haBackupDir: {{ .Values.artifactory.haBackupDir.path }} + {{- end }} +{{- end }} + database: + maxOpenConnections: {{ .Values.artifactory.database.maxOpenConnections }} + tomcat: + maintenanceConnector: + port: {{ .Values.artifactory.tomcat.maintenanceConnector.port }} + connector: + maxThreads: {{ .Values.artifactory.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.artifactory.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.artifactory.tomcat.connector.extraConfig }} +frontend: + session: + timeMinutes: {{ .Values.frontend.session.timeoutMinutes | quote }} +access: + runOnArtifactoryTomcat: {{ .Values.access.runOnArtifactoryTomcat | default false }} + database: + maxOpenConnections: {{ .Values.access.database.maxOpenConnections }} + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + extraJavaOpts: > + {{- if .Values.splitServicesToContainers }} + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=70 + {{- end }} + {{- with .Values.access.javaOpts }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + {{- end }} + tomcat: + connector: + maxThreads: {{ .Values.access.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.access.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.access.tomcat.connector.extraConfig }} +{{- if .Values.mc.enabled }} +mc: + enabled: true + database: + maxOpenConnections: {{ 
.Values.mc.database.maxOpenConnections }} + idgenerator: + maxOpenConnections: {{ .Values.mc.idgenerator.maxOpenConnections }} + tomcat: + connector: + maxThreads: {{ .Values.mc.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.mc.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.mc.tomcat.connector.extraConfig }} +{{- end }} +metadata: + database: + maxOpenConnections: {{ .Values.metadata.database.maxOpenConnections }} +{{- if and .Values.jfconnect.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +jfconnect: + enabled: true +{{- else }} +jfconnect: + enabled: false +jfconnect_service: + enabled: false +{{- end }} +{{- if and .Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +federation: + enabled: true + embedded: {{ .Values.federation.embedded }} + extraJavaOpts: {{ .Values.federation.extraJavaOpts }} + port: {{ .Values.federation.internalPort }} +rtfs: + database: + driver: org.postgresql.Driver + type: postgresql + username: {{ .Values.federation.database.username }} + password: {{ .Values.federation.database.password }} + url: jdbc:postgresql://{{ .Values.federation.database.host }}:{{ .Values.federation.database.port }}/{{ .Values.federation.database.name }} +{{- else }} +federation: + enabled: false +{{- end }} +{{- if .Values.event.webhooks }} +event: + webhooks: {{ toYaml .Values.event.webhooks | nindent 6 }} +{{- end }} +{{- if .Values.evidence.enabled }} +evidence: + enabled: true +{{- else }} +evidence: + enabled: false +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/logo/artifactory-logo.png b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/logo/artifactory-logo.png new file mode 100644 index 000000000..fe6c23c5a Binary files /dev/null and b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/logo/artifactory-logo.png differ diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-2xlarge-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-2xlarge-extra-config.yaml new file mode 100644 index 000000000..7bccf330d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-2xlarge-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=200 + -Dartifactory.async.poolMaxQueueSize=100000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=200 + -Dartifactory.metadata.event.operator.threads=5 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + tomcat: + connector: + maxThreads: 800 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 200 + +access: + tomcat: + connector: + maxThreads: 200 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + 
-XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 200 + +metadata: + database: + maxOpenConnections: 200 + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-2xlarge.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-2xlarge.yaml new file mode 100644 index 000000000..be477939b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-2xlarge.yaml @@ -0,0 +1,126 @@ +############################################################## +# The 2xlarge sizing +# This size is intended for very large organizations. It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 6 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "4" + memory: 20Gi + limits: + # cpu: "20" + memory: 24Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 4Gi + +router: + resources: + requests: + cpu: "1" + memory: 1Gi + limits: + # cpu: "6" + memory: 2Gi + +frontend: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 1Gi + +metadata: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 2Gi + +event: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +observability: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +jfconnect: + resources: + requests: + cpu: 100m + memory: 100Mi + limits: + # cpu: "1" + memory: 250Mi + +nginx: + replicaCount: 3 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "6Gi" + limits: + # cpu: "14" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "5000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 256Gi + cpu: "64" + limits: + memory: 256Gi + # cpu: "128" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-large-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-large-extra-config.yaml new file mode 100644 index 000000000..d97a85c9f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-large-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + 
-Dartifactory.async.corePoolSize=80 + -Dartifactory.async.poolMaxQueueSize=20000 + -Dartifactory.http.client.max.total.connections=100 + -Dartifactory.http.client.max.connections.per.route=100 + -Dartifactory.access.client.max.connections=125 + -Dartifactory.metadata.event.operator.threads=4 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=524288 + -XX:MaxDirectMemorySize=512m + tomcat: + connector: + maxThreads: 500 + extraConfig: 'acceptCount="800" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 100 + +access: + tomcat: + connector: + maxThreads: 125 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 100 + +metadata: + database: + maxOpenConnections: 100 + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-large.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-large.yaml new file mode 100644 index 000000000..80326a8e4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-large.yaml @@ -0,0 +1,126 @@ +############################################################## +# The large sizing +# This size is intended for large organizations. It can be increased with adding replicas or moving to the xlarge sizing +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 3 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 10Gi + limits: + # cpu: "14" + memory: 12Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "8" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 200m + memory: 400Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "1" + memory: "500Mi" + limits: + # cpu: "4" + memory: "1Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "600" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 64Gi + cpu: "16" + limits: + memory: 64Gi + # cpu: "32" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-medium-extra-config.yaml 
b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-medium-extra-config.yaml new file mode 100644 index 000000000..1c294c043 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-medium-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + tomcat: + connector: + maxThreads: 300 + extraConfig: 'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-medium.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-medium.yaml new file mode 100644 index 000000000..8b7215041 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-medium.yaml @@ -0,0 +1,126 @@ +############################################################## +# The medium sizing +# This size is just 2 replicas of the small size. Vertical sizing of all services is not changed +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. 
+ replicaCount: 2 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "200" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 32Gi + cpu: "8" + limits: + memory: 32Gi + # cpu: "16" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-small-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-small-extra-config.yaml new file mode 100644 index 000000000..1c294c043 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-small-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + tomcat: + connector: + maxThreads: 300 + extraConfig: 'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-small.yaml 
b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-small.yaml new file mode 100644 index 000000000..eb8d7239d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-small.yaml @@ -0,0 +1,124 @@ +############################################################## +# The small sizing +# This is the size recommended for running Artifactory for small teams +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "100" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 16Gi + cpu: "4" + limits: + memory: 16Gi + # cpu: "10" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xlarge-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xlarge-extra-config.yaml new file mode 100644 index 000000000..00e6099f2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xlarge-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + -Dartifactory.async.corePoolSize=160 + -Dartifactory.async.poolMaxQueueSize=50000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=150 + -Dartifactory.metadata.event.operator.threads=5 + 
-XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + tomcat: + connector: + maxThreads: 600 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 150 + +access: + tomcat: + connector: + maxThreads: 150 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 150 + +metadata: + database: + maxOpenConnections: 150 + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xlarge.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xlarge.yaml new file mode 100644 index 000000000..e77152ee1 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xlarge.yaml @@ -0,0 +1,126 @@ +############################################################## +# The xlarge sizing +# This size is intended for very large organizations. It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 4 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 14Gi + limits: + # cpu: "14" + memory: 16Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 2Gi + limits: + # cpu: 1 + memory: 3Gi + +router: + resources: + requests: + cpu: 200m + memory: 500Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "4Gi" + limits: + # cpu: "12" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "2000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 128Gi + cpu: "32" + limits: + memory: 128Gi + # cpu: "64" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xsmall-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xsmall-extra-config.yaml new file mode 100644 index 000000000..39709b691 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xsmall-extra-config.yaml @@ -0,0 +1,42 @@ 
+#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=10 + -Dartifactory.async.poolMaxQueueSize=2000 + -Dartifactory.http.client.max.total.connections=20 + -Dartifactory.http.client.max.connections.per.route=20 + -Dartifactory.access.client.max.connections=15 + -Dartifactory.metadata.event.operator.threads=2 + -XX:MaxMetaspaceSize=400m + -XX:CompressedClassSpaceSize=96m + -Djdk.nio.maxCachedBufferSize=131072 + -XX:MaxDirectMemorySize=128m + tomcat: + connector: + maxThreads: 50 + extraConfig: 'acceptCount="200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 15 + +access: + tomcat: + connector: + maxThreads: 15 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 15 + +metadata: + database: + maxOpenConnections: 15 + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xsmall.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xsmall.yaml new file mode 100644 index 000000000..246f830a0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/sizing/artifactory-xsmall.yaml @@ -0,0 +1,125 @@ +############################################################## +# The xsmall sizing +# This is the minimum size recommended for running Artifactory +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. 
+ replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 3Gi + limits: + # cpu: "10" + memory: 4Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 50m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + disableProxyBuffering: true + resources: + requests: + cpu: "50m" + memory: "50Mi" + limits: + # cpu: "1" + memory: "250Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "50" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 8Gi + cpu: "2" + limits: + memory: 8Gi + # cpu: "8" + diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/NOTES.txt b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/NOTES.txt new file mode 100644 index 000000000..76652ac98 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/NOTES.txt @@ -0,0 +1,106 @@ +Congratulations. You have just deployed JFrog Artifactory! 
+{{- if .Values.artifactory.masterKey }} +{{- if and (not .Values.artifactory.masterKeySecretName) (eq .Values.artifactory.masterKey "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF") }} + + +***************************************** WARNING ****************************************** +* Your Artifactory master key is still set to the provided example: * +* artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF * +* * +* You should change this to your own generated key: * +* $ export MASTER_KEY=$(openssl rand -hex 32) * +* $ echo ${MASTER_KEY} * +* * +* Pass the created master key to helm with '--set artifactory.masterKey=${MASTER_KEY}' * +* * +* Alternatively, you can use a pre-existing secret with a key called master-key with * +* '--set artifactory.masterKeySecretName=${SECRET_NAME}' * +******************************************************************************************** +{{- end }} +{{- end }} + +{{- if .Values.artifactory.joinKey }} +{{- if eq .Values.artifactory.joinKey "EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE" }} + + +***************************************** WARNING ****************************************** +* Your Artifactory join key is still set to the provided example: * +* artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE * +* * +* You should change this to your own generated key: * +* $ export JOIN_KEY=$(openssl rand -hex 32) * +* $ echo ${JOIN_KEY} * +* * +* Pass the created master key to helm with '--set artifactory.joinKey=${JOIN_KEY}' * +* * +******************************************************************************************** +{{- end }} +{{- end }} + +{{- if .Values.artifactory.setSecurityContext }} +****************************************** WARNING ********************************************** +* From chart version 107.84.x, `setSecurityContext` has been renamed to `podSecurityContext`, * + please change your values.yaml before upgrade , For more Info , refer to 107.84.x changelog * +************************************************************************************************* +{{- end }} + +{{- if and (or (or (or (or (or ( or ( or ( or (or (or ( or (or .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName) .Values.systemYamlOverride.existingSecret) (or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled)) .Values.aws.licenseConfigSecretName) .Values.artifactory.persistence.customBinarystoreXmlSecret) .Values.access.customCertificatesSecretName) .Values.systemYamlOverride.existingSecret) .Values.artifactory.license.secret) .Values.artifactory.userPluginSecrets) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey)) (and .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName)) (or .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName)) .Values.artifactory.unifiedSecretInstallation }} +****************************************** WARNING ************************************************************************************************** +* The unifiedSecretInstallation flag is currently enabled, which creates the unified secret. The existing secrets will continue as separate secrets.* +* Update the values.yaml with the existing secrets to add them to the unified secret. 
* +***************************************************************************************************************************************************** +{{- end }} + +1. Get the Artifactory URL by running these commands: + + {{- if .Values.ingress.enabled }} + {{- range .Values.ingress.hosts }} + http://{{ . }} + {{- end }} + + {{- else if contains "NodePort" .Values.nginx.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "artifactory.nginx.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT/ + + {{- else if contains "LoadBalancer" .Values.nginx.service.type }} + + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + You can watch the status of the service by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "artifactory.nginx.fullname" . }}' + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory.nginx.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP/ + + {{- else if contains "ClusterIP" .Values.nginx.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component={{ .Values.nginx.name }}" -o jsonpath="{.items[0].metadata.name}") + echo http://127.0.0.1:{{ .Values.nginx.externalPortHttp }} + kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.nginx.externalPortHttp }}:{{ .Values.nginx.internalPortHttp }} + + {{- end }} + +2. Open Artifactory in your browser + Default credential for Artifactory: + user: admin + password: password + +{{ if .Values.artifactory.javaOpts.jmx.enabled }} +JMX configuration: +{{- if not (contains "LoadBalancer" .Values.artifactory.service.type) }} +If you want to access JMX from your computer with jconsole, you should set ".Values.artifactory.service.type=LoadBalancer" !!! +{{ end }} + +1. Get the Artifactory service IP: +export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + +2. Map the service name to the service IP in /etc/hosts: +sudo sh -c "echo \"${SERVICE_IP} {{ template "artifactory.fullname" . }}\" >> /etc/hosts" + +3. Launch jconsole: +jconsole {{ template "artifactory.fullname" . }}:{{ .Values.artifactory.javaOpts.jmx.port }} +{{- end }} + +{{- if and .Values.nginx.enabled .Values.ingress.hosts }} +***************************************** WARNING ***************************************************************************** +* When nginx is enabled, .Values.ingress.hosts will be deprecated in upcoming releases * +* It is recommended to use nginx.hosts instead of ingress.hosts +******************************************************************************************************************************* +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/_helpers.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/_helpers.tpl new file mode 100644 index 000000000..7cea041f7 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/_helpers.tpl @@ -0,0 +1,528 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart.
+*/}} +{{- define "artifactory.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Expand the name nginx service. +*/}} +{{- define "artifactory.nginx.name" -}} +{{- default .Chart.Name .Values.nginx.name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "artifactory.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create a default fully qualified nginx name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "artifactory.nginx.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s-%s" .Release.Name $name .Values.nginx.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "artifactory.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} +{{ default (include "artifactory.fullname" .) .Values.serviceAccount.name }} +{{- else -}} +{{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "artifactory.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Generate SSL certificates +*/}} +{{- define "artifactory.gen-certs" -}} +{{- $altNames := list ( printf "%s.%s" (include "artifactory.fullname" .) .Release.Namespace ) ( printf "%s.%s.svc" (include "artifactory.fullname" .) .Release.Namespace ) -}} +{{- $ca := genCA "artifactory-ca" 365 -}} +{{- $cert := genSignedCert ( include "artifactory.fullname" . 
) nil $altNames 365 $ca -}} +tls.crt: {{ $cert.Cert | b64enc }} +tls.key: {{ $cert.Key | b64enc }} +{{- end -}} + +{{/* +Scheme (http/https) based on Access or Router TLS enabled/disabled +*/}} +{{- define "artifactory.scheme" -}} +{{- if or .Values.access.accessConfig.security.tls .Values.router.tlsEnabled -}} +{{- printf "%s" "https" -}} +{{- else -}} +{{- printf "%s" "http" -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKey value +*/}} +{{- define "artifactory.joinKey" -}} +{{- if .Values.global.joinKey -}} +{{- .Values.global.joinKey -}} +{{- else if .Values.artifactory.joinKey -}} +{{- .Values.artifactory.joinKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectToken value +*/}} +{{- define "artifactory.jfConnectToken" -}} +{{- .Values.artifactory.jfConnectToken -}} +{{- end -}} + +{{/* +Resolve masterKey value +*/}} +{{- define "artifactory.masterKey" -}} +{{- if .Values.global.masterKey -}} +{{- .Values.global.masterKey -}} +{{- else if .Values.artifactory.masterKey -}} +{{- .Values.artifactory.masterKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKeySecretName value +*/}} +{{- define "artifactory.joinKeySecretName" -}} +{{- if .Values.global.joinKeySecretName -}} +{{- .Values.global.joinKeySecretName -}} +{{- else if .Values.artifactory.joinKeySecretName -}} +{{- .Values.artifactory.joinKeySecretName -}} +{{- else -}} +{{ include "artifactory.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectTokenSecretName value +*/}} +{{- define "artifactory.jfConnectTokenSecretName" -}} +{{- if .Values.artifactory.jfConnectTokenSecretName -}} +{{- .Values.artifactory.jfConnectTokenSecretName -}} +{{- else -}} +{{ include "artifactory.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve masterKeySecretName value +*/}} +{{- define "artifactory.masterKeySecretName" -}} +{{- if .Values.global.masterKeySecretName -}} +{{- .Values.global.masterKeySecretName -}} +{{- else if .Values.artifactory.masterKeySecretName -}} +{{- .Values.artifactory.masterKeySecretName -}} +{{- else -}} +{{ include "artifactory.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve imagePullSecrets value +*/}} +{{- define "artifactory.imagePullSecrets" -}} +{{- if .Values.global.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.global.imagePullSecrets }} + - name: {{ . }} +{{- end }} +{{- else if .Values.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.imagePullSecrets }} + - name: {{ . 
}} +{{- end }} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainersBegin value +*/}} +{{- define "artifactory.customInitContainersBegin" -}} +{{- if .Values.global.customInitContainersBegin -}} +{{- .Values.global.customInitContainersBegin -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainersBegin -}} +{{- .Values.artifactory.customInitContainersBegin -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory.customInitContainers" -}} +{{- if .Values.global.customInitContainers -}} +{{- .Values.global.customInitContainers -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainers -}} +{{- .Values.artifactory.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define "artifactory.customVolumes" -}} +{{- if .Values.global.customVolumes -}} +{{- .Values.global.customVolumes -}} +{{- end -}} +{{- if .Values.artifactory.customVolumes -}} +{{- .Values.artifactory.customVolumes -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumeMounts value +*/}} +{{- define "artifactory.customVolumeMounts" -}} +{{- if .Values.global.customVolumeMounts -}} +{{- .Values.global.customVolumeMounts -}} +{{- end -}} +{{- if .Values.artifactory.customVolumeMounts -}} +{{- .Values.artifactory.customVolumeMounts -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory.customSidecarContainers" -}} +{{- if .Values.global.customSidecarContainers -}} +{{- .Values.global.customSidecarContainers -}} +{{- end -}} +{{- if .Values.artifactory.customSidecarContainers -}} +{{- .Values.artifactory.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory chart image names +*/}} +{{- define "artifactory.getImageInfoByValue" -}} +{{- $dot := index . 0 }} +{{- $indexReference := index . 1 }} +{{- $registryName := index $dot.Values $indexReference "image" "registry" -}} +{{- $repositoryName := index $dot.Values $indexReference "image" "repository" -}} +{{- $tag := default $dot.Chart.AppVersion (index $dot.Values $indexReference "image" "tag") | toString -}} +{{- if $dot.Values.global }} + {{- if and $dot.Values.splitServicesToContainers $dot.Values.global.versions.router (eq $indexReference "router") }} + {{- $tag = $dot.Values.global.versions.router | toString -}} + {{- end -}} + {{- if and $dot.Values.global.versions.initContainers (eq $indexReference "initContainers") }} + {{- $tag = $dot.Values.global.versions.initContainers | toString -}} + {{- end -}} + {{- if and $dot.Values.global.versions.artifactory (or (eq $indexReference "artifactory") (eq $indexReference "nginx") ) }} + {{- $tag = $dot.Values.global.versions.artifactory | toString -}} + {{- end -}} + {{- if $dot.Values.global.imageRegistry }} + {{- printf "%s/%s:%s" $dot.Values.global.imageRegistry $repositoryName $tag -}} + {{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} + {{- end -}} +{{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory app version +*/}} +{{- define "artifactory.app.version" -}} +{{- $tag := (splitList ":" ((include "artifactory.getImageInfoByValue" (list . 
"artifactory" )))) | last | toString -}} +{{- printf "%s" $tag -}} +{{- end -}} + +{{/* +Custom certificate copy command +*/}} +{{- define "artifactory.copyCustomCerts" -}} +echo "Copy custom certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; +for file in $(ls -1 /tmp/certs/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; fi done; +if [ -f {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt ]; then mv -v {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/ca.crt; fi; +{{- end -}} + +{{/* +Circle of trust certificates copy command +*/}} +{{- define "artifactory.copyCircleOfTrustCertsCerts" -}} +echo "Copy circle of trust certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; +for file in $(ls -1 /tmp/circleoftrustcerts/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; fi done; +{{- end -}} + +{{/* +Resolve requiredServiceTypes value +*/}} +{{- define "artifactory.router.requiredServiceTypes" -}} +{{- $requiredTypes := "jfrt,jfac" -}} +{{- if not .Values.access.enabled -}} + {{- $requiredTypes = "jfrt" -}} +{{- end -}} +{{- if .Values.observability.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfob" -}} +{{- end -}} +{{- if .Values.metadata.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmd" -}} +{{- end -}} +{{- if .Values.event.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfevt" -}} +{{- end -}} +{{- if .Values.frontend.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jffe" -}} +{{- end -}} +{{- if .Values.jfconnect.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfcon" -}} +{{- end -}} +{{- if .Values.evidence.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfevd" -}} +{{- end -}} +{{- if .Values.mc.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmc" -}} +{{- end -}} +{{- $requiredTypes -}} +{{- end -}} + +{{/* +Check if the image is artifactory pro or not +*/}} +{{- define "artifactory.isImageProType" -}} +{{- if not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository) -}} +{{ true }} +{{- else -}} +{{ false }} +{{- end -}} +{{- end -}} + +{{/* +Check if the artifactory is using derby database +*/}} +{{- define "artifactory.isUsingDerby" -}} +{{- if and (eq (default "derby" .Values.database.type) "derby") (not .Values.postgresql.enabled) -}} +{{ true }} +{{- else -}} +{{ false }} +{{- end -}} +{{- end -}} + +{{/* +nginx scheme (http/https) +*/}} +{{- define "nginx.scheme" -}} +{{- if .Values.nginx.http.enabled -}} +{{- printf "%s" "http" -}} +{{- else -}} +{{- printf "%s" "https" -}} +{{- end -}} +{{- end -}} + +{{/* +nginx command +*/}} +{{- define "nginx.command" -}} +{{- if .Values.nginx.customCommand }} +{{ toYaml .Values.nginx.customCommand }} +{{- end }} +{{- end -}} + +{{/* +nginx port (8080/8443) based on http/https enabled +*/}} +{{- define "nginx.port" -}} +{{- if .Values.nginx.http.enabled -}} +{{- 
.Values.nginx.http.internalPort -}} +{{- else -}} +{{- .Values.nginx.https.internalPort -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory.nginx.customInitContainers" -}} +{{- if .Values.nginx.customInitContainers -}} +{{- .Values.nginx.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define "artifactory.nginx.customVolumes" -}} +{{- if .Values.nginx.customVolumes -}} +{{- .Values.nginx.customVolumes -}} +{{- end -}} +{{- end -}} + + +{{/* +Resolve customVolumeMounts nginx value +*/}} +{{- define "artifactory.nginx.customVolumeMounts" -}} +{{- if .Values.nginx.customVolumeMounts -}} +{{- .Values.nginx.customVolumeMounts -}} +{{- end -}} +{{- end -}} + + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory.nginx.customSidecarContainers" -}} +{{- if .Values.nginx.customSidecarContainers -}} +{{- .Values.nginx.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve Artifactory pod node selector value +*/}} +{{- define "artifactory.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.artifactory.nodeSelector }} +{{ toYaml .Values.artifactory.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve Nginx pods node selector value +*/}} +{{- define "nginx.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.nginx.nodeSelector }} +{{ toYaml .Values.nginx.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve unifiedCustomSecretVolumeName value +*/}} +{{- define "artifactory.unifiedCustomSecretVolumeName" -}} +{{- printf "%s-%s" (include "artifactory.name" .) ("unified-secret-volume") | trunc 63 -}} +{{- end -}} + +{{/* +Check the Duplication of volume names for secrets. If unifiedSecretInstallation is enabled then the method is checking for volume names, +if the volume exists in customVolume then an extra volume with the same name will not be getting added in unifiedSecretInstallation case. +*/}} +{{- define "artifactory.checkDuplicateUnifiedCustomVolume" -}} +{{- if or .Values.global.customVolumes .Values.artifactory.customVolumes -}} +{{- $val := (tpl (include "artifactory.customVolumes" .) .) | toJson -}} +{{- contains (include "artifactory.unifiedCustomSecretVolumeName" .) $val | toString -}} +{{- else -}} +{{- printf "%s" "false" -}} +{{- end -}} +{{- end -}} + +{{/* +Calculate the systemYaml from structured and unstructured text input +*/}} +{{- define "artifactory.finalSystemYaml" -}} +{{ tpl (mergeOverwrite (include "artifactory.systemYaml" . | fromYaml) .Values.artifactory.extraSystemYaml | toYaml) . }} +{{- end -}} + +{{/* +Calculate the systemYaml from the unstructured text input +*/}} +{{- define "artifactory.systemYaml" -}} +{{ include (print $.Template.BasePath "/_system-yaml-render.tpl") . }} +{{- end -}} + +{{/* +Metrics enabled +*/}} +{{- define "metrics.enabled" -}} +shared: + metrics: + enabled: true +{{- end }} + +{{/* +Resolve unified secret prepend release name +*/}} +{{- define "artifactory.unifiedSecretPrependReleaseName" -}} +{{- if .Values.artifactory.unifiedSecretPrependReleaseName }} +{{- printf "%s" (include "artifactory.fullname" .) -}} +{{- else }} +{{- printf "%s" (include "artifactory.name" .) 
-}} +{{- end }} +{{- end }} + +{{/* +Resolve artifactory metrics +*/}} +{{- define "artifactory.metrics" -}} +{{- if .Values.artifactory.openMetrics -}} +{{- if .Values.artifactory.openMetrics.enabled -}} +{{ include "metrics.enabled" . }} +{{- if .Values.artifactory.openMetrics.filebeat }} +{{- if .Values.artifactory.openMetrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.openMetrics.filebeat | toYaml) . | indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- else if .Values.artifactory.metrics -}} +{{- if .Values.artifactory.metrics.enabled -}} +{{ include "metrics.enabled" . }} +{{- if .Values.artifactory.metrics.filebeat }} +{{- if .Values.artifactory.metrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.metrics.filebeat | toYaml) . | indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve nginx hosts value +*/}} +{{- define "artifactory.nginx.hosts" -}} +{{- if .Values.ingress.hosts }} +{{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . }} + {{- end -}} +{{- end -}} +{{- else if .Values.nginx.hosts }} +{{- range .Values.nginx.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . }} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/_system-yaml-render.tpl b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/_system-yaml-render.tpl new file mode 100644 index 000000000..deaa773ea --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/_system-yaml-render.tpl @@ -0,0 +1,5 @@ +{{- if .Values.artifactory.systemYaml -}} +{{- tpl .Values.artifactory.systemYaml . -}} +{{- else -}} +{{ (tpl ( $.Files.Get "files/system.yaml" ) .) }} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/additional-resources.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/additional-resources.yaml new file mode 100644 index 000000000..c4d06f08a --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/additional-resources.yaml @@ -0,0 +1,3 @@ +{{ if .Values.additionalResources }} +{{ tpl .Values.additionalResources . }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/admin-bootstrap-creds.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/admin-bootstrap-creds.yaml new file mode 100644 index 000000000..eb2d613c6 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/admin-bootstrap-creds.yaml @@ -0,0 +1,15 @@ +{{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} +{{- if and .Values.artifactory.admin.password (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-bootstrap-creds + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + bootstrap.creds: {{ (printf "%s@%s=%s" .Values.artifactory.admin.username .Values.artifactory.admin.ip .Values.artifactory.admin.password) | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-access-config.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-access-config.yaml new file mode 100644 index 000000000..4fcf85d94 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-access-config.yaml @@ -0,0 +1,15 @@ +{{- if and .Values.access.accessConfig (not .Values.artifactory.unifiedSecretInstallation) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" . }}-access-config + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +stringData: + access.config.patch.yml: | +{{ tpl (toYaml .Values.access.accessConfig) . | indent 4 }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-binarystore-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-binarystore-secret.yaml new file mode 100644 index 000000000..6b721dd4c --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-binarystore-secret.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.artifactory.persistence.customBinarystoreXmlSecret) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-binarystore + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + binarystore.xml: |- +{{- if .Values.artifactory.persistence.binarystoreXml }} +{{ tpl .Values.artifactory.persistence.binarystoreXml . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/binarystore.xml" ) . | indent 4 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-configmaps.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-configmaps.yaml new file mode 100644 index 000000000..359fa07d2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-configmaps.yaml @@ -0,0 +1,13 @@ +{{ if .Values.artifactory.configMaps }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-configmaps + labels: + app: {{ template "artifactory.fullname" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ tpl .Values.artifactory.configMaps . 
| indent 2 }} +{{ end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-custom-secrets.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-custom-secrets.yaml new file mode 100644 index 000000000..4b73e79fc --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-custom-secrets.yaml @@ -0,0 +1,19 @@ +{{- if and .Values.artifactory.customSecrets (not .Values.artifactory.unifiedSecretInstallation) }} +{{- range .Values.artifactory.customSecrets }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" $ }}-{{ .name }} + labels: + app: "{{ template "artifactory.name" $ }}" + chart: "{{ template "artifactory.chart" $ }}" + component: "{{ $.Values.artifactory.name }}" + heritage: {{ $.Release.Service | quote }} + release: {{ $.Release.Name | quote }} +type: Opaque +stringData: + {{ .key }}: | +{{ .data | indent 4 -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-database-secrets.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-database-secrets.yaml new file mode 100644 index 000000000..f98d422e9 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-database-secrets.yaml @@ -0,0 +1,24 @@ +{{- if and (not .Values.database.secrets) (not .Values.postgresql.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +{{- if or .Values.database.url .Values.database.user .Values.database.password }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" . }}-database-creds + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +data: + {{- with .Values.database.url }} + db-url: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.user }} + db-user: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.password }} + db-password: {{ tpl . $ | b64enc | quote }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-gcp-credentials-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-gcp-credentials-secret.yaml new file mode 100644 index 000000000..72dee6bb8 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-gcp-credentials-secret.yaml @@ -0,0 +1,16 @@ +{{- if not .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} +{{- if and (.Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-gcpcreds + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + gcp.credentials.json: |- +{{ tpl .Values.artifactory.persistence.googleStorage.gcpServiceAccount.config . 
| indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-hpa.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-hpa.yaml new file mode 100644 index 000000000..01f8a9fb7 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-hpa.yaml @@ -0,0 +1,29 @@ +{{- if .Values.autoscaling.enabled }} + {{- if semverCompare ">=v1.23.0-0" .Capabilities.KubeVersion.Version }} +apiVersion: autoscaling/v2 + {{- else }} +apiVersion: autoscaling/v2beta2 + {{- end }} +kind: HorizontalPodAutoscaler +metadata: + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + name: {{ template "artifactory.fullname" . }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: StatefulSet + name: {{ template "artifactory.fullname" . }} + minReplicas: {{ .Values.autoscaling.minReplicas }} + maxReplicas: {{ .Values.autoscaling.maxReplicas }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-installer-info.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-installer-info.yaml new file mode 100644 index 000000000..cfb95b67d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-installer-info.yaml @@ -0,0 +1,16 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-installer-info + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + installer-info.json: | +{{- if .Values.installerInfo -}} +{{- tpl .Values.installerInfo . | nindent 4 -}} +{{- else -}} +{{ (tpl ( .Files.Get "files/installer-info.json" | nindent 4 ) .) }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-license-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-license-secret.yaml new file mode 100644 index 000000000..ba83aaf24 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-license-secret.yaml @@ -0,0 +1,16 @@ +{{ if and (not .Values.artifactory.unifiedSecretInstallation) (not .Values.artifactory.license.secret) (not .Values.artifactory.license.licenseKey) }} +{{- with .Values.artifactory.license.licenseKey }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" $ }}-license + labels: + app: {{ template "artifactory.name" $ }} + chart: {{ template "artifactory.chart" $ }} + heritage: {{ $.Release.Service }} + release: {{ $.Release.Name }} +type: Opaque +data: + artifactory.lic: {{ . 
| b64enc | quote }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-migration-scripts.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-migration-scripts.yaml new file mode 100644 index 000000000..4b1ba4027 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-migration-scripts.yaml @@ -0,0 +1,18 @@ +{{- if .Values.artifactory.migration.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-migration-scripts + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + migrate.sh: | +{{ .Files.Get "files/migrate.sh" | indent 4 }} + migrationHelmInfo.yaml: | +{{ .Files.Get "files/migrationHelmInfo.yaml" | indent 4 }} + migrationStatus.sh: | +{{ .Files.Get "files/migrationStatus.sh" | indent 4 }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-networkpolicy.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-networkpolicy.yaml new file mode 100644 index 000000000..d24203dc9 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-networkpolicy.yaml @@ -0,0 +1,34 @@ +{{- range .Values.networkpolicy }} +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ template "artifactory.fullname" $ }}-{{ .name }}-networkpolicy + labels: + app: {{ template "artifactory.name" $ }} + chart: {{ template "artifactory.chart" $ }} + release: {{ $.Release.Name }} + heritage: {{ $.Release.Service }} +spec: +{{- if .podSelector }} + podSelector: +{{ .podSelector | toYaml | trimSuffix "\n" | indent 4 -}} +{{ else }} + podSelector: {} +{{- end }} + policyTypes: + {{- if .ingress }} + - Ingress + {{- end }} + {{- if .egress }} + - Egress + {{- end }} +{{- if .ingress }} + ingress: +{{ .ingress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +{{- if .egress }} + egress: +{{ .egress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +--- +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-nfs-pvc.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-nfs-pvc.yaml new file mode 100644 index 000000000..75d6d0c53 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-nfs-pvc.yaml @@ -0,0 +1,101 @@ +{{- if eq .Values.artifactory.persistence.type "nfs" }} +### Artifactory HA data +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template "artifactory.fullname" . }}-data-pv + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory.name" . 
}}-data-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haDataMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-data-pvc + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory.name" . }}-data-pv + app: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} +--- +### Artifactory HA backup +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template "artifactory.fullname" . }}-backup-pv + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory.name" . }}-backup-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haBackupMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-backup-pvc + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory.name" . }}-backup-pv + app: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-pdb.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-pdb.yaml new file mode 100644 index 000000000..68876d23b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/artifactory-pdb.yaml @@ -0,0 +1,24 @@ +{{- if .Values.artifactory.minAvailable -}} +{{- if semverCompare "= 107.79.x), just set databaseUpgradeReady=true \n" .Values.databaseUpgradeReady | quote }} +{{- end }} +{{- with .Values.artifactory.statefulset.annotations }} + annotations: +{{ toYaml . | indent 4 }} +{{- end }} +{{- if and (eq (include "artifactory.isUsingDerby" .) 
"true") (gt (.Values.artifactory.replicaCount | int64) 1) }} + {{- fail "Derby database is not supported in HA mode" }} +{{- end }} +{{- if .Values.artifactory.postStartCommand }} + {{- fail ".Values.artifactory.postStartCommand is not supported and should be replaced with .Values.artifactory.lifecycle.postStart.exec.command" }} +{{- end }} +{{- if eq .Values.artifactory.persistence.type "aws-s3" }} + {{- fail "\nPersistence storage type 'aws-s3' is deprecated and is not supported and should be replaced with 'aws-s3-v3'" }} +{{- end }} +{{- if or .Values.artifactory.persistence.googleStorage.identity .Values.artifactory.persistence.googleStorage.credential }} + {{- fail "\nGCP Bucket Authentication with Identity and Credential is deprecated" }} +{{- end }} +{{- if (eq (.Values.artifactory.setSecurityContext | toString) "false" ) }} + {{- fail "\n You need to set security context at the pod level. .Values.artifactory.setSecurityContext is no longer supported. Replace it with .Values.artifactory.podSecurityContext" }} +{{- end }} +{{- if or .Values.artifactory.uid .Values.artifactory.gid }} +{{- if or (not (eq (.Values.artifactory.uid | toString) "1030" )) (not (eq (.Values.artifactory.gid | toString) "1030" )) }} + {{- fail "\n .Values.artifactory.uid and .Values.artifactory.gid are no longer supported. You need to set these values at the pod security context level. Replace them with .Values.artifactory.podSecurityContext.runAsUser .Values.artifactory.podSecurityContext.runAsGroup and .Values.artifactory.podSecurityContext.fsGroup" }} +{{- end }} +{{- end }} +{{- if or .Values.artifactory.fsGroupChangePolicy .Values.artifactory.seLinuxOptions }} + {{- fail "\n .Values.artifactory.fsGroupChangePolicy and .Values.artifactory.seLinuxOptions are no longer supported. You need to set these values at the pod security context level. Replace them with .Values.artifactory.podSecurityContext.fsGroupChangePolicy and .Values.artifactory.podSecurityContext.seLinuxOptions" }} +{{- end }} +{{- if .Values.initContainerImage }} + {{- fail "\n .Values.initContainerImage is no longer supported. Replace it with .Values.initContainers.image.registry .Values.initContainers.image.repository and .Values.initContainers.image.tag" }} +{{- end }} +spec: + serviceName: {{ template "artifactory.name" . }} + replicas: {{ .Values.artifactory.replicaCount }} + updateStrategy: {{- toYaml .Values.artifactory.updateStrategy | nindent 4 }} + selector: + matchLabels: + app: {{ template "artifactory.name" . }} + role: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} + template: + metadata: + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + role: {{ template "artifactory.name" . }} + component: {{ .Values.artifactory.name }} + release: {{ .Release.Name }} + {{- with .Values.artifactory.labels }} +{{ toYaml . | indent 8 }} + {{- end }} + annotations: + {{- if not .Values.artifactory.unifiedSecretInstallation }} + checksum/database-secrets: {{ include (print $.Template.BasePath "/artifactory-database-secrets.yaml") . | sha256sum }} + checksum/binarystore: {{ include (print $.Template.BasePath "/artifactory-binarystore-secret.yaml") . | sha256sum }} + checksum/systemyaml: {{ include (print $.Template.BasePath "/artifactory-system-yaml.yaml") . | sha256sum }} + {{- if .Values.access.accessConfig }} + checksum/access-config: {{ include (print $.Template.BasePath "/artifactory-access-config.yaml") . 
| sha256sum }} + {{- end }} + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + checksum/gcpcredentials: {{ include (print $.Template.BasePath "/artifactory-gcp-credentials-secret.yaml") . | sha256sum }} + {{- end }} + {{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + checksum/admin-creds: {{ include (print $.Template.BasePath "/admin-bootstrap-creds.yaml") . | sha256sum }} + {{- end }} + {{- else }} + checksum/artifactory-unified-secret: {{ include (print $.Template.BasePath "/artifactory-unified-secret.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.artifactory.annotations }} +{{ toYaml . | indent 8 }} + {{- end }} + spec: + {{- if .Values.artifactory.schedulerName }} + schedulerName: {{ .Values.artifactory.schedulerName | quote }} + {{- end }} + {{- if .Values.artifactory.priorityClass.existingPriorityClass }} + priorityClassName: {{ .Values.artifactory.priorityClass.existingPriorityClass }} + {{- else -}} + {{- if .Values.artifactory.priorityClass.create }} + priorityClassName: {{ default (include "artifactory.fullname" .) .Values.artifactory.priorityClass.name }} + {{- end }} + {{- end }} + serviceAccountName: {{ template "artifactory.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ add .Values.artifactory.terminationGracePeriodSeconds 10 }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory.imagePullSecrets" . | indent 6 }} + {{- end }} + {{- if .Values.artifactory.podSecurityContext.enabled }} + securityContext: {{- omit .Values.artifactory.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.artifactory.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.artifactory.topologySpreadConstraints) . | indent 8 }} + {{- end }} + initContainers: + {{- if or .Values.artifactory.customInitContainersBegin .Values.global.customInitContainersBegin }} +{{ tpl (include "artifactory.customInitContainersBegin" .) . | indent 6 }} + {{- end }} + {{- if .Values.artifactory.persistence.enabled }} + {{- if .Values.artifactory.deleteDBPropertiesOnStartup }} + - name: "delete-db-properties" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - 'rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/db.properties' + volumeMounts: + - name: artifactory-volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- end }} + {{- end }} + {{- if or (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) .Values.artifactory.admin.password }} + - name: "access-bootstrap-creds" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > + echo "Preparing {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -Lrf /tmp/access/bootstrap.creds {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + chmod 600 {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + volumeMounts: + - name: artifactory-volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- if or (not .Values.artifactory.unifiedSecretInstallation) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + - name: access-bootstrap-creds + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/access/bootstrap.creds" + {{- if and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey }} + subPath: {{ .Values.artifactory.admin.dataKey }} + {{- else }} + subPath: "bootstrap.creds" + {{- end }} + {{- end }} + - name: 'copy-system-configurations' + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - '/bin/bash' + - '-c' + - > + if [[ -e "{{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml" ]]; then chmod 644 {{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml; fi; + echo "Copy system.yaml to {{ .Values.artifactory.persistence.mountPath }}/etc"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; + {{- if .Values.systemYamlOverride.existingSecret }} + cp -fv /tmp/etc/{{ .Values.systemYamlOverride.dataKey }} {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- else }} + cp -fv /tmp/etc/system.yaml {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- end }} + echo "Copy binarystore.xml file"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory; + cp -fv /tmp/etc/artifactory/binarystore.xml {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/binarystore.xml; + {{- if .Values.access.accessConfig }} + echo "Copy access.config.patch.yml to {{ .Values.artifactory.persistence.mountPath }}/etc/access"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -fv /tmp/etc/access.config.patch.yml {{ .Values.artifactory.persistence.mountPath }}/etc/access/access.config.patch.yml; + {{- end }} + {{- if .Values.access.resetAccessCAKeys }} + echo "Resetting Access CA Keys"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + touch {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/reset_ca_keys; + {{- end }} + {{- if .Values.access.customCertificatesSecretName }} + echo "Copying custom certificates to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys"; + 
mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + cp -fv /tmp/etc/tls.crt {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.crt; + cp -fv /tmp/etc/tls.key {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.private.key; + {{- end }} + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + echo "Copy joinKey to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security; + echo -n ${ARTIFACTORY_JOIN_KEY} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security/join.key; + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectTokenSecretName }} + echo "Copy jfConnectToken to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/; + echo -n ${ARTIFACTORY_JFCONNECT_TOKEN} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token; + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + echo "Copy masterKey to {{ .Values.artifactory.persistence.mountPath }}/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security; + echo -n ${ARTIFACTORY_MASTER_KEY} > {{ .Values.artifactory.persistence.mountPath }}/etc/security/master.key; + {{- end }} + env: + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + - name: ARTIFACTORY_JOIN_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + name: {{ include "artifactory.joinKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: join-key + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectSecretName }} + - name: ARTIFACTORY_JFCONNECT_TOKEN + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.jfConnectTokenSecretName }} + name: {{ include "artifactory.jfConnectTokenSecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: jfconnect-token + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + - name: ARTIFACTORY_MASTER_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + name: {{ include "artifactory.masterKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: master-key + {{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.systemYamlOverride.existingSecret }} + - name: systemyaml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + {{- if .Values.systemYamlOverride.existingSecret }} + mountPath: "/tmp/etc/{{.Values.systemYamlOverride.dataKey}}" + subPath: {{ .Values.systemYamlOverride.dataKey }} + {{- else }} + mountPath: "/tmp/etc/system.yaml" + subPath: "system.yaml" + {{- end }} + + ######################## Binarystore ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Access config ########################## + {{- if .Values.access.accessConfig }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + - name: access-config + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/access.config.patch.yml" + subPath: "access.config.patch.yml" + {{- end }} + + ######################## Access certs external secret ########################## + {{- if .Values.access.customCertificatesSecretName }} + - name: access-certs + mountPath: "/tmp/etc/tls.crt" + subPath: tls.crt + - name: access-certs + mountPath: "/tmp/etc/tls.key" + subPath: tls.key + {{- end }} + + {{- if or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled }} + - name: copy-custom-certificates + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCustomCerts" . | indent 10 }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: ca-certs + mountPath: "/tmp/certs" + {{- end }} + + {{- if .Values.artifactory.circleOfTrustCertificatesSecret }} + - name: copy-circle-of-trust-certificates + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCircleOfTrustCertsCerts" . | indent 10 }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: circle-of-trust-certs + mountPath: "/tmp/circleoftrustcerts" + {{- end }} + + {{- if .Values.waitForDatabase }} + {{- if .Values.postgresql.enabled }} + - name: "wait-for-db" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - /bin/bash + - -c + - | + echo "Waiting for postgresql to come up" + ready=false; + while ! $ready; do echo waiting; + timeout 2s bash -c " + {{- if .Values.artifactory.migration.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.migration.preStartCommand . }}; + {{- end }} + scriptsPath="/opt/jfrog/artifactory/app/bin"; + mkdir -p $scriptsPath; + echo "Copy migration scripts and Run migration"; + cp -fv /tmp/migrate.sh $scriptsPath/migrate.sh; + cp -fv /tmp/migrationHelmInfo.yaml $scriptsPath/migrationHelmInfo.yaml; + cp -fv /tmp/migrationStatus.sh $scriptsPath/migrationStatus.sh; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/log; + bash $scriptsPath/migrationStatus.sh {{ include "artifactory.app.version" . }} {{ .Values.artifactory.migration.timeoutSeconds }} > >(tee {{ .Values.artifactory.persistence.mountPath }}/log/helm-migration.log) 2>&1; + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: migration-scripts + mountPath: "/tmp/migrate.sh" + subPath: migrate.sh + - name: migration-scripts + mountPath: "/tmp/migrationHelmInfo.yaml" + subPath: migrationHelmInfo.yaml + - name: migration-scripts + mountPath: "/tmp/migrationStatus.sh" + subPath: migrationStatus.sh + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: "binarystore.xml" + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + + ######################## CustomVolumeMounts ########################## + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} +{{- end }} + {{- if .Values.hostAliases }} + hostAliases: +{{ toYaml .Values.hostAliases | indent 6 }} + {{- end }} + containers: + {{- if .Values.splitServicesToContainers }} + - name: {{ .Values.router.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "router") }} + imagePullPolicy: {{ .Values.router.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/router/app/bin/entrypoint-router.sh + {{- with .Values.router.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES + value: {{ include "artifactory.router.requiredServiceTypes" . }} +{{- with .Values.router.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - name: http + containerPort: {{ .Values.router.internalPort }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.router.persistence.mountPath | quote }} +{{- with .Values.router.customVolumeMounts }} +{{ tpl . $ | indent 8 }} +{{- end }} + resources: +{{ toYaml .Values.router.resources | indent 10 }} + {{- if .Values.router.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.router.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.router.readinessProbe.config . 
| indent 10 }} + {{- end }} + {{- if .Values.router.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.router.livenessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.enabled }} + - name: {{ .Values.frontend.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/node/bin/node /opt/jfrog/artifactory/app/frontend/bin/server/dist/bundle.js /opt/jfrog/artifactory/app/frontend + {{- with .Values.frontend.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if and (gt (.Values.artifactory.replicaCount | int64) 1) (eq (include "artifactory.isImageProType" .) "true") (eq (include "artifactory.isUsingDerby" .) "false") }} + - name : JF_SHARED_NODE_HAENABLED + value: "true" + {{- end }} +{{- with .Values.frontend.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.frontend.resources | indent 10 }} + {{- if .Values.frontend.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.frontend.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.frontend.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.evidence.enabled }} + - name: {{ .Values.evidence.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/evidence/bin/jf-evidence start + {{- with .Values.evidence.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . 
}}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.evidence.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.evidence.internalPort }} + name: http-evidence + - containerPort: {{ .Values.evidence.externalPort }} + name: grpc-evidence + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.evidence.resources | indent 10 }} + {{- if .Values.evidence.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.evidence.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.evidence.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.evidence.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.metadata.enabled }} + - name: {{ .Values.metadata.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/metadata/bin/jf-metadata start + {{- with .Values.metadata.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.metadata.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.metadata.resources | indent 10 }} + {{- if .Values.metadata.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.metadata.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.metadata.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.metadata.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.event.enabled }} + - name: {{ .Values.event.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/event/bin/jf-event start + {{- with .Values.event.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.event.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.event.resources | indent 10 }} + {{- if .Values.event.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.event.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.event.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.event.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.jfconnect.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} + - name: {{ .Values.jfconnect.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/jfconnect/bin/jf-connect start + {{- with .Values.jfconnect.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.jfconnect.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.jfconnect.resources | indent 10 }} + {{- if .Values.jfconnect.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.jfconnect.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.jfconnect.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.jfconnect.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.access.enabled (not (.Values.access.runOnArtifactoryTomcat | default false)) }} + - name: {{ .Values.access.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.access.resources }} + resources: +{{ toYaml .Values.access.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + {{- if .Values.access.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.access.preStartCommand . }}; + {{- end }} + exec /opt/jfrog/artifactory/app/access/bin/entrypoint-access.sh + {{- with .Values.access.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (gt (.Values.artifactory.replicaCount | int64) 1) (eq (include "artifactory.isImageProType" .) "true") (eq (include "artifactory.isUsingDerby" .) "false") }} + - name : JF_SHARED_NODE_HAENABLED + value: "true" + {{- end }} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.access.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence googleStorage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.access.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.access.startupProbe.config . | indent 10 }} + {{- end }} + {{- if semverCompare " + exec /opt/jfrog/artifactory/app/third-party/java/bin/java {{ .Values.federation.extraJavaOpts }} -jar /opt/jfrog/artifactory/app/rtfs/lib/jf-rtfs + {{- with .Values.federation.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + # TODO - Password,Url,Username - should be derived from env variable +{{- with .Values.federation.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.federation.resources | indent 10 }} + {{- if .Values.federation.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.federation.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.federation.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.federation.livenessProbe.config . 
| indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.observability.enabled }} + - name: {{ .Values.observability.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/observability/bin/jf-observability start + {{- with .Values.observability.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.observability.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.observability.resources | indent 10 }} + {{- if .Values.observability.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.observability.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.observability.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.observability.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- end }} + - name: {{ .Values.artifactory.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.artifactory.resources }} + resources: +{{ toYaml .Values.artifactory.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + if [ -d /artifactory_extra_conf ] && [ -d /artifactory_bootstrap ]; then + echo "Copying bootstrap config from /artifactory_extra_conf to /artifactory_bootstrap"; + cp -Lrfv /artifactory_extra_conf/ /artifactory_bootstrap/; + fi; + {{- if .Values.artifactory.configMapName }} + echo "Copying bootstrap configs"; + cp -Lrf /bootstrap/* /artifactory_bootstrap/; + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + echo "Copying plugins"; + cp -Lrf /tmp/plugin/*/* /artifactory_bootstrap/plugins; + {{- end }} + {{- range .Values.artifactory.copyOnEveryStartup }} + {{- $targetPath := printf "%s/%s" $.Values.artifactory.persistence.mountPath .target }} + {{- $baseDirectory := regexFind ".*/" $targetPath }} + mkdir -p {{ $baseDirectory }}; + cp -Lrf {{ .source }} {{ $.Values.artifactory.persistence.mountPath }}/{{ .target }}; + {{- end }} + {{- if .Values.artifactory.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.preStartCommand . }}; + {{- end }} + exec /entrypoint-artifactory.sh + {{- with .Values.artifactory.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (gt (.Values.artifactory.replicaCount | int64) 1) (eq (include "artifactory.isImageProType" .) "true") (eq (include "artifactory.isUsingDerby" .) 
"false") }} + - name : JF_SHARED_NODE_HAENABLED + value: "true" + {{- end }} + {{- if .Values.aws.license.enabled }} + - name: IS_AWS_LICENSE + value: "true" + - name: AWS_REGION + value: {{ .Values.aws.region | quote }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: AWS_WEB_IDENTITY_REFRESH_TOKEN_FILE + value: "/var/run/secrets/product-license/license_token" + - name: AWS_ROLE_ARN + valueFrom: + secretKeyRef: + name: {{ .Values.aws.licenseConfigSecretName }} + key: iam_role + {{- end }} + {{- end }} + {{- if .Values.splitServicesToContainers }} + - name : JF_ROUTER_ENABLED + value: "true" + - name : JF_ROUTER_SERVICE_ENABLED + value: "false" + - name : JF_EVENT_ENABLED + value: "false" + - name : JF_METADATA_ENABLED + value: "false" + - name : JF_FRONTEND_ENABLED + value: "false" + - name: JF_FEDERATION_ENABLED + value: "false" + - name : JF_OBSERVABILITY_ENABLED + value: "false" + - name : JF_JFCONNECT_SERVICE_ENABLED + value: "false" + - name : JF_EVIDENCE_ENABLED + value: "false" + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + - name : JF_ACCESS_ENABLED + value: "false" + {{- end}} + {{- end}} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.artifactory.internalPort }} + name: http + - containerPort: {{ .Values.artifactory.internalArtifactoryPort }} + name: http-internal + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + {{- if .Values.artifactory.javaOpts.jmx.enabled }} + - containerPort: {{ .Values.artifactory.javaOpts.jmx.port }} + name: tcp-jmx + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.artifactory.ssh.internalPort }} + name: tcp-ssh + {{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + - name: bootstrap-plugins + mountPath: "/artifactory_bootstrap/plugins/" + {{- range .Values.artifactory.userPluginSecrets }} + - name: {{ tpl . $ }} + mountPath: "/tmp/plugin/{{ tpl . $ }}" + {{- end }} + {{- end }} + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## Artifactory config map ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystoreXml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence googleStorage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + + - name: installer-info + mountPath: "/artifactory_bootstrap/info/installer-info.json" + subPath: installer-info.json + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.artifactory.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.artifactory.startupProbe.config . | indent 10 }} + {{- end }} + {{- if and (not .Values.splitServicesToContainers) (semverCompare "=1.18.0-0" .Capabilities.KubeVersion.GitVersion) }} + ingressClassName: {{ .Values.ingress.className }} + {{- end }} + {{- if .Values.ingress.defaultBackend.enabled }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + defaultBackend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- else }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- end }} + rules: +{{- if .Values.ingress.hosts }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- if not $.Values.ingress.disableRouterBypass }} + - path: {{ $.Values.ingress.artifactoryPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $artifactoryServicePort }} + {{- end }} + {{- if and $.Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" $.Values.artifactory.image.repository)) }} + - path: {{ $.Values.ingress.rtfsPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $.Values.federation.internalPort }} + {{- end }} + {{- end }} + {{- else }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- if not $.Values.ingress.disableRouterBypass }} + - path: {{ $.Values.ingress.artifactoryPath }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $artifactoryServicePort }} + {{- end }} + {{- end }} + {{- end }} +{{- end -}} + {{- with .Values.ingress.additionalRules }} +{{ tpl . $ | indent 2 }} + {{- end }} + + {{- if .Values.ingress.tls }} + tls: +{{ toYaml .Values.ingress.tls | indent 4 }} + {{- end -}} + +{{- if .Values.customIngress }} +--- +{{ .Values.customIngress | toYaml | trimSuffix "\n" }} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/logger-configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/logger-configmap.yaml new file mode 100644 index 000000000..41a078b02 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/logger-configmap.yaml @@ -0,0 +1,63 @@ +{{- if or .Values.artifactory.loggers .Values.artifactory.catalinaLoggers }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . 
}}-logger + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + tail-log.sh: | + #!/bin/sh + + LOG_DIR=$1 + LOG_NAME=$2 + PID= + + # Wait for log dir to appear + while [ ! -d ${LOG_DIR} ]; do + sleep 1 + done + + cd ${LOG_DIR} + + LOG_PREFIX=$(echo ${LOG_NAME} | sed 's/.log$//g') + + # Find the log to tail + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + + # Wait for the log file + while [ -z "${LOG_FILE}" ]; do + sleep 1 + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + done + + echo "Log file ${LOG_FILE} is ready!" + + # Get inode number + INODE_ID=$(ls -i ${LOG_FILE}) + + # echo "Tailing ${LOG_FILE}" + tail -F ${LOG_FILE} & + PID=$! + + # Loop forever to see if a new log was created + while true; do + # Check inode number + NEW_INODE_ID=$(ls -i ${LOG_FILE}) + + # If inode number changed, this means log was rotated and need to start a new tail + if [ "${INODE_ID}" != "${NEW_INODE_ID}" ]; then + kill -9 ${PID} 2>/dev/null + INODE_ID="${NEW_INODE_ID}" + + # Start a new tail + tail -F ${LOG_FILE} & + PID=$! + fi + sleep 1 + done + +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-artifactory-conf.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-artifactory-conf.yaml new file mode 100644 index 000000000..343448994 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-artifactory-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customArtifactoryConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-nginx-artifactory-conf + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + artifactory.conf: | +{{- if .Values.nginx.artifactoryConf }} +{{ tpl .Values.nginx.artifactoryConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-artifactory-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-certificate-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-certificate-secret.yaml new file mode 100644 index 000000000..f13d40174 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-certificate-secret.yaml @@ -0,0 +1,14 @@ +{{- if and (not .Values.nginx.tlsSecretName) .Values.nginx.enabled .Values.nginx.https.enabled }} +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: {{ template "artifactory.fullname" . }}-nginx-certificate + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ ( include "artifactory.gen-certs" . 
) | indent 2 }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-conf.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-conf.yaml new file mode 100644 index 000000000..31219d58a --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-nginx-conf + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + nginx.conf: | +{{- if .Values.nginx.mainConf }} +{{ tpl .Values.nginx.mainConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-main-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-deployment.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-deployment.yaml new file mode 100644 index 000000000..774bedcca --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-deployment.yaml @@ -0,0 +1,223 @@ +{{- if .Values.nginx.enabled -}} +{{- $serviceName := include "artifactory.fullname" . -}} +{{- $servicePort := .Values.artifactory.externalPort -}} +apiVersion: apps/v1 +kind: {{ .Values.nginx.kind }} +metadata: + name: {{ template "artifactory.nginx.fullname" . }} + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 4 }} +{{- end }} +{{- with .Values.nginx.deployment.annotations }} + annotations: +{{ toYaml . | indent 4 }} +{{- end }} +spec: +{{- if eq .Values.nginx.kind "StatefulSet" }} + serviceName: {{ template "artifactory.nginx.fullname" . }} +{{- end }} +{{- if ne .Values.nginx.kind "DaemonSet" }} + replicas: {{ .Values.nginx.replicaCount }} +{{- end }} + selector: + matchLabels: + app: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} + template: + metadata: + annotations: + checksum/nginx-conf: {{ include (print $.Template.BasePath "/nginx-conf.yaml") . | sha256sum }} + checksum/nginx-artifactory-conf: {{ include (print $.Template.BasePath "/nginx-artifactory-conf.yaml") . | sha256sum }} + {{- range $key, $value := .Values.nginx.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + component: {{ .Values.nginx.name }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 8 }} +{{- end }} + spec: + {{- if .Values.nginx.podSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + serviceAccountName: {{ template "artifactory.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ .Values.nginx.terminationGracePeriodSeconds }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory.imagePullSecrets" . 
| indent 6 }} + {{- end }} + {{- if .Values.nginx.priorityClassName }} + priorityClassName: {{ .Values.nginx.priorityClassName | quote }} + {{- end }} + {{- if .Values.nginx.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.nginx.topologySpreadConstraints) . | indent 8 }} + {{- end }} + initContainers: + {{- if .Values.nginx.customInitContainers }} +{{ tpl (include "artifactory.nginx.customInitContainers" .) . | indent 6 }} + {{- end }} + - name: "setup" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/sh' + - '-c' + - > + rm -rfv {{ .Values.nginx.persistence.mountPath }}/lost+found; + mkdir -p {{ .Values.nginx.persistence.mountPath }}/logs; + resources: + {{- toYaml .Values.initContainers.resources | nindent 10 }} + volumeMounts: + - mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + name: nginx-volume + containers: + - name: {{ .Values.nginx.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "nginx") }} + imagePullPolicy: {{ .Values.nginx.image.pullPolicy }} + {{- if .Values.nginx.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.nginx.customCommand }} + command: +{{- tpl (include "nginx.command" .) . | indent 10 }} + {{- end }} + ports: +{{ if .Values.nginx.customPorts }} +{{ toYaml .Values.nginx.customPorts | indent 8 }} +{{ end }} + # DEPRECATION NOTE: The following is to maintain support for values pre 1.3.1 and + # will be cleaned up in a later version + {{- if .Values.nginx.http }} + {{- if .Values.nginx.http.enabled }} + - containerPort: {{ .Values.nginx.http.internalPort }} + name: http + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttp }} + name: http-internal + {{- end }} + {{- if .Values.nginx.https }} + {{- if .Values.nginx.https.enabled }} + - containerPort: {{ .Values.nginx.https.internalPort }} + name: https + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttps }} + name: https-internal + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.nginx.ssh.internalPort }} + name: tcp-ssh + {{- end }} + {{- with .Values.nginx.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + volumeMounts: + - name: nginx-conf + mountPath: /etc/nginx/nginx.conf + subPath: nginx.conf + - name: nginx-artifactory-conf + mountPath: "{{ .Values.nginx.persistence.mountPath }}/conf.d/" + - name: nginx-volume + mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + mountPath: "{{ .Values.nginx.persistence.mountPath }}/ssl" + {{- end }} + {{- if .Values.nginx.customVolumeMounts }} +{{ tpl (include "artifactory.nginx.customVolumeMounts" .) . | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.nginx.resources | indent 10 }} + {{- if .Values.nginx.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.nginx.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.nginx.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.nginx.readinessProbe.config . 
| indent 10 }} + {{- end }} + {{- if .Values.nginx.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.nginx.livenessProbe.config . | indent 10 }} + {{- end }} + {{- $mountPath := .Values.nginx.persistence.mountPath }} + {{- range .Values.nginx.loggers }} + - name: {{ . | replace "_" "-" | replace "." "-" }} + image: {{ include "artifactory.getImageInfoByValue" (list $ "initContainers") }} + imagePullPolicy: {{ $.Values.initContainers.image.pullPolicy }} + command: + - tail + args: + - '-F' + - '{{ $mountPath }}/logs/{{ . }}' + volumeMounts: + - name: nginx-volume + mountPath: {{ $mountPath }} + resources: +{{ toYaml $.Values.nginx.loggersResources | indent 10 }} + {{- end }} + {{- if .Values.nginx.customSidecarContainers }} +{{ tpl (include "artifactory.nginx.customSidecarContainers" .) . | indent 6 }} + {{- end }} + {{- if or .Values.nginx.nodeSelector .Values.global.nodeSelector }} +{{ tpl (include "nginx.nodeSelector" .) . | indent 6 }} + {{- end }} + {{- with .Values.nginx.affinity }} + affinity: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.nginx.tolerations }} + tolerations: +{{ toYaml . | indent 8 }} + {{- end }} + volumes: + {{- if .Values.nginx.customVolumes }} +{{ tpl (include "artifactory.nginx.customVolumes" .) . | indent 6 }} + {{- end }} + - name: nginx-conf + configMap: + {{- if .Values.nginx.customConfigMap }} + name: {{ .Values.nginx.customConfigMap }} + {{- else }} + name: {{ template "artifactory.fullname" . }}-nginx-conf + {{- end }} + - name: nginx-artifactory-conf + configMap: + {{- if .Values.nginx.customArtifactoryConfigMap }} + name: {{ .Values.nginx.customArtifactoryConfigMap }} + {{- else }} + name: {{ template "artifactory.fullname" . }}-nginx-artifactory-conf + {{- end }} + - name: nginx-volume + {{- if .Values.nginx.persistence.enabled }} + persistentVolumeClaim: + claimName: {{ .Values.nginx.persistence.existingClaim | default (include "artifactory.nginx.fullname" .) }} + {{- else }} + emptyDir: {} + {{- end }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + secret: + {{- if .Values.nginx.tlsSecretName }} + secretName: {{ .Values.nginx.tlsSecretName }} + {{- else }} + secretName: {{ template "artifactory.fullname" . }}-nginx-certificate + {{- end }} + {{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-pdb.yaml b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-pdb.yaml new file mode 100644 index 000000000..dff0c23a3 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/charts/artifactory/templates/nginx-pdb.yaml @@ -0,0 +1,23 @@ +{{- if .Values.nginx.enabled -}} +{{- if semverCompare "; + # kubernetes.io/tls-acme: "true" + # nginx.ingress.kubernetes.io/proxy-body-size: "0" + labels: {} + # traffic-type: external + # traffic-type: internal + tls: [] + ## Secrets must be manually created in the namespace. + # - secretName: chart-example-tls + # hosts: + # - artifactory.domain.example + + ## Additional ingress rules + additionalRules: [] + ## This is an experimental feature, enabling this feature will route all traffic through the Router. 
+ disableRouterBypass: false +## Allows to add custom ingress +customIngress: "" +networkpolicy: [] +## Allows all ingress and egress +# - name: artifactory +# podSelector: +# matchLabels: +# app: artifactory +# egress: +# - {} +# ingress: +# - {} +## Uncomment to allow only artifactory pods to communicate with postgresql (if postgresql.enabled is true) +# - name: postgresql +# podSelector: +# matchLabels: +# app: postgresql +# ingress: +# - from: +# - podSelector: +# matchLabels: +# app: artifactory + +## Apply horizontal pod auto scaling on artifactory pods +## Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ +autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 3 + targetCPUUtilizationPercentage: 70 +## You can use a pre-existing secret with keys license_token and iam_role by specifying licenseConfigSecretName +## Example : Create a generic secret using `kubectl create secret generic --from-literal=license_token=${TOKEN} --from-literal=iam_role=${ROLE_ARN}` +aws: + license: + enabled: false + licenseConfigSecretName: + region: us-east-1 +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +## @param containerSecurityContext.enabled Enabled containers' Security Context +## @param containerSecurityContext.runAsNonRoot Set container's Security Context runAsNonRoot +## @param containerSecurityContext.privileged Set container's Security Context privileged +## @param containerSecurityContext.allowPrivilegeEscalation Set container's Security Context allowPrivilegeEscalation +## @param containerSecurityContext.capabilities.drop List of capabilities to be dropped +## @param containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile +## +containerSecurityContext: + enabled: true + runAsNonRoot: true + privileged: false + allowPrivilegeEscalation: false + seccompProfile: + type: RuntimeDefault + capabilities: + drop: + - ALL +## The following router settings are to configure only when splitServicesToContainers set to true +router: + name: router + image: + registry: releases-docker.jfrog.io + repository: jfrog/router + tag: 7.118.3 + pullPolicy: IfNotPresent + serviceRegistry: + ## Service registry (Access) TLS verification skipped if enabled + insecure: false + internalPort: 8082 + externalPort: 8082 + tlsEnabled: false + ## Extra environment variables that can be used to tune router to your needs. + ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + + ## Add lifecycle hooks for router container + lifecycle: + ## From Artifactory versions 7.52.x, Wait for Artifactory to complete any open uploads or downloads before terminating + preStop: + exec: + command: ["sh", "-c", "while [[ $(curl --fail --silent --connect-timeout 2 http://localhost:8081/artifactory/api/v1/system/liveness) =~ OK ]]; do echo Artifactory is still alive; sleep 2; done"] + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + ## Add custom volumesMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: /scripts/script.sh + # subPath: script.sh + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "artifactory.scheme" . 
}}://localhost:{{ .Values.router.internalPort }}/router/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " prepended. + unifiedSecretPrependReleaseName: true + ## For HA installation, set this value > 1. This is only supported in Artifactory 7.25.x (appVersions) and above. + replicaCount: 1 + # minAvailable: 1 + + ## Note that by default we use appVersion to get image tag/version + image: + registry: releases-docker.jfrog.io + repository: jfrog/artifactory-pro + # tag: + pullPolicy: IfNotPresent + labels: {} + updateStrategy: + type: RollingUpdate + ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ + schedulerName: + ## Create a priority class for the Artifactory pod or use an existing one + ## NOTE - Maximum allowed value of a user defined priority is 1000000000 + priorityClass: + create: false + value: 1000000000 + ## Override default name + # name: + ## Use an existing priority class + # existingPriorityClass: + ## Spread Artifactory pods evenly across your nodes or some other topology + topologySpreadConstraints: [] + # - maxSkew: 1 + # topologyKey: kubernetes.io/hostname + # whenUnsatisfiable: DoNotSchedule + # labelSelector: + # matchLabels: + # app: '{{ template "artifactory.name" . }}' + # role: '{{ template "artifactory.name" . }}' + # release: "{{ .Release.Name }}" + + ## Delete the db.properties file in ARTIFACTORY_HOME/etc/db.properties + deleteDBPropertiesOnStartup: true + ## certificates added to this secret will be copied to $JFROG_HOME/artifactory/var/etc/security/keys/trusted directory + customCertificates: + enabled: false + # certificateSecretName: + database: + maxOpenConnections: 80 + tomcat: + maintenanceConnector: + port: 8091 + connector: + maxThreads: 200 + sendReasonPhrase: false + extraConfig: 'acceptCount="400"' + ## Support for metrics is only available for Artifactory 7.7.x (appVersions) and above. + ## To enable set `.Values.artifactory.metrics.enabled` to `true` + ## Note : Depricated openMetrics as part of 7.87.x and renamed to `metrics` + ## Refer - https://www.jfrog.com/confluence/display/JFROG/Open+Metrics + metrics: + enabled: false + ## Settings for pushing metrics to Insight - enable filebeat to true + filebeat: + enabled: false + log: + enabled: false + ## Log level for filebeat. Possible values: debug, info, warning, or error. 
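+        ## For example (illustrative only): switch to "debug" temporarily while troubleshooting log shipping to Insight; "info" below is the default.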
+ level: "info" + ## Elasticsearch details for filebeat to connect + elasticsearch: + url: "Elasticsearch url where JFrog Insight is installed For example, http://:8082" + username: "" + password: "" + ## Support for Cold Artifact Storage + ## set 'coldStorage.enabled' to 'true' only for Artifactory instance that you are designating as the Cold instance + ## Refer - https://jfrog.com/help/r/jfrog-platform-administration-documentation/setting-up-cold-artifact-storage + coldStorage: + enabled: false + ## This directory is intended for use with NFS eventual configuration for HA + haDataDir: + enabled: false + path: + haBackupDir: + enabled: false + path: + ## Files to copy to ARTIFACTORY_HOME/ on each Artifactory startup + ## Note : From 107.46.x chart versions, copyOnEveryStartup is not needed for binarystore.xml, it is always copied via initContainers + copyOnEveryStartup: + ## Absolute path + # - source: /artifactory_bootstrap/artifactory.lic + ## Relative to ARTIFACTORY_HOME/ + # target: etc/artifactory/ + + ## Sidecar containers for tailing Artifactory logs + loggers: [] + # - access-audit.log + # - access-request.log + # - access-security-audit.log + # - access-service.log + # - artifactory-access.log + # - artifactory-event.log + # - artifactory-import-export.log + # - artifactory-request.log + # - artifactory-service.log + # - frontend-request.log + # - frontend-service.log + # - metadata-request.log + # - metadata-service.log + # - router-request.log + # - router-service.log + # - router-traefik.log + # - derby.log + + ## Loggers containers resources + loggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Sidecar containers for tailing Tomcat (catalina) logs + catalinaLoggers: [] + # - tomcat-catalina.log + # - tomcat-localhost.log + + ## Tomcat (catalina) loggers resources + catalinaLoggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Migration support from 6.x to 7.x + migration: + enabled: false + timeoutSeconds: 3600 + ## Extra pre-start command in migration Init Container to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + ## Add custom init containers execution before predefined init containers + customInitContainersBegin: | + # - name: "custom-setup" + # image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'touch {{ .Values.artifactory.persistence.mountPath }}/example-custom-setup' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: artifactory-volume + ## Add custom init containers execution after predefined init containers + customInitContainers: | + # - name: "custom-systemyaml-setup" + # image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'curl -o {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml https:///systemyaml' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: artifactory-volume + ## Add custom sidecar containers + ## - The provided example uses a custom volume (customVolumes) + customSidecarContainers: | + # - name: "sidecar-list-etc" + # image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'sh /scripts/script.sh' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: artifactory-volume + # - mountPath: "/scripts/script.sh" + # name: custom-script + # subPath: script.sh + # resources: + # requests: + # memory: "32Mi" + # cpu: "50m" + # limits: + # memory: "128Mi" + # cpu: "100m" + ## Add custom volumes + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret' + customVolumes: | + # - name: custom-script + # configMap: + # name: custom-script + ## Add custom volumesMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: "/scripts/script.sh" + # subPath: script.sh + # - name: posthook-start + # mountPath: "/scripts/posthoook-start.sh" + # subPath: posthoook-start.sh + # - name: prehook-start + # mountPath: "/scripts/prehook-start.sh" + # subPath: prehook-start.sh + ## Add custom persistent volume mounts - Available to the entire namespace + customPersistentVolumeClaim: {} + # name: + # mountPath: + # accessModes: + # - "-" + # size: + # storageClassName: + + ## Artifactory license. + license: + ## licenseKey is the license key in plain text. Use either this or the license.secret setting + licenseKey: + ## If artifactory.license.secret is passed, it will be mounted as + ## ARTIFACTORY_HOME/etc/artifactory.lic and loaded at run time. + secret: + ## The dataKey should be the name of the secret data key created. + dataKey: + ## Create configMap with artifactory.config.import.xml and security.import.xml and pass name of configMap in following parameter + configMapName: + ## Add any list of configmaps to Artifactory + configMaps: | + # posthook-start.sh: |- + # echo "This is a post start script" + # posthook-end.sh: |- + # echo "This is a post end script" + ## List of secrets for Artifactory user plugins. + ## One Secret per plugin's files. + userPluginSecrets: + # - archive-old-artifacts + # - build-cleanup + # - webhook + # - '{{ template "my-chart.fullname" . }}' + + ## Artifactory requires a unique master key. + ## You can generate one with the command: "openssl rand -hex 32" + ## An initial one is auto generated by Artifactory on first startup. 
+ # masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + ## Alternatively, you can use a pre-existing secret with a key called master-key by specifying masterKeySecretName + # masterKeySecretName: + + ## Join Key to connect other services to Artifactory + ## IMPORTANT: Setting this value overrides the existing joinKey + ## IMPORTANT: You should NOT use the example joinKey for a production deployment! + # joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + ## Alternatively, you can use a pre-existing secret with a key called join-key by specifying joinKeySecretName + # joinKeySecretName: + + ## Registration Token for JFConnect + # jfConnectToken: + ## Alternatively, you can use a pre-existing secret with a key called jfconnect-token by specifying jfConnectTokenSecretName + # jfConnectTokenSecretName: + + ## Add custom secrets - secret per file + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret' common to all secrets + customSecrets: + # - name: custom-secret + # key: custom-secret.yaml + # data: > + # custom_secret_config: + # parameter1: value1 + # parameter2: value2 + # - name: custom-secret2 + # key: custom-secret2.config + # data: | + # here the custom secret 2 config + + ## If false, all service console logs will not redirect to a common console.log + consoleLog: false + ## admin allows to set the password for the default admin user. + ## See: https://www.jfrog.com/confluence/display/JFROG/Users+and+Groups#UsersandGroups-RecreatingtheDefaultAdminUserrecreate + admin: + ip: "127.0.0.1" + username: "admin" + password: + secret: + dataKey: + ## Extra pre-start command to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + + ## Add lifecycle hooks for artifactory container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## Extra environment variables that can be used to tune Artifactory to your needs. + ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: SERVER_XML_ARTIFACTORY_PORT + # value: "8081" + # - name: SERVER_XML_ARTIFACTORY_MAX_THREADS + # value: "200" + # - name: SERVER_XML_ACCESS_MAX_THREADS + # value: "50" + # - name: SERVER_XML_ARTIFACTORY_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_ACCESS_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_EXTRA_CONNECTOR + # value: "" + # - name: DB_POOL_MAX_ACTIVE + # value: "100" + # - name: DB_POOL_MAX_IDLE + # value: "10" + # - name: MY_SECRET_ENV_VAR + # valueFrom: + # secretKeyRef: + # name: my-secret-name + # key: my-secret-key + + ## System YAML entries now reside under files/system.yaml. + ## You can provide the specific values that you want to add or override under 'artifactory.extraSystemYaml'. + ## For example: + ## extraSystemYaml: + ## shared: + ## node: + ## id: my-instance + ## The entries provided under 'artifactory.extraSystemYaml' are merged with files/system.yaml to create the final system.yaml. 
+ ## If you have already provided system.yaml under, 'artifactory.systemYaml', the values in that entry take precedence over files/system.yaml + ## You can modify specific entries with your own value under `artifactory.extraSystemYaml`, The values under extraSystemYaml overrides the values under 'artifactory.systemYaml' and files/system.yaml + extraSystemYaml: {} + ## systemYaml is intentionally commented and the previous content has been moved under files/system.yaml. + ## You have to add the all entries of the system.yaml file here, and it overrides the values in files/system.yaml. + # systemYaml: + annotations: {} + service: + name: artifactory + type: ClusterIP + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Artifactory service (useful if setting service.type=LoadBalancer) + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}' + loadBalancerSourceRanges: [] + annotations: {} + ## If the type is NodePort you can set a fixed port + # nodePort: 32082 + statefulset: + annotations: {} + ## IMPORTANT: If overriding artifactory.internalPort: + ## DO NOT use port lower than 1024 as Artifactory runs as non-root and cannot bind to ports lower than 1024! + externalPort: 8082 + internalPort: 8082 + externalArtifactoryPort: 8081 + internalArtifactoryPort: 8081 + terminationGracePeriodSeconds: 30 + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## @param artifactory.podSecurityContext.enabled Enable security context + ## @param artifactory.podSecurityContext.runAsNonRoot Set pod's Security Context runAsNonRoot + ## @param artifactory.podSecurityContext.runAsUser User ID for the pod + ## @param artifactory.podSecurityContext.runASGroup Group ID for the pod + ## @param artifactory.podSecurityContext.fsGroup Group ID for the pod + ## + podSecurityContext: + enabled: true + runAsNonRoot: true + runAsUser: 1030 + runAsGroup: 1030 + fsGroup: 1030 + # fsGroupChangePolicy: "Always" + # seLinuxOptions: {} + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.artifactory.tomcat.maintenanceConnector.port }}/artifactory/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare "", + # "private_key_id": "?????", + # "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n", + # "client_email": "???@j.iam.gserviceaccount.com", + # "client_id": "???????", + # "auth_uri": "https://accounts.google.com/o/oauth2/auth", + # "token_uri": "https://oauth2.googleapis.com/token", + # "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + # "client_x509_cert_url": "https://www.googleapis.com/robot/v1....." 
+ # } + endpoint: commondatastorage.googleapis.com + httpsOnly: false + ## Set a unique bucket name + bucketName: "artifactory-gcp" + ## GCP Bucket Authentication with Identity and Credential is deprecated. + ## identity: + ## credential: + path: "artifactory/filestore" + bucketExists: false + useInstanceCredentials: false + enableSignedUrlRedirect: false + ## For artifactory.persistence.type aws-s3-v3, s3-storage-v3-direct, cluster-s3-storage-v3, s3-storage-v3-archive + awsS3V3: + testConnection: false + identity: + credential: + region: + bucketName: artifactory-aws + path: artifactory/filestore + endpoint: + port: + useHttp: + maxConnections: 50 + connectionTimeout: + socketTimeout: + kmsServerSideEncryptionKeyId: + kmsKeyRegion: + kmsCryptoMode: + useInstanceCredentials: true + usePresigning: false + signatureExpirySeconds: 300 + signedUrlExpirySeconds: 30 + cloudFrontDomainName: + cloudFrontKeyPairId: + cloudFrontPrivateKey: + enableSignedUrlRedirect: false + enablePathStyleAccess: false + multiPartLimit: + multipartElementSize: + ## For artifactory.persistence.type azure-blob, azure-blob-storage-direct, cluster-azure-blob-storage, azure-blob-storage-v2-direct + azureBlob: + accountName: + accountKey: + endpoint: + containerName: + multiPartLimit: 100000000 + multipartElementSize: 50000000 + testConnection: false + ## artifactory data Persistent Volume Storage Class + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + ## Annotations for the Persistent Volume Claim + annotations: {} + ## Uncomment the following resources definitions or pass them from command line + ## to control the cpu and memory resources allocated by the Kubernetes cluster + resources: {} + # requests: + # memory: "1Gi" + # cpu: "500m" + # limits: + # memory: "2Gi" + # cpu: "1" + ## The following Java options are passed to the java process running Artifactory. + ## You should set them according to the resources set above + javaOpts: + # xms: "1g" + # xmx: "2g" + jmx: + enabled: false + port: 9010 + host: + ssl: false + ## When authenticate is true, accessFile and passwordFile are required + authenticate: false + accessFile: + passwordFile: + # corePoolSize: 24 + # other: "" + nodeSelector: {} + tolerations: [] + affinity: {} + ## Only used if "affinity" is empty + podAntiAffinity: + ## Valid values are "soft" or "hard"; any other value indicates no anti-affinity + type: "soft" + topologyKey: "kubernetes.io/hostname" + ssh: + enabled: false + internalPort: 1339 + externalPort: 1339 +frontend: + name: frontend + enabled: true + internalPort: 8070 + ## Extra environment variables that can be used to tune frontend to your needs. 
+ ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + + ## Add lifecycle hooks for frontend container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## Session settings + session: + ## Time in minutes after which the frontend token will need to be refreshed + timeoutMinutes: '30' + ## The following settings are to configure the frequency of the liveness and startup probes when splitServicesToContainers set to true + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.frontend.internalPort }}/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " --cert=ca.crt --key=ca.private.key` + # customCertificatesSecretName: + + ## When resetAccessCAKeys is true, Access will regenerate the CA certificate and matching private key + # resetAccessCAKeys: false + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 + sendReasonPhrase: false + extraConfig: 'acceptCount="100"' + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:8040/access/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " /var/opt/jfrog/nginx/message"] + # preStop: + # exec: + # command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"] + + ## Sidecar containers for tailing Nginx logs + loggers: [] + # - access.log + # - error.log + + ## Loggers containers resources + loggersResources: {} + # requests: + # memory: "64Mi" + # cpu: "25m" + # limits: + # memory: "128Mi" + # cpu: "50m" + + ## Logs options + logs: + stderr: false + stdout: false + level: warn + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: [] + # - containerPort: 8066 + # name: docker + + ## The nginx main conf was moved to files/nginx-main-conf.yaml. This key is commented out to keep support for the old configuration + # mainConf: | + + ## The nginx artifactory conf was moved to files/nginx-artifactory-conf.yaml. This key is commented out to keep support for the old configuration + # artifactoryConf: | + customInitContainers: "" + customSidecarContainers: "" + customVolumes: "" + customVolumeMounts: "" + customCommand: + ## allows overwriting the command for the nginx container. 
+ ## defaults to [ 'nginx', '-g', 'daemon off;' ] + + service: + ## For minikube, set this to NodePort, elsewhere use LoadBalancer + type: LoadBalancer + ssloffload: false + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Nginx LoadBalancer service + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}' + loadBalancerSourceRanges: [] + annotations: {} + ## Provide static ip address + loadBalancerIP: + ## There are two available options: "Cluster" (default) and "Local". + externalTrafficPolicy: Cluster + ## If the type is NodePort you can set a fixed port + # nodePort: 32082 + ## A list of custom ports to be exposed on nginx service. Follows the conventional Kubernetes yaml syntax for service ports. + customPorts: [] + # - port: 8066 + # targetPort: 8066 + # protocol: TCP + # name: docker + ## Renamed nginx internalPort 80,443 to 8080,8443 to support openshift + http: + enabled: true + externalPort: 80 + internalPort: 8080 + https: + enabled: true + externalPort: 443 + internalPort: 8443 + ssh: + internalPort: 1339 + externalPort: 1339 + ## DEPRECATED: The following will be removed in a future release + # externalPortHttp: 8080 + # internalPortHttp: 8080 + # externalPortHttps: 8443 + # internalPortHttps: 8443 + + ## The following settings are to configure the frequency of the liveness and readiness probes. + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "nginx.scheme" . }}://localhost:{{ include "nginx.port" . }}/ + initialDelaySeconds: {{ if semverCompare " + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + resources: {} + # requests: + # memory: "250Mi" + # cpu: "100m" + # limits: + # memory: "250Mi" + # cpu: "500m" + nodeSelector: {} + tolerations: [] + affinity: {} +## Database configurations +## Use the wait-for-db init container. 
Set to false to skip +waitForDatabase: true +## Configuration values for the PostgreSQL dependency sub-chart +## ref: https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md +postgresql: + enabled: true + image: + registry: releases-docker.jfrog.io + repository: bitnami/postgresql + tag: 15.6.0-debian-11-r16 + postgresqlUsername: artifactory + postgresqlPassword: "" + postgresqlDatabase: artifactory + postgresqlExtendedConf: + listenAddresses: "*" + maxConnections: "1500" + persistence: + enabled: true + size: 200Gi + # existingClaim: + service: + port: 5432 + primary: + nodeSelector: {} + affinity: {} + tolerations: [] + readReplicas: + nodeSelector: {} + affinity: {} + tolerations: [] + resources: {} + securityContext: + enabled: true + containerSecurityContext: + enabled: true + # requests: + # memory: "512Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "500m" +## If NOT using the PostgreSQL in this chart (postgresql.enabled=false), +## specify custom database details here or leave empty and Artifactory will use embedded derby +database: + ## To run Artifactory with any database other than PostgreSQL allowNonPostgresql set to true. + allowNonPostgresql: false + type: + driver: + ## If you set the url, leave host and port empty + url: + ## If you would like this chart to create the secret containing the db + ## password, use these values + user: + password: + ## If you have existing Kubernetes secrets containing db credentials, use + ## these values + secrets: {} + # user: + # name: "rds-artifactory" + # key: "db-user" + # password: + # name: "rds-artifactory" + # key: "db-password" + # url: + # name: "rds-artifactory" + # key: "db-url" +## Filebeat Sidecar container +## The provided filebeat configuration is for Artifactory logs. It assumes you have a logstash installed and configured properly. 
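+## A minimal sketch of the overrides needed to turn the sidecar on (the logstash address below is a placeholder for your own endpoint):
+#   filebeat:
+#     enabled: true
+#     logstashUrl: "logstash.logging.svc.cluster.local:5044"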
+filebeat: + enabled: false + name: artifactory-filebeat + image: + repository: "docker.elastic.co/beats/filebeat" + version: 7.16.2 + logstashUrl: "logstash:5044" + livenessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + filebeat test output + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "100Mi" + # cpu: "100m" + + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output: + logstash: + hosts: ["{{ .Values.filebeat.logstashUrl }}"] +## Allows to add additional kubernetes resources +## Use --- as a separator between multiple resources +## For an example, refer - https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-values.yaml +additionalResources: "" +## Adding entries to a Pod's /etc/hosts file +## For an example, refer - https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases +hostAliases: [] +# - ip: "127.0.0.1" +# hostnames: +# - "foo.local" +# - "bar.local" +# - ip: "10.1.2.3" +# hostnames: +# - "foo.remote" +# - "bar.remote" + +## Toggling this feature is seamless and requires helm upgrade +## will enable all microservices to run in different containers in a single pod (by default it is true) +splitServicesToContainers: true +## Specify common probes parameters +probes: + timeoutSeconds: 5 diff --git a/charts/jfrog/artifactory-jcr/107.90.15/ci/default-values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/ci/default-values.yaml new file mode 100644 index 000000000..86355d3b3 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/ci/default-values.yaml @@ -0,0 +1,7 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +artifactory: + databaseUpgradeReady: true + + # To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release + postgresql: + postgresqlPassword: password diff --git a/charts/jfrog/artifactory-jcr/107.90.15/logo/jcr-logo.png b/charts/jfrog/artifactory-jcr/107.90.15/logo/jcr-logo.png new file mode 100644 index 000000000..b1e312e32 Binary files /dev/null and b/charts/jfrog/artifactory-jcr/107.90.15/logo/jcr-logo.png differ diff --git a/charts/jfrog/artifactory-jcr/107.90.15/questions.yml b/charts/jfrog/artifactory-jcr/107.90.15/questions.yml new file mode 100644 index 000000000..9cde42870 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/questions.yml @@ -0,0 +1,271 @@ +questions: +# Advance Settings +- variable: artifactory.artifactory.masterKey + default: "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF" + description: "Artifactory master key. 
For security reasons, we strongly recommend you generate your own master key using this command: 'openssl rand -hex 32'" + type: string + label: Artifactory master key + group: "Security Settings" + +# Container Images +- variable: defaultImage + default: true + description: "Use default Docker image" + label: Use Default Image + type: boolean + show_subquestion_if: false + group: "Container Images" + subquestions: + - variable: artifactory.artifactory.image.repository + default: "docker.bintray.io/jfrog/artifactory-jcr" + description: "JFrog Container Registry image name" + type: string + label: JFrog Container Registry Image Name + - variable: artifactory.artifactory.image.version + default: "7.6.3" + description: "JFrog Container Registry image tag" + type: string + label: JFrog Container Registry Image Tag + - variable: artifactory.imagePullSecrets + description: "Image Pull Secret" + type: string + label: Image Pull Secret + +# Services and LoadBalancing Settings +- variable: artifactory.ingress.enabled + default: false + description: "Expose app using Layer 7 Load Balancer - ingress" + type: boolean + label: Expose app using Layer 7 Load Balancer + show_subquestion_if: true + group: "Services and Load Balancing" + required: true + subquestions: + - variable: artifactory.ingress.hosts[0] + default: "xip.io" + description: "Hostname to your artifactory installation" + type: hostname + required: true + label: Hostname + +# Nginx Settings +- variable: artifactory.nginx.enabled + default: true + description: "Enable nginx server" + type: boolean + label: Enable Nginx Server + group: "Services and Load Balancing" + required: true + show_if: "artifactory.ingress.enabled=false" +- variable: artifactory.nginx.service.type + default: "LoadBalancer" + description: "Nginx service type" + type: enum + required: true + label: Nginx Service Type + show_if: "artifactory.nginx.enabled=true&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" + options: + - "ClusterIP" + - "NodePort" + - "LoadBalancer" +- variable: artifactory.nginx.service.loadBalancerIP + default: "" + description: "Provide Static IP to configure with Nginx" + type: string + label: Config Nginx LoadBalancer IP + show_if: "artifactory.nginx.enabled=true&&artifactory.nginx.service.type=LoadBalancer&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" +- variable: artifactory.nginx.tlsSecretName + default: "" + description: "Provide SSL Secret name to configure with Nginx" + type: string + label: Config Nginx SSL Secret + show_if: "artifactory.nginx.enabled=true&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" +- variable: artifactory.nginx.customArtifactoryConfigMap + default: "" + description: "Provide configMap name to configure Nginx with custom `artifactory.conf`" + type: string + label: ConfigMap for Nginx Artifactory Config + show_if: "artifactory.nginx.enabled=true&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" + +# Database Settings +- variable: artifactory.postgresql.enabled + default: true + description: "Enable PostgreSQL" + type: boolean + required: true + label: Enable PostgreSQL + group: "Database Settings" + show_subquestion_if: true + subquestions: + - variable: artifactory.postgresql.postgresqlPassword + default: "" + description: "PostgreSQL password" + type: password + required: true + label: PostgreSQL Password + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=true" + - variable: 
artifactory.postgresql.persistence.size + default: 20Gi + description: "PostgreSQL persistent volume size" + type: string + label: PostgreSQL Persistent Volume Size + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.persistence.storageClass + default: "" + description: "If undefined or null, uses the default StorageClass. Default to null" + type: storageclass + label: Default StorageClass for PostgreSQL + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.requests.cpu + default: "200m" + description: "PostgreSQL initial cpu request" + type: string + label: PostgreSQL Initial CPU Request + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.requests.memory + default: "500Mi" + description: "PostgreSQL initial memory request" + type: string + label: PostgreSQL Initial Memory Request + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.limits.cpu + default: "1" + description: "PostgreSQL cpu limit" + type: string + label: PostgreSQL CPU Limit + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.limits.memory + default: "1Gi" + description: "PostgreSQL memory limit" + type: string + label: PostgreSQL Memory Limit + show_if: "artifactory.postgresql.enabled=true" +- variable: artifactory.database.type + default: "postgresql" + description: "xternal database type (postgresql, mysql, oracle or mssql)" + type: enum + required: true + label: External Database Type + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" + options: + - "postgresql" + - "mysql" + - "oracle" + - "mssql" +- variable: artifactory.database.url + default: "" + description: "External database URL. 
If you set the url, leave host and port empty" + type: string + label: External Database URL + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.host + default: "" + description: "External database hostname" + type: string + label: External Database Hostname + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.port + default: "" + description: "External database port" + type: string + label: External Database Port + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.user + default: "" + description: "External database username" + type: string + label: External Database Username + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.password + default: "" + description: "External database password" + type: password + label: External Database Password + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" + +# Advance Settings +- variable: artifactory.advancedOptions + default: false + description: "Show advanced configurations" + label: Show Advanced Configurations + type: boolean + show_subquestion_if: true + group: "Advanced Options" + subquestions: + - variable: artifactory.artifactory.primary.resources.requests.cpu + default: "500m" + description: "Artifactory primary node initial cpu request" + type: string + label: Artifactory Primary Node Initial CPU Request + - variable: artifactory.artifactory.primary.resources.requests.memory + default: "1Gi" + description: "Artifactory primary node initial memory request" + type: string + label: Artifactory Primary Node Initial Memory Request + - variable: artifactory.artifactory.primary.javaOpts.xms + default: "1g" + description: "Artifactory primary node java Xms size" + type: string + label: Artifactory Primary Node Java Xms Size + - variable: artifactory.artifactory.primary.resources.limits.cpu + default: "2" + description: "Artifactory primary node cpu limit" + type: string + label: Artifactory Primary Node CPU Limit + - variable: artifactory.artifactory.primary.resources.limits.memory + default: "4Gi" + description: "Artifactory primary node memory limit" + type: string + label: Artifactory Primary Node Memory Limit + - variable: artifactory.artifactory.primary.javaOpts.xmx + default: "4g" + description: "Artifactory primary node java Xmx size" + type: string + label: Artifactory Primary Node Java Xmx Size + - variable: artifactory.artifactory.node.resources.requests.cpu + default: "500m" + description: "Artifactory member node initial cpu request" + type: string + label: Artifactory Member Node Initial CPU Request + - variable: artifactory.artifactory.node.resources.requests.memory + default: "2Gi" + description: "Artifactory member node initial memory request" + type: string + label: Artifactory Member Node Initial Memory Request + - variable: artifactory.artifactory.node.javaOpts.xms + default: "1g" + description: "Artifactory member node java Xms size" + type: string + label: Artifactory Member Node Java Xms Size + - variable: artifactory.artifactory.node.resources.limits.cpu + default: "2" + description: "Artifactory member node cpu limit" + type: string + label: Artifactory Member Node CPU Limit + - variable: artifactory.artifactory.node.resources.limits.memory + default: "4Gi" + description: "Artifactory member node memory limit" + type: string + label: Artifactory Member 
Node Memory Limit + - variable: artifactory.artifactory.node.javaOpts.xmx + default: "4g" + description: "Artifactory member node java Xmx size" + type: string + label: Artifactory Member Node Java Xmx Size + +# Internal Settings +- variable: installerInfo + default: '\{\"productId\": \"RancherHelm_artifactory-jcr/7.6.3\", \"features\": \[\{\"featureId\": \"Partner/ACC-007246\"\}\]\}' + type: string + group: "Internal Settings (Do not modify)" diff --git a/charts/jfrog/artifactory-jcr/107.90.15/templates/NOTES.txt b/charts/jfrog/artifactory-jcr/107.90.15/templates/NOTES.txt new file mode 100644 index 000000000..035bf8417 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/templates/NOTES.txt @@ -0,0 +1 @@ +Congratulations. You have just deployed JFrog Container Registry! diff --git a/charts/jfrog/artifactory-jcr/107.90.15/values.yaml b/charts/jfrog/artifactory-jcr/107.90.15/values.yaml new file mode 100644 index 000000000..a96b4f7d2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.15/values.yaml @@ -0,0 +1,75 @@ +# Default values for artifactory-jcr. +# This is a YAML-formatted file. + +# Beware when changing values here. You should know what you are doing! +# Access the values with {{ .Values.key.subkey }} + +# This chart is based on the main artifactory chart with some customizations. +# See all supported configuration keys in https://github.com/jfrog/charts/tree/master/stable/artifactory + +## All values are under the 'artifactory' sub chart. +artifactory: + ## Artifactory + ## See full list of supported Artifactory options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + artifactory: + ## Default tag is from the artifactory sub-chart in the requirements.yaml + image: + registry: releases-docker.jfrog.io + repository: jfrog/artifactory-jcr + # tag: + ## Uncomment the following resources definitions or pass them from command line + ## to control the cpu and memory resources allocated by the Kubernetes cluster + resources: {} + # requests: + # memory: "1Gi" + # cpu: "500m" + # limits: + # memory: "4Gi" + # cpu: "1" + ## The following Java options are passed to the java process running Artifactory. + ## You should set them according to the resources set above. + ## IMPORTANT: Make sure resources.limits.memory is at least 1G more than Xmx. 
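+    ## For example, the commented values in this file already satisfy that rule:
+    ## resources.limits.memory: "4Gi" paired with javaOpts.xmx: "3g" leaves the required 1G of headroom.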
+ javaOpts: {} + # xms: "1g" + # xmx: "3g" + # other: "" + installer: + platform: jcr-helm + installerInfo: '{"productId":"Helm_artifactory-jcr/{{ .Chart.Version }}","features":[{"featureId":"Platform/{{ printf "%s-%s" "kubernetes" .Capabilities.KubeVersion.Version }}"},{"featureId":"Database/{{ .Values.database.type }}"},{"featureId":"PostgreSQL_Enabled/{{ .Values.postgresql.enabled }}"},{"featureId":"Nginx_Enabled/{{ .Values.nginx.enabled }}"},{"featureId":"ArtifactoryPersistence_Type/{{ .Values.artifactory.persistence.type }}"},{"featureId":"SplitServicesToContainers_Enabled/{{ .Values.splitServicesToContainers }}"},{"featureId":"UnifiedSecretInstallation_Enabled/{{ .Values.artifactory.unifiedSecretInstallation }}"},{"featureId":"Filebeat_Enabled/{{ .Values.filebeat.enabled }}"},{"featureId":"ReplicaCount/{{ .Values.artifactory.replicaCount }}"}]}' + ## Nginx + ## See full list of supported Nginx options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + nginx: + enabled: true + tlsSecretName: "" + service: + type: LoadBalancer + ## Ingress + ## See full list of supported Ingress options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + ingress: + enabled: false + tls: + ## PostgreSQL + ## See list of supported postgresql options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + ## Configuration values for the PostgreSQL dependency sub-chart + ## ref: https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md + postgresql: + enabled: true + ## This key is required for upgrades to protect old PostgreSQL chart's breaking changes. + databaseUpgradeReady: "yes" + ## If NOT using the PostgreSQL in this chart (artifactory.postgresql.enabled=false), + ## specify custom database details here or leave empty and Artifactory will use embedded derby. + ## See full list of database options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + # database: + jfconnect: + enabled: false + federation: + enabled: false +## Enable the PostgreSQL sub chart +postgresql: + enabled: true +router: + image: + tag: 7.118.3 +initContainers: + image: + tag: 9.4.949.1716471857 diff --git a/charts/kuma/kuma/2.9.0/.helmdocsignore b/charts/kuma/kuma/2.9.0/.helmdocsignore new file mode 100644 index 000000000..d8a5db8f8 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/.helmdocsignore @@ -0,0 +1 @@ +# Charts to ignore from helm-docs \ No newline at end of file diff --git a/charts/kuma/kuma/2.9.0/.helmignore b/charts/kuma/kuma/2.9.0/.helmignore new file mode 100644 index 000000000..0e8a0eb36 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/kuma/kuma/2.9.0/Chart.yaml b/charts/kuma/kuma/2.9.0/Chart.yaml new file mode 100644 index 000000000..7b38caf51 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/Chart.yaml @@ -0,0 +1,26 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: Kuma + catalog.cattle.io/namespace: kuma-system + catalog.cattle.io/release-name: kuma +apiVersion: v2 +appVersion: 2.9.0 +description: A Helm chart for the Kuma Control Plane +home: https://github.com/kumahq/kuma +icon: file://assets/icons/kuma.svg +keywords: +- service mesh +- control plane +maintainers: +- email: jakub.dyszkiewicz@konghq.com + name: Jakub Dyszkiewicz + url: https://github.com/jakubdyszkiewicz +- email: charly.molter@konghq.com + name: Charly Molter + url: https://github.com/lahabana +- email: michael.beaumont@konghq.com + name: Mike Beaumont + url: https://github.com/michaelbeaumont +name: kuma +type: application +version: 2.9.0 diff --git a/charts/kuma/kuma/2.9.0/README.md b/charts/kuma/kuma/2.9.0/README.md new file mode 100644 index 000000000..d0b75b62c --- /dev/null +++ b/charts/kuma/kuma/2.9.0/README.md @@ -0,0 +1,316 @@ +[![][kuma-logo]][kuma-url] + +A Helm chart for the Kuma Control Plane + +![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 2.9.0](https://img.shields.io/badge/Version-2.9.0-informational?style=flat-square) ![AppVersion: 2.9.0](https://img.shields.io/badge/AppVersion-2.9.0-informational?style=flat-square) + +**Homepage:** + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| global.image.registry | string | `"docker.io/kumahq"` | Default registry for all Kuma Images | +| global.image.tag | string | `nil` | The default tag for all Kuma images, which itself defaults to .Chart.AppVersion | +| global.imagePullSecrets | list | `[]` | Add `imagePullSecrets` to all the service accounts used for Kuma components | +| patchSystemNamespace | bool | `true` | Whether to patch the target namespace with the system label | +| installCrdsOnUpgrade.enabled | bool | `true` | Whether install new CRDs before upgrade (if any were introduced with the new version of Kuma) | +| installCrdsOnUpgrade.imagePullSecrets | list | `[]` | The `imagePullSecrets` to attach to the Service Account running CRD installation. 
This field will be deprecated in a future release, please use .global.imagePullSecrets | +| noHelmHooks | bool | `false` | Whether to disable all helm hooks | +| restartOnSecretChange | bool | `true` | Whether to restart control-plane by calculating a new checksum for the secret | +| controlPlane.environment | string | `"kubernetes"` | Environment that control plane is run in, useful when running universal global control plane on k8s | +| controlPlane.extraLabels | object | `{}` | Labels to add to resources in addition to default labels | +| controlPlane.logLevel | string | `"info"` | Kuma CP log level: one of off,info,debug | +| controlPlane.logOutputPath | string | `""` | Kuma CP log output path: Defaults to /dev/stdout | +| controlPlane.mode | string | `"zone"` | Kuma CP modes: one of zone,global | +| controlPlane.zone | string | `nil` | Kuma CP zone, if running multizone | +| controlPlane.kdsGlobalAddress | string | `""` | Only used in `zone` mode | +| controlPlane.replicas | int | `1` | Number of replicas of the Kuma CP. Ignored when autoscaling is enabled | +| controlPlane.minReadySeconds | int | `0` | Minimum number of seconds for which a newly created pod should be ready for it to be considered available. | +| controlPlane.deploymentAnnotations | object | `{}` | Annotations applied only to the `Deployment` resource | +| controlPlane.podAnnotations | object | `{}` | Annotations applied only to the `Pod` resource | +| controlPlane.autoscaling.enabled | bool | `false` | Whether to enable Horizontal Pod Autoscaling, which requires the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) in the cluster | +| controlPlane.autoscaling.minReplicas | int | `2` | The minimum CP pods to allow | +| controlPlane.autoscaling.maxReplicas | int | `5` | The max CP pods to scale to | +| controlPlane.autoscaling.targetCPUUtilizationPercentage | int | `80` | For clusters that don't support autoscaling/v2, autoscaling/v1 is used | +| controlPlane.autoscaling.metrics | list | `[{"resource":{"name":"cpu","target":{"averageUtilization":80,"type":"Utilization"}},"type":"Resource"}]` | For clusters that do support autoscaling/v2, use metrics | +| controlPlane.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node selector for the Kuma Control Plane pods | +| controlPlane.tolerations | list | `[]` | Tolerations for the Kuma Control Plane pods | +| controlPlane.podDisruptionBudget.enabled | bool | `false` | Whether to create a pod disruption budget | +| controlPlane.podDisruptionBudget.maxUnavailable | int | `1` | The maximum number of unavailable pods allowed by the budget | +| controlPlane.affinity | object | `{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app.kubernetes.io/name","operator":"In","values":["{{ include \"kuma.name\" . }}"]},{"key":"app.kubernetes.io/instance","operator":"In","values":["{{ .Release.Name }}"]},{"key":"app","operator":"In","values":["{{ include \"kuma.name\" . }}-control-plane"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":100}]}}` | Affinity placement rule for the Kuma Control Plane pods. This is rendered as a template, so you can reference other helm variables or includes. | +| controlPlane.topologySpreadConstraints | string | `nil` | Topology spread constraints rule for the Kuma Control Plane pods. This is rendered as a template, so you can use variables to generate match labels. 
| +| controlPlane.injectorFailurePolicy | string | `"Fail"` | Failure policy of the mutating webhook implemented by the Kuma Injector component | +| controlPlane.service.apiServer.http.nodePort | int | `30681` | Port on which Http api server Service is exposed on Node for service of type NodePort | +| controlPlane.service.apiServer.https.nodePort | int | `30682` | Port on which Https api server Service is exposed on Node for service of type NodePort | +| controlPlane.service.enabled | bool | `true` | Whether to create a service resource. | +| controlPlane.service.name | string | `nil` | Optionally override of the Kuma Control Plane Service's name | +| controlPlane.service.type | string | `"ClusterIP"` | Service type of the Kuma Control Plane | +| controlPlane.service.annotations | object | `{"prometheus.io/port":"5680","prometheus.io/scrape":"true"}` | Annotations to put on the Kuma Control Plane | +| controlPlane.ingress.enabled | bool | `false` | Install K8s Ingress resource that exposes GUI and API | +| controlPlane.ingress.ingressClassName | string | `nil` | IngressClass defines which controller will implement the resource | +| controlPlane.ingress.hostname | string | `nil` | Ingress hostname | +| controlPlane.ingress.annotations | object | `{}` | Map of ingress annotations. | +| controlPlane.ingress.path | string | `"/"` | Ingress path. | +| controlPlane.ingress.pathType | string | `"ImplementationSpecific"` | Each path in an Ingress is required to have a corresponding path type. (ImplementationSpecific/Exact/Prefix) | +| controlPlane.ingress.servicePort | int | `5681` | Port from kuma-cp to use to expose API and GUI. Switch to 5682 to expose TLS port | +| controlPlane.globalZoneSyncService.enabled | bool | `true` | Whether to create a k8s service for the global zone sync service. It will only be created when enabled and deploying the global control plane. | +| controlPlane.globalZoneSyncService.type | string | `"LoadBalancer"` | Service type of the Global-zone sync | +| controlPlane.globalZoneSyncService.loadBalancerIP | string | `nil` | Optionally specify IP to be used by cloud provider when configuring load balancer | +| controlPlane.globalZoneSyncService.loadBalancerSourceRanges | list | `[]` | Optionally specify allowed source ranges that can access the load balancer | +| controlPlane.globalZoneSyncService.annotations | object | `{}` | Additional annotations to put on the Global Zone Sync Service | +| controlPlane.globalZoneSyncService.nodePort | int | `30685` | Port on which Global Zone Sync Service is exposed on Node for service of type NodePort | +| controlPlane.globalZoneSyncService.port | int | `5685` | Port on which Global Zone Sync Service is exposed | +| controlPlane.globalZoneSyncService.protocol | string | `"grpc"` | Protocol of the Global Zone Sync service port | +| controlPlane.defaults.skipMeshCreation | bool | `false` | Whether to skip creating the default Mesh | +| controlPlane.automountServiceAccountToken | bool | `true` | Whether to automountServiceAccountToken for cp. Optionally set to false | +| controlPlane.resources | object | `{"limits":{"memory":"256Mi"},"requests":{"cpu":"500m","memory":"256Mi"}}` | Optionally override the resource spec | +| controlPlane.lifecycle | object | `{}` | Pod lifecycle settings (useful for adding a preStop hook, when using AWS ALB or NLB) | +| controlPlane.terminationGracePeriodSeconds | int | `30` | Number of seconds to wait before force killing the pod. Make sure to update this if you add a preStop hook. 
| +| controlPlane.tls.general.secretName | string | `""` | Secret that contains tls.crt, tls.key [and ca.crt when no controlPlane.tls.general.caSecretName specified] for protecting Kuma in-cluster communication | +| controlPlane.tls.general.caSecretName | string | `""` | Secret that contains ca.crt that was used to sign cert for protecting Kuma in-cluster communication (ca.crt present in this secret have precedence over the one provided in the controlPlane.tls.general.secretName) | +| controlPlane.tls.general.caBundle | string | `""` | Base64 encoded CA certificate (the same as in controlPlane.tls.general.secret#ca.crt) | +| controlPlane.tls.apiServer.secretName | string | `""` | Secret that contains tls.crt, tls.key for protecting Kuma API on HTTPS | +| controlPlane.tls.apiServer.clientCertsSecretName | string | `""` | Secret that contains list of .pem certificates that can access admin endpoints of Kuma API on HTTPS | +| controlPlane.tls.kdsGlobalServer.secretName | string | `""` | Name of the K8s TLS Secret resource. If you set this and don't set create=true, you have to create the secret manually. | +| controlPlane.tls.kdsGlobalServer.create | bool | `false` | Whether to create the TLS secret in helm. | +| controlPlane.tls.kdsGlobalServer.cert | string | `""` | The TLS certificate to offer. | +| controlPlane.tls.kdsGlobalServer.key | string | `""` | The TLS key to use. | +| controlPlane.tls.kdsZoneClient.secretName | string | `""` | Name of the K8s Secret resource that contains ca.crt which was used to sign the certificate of KDS Global Server. If you set this and don't set create=true, you have to create the secret manually. | +| controlPlane.tls.kdsZoneClient.create | bool | `false` | Whether to create the TLS secret in helm. | +| controlPlane.tls.kdsZoneClient.cert | string | `""` | CA bundle that was used to sign the certificate of KDS Global Server. | +| controlPlane.tls.kdsZoneClient.skipVerify | bool | `false` | If true, TLS cert of the server is not verified. | +| controlPlane.serviceAccountAnnotations | object | `{}` | Annotations to add for Control Plane's Service Account | +| controlPlane.image.pullPolicy | string | `"IfNotPresent"` | Kuma CP ImagePullPolicy | +| controlPlane.image.repository | string | `"kuma-cp"` | Kuma CP image repository | +| controlPlane.image.tag | string | `nil` | Kuma CP Image tag. When not specified, the value is copied from global.tag | +| controlPlane.secrets | object with { Env: string, Secret: string, Key: string } | `nil` | Secrets to add as environment variables, where `Env` is the name of the env variable, `Secret` is the name of the Secret, and `Key` is the key of the Secret value to use | +| controlPlane.envVars | object | `{}` | Additional environment variables that will be passed to the control plane | +| controlPlane.envVarEntries | string | `nil` | Additional environment variables that will be passed to the control plane. Can be used with Kubernetes downward API | +| controlPlane.extraConfigMaps | list | `[]` | Additional config maps to mount into the control plane, with optional inline values | +| controlPlane.extraSecrets | object with { name: string, mountPath: string, readOnly: string } | `nil` | Additional secrets to mount into the control plane, where `Env` is the name of the env variable, `Secret` is the name of the Secret, and `Key` is the key of the Secret value to use | +| controlPlane.webhooks.validator.additionalRules | string | `""` | Additional rules to apply on Kuma validator webhook. 
Useful when building custom policy on top of Kuma. | +| controlPlane.webhooks.ownerReference.additionalRules | string | `""` | Additional rules to apply on Kuma owner reference webhook. Useful when building custom policy on top of Kuma. | +| controlPlane.hostNetwork | bool | `false` | Specifies if the deployment should be started in hostNetwork mode. | +| controlPlane.admissionServerPort | int | `5443` | Define a new server port for the admission controller. Recommended to set in combination with hostNetwork to prevent multiple port bindings on the same port (like Calico in AWS EKS). | +| controlPlane.podSecurityContext | object | `{"runAsNonRoot":true}` | Security context at the pod level for control plane. | +| controlPlane.containerSecurityContext | object | `{"readOnlyRootFilesystem":true}` | Security context at the container level for control plane. | +| controlPlane.supportGatewaySecretsInAllNamespaces | bool | `false` | If true, then control plane can support TLS secrets for builtin gateway outside of mesh system namespace. The downside is that control plane requires permission to read Secrets in all namespaces. | +| controlPlane.dns | object | `{"config":{"nameservers":[],"searches":[]},"policy":""}` | DNS configuration for the control-plane pod. This is equivalent to the [Kubernetes DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy). | +| controlPlane.dns.policy | string | `""` | Defines how DNS resolution is configured for that Pod. | +| controlPlane.dns.config | object | `{"nameservers":[],"searches":[]}` | Optional dns configuration, required when policy is 'None' | +| controlPlane.dns.config.nameservers | list | `[]` | A list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. | +| controlPlane.dns.config.searches | list | `[]` | A list of DNS search domains for hostname lookup in the Pod. 
| +| cni.enabled | bool | `false` | Install Kuma with CNI instead of proxy init container | +| cni.chained | bool | `false` | Install CNI in chained mode | +| cni.netDir | string | `"/etc/cni/multus/net.d"` | Set the CNI install directory | +| cni.binDir | string | `"/var/lib/cni/bin"` | Set the CNI bin directory | +| cni.confName | string | `"kuma-cni.conf"` | Set the CNI configuration name | +| cni.logLevel | string | `"info"` | CNI log level: one of off,info,debug | +| cni.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node Selector for the CNI pods | +| cni.tolerations | list | `[]` | Tolerations for the CNI pods | +| cni.podAnnotations | object | `{}` | Additional pod annotations | +| cni.namespace | string | `"kube-system"` | Set the CNI namespace | +| cni.image.repository | string | `"kuma-cni"` | CNI image repository | +| cni.image.tag | string | `nil` | CNI image tag - defaults to .Chart.AppVersion | +| cni.image.imagePullPolicy | string | `"IfNotPresent"` | CNI image pull policy | +| cni.delayStartupSeconds | int | `0` | it's only useful in tests to trigger a possible race condition | +| cni.experimental | object | `{"imageEbpf":{"registry":"docker.io/kumahq","repository":"merbridge","tag":"0.8.5"}}` | use new CNI (experimental) | +| cni.experimental.imageEbpf.registry | string | `"docker.io/kumahq"` | CNI experimental eBPF image registry | +| cni.experimental.imageEbpf.repository | string | `"merbridge"` | CNI experimental eBPF image repository | +| cni.experimental.imageEbpf.tag | string | `"0.8.5"` | CNI experimental eBPF image tag | +| cni.resources.requests.cpu | string | `"100m"` | | +| cni.resources.requests.memory | string | `"100Mi"` | | +| cni.resources.limits.memory | string | `"100Mi"` | | +| cni.podSecurityContext | object | `{}` | Security context at the pod level for cni | +| cni.containerSecurityContext | object | `{"readOnlyRootFilesystem":true,"runAsGroup":0,"runAsNonRoot":false,"runAsUser":0}` | Security context at the container level for cni | +| dataPlane.dnsLogging | bool | `false` | If true, then turn on CoreDNS query logging | +| dataPlane.image.repository | string | `"kuma-dp"` | The Kuma DP image repository | +| dataPlane.image.pullPolicy | string | `"IfNotPresent"` | Kuma DP ImagePullPolicy | +| dataPlane.image.tag | string | `nil` | Kuma DP Image Tag. When not specified, the value is copied from global.tag | +| dataPlane.initImage.repository | string | `"kuma-init"` | The Kuma DP init image repository | +| dataPlane.initImage.tag | string | `nil` | Kuma DP init image tag When not specified, the value is copied from global.tag | +| ingress.enabled | bool | `false` | If true, it deploys Ingress for cross cluster communication | +| ingress.extraLabels | object | `{}` | Labels to add to resources, in addition to default labels | +| ingress.drainTime | string | `"30s"` | Time for which old listener will still be active as draining | +| ingress.replicas | int | `1` | Number of replicas of the Ingress. Ignored when autoscaling is enabled. | +| ingress.logLevel | string | `"info"` | Log level for ingress (available values: off|info|debug) | +| ingress.resources | object | `{"limits":{"cpu":"1000m","memory":"512Mi"},"requests":{"cpu":"50m","memory":"64Mi"}}` | Define the resources to allocate to mesh ingress | +| ingress.lifecycle | object | `{}` | Pod lifecycle settings (useful for adding a preStop hook, when using AWS ALB or NLB) | +| ingress.terminationGracePeriodSeconds | int | `40` | Number of seconds to wait before force killing the pod. 
Make sure to update this if you add a preStop hook. | +| ingress.autoscaling.enabled | bool | `false` | Whether to enable Horizontal Pod Autoscaling, which requires the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) in the cluster | +| ingress.autoscaling.minReplicas | int | `2` | The minimum CP pods to allow | +| ingress.autoscaling.maxReplicas | int | `5` | The max CP pods to scale to | +| ingress.autoscaling.targetCPUUtilizationPercentage | int | `80` | For clusters that don't support autoscaling/v2, autoscaling/v1 is used | +| ingress.autoscaling.metrics | list | `[{"resource":{"name":"cpu","target":{"averageUtilization":80,"type":"Utilization"}},"type":"Resource"}]` | For clusters that do support autoscaling/v2, use metrics | +| ingress.service.enabled | bool | `true` | Whether to create a Service resource. | +| ingress.service.type | string | `"LoadBalancer"` | Service type of the Ingress | +| ingress.service.loadBalancerIP | string | `nil` | Optionally specify IP to be used by cloud provider when configuring load balancer | +| ingress.service.annotations | object | `{}` | Additional annotations to put on the Ingress service | +| ingress.service.port | int | `10001` | Port on which Ingress is exposed | +| ingress.service.nodePort | string | `nil` | Port on which service is exposed on Node for service of type NodePort | +| ingress.annotations | object | `{}` | Additional pod annotations (deprecated favor `podAnnotations`) | +| ingress.podAnnotations | object | `{}` | Additional pod annotations | +| ingress.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node Selector for the Ingress pods | +| ingress.tolerations | list | `[]` | Tolerations for the Ingress pods | +| ingress.podDisruptionBudget.enabled | bool | `false` | Whether to create a pod disruption budget | +| ingress.podDisruptionBudget.maxUnavailable | int | `1` | The maximum number of unavailable pods allowed by the budget | +| ingress.affinity | object | `{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app.kubernetes.io/name","operator":"In","values":["{{ include \"kuma.name\" . }}"]},{"key":"app.kubernetes.io/instance","operator":"In","values":["{{ .Release.Name }}"]},{"key":"app","operator":"In","values":["kuma-ingress"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":100}]}}` | Affinity placement rule for the Kuma Ingress pods This is rendered as a template, so you can reference other helm variables or includes. | +| ingress.topologySpreadConstraints | string | `nil` | Topology spread constraints rule for the Kuma Mesh Ingress pods. This is rendered as a template, so you can use variables to generate match labels. | +| ingress.podSecurityContext | object | `{"runAsGroup":5678,"runAsNonRoot":true,"runAsUser":5678}` | Security context at the pod level for ingress | +| ingress.containerSecurityContext | object | `{"readOnlyRootFilesystem":true}` | Security context at the container level for ingress | +| ingress.serviceAccountAnnotations | object | `{}` | Annotations to add for Control Plane's Service Account | +| ingress.automountServiceAccountToken | bool | `true` | Whether to automountServiceAccountToken for cp. Optionally set to false | +| ingress.dns | object | `{"config":{"nameservers":[],"searches":[]},"policy":""}` | DNS configuration for the ingress pod. This is equivalent to the [Kubernetes DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy). 
| +| ingress.dns.policy | string | `""` | Defines how DNS resolution is configured for that Pod. | +| ingress.dns.config | object | `{"nameservers":[],"searches":[]}` | Optional dns configuration, required when policy is 'None' | +| ingress.dns.config.nameservers | list | `[]` | A list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. | +| ingress.dns.config.searches | list | `[]` | A list of DNS search domains for hostname lookup in the Pod. | +| egress.enabled | bool | `false` | If true, it deploys Egress for cross cluster communication | +| egress.extraLabels | object | `{}` | Labels to add to resources, in addition to the default labels. | +| egress.drainTime | string | `"30s"` | Time for which old listener will still be active as draining | +| egress.replicas | int | `1` | Number of replicas of the Egress. Ignored when autoscaling is enabled. | +| egress.logLevel | string | `"info"` | Log level for egress (available values: off|info|debug) | +| egress.autoscaling.enabled | bool | `false` | Whether to enable Horizontal Pod Autoscaling, which requires the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) in the cluster | +| egress.autoscaling.minReplicas | int | `2` | The minimum CP pods to allow | +| egress.autoscaling.maxReplicas | int | `5` | The max CP pods to scale to | +| egress.autoscaling.targetCPUUtilizationPercentage | int | `80` | For clusters that don't support autoscaling/v2, autoscaling/v1 is used | +| egress.autoscaling.metrics | list | `[{"resource":{"name":"cpu","target":{"averageUtilization":80,"type":"Utilization"}},"type":"Resource"}]` | For clusters that do support autoscaling/v2, use metrics | +| egress.resources.requests.cpu | string | `"50m"` | | +| egress.resources.requests.memory | string | `"64Mi"` | | +| egress.resources.limits.cpu | string | `"1000m"` | | +| egress.resources.limits.memory | string | `"512Mi"` | | +| egress.service.enabled | bool | `true` | Whether to create the service object | +| egress.service.type | string | `"ClusterIP"` | Service type of the Egress | +| egress.service.loadBalancerIP | string | `nil` | Optionally specify IP to be used by cloud provider when configuring load balancer | +| egress.service.annotations | object | `{}` | Additional annotations to put on the Egress service | +| egress.service.port | int | `10002` | Port on which Egress is exposed | +| egress.service.nodePort | string | `nil` | Port on which service is exposed on Node for service of type NodePort | +| egress.annotations | object | `{}` | Additional pod annotations (deprecated favor `podAnnotations`) | +| egress.podAnnotations | object | `{}` | Additional pod annotations | +| egress.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node Selector for the Egress pods | +| egress.tolerations | list | `[]` | Tolerations for the Egress pods | +| egress.podDisruptionBudget.enabled | bool | `false` | Whether to create a pod disruption budget | +| egress.podDisruptionBudget.maxUnavailable | int | `1` | The maximum number of unavailable pods allowed by the budget | +| egress.affinity | object | `{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app.kubernetes.io/name","operator":"In","values":["{{ include \"kuma.name\" . 
}}"]},{"key":"app.kubernetes.io/instance","operator":"In","values":["{{ .Release.Name }}"]},{"key":"app","operator":"In","values":["kuma-egress"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":100}]}}` | Affinity placement rule for the Kuma Egress pods. This is rendered as a template, so you can reference other helm variables or includes. | +| egress.topologySpreadConstraints | string | `nil` | Topology spread constraints rule for the Kuma Egress pods. This is rendered as a template, so you can use variables to generate match labels. | +| egress.podSecurityContext | object | `{"runAsGroup":5678,"runAsNonRoot":true,"runAsUser":5678}` | Security context at the pod level for egress | +| egress.containerSecurityContext | object | `{"readOnlyRootFilesystem":true}` | Security context at the container level for egress | +| egress.serviceAccountAnnotations | object | `{}` | Annotations to add for Control Plane's Service Account | +| egress.automountServiceAccountToken | bool | `true` | Whether to automountServiceAccountToken for cp. Optionally set to false | +| egress.dns | object | `{"config":{"nameservers":[],"searches":[]},"policy":""}` | DNS configuration for the egress pod. This is equivalent to the [Kubernetes DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy). | +| egress.dns.policy | string | `""` | Defines how DNS resolution is configured for that Pod. | +| egress.dns.config | object | `{"nameservers":[],"searches":[]}` | Optional dns configuration, required when policy is 'None' | +| egress.dns.config.nameservers | list | `[]` | A list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. | +| egress.dns.config.searches | list | `[]` | A list of DNS search domains for hostname lookup in the Pod. | +| kumactl.image.repository | string | `"kumactl"` | The kumactl image repository | +| kumactl.image.tag | string | `nil` | The kumactl image tag. When not specified, the value is copied from global.tag | +| kubectl.image.registry | string | `"docker.io"` | The kubectl image registry | +| kubectl.image.repository | string | `"bitnami/kubectl"` | The kubectl image repository | +| kubectl.image.tag | string | `"1.27.5"` | The kubectl image tag | +| hooks.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node selector for the HELM hooks | +| hooks.tolerations | list | `[]` | Tolerations for the HELM hooks | +| hooks.podSecurityContext | object | `{"runAsNonRoot":true}` | Security context at the pod level for crd/webhook/ns | +| hooks.containerSecurityContext | object | `{"readOnlyRootFilesystem":true}` | Security context at the container level for crd/webhook/ns | +| hooks.ebpfCleanup | object | `{"containerSecurityContext":{"readOnlyRootFilesystem":false},"podSecurityContext":{"runAsNonRoot":false}}` | ebpf-cleanup hook needs write access to the root filesystem to clean ebpf programs Changing below values will potentially break ebpf cleanup completely, so be cautious when doing so. 
| +| hooks.ebpfCleanup.podSecurityContext | object | `{"runAsNonRoot":false}` | Security context at the pod level for crd/webhook/cleanup-ebpf | +| hooks.ebpfCleanup.containerSecurityContext | object | `{"readOnlyRootFilesystem":false}` | Security context at the container level for crd/webhook/cleanup-ebpf | +| transparentProxy.configMap.enabled | bool | `false` | If true, enables the use of a ConfigMap to manage transparent proxy configuration instead of directly configuring it within the Kuma system | +| transparentProxy.configMap.name | string | `"kuma-transparent-proxy-config"` | The name of the ConfigMap used to store the transparent proxy configuration | +| transparentProxy.configMap.config.kumaDPUser | string | `"5678"` | The username or UID of the user that will run kuma-dp. If not provided, the system will use the default UID ("5678") or the default username ("kuma-dp") | +| transparentProxy.configMap.config.ipFamilyMode | string | `"dualstack"` | The IP family mode used for configuring traffic redirection in the transparent proxy Supports "dualstack" (for both IPv4 and IPv6) and "ipv4" modes | +| transparentProxy.configMap.config.redirect.dns.enabled | bool | `true` | Enables DNS redirection in the transparent proxy | +| transparentProxy.configMap.config.redirect.dns.captureAll | bool | `true` | Redirect all DNS queries | +| transparentProxy.configMap.config.redirect.dns.port | int | `15053` | The port on which the DNS server listens | +| transparentProxy.configMap.config.redirect.dns.resolvConfigPath | string | `"/etc/resolv.conf"` | Path to the system's resolv.conf file | +| transparentProxy.configMap.config.redirect.dns.skipConntrackZoneSplit | bool | `false` | Disables conntrack zone splitting, which can prevent potential DNS issues | +| transparentProxy.configMap.config.redirect.inbound.enabled | bool | `true` | Enables inbound traffic redirection | +| transparentProxy.configMap.config.redirect.inbound.port | int | `15006` | Port used for redirecting inbound traffic | +| transparentProxy.configMap.config.redirect.inbound.excludePorts | list | `[]` | List of ports to exclude from inbound traffic redirection | +| transparentProxy.configMap.config.redirect.inbound.excludePortsForIPs | list | `[]` | List of IP addresses to exclude from inbound traffic redirection for specific ports | +| transparentProxy.configMap.config.redirect.inbound.excludePortsForUIDs | list | `[]` | List of UIDs to exclude from inbound traffic redirection for specific ports | +| transparentProxy.configMap.config.redirect.inbound.includePorts | list | `[]` | List of ports to include in inbound traffic redirection | +| transparentProxy.configMap.config.redirect.inbound.insertRedirectInsteadOfAppend | bool | `false` | Inserts the redirection rule at the beginning of the chain instead of appending it | +| transparentProxy.configMap.config.redirect.outbound.enabled | bool | `true` | Enables outbound traffic redirection | +| transparentProxy.configMap.config.redirect.outbound.port | int | `15001` | Port used for redirecting outbound traffic | +| transparentProxy.configMap.config.redirect.outbound.excludePorts | list | `[]` | List of ports to exclude from outbound traffic redirection | +| transparentProxy.configMap.config.redirect.outbound.excludePortsForIPs | list | `[]` | List of IP addresses to exclude from outbound traffic redirection for specific ports | +| transparentProxy.configMap.config.redirect.outbound.excludePortsForUIDs | list | `[]` | List of UIDs to exclude from outbound traffic redirection for 
specific ports | +| transparentProxy.configMap.config.redirect.outbound.includePorts | list | `[]` | List of ports to include in outbound traffic redirection | +| transparentProxy.configMap.config.redirect.outbound.insertRedirectInsteadOfAppend | bool | `false` | Inserts the redirection rule at the beginning of the chain instead of appending it | +| transparentProxy.configMap.config.redirect.vnet.networks | list | `[]` | Specifies virtual networks using the format interfaceName:CIDR Allows matching traffic on specific network interfaces Examples: - "docker0:172.17.0.0/16" - "br+:172.18.0.0/16" (matches any interface starting with "br") - "iface:::1/64" (for IPv6) | +| transparentProxy.configMap.config.ebpf.enabled | bool | `false` | Enables eBPF support for handling traffic redirection in the transparent proxy | +| transparentProxy.configMap.config.ebpf.bpffsPath | string | `"/run/kuma/bpf"` | The path of the BPF filesystem | +| transparentProxy.configMap.config.ebpf.cgroupPath | string | `"/sys/fs/cgroup"` | The path of cgroup2 | +| transparentProxy.configMap.config.ebpf.instanceIPEnvVarName | string | `""` | The name of the environment variable containing the IP address of the instance (pod/vm) where transparent proxy will be installed | +| transparentProxy.configMap.config.ebpf.programsSourcePath | string | `"/tmp/kuma-ebpf"` | Path where compiled eBPF programs and other necessary files for eBPF mode can be found | +| transparentProxy.configMap.config.ebpf.tcAttachIface | string | `""` | The network interface for TC eBPF programs to bind to. If not provided, it will be automatically determined | +| transparentProxy.configMap.config.retry.maxRetries | int | `4` | The maximum number of retry attempts for operations | +| transparentProxy.configMap.config.retry.sleepBetweenRetries | string | `"2s"` | The time duration to wait between retry attempts | +| transparentProxy.configMap.config.iptablesExecutables.iptables | string | `""` | Custom path for the iptables executable (IPv4) | +| transparentProxy.configMap.config.iptablesExecutables.iptables-save | string | `""` | Custom path for the iptables-save executable (IPv4) | +| transparentProxy.configMap.config.iptablesExecutables.iptables-restore | string | `""` | Custom path for the iptables-restore executable (IPv4) | +| transparentProxy.configMap.config.iptablesExecutables.ip6tables | string | `""` | Custom path for the ip6tables executable (IPv6) | +| transparentProxy.configMap.config.iptablesExecutables.ip6tables-save | string | `""` | Custom path for the ip6tables-save executable (IPv6) | +| transparentProxy.configMap.config.iptablesExecutables.ip6tables-restore | string | `""` | Custom path for the ip6tables-restore executable (IPv6) | +| transparentProxy.configMap.config.log.enabled | bool | `false` | Enables logging of iptables rules for diagnostics and monitoring | +| transparentProxy.configMap.config.comments.disabled | bool | `false` | Disables comments in the generated iptables rules | +| transparentProxy.configMap.config.wait | int | `5` | Time in seconds to wait for acquiring the xtables lock before failing Value 0 means wait indefinitely | +| transparentProxy.configMap.config.waitInterval | int | `0` | Time interval between retries to acquire the xtables lock in seconds | +| transparentProxy.configMap.config.dropInvalidPackets | bool | `false` | Drops invalid packets to avoid connection resets in high-throughput scenarios | +| transparentProxy.configMap.config.storeFirewalld | bool | `false` | Enables firewalld support to store 
iptables rules | +| transparentProxy.configMap.config.verbose | bool | `false` | Enables verbose mode with longer argument/flag names and additional comments | +| experimental.ebpf.enabled | bool | `false` | If true, ebpf will be used instead of using iptables to install/configure transparent proxy | +| experimental.ebpf.instanceIPEnvVarName | string | `"INSTANCE_IP"` | Name of the environmental variable which will contain the IP address of a pod | +| experimental.ebpf.bpffsPath | string | `"/sys/fs/bpf"` | Path where BPF file system should be mounted | +| experimental.ebpf.cgroupPath | string | `"/sys/fs/cgroup"` | Host's cgroup2 path | +| experimental.ebpf.tcAttachIface | string | `""` | Name of the network interface which TC programs should be attached to, we'll try to automatically determine it if empty | +| experimental.ebpf.programsSourcePath | string | `"/tmp/kuma-ebpf"` | Path where compiled eBPF programs which will be installed can be found | +| experimental.sidecarContainers | bool | `false` | If true, enable native Kubernetes sidecars. This requires at least Kubernetes v1.29 | +| postgres.port | string | `"5432"` | Postgres port, password should be provided as a secret reference in "controlPlane.secrets" with the Env value "KUMA_STORE_POSTGRES_PASSWORD". Example: controlPlane: secrets: - Secret: postgres-postgresql Key: postgresql-password Env: KUMA_STORE_POSTGRES_PASSWORD | +| postgres.tls.mode | string | `"disable"` | Mode of TLS connection. Available values are: "disable", "verifyNone", "verifyCa", "verifyFull" | +| postgres.tls.disableSSLSNI | bool | `false` | Whether to disable SNI the postgres `sslsni` option. | +| postgres.tls.caSecretName | string | `nil` | Secret name that contains the ca.crt | +| postgres.tls.secretName | string | `nil` | Secret name that contains the client tls.crt, tls.key | + +## Custom Resource Definitions + +All Kuma CRDs are loaded via the [`crds`](crds) directory. For more detailed information on CRDs and Helm, +please refer to [the Helm documentation][helm-crd]. + +## Deleting + +As part of [Helm's limitations][helm-crd-limitations], CRDs will not be deleted when the `kuma` chart is deleted and +must be deleted manually. When a CRD is deleted Kubernetes deletes all resources of that kind as well, so this should +be done carefully. + +To do this with `kubectl` on *nix platforms, run: + +```shell +kubectl get crds | grep kuma.io | tr -s " " | cut -d " " -f1 | xargs kubectl delete crd + +# or with jq +kubectl get crds -o json | jq '.items[].metadata.name | select(.|test(".*kuma\\.io"))' | xargs kubectl delete crd +``` + +## Autoscaling + +In production, it is advisable to enable Control Plane autoscaling for High Availability. Autoscaling uses the +`HorizontalPodAutoscaler` resource to add redundancy and scale the CP pods based on CPU utilization, which requires +the [k8s metrics-server][kube-metrics-server] to be running on the cluster. + +## Development + +The charts are used internally in `kumactl install`, therefore the following rules apply when developing new chat features: + * all templates that start with `pre-` and `post-` are omitted when processing in `kumactl install` + +### Installing Metrics Server for Autoscaling + +If running on kind, or on a cluster with a similarly self-signed cert, the metrics server must be configured to allow +insecure kubelet TLS. The make task `kind/deploy/metrics-server` installs this patched version of the server. 
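
As a concrete illustration of the autoscaling discussion above, the following is a minimal values sketch that uses only keys from the values reference (`controlPlane.autoscaling.*`); the replica bounds and CPU target simply mirror the chart defaults and are placeholders rather than sizing advice, and the sketch assumes the metrics-server is already running in the cluster:

```yaml
# Illustrative values override -- keys are taken from the values reference above;
# the numbers mirror the chart defaults and are not sizing recommendations.
controlPlane:
  autoscaling:
    enabled: true                       # requires the Kubernetes metrics-server
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80  # used on clusters without autoscaling/v2
```

Apply it through your usual `helm upgrade --install ... -f <values-file>` invocation; note that, per the table above, `controlPlane.replicas` is ignored once autoscaling is enabled.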
+ +[kuma-url]: https://kuma.io/ +[kuma-logo]: https://kuma-public-assets.s3.amazonaws.com/kuma-logo-v2.png +[helm-crd]: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/ +[helm-crd-limitations]: https://helm.sh/docs/topics/charts/#limitations-on-crds +[kube-metrics-server]: https://github.com/kubernetes-sigs/metrics-server diff --git a/charts/kuma/kuma/2.9.0/README.md.gotmpl b/charts/kuma/kuma/2.9.0/README.md.gotmpl new file mode 100644 index 000000000..3b296a411 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/README.md.gotmpl @@ -0,0 +1,52 @@ +[![][kuma-logo]][kuma-url] + +{{ template "chart.description" . }} + +{{ template "chart.typeBadge" . }}{{ template "chart.versionBadge" . }}{{ template "chart.appVersionBadge" . }} + +{{ template "chart.homepageLine" . }} + +{{ template "chart.valuesSection" . }} + +## Custom Resource Definitions + +All Kuma CRDs are loaded via the [`crds`](crds) directory. For more detailed information on CRDs and Helm, +please refer to [the Helm documentation][helm-crd]. + +## Deleting + +As part of [Helm's limitations][helm-crd-limitations], CRDs will not be deleted when the `kuma` chart is deleted and +must be deleted manually. When a CRD is deleted Kubernetes deletes all resources of that kind as well, so this should +be done carefully. + +To do this with `kubectl` on *nix platforms, run: + +```shell +kubectl get crds | grep kuma.io | tr -s " " | cut -d " " -f1 | xargs kubectl delete crd + +# or with jq +kubectl get crds -o json | jq '.items[].metadata.name | select(.|test(".*kuma\\.io"))' | xargs kubectl delete crd +``` + +## Autoscaling + +In production, it is advisable to enable Control Plane autoscaling for High Availability. Autoscaling uses the +`HorizontalPodAutoscaler` resource to add redundancy and scale the CP pods based on CPU utilization, which requires +the [k8s metrics-server][kube-metrics-server] to be running on the cluster. + +## Development + +The charts are used internally in `kumactl install`, therefore the following rules apply when developing new chat features: + * all templates that start with `pre-` and `post-` are omitted when processing in `kumactl install` + +### Installing Metrics Server for Autoscaling + +If running on kind, or on a cluster with a similarly self-signed cert, the metrics server must be configured to allow +insecure kubelet TLS. The make task `kind/deploy/metrics-server` installs this patched version of the server. 
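
One further usage sketch: the `postgres.port` entry in the values reference embeds a flattened example of passing the Postgres password to the control plane through `controlPlane.secrets`. Written out as YAML it reads as follows; the secret name and key are the ones quoted in that entry and are assumptions about your actual Kubernetes Secret:

```yaml
# Illustrative only -- names come from the postgres.port description above and
# must match a Secret that actually exists in your cluster.
controlPlane:
  secrets:
    - Env: KUMA_STORE_POSTGRES_PASSWORD   # env var read by the control plane
      Secret: postgres-postgresql         # name of the Kubernetes Secret
      Key: postgresql-password            # key inside that Secret
```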
+ + +[kuma-url]: https://kuma.io/ +[kuma-logo]: https://kuma-public-assets.s3.amazonaws.com/kuma-logo-v2.png +[helm-crd]: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/ +[helm-crd-limitations]: https://helm.sh/docs/topics/charts/#limitations-on-crds +[kube-metrics-server]: https://github.com/kubernetes-sigs/metrics-server diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_circuitbreakers.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_circuitbreakers.yaml new file mode 100644 index 000000000..ea955f2ab --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_circuitbreakers.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: circuitbreakers.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: CircuitBreaker + listKind: CircuitBreakerList + plural: circuitbreakers + singular: circuitbreaker + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma CircuitBreaker resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_containerpatches.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_containerpatches.yaml new file mode 100644 index 000000000..9fc77a966 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_containerpatches.yaml @@ -0,0 +1,114 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: containerpatches.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ContainerPatch + listKind: ContainerPatchList + plural: containerpatches + singular: containerpatch + scope: Namespaced + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + description: ContainerPatch stores a list of patches to apply to init and + sidecar containers. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + type: string + metadata: + type: object + spec: + description: ContainerPatchSpec specifies the options available for a + ContainerPatch + properties: + initPatch: + description: InitPatch specifies jsonpatch to apply to an init container. + items: + description: JsonPatchBlock is one json patch operation block. + properties: + from: + description: From is a jsonpatch from string, used by move and + copy operations. + type: string + op: + description: Op is a jsonpatch operation string. + enum: + - add + - remove + - replace + - move + - copy + type: string + path: + description: Path is a jsonpatch path string. + type: string + value: + description: |- + Value must be a string representing a valid json object used + by replace and add operations. String has to be escaped with " to be valid a json object. + type: string + required: + - op + - path + type: object + type: array + sidecarPatch: + description: SidecarPatch specifies jsonpatch to apply to a sidecar + container. + items: + description: JsonPatchBlock is one json patch operation block. + properties: + from: + description: From is a jsonpatch from string, used by move and + copy operations. + type: string + op: + description: Op is a jsonpatch operation string. + enum: + - add + - remove + - replace + - move + - copy + type: string + path: + description: Path is a jsonpatch path string. + type: string + value: + description: |- + Value must be a string representing a valid json object used + by replace and add operations. String has to be escaped with " to be valid a json object. + type: string + required: + - op + - path + type: object + type: array + type: object + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_dataplaneinsights.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_dataplaneinsights.yaml new file mode 100644 index 000000000..23c4538ea --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_dataplaneinsights.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: dataplaneinsights.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: DataplaneInsight + listKind: DataplaneInsightList + plural: dataplaneinsights + singular: dataplaneinsight + scope: Namespaced + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + status: + description: Status is the status the Kuma resource. 
+ x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_dataplanes.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_dataplanes.yaml new file mode 100644 index 000000000..ec8f06342 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_dataplanes.yaml @@ -0,0 +1,70 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: dataplanes.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: Dataplane + listKind: DataplaneList + plural: dataplanes + singular: dataplane + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: Service tag of the first inbound + jsonPath: .spec.networking.inbound[0].tags['kuma\.io/service'] + name: kuma.io/service + type: string + - description: Service tag of the second inbound + jsonPath: .spec.networking.inbound[1].tags['kuma\.io/service'] + name: kuma.io/service + type: string + - description: Service tag of the third inbound + jsonPath: .spec.networking.inbound[2].tags['kuma\.io/service'] + name: kuma.io/service + priority: 1 + type: string + - description: Service tag of the fourth inbound + jsonPath: .spec.networking.inbound[3].tags['kuma\.io/service'] + name: kuma.io/service + priority: 1 + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma Dataplane resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_externalservices.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_externalservices.yaml new file mode 100644 index 000000000..be37a7b7f --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_externalservices.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: externalservices.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ExternalService + listKind: ExternalServiceList + plural: externalservices + singular: externalservice + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ExternalService resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_faultinjections.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_faultinjections.yaml new file mode 100644 index 000000000..6fb6366d5 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_faultinjections.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: faultinjections.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: FaultInjection + listKind: FaultInjectionList + plural: faultinjections + singular: faultinjection + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma FaultInjection resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_healthchecks.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_healthchecks.yaml new file mode 100644 index 000000000..9f2d075b5 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_healthchecks.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: healthchecks.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: HealthCheck + listKind: HealthCheckList + plural: healthchecks + singular: healthcheck + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma HealthCheck resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_hostnamegenerators.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_hostnamegenerators.yaml new file mode 100644 index 000000000..943421775 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_hostnamegenerators.yaml @@ -0,0 +1,72 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: hostnamegenerators.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: HostnameGenerator + listKind: HostnameGeneratorList + plural: hostnamegenerators + singular: hostnamegenerator + scope: Namespaced + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma HostnameGenerator resource. 
+ properties: + selector: + properties: + meshExternalService: + properties: + matchLabels: + additionalProperties: + type: string + type: object + type: object + meshMultiZoneService: + properties: + matchLabels: + additionalProperties: + type: string + type: object + type: object + meshService: + properties: + matchLabels: + additionalProperties: + type: string + type: object + type: object + type: object + template: + type: string + type: object + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshaccesslogs.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshaccesslogs.yaml new file mode 100644 index 000000000..16191c5ba --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshaccesslogs.yaml @@ -0,0 +1,557 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshaccesslogs.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshAccessLog + listKind: MeshAccessLogList + plural: meshaccesslogs + singular: meshaccesslog + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshAccessLog resource. + properties: + from: + description: From list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of clients referenced in + 'targetRef' + properties: + backends: + items: + properties: + file: + description: FileBackend defines configuration for + file based access logs + properties: + format: + description: |- + Format of access logs. Placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + properties: + json: + example: + - key: start_time + value: '%START_TIME%' + - key: bytes_received + value: '%BYTES_RECEIVED%' + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + omitEmptyValues: + default: false + type: boolean + plain: + example: '[%START_TIME%] %KUMA_MESH% %UPSTREAM_HOST%' + type: string + type: + enum: + - Plain + - Json + type: string + required: + - type + type: object + path: + description: Path to a file that logs will be + written to + example: /tmp/access.log + minLength: 1 + type: string + required: + - path + type: object + openTelemetry: + description: Defines an OpenTelemetry logging backend. 
+ properties: + attributes: + description: |- + Attributes can contain placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + example: + - key: mesh + value: '%KUMA_MESH%' + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + body: + description: |- + Body is a raw string or an OTLP any value as described at + https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#field-body + It can contain placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + example: + kvlistValue: + values: + - key: mesh + value: + stringValue: '%KUMA_MESH%' + x-kubernetes-preserve-unknown-fields: true + endpoint: + description: Endpoint of OpenTelemetry collector. + An empty port defaults to 4317. + example: otel-collector:4317 + minLength: 1 + type: string + required: + - endpoint + type: object + tcp: + description: TCPBackend defines a TCP logging backend. + properties: + address: + description: Address of the TCP logging backend + example: 127.0.0.1:5000 + minLength: 1 + type: string + format: + description: |- + Format of access logs. Placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + properties: + json: + example: + - key: start_time + value: '%START_TIME%' + - key: bytes_received + value: '%BYTES_RECEIVED%' + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + omitEmptyValues: + default: false + type: boolean + plain: + example: '[%START_TIME%] %KUMA_MESH% %UPSTREAM_HOST%' + type: string + type: + enum: + - Plain + - Json + type: string + required: + - type + type: object + required: + - address + type: object + type: + enum: + - Tcp + - File + - OpenTelemetry + type: string + required: + - type + type: object + type: array + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + clients. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. 
+ type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined in-place. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To list makes a match between the consumed services and + corresponding configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations referenced in + 'targetRef' + properties: + backends: + items: + properties: + file: + description: FileBackend defines configuration for + file based access logs + properties: + format: + description: |- + Format of access logs. Placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + properties: + json: + example: + - key: start_time + value: '%START_TIME%' + - key: bytes_received + value: '%BYTES_RECEIVED%' + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + omitEmptyValues: + default: false + type: boolean + plain: + example: '[%START_TIME%] %KUMA_MESH% %UPSTREAM_HOST%' + type: string + type: + enum: + - Plain + - Json + type: string + required: + - type + type: object + path: + description: Path to a file that logs will be + written to + example: /tmp/access.log + minLength: 1 + type: string + required: + - path + type: object + openTelemetry: + description: Defines an OpenTelemetry logging backend. 
+ properties: + attributes: + description: |- + Attributes can contain placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + example: + - key: mesh + value: '%KUMA_MESH%' + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + body: + description: |- + Body is a raw string or an OTLP any value as described at + https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#field-body + It can contain placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + example: + kvlistValue: + values: + - key: mesh + value: + stringValue: '%KUMA_MESH%' + x-kubernetes-preserve-unknown-fields: true + endpoint: + description: Endpoint of OpenTelemetry collector. + An empty port defaults to 4317. + example: otel-collector:4317 + minLength: 1 + type: string + required: + - endpoint + type: object + tcp: + description: TCPBackend defines a TCP logging backend. + properties: + address: + description: Address of the TCP logging backend + example: 127.0.0.1:5000 + minLength: 1 + type: string + format: + description: |- + Format of access logs. Placeholders available on + https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators + properties: + json: + example: + - key: start_time + value: '%START_TIME%' + - key: bytes_received + value: '%BYTES_RECEIVED%' + items: + properties: + key: + type: string + value: + type: string + type: object + type: array + omitEmptyValues: + default: false + type: boolean + plain: + example: '[%START_TIME%] %KUMA_MESH% %UPSTREAM_HOST%' + type: string + type: + enum: + - Plain + - Json + type: string + required: + - type + type: object + required: + - address + type: object + type: + enum: + - Tcp + - File + - OpenTelemetry + type: string + required: + - type + type: object + type: array + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. 
+ type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshcircuitbreakers.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshcircuitbreakers.yaml new file mode 100644 index 000000000..bea1fb597 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshcircuitbreakers.yaml @@ -0,0 +1,739 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshcircuitbreakers.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshCircuitBreaker + listKind: MeshCircuitBreakerList + plural: meshcircuitbreakers + singular: meshcircuitbreaker + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshCircuitBreaker + resource. + properties: + from: + description: From list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations + referenced in 'targetRef' + properties: + connectionLimits: + description: |- + ConnectionLimits contains configuration of each circuit breaking limit, + which when exceeded makes the circuit breaker to become open (no traffic + is allowed like no current is allowed in the circuits when physical + circuit breaker ir open) + properties: + maxConnectionPools: + description: |- + The maximum number of connection pools per cluster that are concurrently + supported at once. Set this for clusters which create a large number of + connection pools. + format: int32 + type: integer + maxConnections: + description: |- + The maximum number of connections allowed to be made to the upstream + cluster. + format: int32 + type: integer + maxPendingRequests: + description: |- + The maximum number of pending requests that are allowed to the upstream + cluster. This limit is applied as a connection limit for non-HTTP + traffic. + format: int32 + type: integer + maxRequests: + description: |- + The maximum number of parallel requests that are allowed to be made + to the upstream cluster. This limit does not apply to non-HTTP traffic. 
+ format: int32 + type: integer + maxRetries: + description: |- + The maximum number of parallel retries that will be allowed to + the upstream cluster. + format: int32 + type: integer + type: object + outlierDetection: + description: |- + OutlierDetection contains the configuration of the process of dynamically + determining whether some number of hosts in an upstream cluster are + performing unlike the others and removing them from the healthy load + balancing set. Performance might be along different axes such as + consecutive failures, temporal success rate, temporal latency, etc. + Outlier detection is a form of passive health checking. + properties: + baseEjectionTime: + description: |- + The base time that a host is ejected for. The real time is equal to + the base time multiplied by the number of times the host has been + ejected. + type: string + detectors: + description: Contains configuration for supported outlier + detectors + properties: + failurePercentage: + description: |- + Failure Percentage based outlier detection functions similarly to success + rate detection, in that it relies on success rate data from each host in + a cluster. However, rather than compare those values to the mean success + rate of the cluster as a whole, they are compared to a flat + user-configured threshold. This threshold is configured via the + outlierDetection.failurePercentageThreshold field. + The other configuration fields for failure percentage based detection are + similar to the fields for success rate detection. As with success rate + detection, detection will not be performed for a host if its request + volume over the aggregation interval is less than the + outlierDetection.detectors.failurePercentage.requestVolume value. + Detection also will not be performed for a cluster if the number of hosts + with the minimum required request volume in an interval is less than the + outlierDetection.detectors.failurePercentage.minimumHosts value. + properties: + minimumHosts: + description: |- + The minimum number of hosts in a cluster in order to perform failure + percentage-based ejection. If the total number of hosts in the cluster is + less than this value, failure percentage-based ejection will not be + performed. + format: int32 + type: integer + requestVolume: + description: |- + The minimum number of total requests that must be collected in one + interval (as defined by the interval duration above) to perform failure + percentage-based ejection for this host. If the volume is lower than this + setting, failure percentage-based ejection will not be performed for this + host. + format: int32 + type: integer + threshold: + description: |- + The failure percentage to use when determining failure percentage-based + outlier detection. If the failure percentage of a given host is greater + than or equal to this value, it will be ejected. + format: int32 + type: integer + type: object + gatewayFailures: + description: |- + In the default mode (outlierDetection.splitExternalLocalOriginErrors is + false) this detection type takes into account a subset of 5xx errors, + called "gateway errors" (502, 503 or 504 status code) and local origin + failures, such as timeout, TCP reset etc. + In split mode (outlierDetection.splitExternalLocalOriginErrors is true) + this detection type takes into account a subset of 5xx errors, called + "gateway errors" (502, 503 or 504 status code) and is supported only by + the http router. 
+ properties: + consecutive: + description: |- + The number of consecutive gateway failures (502, 503, 504 status codes) + before a consecutive gateway failure ejection occurs. + format: int32 + type: integer + type: object + localOriginFailures: + description: |- + This detection type is enabled only when + outlierDetection.splitExternalLocalOriginErrors is true and takes into + account only locally originated errors (timeout, reset, etc). + If Envoy repeatedly cannot connect to an upstream host or communication + with the upstream host is repeatedly interrupted, it will be ejected. + Various locally originated problems are detected: timeout, TCP reset, + ICMP errors, etc. This detection type is supported by http router and + tcp proxy. + properties: + consecutive: + description: |- + The number of consecutive locally originated failures before ejection + occurs. Parameter takes effect only when splitExternalAndLocalErrors + is set to true. + format: int32 + type: integer + type: object + successRate: + description: |- + Success Rate based outlier detection aggregates success rate data from + every host in a cluster. Then at given intervals ejects hosts based on + statistical outlier detection. Success Rate outlier detection will not be + calculated for a host if its request volume over the aggregation interval + is less than the outlierDetection.detectors.successRate.requestVolume + value. + Moreover, detection will not be performed for a cluster if the number of + hosts with the minimum required request volume in an interval is less + than the outlierDetection.detectors.successRate.minimumHosts value. + In the default configuration mode + (outlierDetection.splitExternalLocalOriginErrors is false) this detection + type takes into account all types of errors: locally and externally + originated. + In split mode (outlierDetection.splitExternalLocalOriginErrors is true), + locally originated errors and externally originated (transaction) errors + are counted and treated separately. + properties: + minimumHosts: + description: |- + The number of hosts in a cluster that must have enough request volume to + detect success rate outliers. If the number of hosts is less than this + setting, outlier detection via success rate statistics is not performed + for any host in the cluster. + format: int32 + type: integer + requestVolume: + description: |- + The minimum number of total requests that must be collected in one + interval (as defined by the interval duration configured in + outlierDetection section) to include this host in success rate based + outlier detection. If the volume is lower than this setting, outlier + detection via success rate statistics is not performed for that host. + format: int32 + type: integer + standardDeviationFactor: + anyOf: + - type: integer + - type: string + description: |- + This factor is used to determine the ejection threshold for success rate + outlier ejection. The ejection threshold is the difference between + the mean success rate, and the product of this factor and the standard + deviation of the mean success rate: mean - (standard_deviation * + success_rate_standard_deviation_factor). + Either int or decimal represented as string. + x-kubernetes-int-or-string: true + type: object + totalFailures: + description: |- + In the default mode (outlierDetection.splitExternalAndLocalErrors is + false) this detection type takes into account all generated errors: + locally originated and externally originated (transaction) errors. 
+ In split mode (outlierDetection.splitExternalLocalOriginErrors is true) + this detection type takes into account only externally originated + (transaction) errors, ignoring locally originated errors. + If an upstream host is an HTTP-server, only 5xx types of error are taken + into account (see Consecutive Gateway Failure for exceptions). + Properly formatted responses, even when they carry an operational error + (like index not found, access denied) are not taken into account. + properties: + consecutive: + description: |- + The number of consecutive server-side error responses (for HTTP traffic, + 5xx responses; for TCP traffic, connection failures; for Redis, failure + to respond PONG; etc.) before a consecutive total failure ejection + occurs. + format: int32 + type: integer + type: object + type: object + disabled: + description: When set to true, outlierDetection configuration + won't take any effect + type: boolean + interval: + description: |- + The time interval between ejection analysis sweeps. This can result in + both new ejections and hosts being returned to service. + type: string + maxEjectionPercent: + description: |- + The maximum % of an upstream cluster that can be ejected due to outlier + detection. Defaults to 10% but will eject at least one host regardless of + the value. + format: int32 + type: integer + splitExternalAndLocalErrors: + description: |- + Determines whether to distinguish local origin failures from external + errors. If set to true the following configuration parameters are taken + into account: detectors.localOriginFailures.consecutive + type: boolean + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. 
+ The resource could be either a real store object or virtual resource + defined in place. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: |- + To list makes a match between the consumed services and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations + referenced in 'targetRef' + properties: + connectionLimits: + description: |- + ConnectionLimits contains configuration of each circuit breaking limit, + which when exceeded makes the circuit breaker to become open (no traffic + is allowed like no current is allowed in the circuits when physical + circuit breaker ir open) + properties: + maxConnectionPools: + description: |- + The maximum number of connection pools per cluster that are concurrently + supported at once. Set this for clusters which create a large number of + connection pools. + format: int32 + type: integer + maxConnections: + description: |- + The maximum number of connections allowed to be made to the upstream + cluster. + format: int32 + type: integer + maxPendingRequests: + description: |- + The maximum number of pending requests that are allowed to the upstream + cluster. This limit is applied as a connection limit for non-HTTP + traffic. + format: int32 + type: integer + maxRequests: + description: |- + The maximum number of parallel requests that are allowed to be made + to the upstream cluster. This limit does not apply to non-HTTP traffic. + format: int32 + type: integer + maxRetries: + description: |- + The maximum number of parallel retries that will be allowed to + the upstream cluster. + format: int32 + type: integer + type: object + outlierDetection: + description: |- + OutlierDetection contains the configuration of the process of dynamically + determining whether some number of hosts in an upstream cluster are + performing unlike the others and removing them from the healthy load + balancing set. 
Performance might be along different axes such as + consecutive failures, temporal success rate, temporal latency, etc. + Outlier detection is a form of passive health checking. + properties: + baseEjectionTime: + description: |- + The base time that a host is ejected for. The real time is equal to + the base time multiplied by the number of times the host has been + ejected. + type: string + detectors: + description: Contains configuration for supported outlier + detectors + properties: + failurePercentage: + description: |- + Failure Percentage based outlier detection functions similarly to success + rate detection, in that it relies on success rate data from each host in + a cluster. However, rather than compare those values to the mean success + rate of the cluster as a whole, they are compared to a flat + user-configured threshold. This threshold is configured via the + outlierDetection.failurePercentageThreshold field. + The other configuration fields for failure percentage based detection are + similar to the fields for success rate detection. As with success rate + detection, detection will not be performed for a host if its request + volume over the aggregation interval is less than the + outlierDetection.detectors.failurePercentage.requestVolume value. + Detection also will not be performed for a cluster if the number of hosts + with the minimum required request volume in an interval is less than the + outlierDetection.detectors.failurePercentage.minimumHosts value. + properties: + minimumHosts: + description: |- + The minimum number of hosts in a cluster in order to perform failure + percentage-based ejection. If the total number of hosts in the cluster is + less than this value, failure percentage-based ejection will not be + performed. + format: int32 + type: integer + requestVolume: + description: |- + The minimum number of total requests that must be collected in one + interval (as defined by the interval duration above) to perform failure + percentage-based ejection for this host. If the volume is lower than this + setting, failure percentage-based ejection will not be performed for this + host. + format: int32 + type: integer + threshold: + description: |- + The failure percentage to use when determining failure percentage-based + outlier detection. If the failure percentage of a given host is greater + than or equal to this value, it will be ejected. + format: int32 + type: integer + type: object + gatewayFailures: + description: |- + In the default mode (outlierDetection.splitExternalLocalOriginErrors is + false) this detection type takes into account a subset of 5xx errors, + called "gateway errors" (502, 503 or 504 status code) and local origin + failures, such as timeout, TCP reset etc. + In split mode (outlierDetection.splitExternalLocalOriginErrors is true) + this detection type takes into account a subset of 5xx errors, called + "gateway errors" (502, 503 or 504 status code) and is supported only by + the http router. + properties: + consecutive: + description: |- + The number of consecutive gateway failures (502, 503, 504 status codes) + before a consecutive gateway failure ejection occurs. + format: int32 + type: integer + type: object + localOriginFailures: + description: |- + This detection type is enabled only when + outlierDetection.splitExternalLocalOriginErrors is true and takes into + account only locally originated errors (timeout, reset, etc). 
+ If Envoy repeatedly cannot connect to an upstream host or communication + with the upstream host is repeatedly interrupted, it will be ejected. + Various locally originated problems are detected: timeout, TCP reset, + ICMP errors, etc. This detection type is supported by http router and + tcp proxy. + properties: + consecutive: + description: |- + The number of consecutive locally originated failures before ejection + occurs. Parameter takes effect only when splitExternalAndLocalErrors + is set to true. + format: int32 + type: integer + type: object + successRate: + description: |- + Success Rate based outlier detection aggregates success rate data from + every host in a cluster. Then at given intervals ejects hosts based on + statistical outlier detection. Success Rate outlier detection will not be + calculated for a host if its request volume over the aggregation interval + is less than the outlierDetection.detectors.successRate.requestVolume + value. + Moreover, detection will not be performed for a cluster if the number of + hosts with the minimum required request volume in an interval is less + than the outlierDetection.detectors.successRate.minimumHosts value. + In the default configuration mode + (outlierDetection.splitExternalLocalOriginErrors is false) this detection + type takes into account all types of errors: locally and externally + originated. + In split mode (outlierDetection.splitExternalLocalOriginErrors is true), + locally originated errors and externally originated (transaction) errors + are counted and treated separately. + properties: + minimumHosts: + description: |- + The number of hosts in a cluster that must have enough request volume to + detect success rate outliers. If the number of hosts is less than this + setting, outlier detection via success rate statistics is not performed + for any host in the cluster. + format: int32 + type: integer + requestVolume: + description: |- + The minimum number of total requests that must be collected in one + interval (as defined by the interval duration configured in + outlierDetection section) to include this host in success rate based + outlier detection. If the volume is lower than this setting, outlier + detection via success rate statistics is not performed for that host. + format: int32 + type: integer + standardDeviationFactor: + anyOf: + - type: integer + - type: string + description: |- + This factor is used to determine the ejection threshold for success rate + outlier ejection. The ejection threshold is the difference between + the mean success rate, and the product of this factor and the standard + deviation of the mean success rate: mean - (standard_deviation * + success_rate_standard_deviation_factor). + Either int or decimal represented as string. + x-kubernetes-int-or-string: true + type: object + totalFailures: + description: |- + In the default mode (outlierDetection.splitExternalAndLocalErrors is + false) this detection type takes into account all generated errors: + locally originated and externally originated (transaction) errors. + In split mode (outlierDetection.splitExternalLocalOriginErrors is true) + this detection type takes into account only externally originated + (transaction) errors, ignoring locally originated errors. + If an upstream host is an HTTP-server, only 5xx types of error are taken + into account (see Consecutive Gateway Failure for exceptions). 
+ Properly formatted responses, even when they carry an operational error + (like index not found, access denied) are not taken into account. + properties: + consecutive: + description: |- + The number of consecutive server-side error responses (for HTTP traffic, + 5xx responses; for TCP traffic, connection failures; for Redis, failure + to respond PONG; etc.) before a consecutive total failure ejection + occurs. + format: int32 + type: integer + type: object + type: object + disabled: + description: When set to true, outlierDetection configuration + won't take any effect + type: boolean + interval: + description: |- + The time interval between ejection analysis sweeps. This can result in + both new ejections and hosts being returned to service. + type: string + maxEjectionPercent: + description: |- + The maximum % of an upstream cluster that can be ejected due to outlier + detection. Defaults to 10% but will eject at least one host regardless of + the value. + format: int32 + type: integer + splitExternalAndLocalErrors: + description: |- + Determines whether to distinguish local origin failures from external + errors. If set to true the following configuration parameters are taken + into account: detectors.localOriginFailures.consecutive + type: boolean + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. 
Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshes.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshes.yaml new file mode 100644 index 000000000..a9fec649c --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshes.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshes.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: Mesh + listKind: MeshList + plural: meshes + singular: mesh + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma Mesh resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshexternalservices.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshexternalservices.yaml new file mode 100644 index 000000000..12f87ab5a --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshexternalservices.yaml @@ -0,0 +1,333 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshexternalservices.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshExternalService + listKind: MeshExternalServiceList + plural: meshexternalservices + singular: meshexternalservice + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.addresses[0].hostname + name: Hostname + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshExternalService + resource. + properties: + endpoints: + description: Endpoints defines a list of destinations to send traffic + to. + items: + properties: + address: + description: Address defines an address to which a user want + to send a request. Is possible to provide `domain`, `ip`. + example: example.com + minLength: 1 + type: string + port: + description: Port of the endpoint + maximum: 65535 + minimum: 1 + type: integer + required: + - address + - port + type: object + type: array + extension: + description: Extension struct for a plugin configuration, in the presence + of an extension `endpoints` and `tls` are not required anymore - + it's up to the extension to validate them independently. + properties: + config: + description: Config freeform configuration for the extension. + x-kubernetes-preserve-unknown-fields: true + type: + description: Type of the extension. + type: string + required: + - config + - type + type: object + match: + description: Match defines traffic that should be routed through the + sidecar. + properties: + port: + description: Port defines a port to which a user does request. + maximum: 65535 + minimum: 1 + type: integer + protocol: + default: tcp + description: 'Protocol defines a protocol of the communication. + Possible values: `tcp`, `grpc`, `http`, `http2`.' + enum: + - tcp + - grpc + - http + - http2 + type: string + type: + default: HostnameGenerator + description: Type of the match, only `HostnameGenerator` is available + at the moment. + enum: + - HostnameGenerator + type: string + required: + - port + type: object + tls: + description: Tls provides a TLS configuration when proxy is resposible + for a TLS origination + properties: + allowRenegotiation: + default: false + description: |- + AllowRenegotiation defines if TLS sessions will allow renegotiation. + Setting this to true is not recommended for security reasons. + type: boolean + enabled: + default: false + description: Enabled defines if proxy should originate TLS. + type: boolean + verification: + description: Verification section for providing TLS verification + details. + properties: + caCert: + description: CaCert defines a certificate of CA. + properties: + inline: + description: Data source is inline bytes. + format: byte + type: string + inlineString: + description: Data source is inline string` + type: string + secret: + description: Data source is a secret with given Secret + key. + type: string + type: object + clientCert: + description: ClientCert defines a certificate of a client. + properties: + inline: + description: Data source is inline bytes. + format: byte + type: string + inlineString: + description: Data source is inline string` + type: string + secret: + description: Data source is a secret with given Secret + key. + type: string + type: object + clientKey: + description: ClientKey defines a client private key. + properties: + inline: + description: Data source is inline bytes. + format: byte + type: string + inlineString: + description: Data source is inline string` + type: string + secret: + description: Data source is a secret with given Secret + key. + type: string + type: object + mode: + default: Secured + description: Mode defines if proxy should skip verification, + one of `SkipSAN`, `SkipCA`, `Secured`, `SkipAll`. Default + `Secured`. 
+ enum: + - SkipSAN + - SkipCA + - Secured + - SkipAll + type: string + serverName: + description: ServerName overrides the default Server Name + Indicator set by Kuma. + type: string + subjectAltNames: + description: SubjectAltNames list of names to verify in the + certificate. + items: + properties: + type: + default: Exact + description: 'Type specifies matching type, one of `Exact`, + `Prefix`. Default: `Exact`' + enum: + - Exact + - Prefix + type: string + value: + description: Value to match. + type: string + required: + - value + type: object + type: array + type: object + version: + description: Version section for providing version specification. + properties: + max: + default: TLSAuto + description: Max defines maximum supported version. One of + `TLSAuto`, `TLS10`, `TLS11`, `TLS12`, `TLS13`. + enum: + - TLSAuto + - TLS10 + - TLS11 + - TLS12 + - TLS13 + type: string + min: + default: TLSAuto + description: Min defines minimum supported version. One of + `TLSAuto`, `TLS10`, `TLS11`, `TLS12`, `TLS13`. + enum: + - TLSAuto + - TLS10 + - TLS11 + - TLS12 + - TLS13 + type: string + type: object + type: object + required: + - match + type: object + status: + description: Status is the current status of the Kuma MeshExternalService + resource. + properties: + addresses: + description: Addresses section for generated domains + items: + properties: + hostname: + type: string + hostnameGeneratorRef: + properties: + coreName: + type: string + required: + - coreName + type: object + origin: + type: string + type: object + type: array + hostnameGenerators: + items: + properties: + conditions: + description: Conditions is an array of hostname generator conditions. + items: + properties: + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, + Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + hostnameGeneratorRef: + properties: + coreName: + type: string + required: + - coreName + type: object + required: + - hostnameGeneratorRef + type: object + type: array + vip: + description: Vip section for allocated IP + properties: + ip: + description: Value allocated IP for a provided domain with `HostnameGenerator` + type in a match section. 
+ type: string + type: object + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshfaultinjections.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshfaultinjections.yaml new file mode 100644 index 000000000..538675b6e --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshfaultinjections.yaml @@ -0,0 +1,420 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshfaultinjections.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshFaultInjection + listKind: MeshFaultInjectionList + plural: meshfaultinjections + singular: meshfaultinjection + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshFaultInjection + resource. + properties: + from: + description: From list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations referenced in + 'targetRef' + properties: + http: + description: Http allows to define list of Http faults between + dataplanes. + items: + description: FaultInjection defines the configuration + of faults between dataplanes. + properties: + abort: + description: |- + Abort defines a configuration of not delivering requests to destination + service and replacing the responses from destination dataplane by + predefined status code + properties: + httpStatus: + description: HTTP status code which will be returned + to source side + format: int32 + type: integer + percentage: + anyOf: + - type: integer + - type: string + description: |- + Percentage of requests on which abort will be injected, has to be + either int or decimal represented as string. + x-kubernetes-int-or-string: true + required: + - httpStatus + - percentage + type: object + delay: + description: Delay defines configuration of delaying + a response from a destination + properties: + percentage: + anyOf: + - type: integer + - type: string + description: |- + Percentage of requests on which delay will be injected, has to be + either int or decimal represented as string. 
+ x-kubernetes-int-or-string: true + value: + description: The duration during which the response + will be delayed + type: string + required: + - percentage + - value + type: object + responseBandwidth: + description: |- + ResponseBandwidth defines a configuration to limit the speed of + responding to the requests + properties: + limit: + description: |- + Limit is represented by value measure in Gbps, Mbps, kbps, e.g. + 10kbps + type: string + percentage: + anyOf: + - type: integer + - type: string + description: |- + Percentage of requests on which response bandwidth limit will be + either int or decimal represented as string. + x-kubernetes-int-or-string: true + required: + - limit + - percentage + type: object + type: object + type: array + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. 
If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations referenced in + 'targetRef' + properties: + http: + description: Http allows to define list of Http faults between + dataplanes. + items: + description: FaultInjection defines the configuration + of faults between dataplanes. + properties: + abort: + description: |- + Abort defines a configuration of not delivering requests to destination + service and replacing the responses from destination dataplane by + predefined status code + properties: + httpStatus: + description: HTTP status code which will be returned + to source side + format: int32 + type: integer + percentage: + anyOf: + - type: integer + - type: string + description: |- + Percentage of requests on which abort will be injected, has to be + either int or decimal represented as string. + x-kubernetes-int-or-string: true + required: + - httpStatus + - percentage + type: object + delay: + description: Delay defines configuration of delaying + a response from a destination + properties: + percentage: + anyOf: + - type: integer + - type: string + description: |- + Percentage of requests on which delay will be injected, has to be + either int or decimal represented as string. + x-kubernetes-int-or-string: true + value: + description: The duration during which the response + will be delayed + type: string + required: + - percentage + - value + type: object + responseBandwidth: + description: |- + ResponseBandwidth defines a configuration to limit the speed of + responding to the requests + properties: + limit: + description: |- + Limit is represented by value measure in Gbps, Mbps, kbps, e.g. + 10kbps + type: string + percentage: + anyOf: + - type: integer + - type: string + description: |- + Percentage of requests on which response bandwidth limit will be + either int or decimal represented as string. + x-kubernetes-int-or-string: true + required: + - limit + - percentage + type: object + type: object + type: array + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. 
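
For illustration only, a minimal `MeshFaultInjection` resource that conforms to the schema above might look like the sketch below; the policy name, namespace, tags, and fault values are hypothetical placeholders, not anything shipped with this chart.

```yaml
# Hypothetical MeshFaultInjection conforming to the CRD above: injects a 503
# abort for 2.5% of requests and a 5s delay for 5% of requests reaching
# data planes tagged app=backend, from any client in the mesh.
apiVersion: kuma.io/v1alpha1
kind: MeshFaultInjection
metadata:
  name: demo-fault-injection     # placeholder name
  namespace: kuma-system         # assumes policies are applied in the system namespace
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshSubset
    tags:
      app: backend               # tags may only be used with MeshSubset/MeshServiceSubset
  from:
    - targetRef:
        kind: Mesh
      default:
        http:
          - abort:
              httpStatus: 503
              percentage: "2.5"  # int, or decimal represented as a string
          - delay:
              value: 5s
              percentage: 5
```

Each entry under `http` may combine `abort`, `delay`, and `responseBandwidth`, and every `percentage` field accepts either an integer or a decimal given as a string.
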
+ type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgatewayinstances.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgatewayinstances.yaml new file mode 100644 index 000000000..f68545cf0 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgatewayinstances.yaml @@ -0,0 +1,354 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshgatewayinstances.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshGatewayInstance + listKind: MeshGatewayInstanceList + plural: meshgatewayinstances + singular: meshgatewayinstance + scope: Namespaced + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + description: |- + MeshGatewayInstance represents a managed instance of a dataplane proxy for a Kuma + Gateway. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: MeshGatewayInstanceSpec specifies the options available for + a GatewayDataplane. + properties: + podTemplate: + description: PodTemplate configures the Pod owned by this config. + properties: + metadata: + description: Metadata holds metadata configuration for a Service. + properties: + annotations: + additionalProperties: + type: string + description: Annotations holds annotations to be set on an + object. + type: object + labels: + additionalProperties: + type: string + description: Labels holds labels to be set on an objects. + type: object + type: object + spec: + description: Spec holds some customizable fields of a Pod. 
+ properties: + container: + description: Container corresponds to PodSpec.Container + properties: + securityContext: + description: ContainerSecurityContext corresponds to PodSpec.Container.SecurityContext + properties: + readOnlyRootFilesystem: + description: ReadOnlyRootFilesystem corresponds to + PodSpec.Container.SecurityContext.ReadOnlyRootFilesystem + type: boolean + type: object + type: object + securityContext: + description: PodSecurityContext corresponds to PodSpec.SecurityContext + properties: + fsGroup: + description: FSGroup corresponds to PodSpec.SecurityContext.FSGroup + format: int64 + type: integer + type: object + serviceAccountName: + description: ServiceAccountName corresponds to PodSpec.ServiceAccountName. + type: string + type: object + type: object + replicas: + default: 1 + description: |- + Replicas is the number of dataplane proxy replicas to create. For + now this is a fixed number, but in the future it could be + automatically scaled based on metrics. + format: int32 + minimum: 1 + type: integer + resources: + description: |- + Resources specifies the compute resources for the proxy container. + The default can be set in the control plane config. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + serviceTemplate: + description: ServiceTemplate configures the Service owned by this + config. + properties: + metadata: + description: Metadata holds metadata configuration for a Service. + properties: + annotations: + additionalProperties: + type: string + description: Annotations holds annotations to be set on an + object. 
+ type: object + labels: + additionalProperties: + type: string + description: Labels holds labels to be set on an objects. + type: object + type: object + spec: + description: Spec holds some customizable fields of a Service. + properties: + loadBalancerIP: + description: LoadBalancerIP corresponds to ServiceSpec.LoadBalancerIP. + type: string + type: object + type: object + serviceType: + default: LoadBalancer + description: |- + ServiceType specifies the type of managed Service that will be + created to expose the dataplane proxies to traffic from outside + the cluster. The ports to expose will be taken from the matching Gateway + resource. If there is no matching Gateway, the managed Service will + be deleted. + enum: + - LoadBalancer + - ClusterIP + - NodePort + type: string + tags: + additionalProperties: + type: string + description: |- + Tags specifies the Kuma tags that are propagated to the managed + dataplane proxies. These tags should not include `kuma.io/service` tag + since is auto-generated, and should match exactly one Gateway + resource. + type: object + type: object + status: + description: |- + MeshGatewayInstanceStatus holds information about the status of the gateway + instance. + properties: + conditions: + description: Conditions is an array of gateway instance conditions. + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + loadBalancer: + description: |- + LoadBalancer contains the current status of the load-balancer, + if one is present. + properties: + ingress: + description: |- + Ingress is a list containing ingress points for the load-balancer. 
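
As an illustration of the `MeshGatewayInstance` spec fields described above, a minimal resource might look like this sketch; the name, namespace, and resource figures are hypothetical.

```yaml
# Hypothetical MeshGatewayInstance conforming to the CRD above: two dataplane
# proxy replicas exposed through a LoadBalancer Service. The exposed ports come
# from the matching Gateway resource, so this object is expected to be paired
# with a MeshGateway whose selectors match it.
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayInstance
metadata:
  name: edge-gateway       # placeholder name
  namespace: default
spec:
  replicas: 2
  serviceType: LoadBalancer
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      memory: 512Mi
```
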
+ Traffic intended for the service should be sent to these ingress points. + items: + description: |- + LoadBalancerIngress represents the status of a load-balancer ingress point: + traffic intended for the service should be sent to an ingress point. + properties: + hostname: + description: |- + Hostname is set for load-balancer ingress points that are DNS based + (typically AWS load-balancers) + type: string + ip: + description: |- + IP is set for load-balancer ingress points that are IP based + (typically GCE or OpenStack load-balancers) + type: string + ipMode: + description: |- + IPMode specifies how the load-balancer IP behaves, and may only be specified when the ip field is specified. + Setting this to "VIP" indicates that traffic is delivered to the node with + the destination set to the load-balancer's IP and port. + Setting this to "Proxy" indicates that traffic is delivered to the node or pod with + the destination set to the node's IP and node port or the pod's IP and port. + Service implementations may use this information to adjust traffic routing. + type: string + ports: + description: |- + Ports is a list of records of service ports + If used, every port defined in the service should have an entry in it + items: + properties: + error: + description: |- + Error is to record the problem with the service port + The format of the error shall comply with the following rules: + - built-in error values shall be specified in this file and those shall use + CamelCase names + - cloud provider specific error values must have names that comply with the + format foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + port: + description: Port is the port number of the service + port of which status is recorded here + format: int32 + type: integer + protocol: + description: |- + Protocol is the protocol of the service port of which status is recorded here + The supported values are: "TCP", "UDP", "SCTP" + type: string + required: + - error + - port + - protocol + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgatewayroutes.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgatewayroutes.yaml new file mode 100644 index 000000000..ef006e9cb --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgatewayroutes.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshgatewayroutes.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshGatewayRoute + listKind: MeshGatewayRouteList + plural: meshgatewayroutes + singular: meshgatewayroute + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. 
+ Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshGatewayRoute resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgateways.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgateways.yaml new file mode 100644 index 000000000..20ff66677 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshgateways.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshgateways.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshGateway + listKind: MeshGatewayList + plural: meshgateways + singular: meshgateway + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshGateway resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshhealthchecks.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshhealthchecks.yaml new file mode 100644 index 000000000..d1a3a49f9 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshhealthchecks.yaml @@ -0,0 +1,382 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshhealthchecks.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshHealthCheck + listKind: MeshHealthCheckList + plural: meshhealthchecks + singular: meshhealthcheck + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshHealthCheck resource. + properties: + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To list makes a match between the consumed services and + corresponding configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations referenced in + 'targetRef' + properties: + alwaysLogHealthCheckFailures: + description: |- + If set to true, health check failure events will always be logged. If set + to false, only the initial health check failure event will be logged. The + default value is false. + type: boolean + eventLogPath: + description: |- + Specifies the path to the file where Envoy can log health check events. + If empty, no event log will be written. + type: string + failTrafficOnPanic: + description: |- + If set to true, Envoy will not consider any hosts when the cluster is in + 'panic mode'. Instead, the cluster will fail all requests as if all hosts + are unhealthy. This can help avoid potentially overwhelming a failing + service. + type: boolean + grpc: + description: |- + GrpcHealthCheck defines gRPC configuration which will instruct the service + the health check will be made for is a gRPC service. 
+ properties: + authority: + description: |- + The value of the :authority header in the gRPC health check request, + by default name of the cluster this health check is associated with + type: string + disabled: + description: If true the GrpcHealthCheck is disabled + type: boolean + serviceName: + description: Service name parameter which will be sent + to gRPC service + type: string + type: object + healthyPanicThreshold: + anyOf: + - type: integer + - type: string + description: |- + Allows to configure panic threshold for Envoy cluster. If not specified, + the default is 50%. To disable panic mode, set to 0%. + Either int or decimal represented as string. + x-kubernetes-int-or-string: true + healthyThreshold: + default: 1 + description: Number of consecutive healthy checks before + considering a host healthy. + format: int32 + type: integer + http: + description: |- + HttpHealthCheck defines HTTP configuration which will instruct the service + the health check will be made for is an HTTP service. + properties: + disabled: + description: If true the HttpHealthCheck is disabled + type: boolean + expectedStatuses: + description: List of HTTP response statuses which are + considered healthy + items: + format: int32 + type: integer + type: array + path: + default: / + description: |- + The HTTP path which will be requested during the health check + (ie. /health) + type: string + requestHeadersToAdd: + description: |- + The list of HTTP headers which should be added to each health check + request + properties: + add: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + set: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + type: object + type: object + initialJitter: + description: |- + If specified, Envoy will start health checking after a random time in + ms between 0 and initialJitter. This only applies to the first health + check. + type: string + interval: + default: 1m + description: Interval between consecutive health checks. + type: string + intervalJitter: + description: |- + If specified, during every interval Envoy will add IntervalJitter to the + wait time. + type: string + intervalJitterPercent: + description: |- + If specified, during every interval Envoy will add IntervalJitter * + IntervalJitterPercent / 100 to the wait time. If IntervalJitter and + IntervalJitterPercent are both set, both of them will be used to + increase the wait time. + format: int32 + type: integer + noTrafficInterval: + description: |- + The "no traffic interval" is a special health check interval that is used + when a cluster has never had traffic routed to it. This lower interval + allows cluster information to be kept up to date, without sending a + potentially large amount of active health checking traffic for no reason. + Once a cluster has been used for traffic routing, Envoy will shift back + to using the standard health check interval that is defined. Note that + this interval takes precedence over any other. The default value for "no + traffic interval" is 60 seconds. 
+ type: string + reuseConnection: + description: Reuse health check connection between health + checks. Default is true. + type: boolean + tcp: + description: |- + TcpHealthCheck defines configuration for specifying bytes to send and + expected response during the health check + properties: + disabled: + description: If true the TcpHealthCheck is disabled + type: boolean + receive: + description: |- + List of Base64 encoded blocks of strings expected as a response. When checking the response, + "fuzzy" matching is performed such that each block must be found, and + in the order specified, but not necessarily contiguous. + If not provided or empty, checks will be performed as "connect only" and be marked as successful when TCP connection is successfully established. + items: + type: string + type: array + send: + description: Base64 encoded content of the message which + will be sent during the health check to the target + type: string + type: object + timeout: + default: 15s + description: Maximum time to wait for a health check response. + type: string + unhealthyThreshold: + default: 5 + description: |- + Number of consecutive unhealthy checks before considering a host + unhealthy. + format: int32 + type: integer + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. 
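
For illustration, a `MeshHealthCheck` policy using the HTTP checker fields described above might look like the following sketch; the service name, intervals, and thresholds are hypothetical.

```yaml
# Hypothetical MeshHealthCheck conforming to the CRD above: clients in the mesh
# actively health-check the "backend" service over HTTP every 10 seconds and
# consider a host unhealthy after 3 consecutive failures.
apiVersion: kuma.io/v1alpha1
kind: MeshHealthCheck
metadata:
  name: backend-health-check   # placeholder name
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: Mesh
  to:
    - targetRef:
        kind: MeshService
        name: backend          # placeholder destination service
      default:
        interval: 10s
        timeout: 2s
        unhealthyThreshold: 3
        healthyThreshold: 1
        failTrafficOnPanic: true
        http:
          path: /health
          expectedStatuses: [200, 204]
```
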
Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshhttproutes.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshhttproutes.yaml new file mode 100644 index 000000000..14f8974b1 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshhttproutes.yaml @@ -0,0 +1,668 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshhttproutes.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshHTTPRoute + listKind: MeshHTTPRouteList + plural: meshhttproutes + singular: meshhttproute + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshHTTPRoute resource. + properties: + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. 
+ type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To matches destination services of requests and holds + configuration. + items: + properties: + hostnames: + description: |- + Hostnames is only valid when targeting MeshGateway and limits the + effects of the rules to requests to this hostname. + Given hostnames must intersect with the hostname of the listeners the + route attaches to. + items: + type: string + type: array + rules: + description: |- + Rules contains the routing rules applies to a combination of top-level + targetRef and the targetRef in this entry. + items: + properties: + default: + description: |- + Default holds routing rules that can be merged with rules from other + policies. + properties: + backendRefs: + items: + description: BackendRef defines where to forward + traffic. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use + to identify cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + port: + description: Port is only supported when this + ref refers to a real MeshService object + format: int32 + type: integer + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + weight: + default: 1 + minimum: 0 + type: integer + type: object + type: array + filters: + items: + properties: + requestHeaderModifier: + description: |- + Only one action is supported per header name. + Configuration to set or add multiple values for a header must use RFC 7230 + header value formatting, separating each value with a comma. 
+ properties: + add: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + remove: + items: + type: string + maxItems: 16 + type: array + set: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + type: object + requestMirror: + properties: + backendRef: + description: BackendRef defines where to + forward traffic. + properties: + kind: + description: Kind of the referenced + resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future + use to identify cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + port: + description: Port is only supported + when this ref refers to a real MeshService + object + format: int32 + type: integer + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + weight: + default: 1 + minimum: 0 + type: integer + type: object + percentage: + anyOf: + - type: integer + - type: string + description: |- + Percentage of requests to mirror. If not specified, all requests + to the target cluster will be mirrored. + x-kubernetes-int-or-string: true + required: + - backendRef + type: object + requestRedirect: + properties: + hostname: + description: |- + PreciseHostname is the fully qualified domain name of a network host. This + matches the RFC 1123 definition of a hostname with 1 notable exception that + numeric IP addresses are not allowed. + + Note that as per RFC1035 and RFC1123, a *label* must consist of lower case + alphanumeric characters or '-', and must start and end with an alphanumeric + character. No other punctuation is allowed. + maxLength: 253 + minLength: 1 + pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ + type: string + path: + description: |- + Path defines parameters used to modify the path of the incoming request. 
+ The modified path is then used to construct the location header. + When empty, the request path is used as-is. + properties: + replaceFullPath: + type: string + replacePrefixMatch: + type: string + type: + enum: + - ReplaceFullPath + - ReplacePrefixMatch + type: string + required: + - type + type: object + port: + description: |- + Port is the port to be used in the value of the `Location` + header in the response. + When empty, port (if specified) of the request is used. + format: int32 + maximum: 65535 + minimum: 1 + type: integer + scheme: + enum: + - http + - https + type: string + statusCode: + default: 302 + description: StatusCode is the HTTP status + code to be used in response. + enum: + - 301 + - 302 + - 303 + - 307 + - 308 + type: integer + type: object + responseHeaderModifier: + description: |- + Only one action is supported per header name. + Configuration to set or add multiple values for a header must use RFC 7230 + header value formatting, separating each value with a comma. + properties: + add: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + remove: + items: + type: string + maxItems: 16 + type: array + set: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + type: object + type: + enum: + - RequestHeaderModifier + - ResponseHeaderModifier + - RequestRedirect + - URLRewrite + - RequestMirror + type: string + urlRewrite: + properties: + hostToBackendHostname: + description: |- + HostToBackendHostname rewrites the hostname to the hostname of the + upstream host. This option is only available when targeting MeshGateways. + type: boolean + hostname: + description: Hostname is the value to be + used to replace the host header value + during forwarding. + maxLength: 253 + minLength: 1 + pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ + type: string + path: + description: Path defines a path rewrite. + properties: + replaceFullPath: + type: string + replacePrefixMatch: + type: string + type: + enum: + - ReplaceFullPath + - ReplacePrefixMatch + type: string + required: + - type + type: object + type: object + required: + - type + type: object + type: array + type: object + matches: + description: |- + Matches describes how to match HTTP requests this rule should be applied + to. + items: + properties: + headers: + items: + description: |- + HeaderMatch describes how to select an HTTP route by matching HTTP request + headers. + properties: + name: + description: |- + Name is the name of the HTTP Header to be matched. Name MUST be lower case + as they will be handled with case insensitivity (See https://tools.ietf.org/html/rfc7230#section-3.2). + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + type: + default: Exact + description: Type specifies how to match against + the value of the header. + enum: + - Exact + - Present + - RegularExpression + - Absent + - Prefix + type: string + value: + description: Value is the value of HTTP Header + to be matched. 
+ type: string + required: + - name + type: object + type: array + method: + enum: + - CONNECT + - DELETE + - GET + - HEAD + - OPTIONS + - PATCH + - POST + - PUT + - TRACE + type: string + path: + properties: + type: + enum: + - Exact + - PathPrefix + - RegularExpression + type: string + value: + description: |- + Exact or prefix matches must be an absolute path. A prefix matches only + if separated by a slash or the entire path. + minLength: 1 + type: string + required: + - type + - value + type: object + queryParams: + description: |- + QueryParams matches based on HTTP URL query parameters. Multiple matches + are ANDed together such that all listed matches must succeed. + items: + properties: + name: + minLength: 1 + type: string + type: + enum: + - Exact + - RegularExpression + type: string + value: + type: string + required: + - name + - type + - value + type: object + type: array + type: object + minItems: 1 + type: array + required: + - default + - matches + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + request destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. 
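
To make the routing rule structure above concrete, a `MeshHTTPRoute` that sets a header and splits traffic between two destinations might look like this sketch; the service names, header name, and weights are hypothetical.

```yaml
# Hypothetical MeshHTTPRoute conforming to the CRD above: requests to the
# "backend" service whose path starts with /api get a header set and are
# split 90/10 between two destination services.
apiVersion: kuma.io/v1alpha1
kind: MeshHTTPRoute
metadata:
  name: backend-route          # placeholder name
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: Mesh
  to:
    - targetRef:
        kind: MeshService
        name: backend
      rules:
        - matches:
            - path:
                type: PathPrefix
                value: /api
          default:
            filters:
              - type: RequestHeaderModifier
                requestHeaderModifier:
                  set:
                    - name: x-demo-header   # header names must be lower case
                      value: example
            backendRefs:
              - kind: MeshService
                name: backend-v2
                weight: 90
              - kind: MeshService
                name: backend-v1
                weight: 10
```

Per the schema, each rule requires both `matches` and `default`, and `hostnames` only takes effect when the route targets a MeshGateway.
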
Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshinsights.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshinsights.yaml new file mode 100644 index 000000000..93b570048 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshinsights.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshinsights.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshInsight + listKind: MeshInsightList + plural: meshinsights + singular: meshinsight + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshInsight resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshloadbalancingstrategies.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshloadbalancingstrategies.yaml new file mode 100644 index 000000000..8fe3d6634 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshloadbalancingstrategies.yaml @@ -0,0 +1,572 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshloadbalancingstrategies.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshLoadBalancingStrategy + listKind: MeshLoadBalancingStrategyList + plural: meshloadbalancingstrategies + singular: meshloadbalancingstrategy + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshLoadBalancingStrategy + resource. + properties: + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To list makes a match between the consumed services and + corresponding configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations referenced in + 'targetRef' + properties: + loadBalancer: + description: LoadBalancer allows to specify load balancing + algorithm. + properties: + leastRequest: + description: |- + LeastRequest selects N random available hosts as specified in 'choiceCount' (2 by default) + and picks the host which has the fewest active requests + properties: + activeRequestBias: + anyOf: + - type: integer + - type: string + description: |- + ActiveRequestBias refers to dynamic weights applied when hosts have varying load + balancing weights. A higher value here aggressively reduces the weight of endpoints + that are currently handling active requests. In essence, the higher the ActiveRequestBias + value, the more forcefully it reduces the load balancing weight of endpoints that are + actively serving requests. + x-kubernetes-int-or-string: true + choiceCount: + description: |- + ChoiceCount is the number of random healthy hosts from which the host with + the fewest active requests will be chosen. Defaults to 2 so that Envoy performs + two-choice selection if the field is not set. + format: int32 + minimum: 2 + type: integer + type: object + maglev: + description: |- + Maglev implements consistent hashing to upstream hosts. 
Maglev can be used as + a drop in replacement for the ring hash load balancer any place in which + consistent hashing is desired. + properties: + hashPolicies: + description: |- + HashPolicies specify a list of request/connection properties that are used to calculate a hash. + These hash policies are executed in the specified order. If a hash policy has the “terminal” attribute + set to true, and there is already a hash generated, the hash is returned immediately, + ignoring the rest of the hash policy list. + items: + properties: + connection: + properties: + sourceIP: + description: Hash on source IP address. + type: boolean + type: object + cookie: + properties: + name: + description: The name of the cookie that + will be used to obtain the hash key. + minLength: 1 + type: string + path: + description: The name of the path for + the cookie. + type: string + ttl: + description: If specified, a cookie with + the TTL will be generated if the cookie + is not present. + type: string + required: + - name + type: object + filterState: + properties: + key: + description: |- + The name of the Object in the per-request filterState, which is + an Envoy::Hashable object. If there is no data associated with the key, + or the stored object is not Envoy::Hashable, no hash will be produced. + minLength: 1 + type: string + required: + - key + type: object + header: + properties: + name: + description: The name of the request header + that will be used to obtain the hash + key. + minLength: 1 + type: string + required: + - name + type: object + queryParameter: + properties: + name: + description: |- + The name of the URL query parameter that will be used to obtain the hash key. + If the parameter is not present, no hash will be produced. Query parameter names + are case-sensitive. + minLength: 1 + type: string + required: + - name + type: object + terminal: + description: |- + Terminal is a flag that short-circuits the hash computing. This field provides + a ‘fallback’ style of configuration: “if a terminal policy doesn’t work, fallback + to rest of the policy list”, it saves time when the terminal policy works. + If true, and there is already a hash computed, ignore rest of the list of hash polices. + type: boolean + type: + enum: + - Header + - Cookie + - SourceIP + - QueryParameter + - FilterState + type: string + required: + - type + type: object + type: array + tableSize: + description: |- + The table size for Maglev hashing. Maglev aims for “minimal disruption” + rather than an absolute guarantee. Minimal disruption means that when + the set of upstream hosts change, a connection will likely be sent + to the same upstream as it was before. Increasing the table size reduces + the amount of disruption. The table size must be prime number limited to 5000011. + If it is not specified, the default is 65537. + format: int32 + maximum: 5000011 + minimum: 1 + type: integer + type: object + random: + description: |- + Random selects a random available host. The random load balancer generally + performs better than round-robin if no health checking policy is configured. + Random selection avoids bias towards the host in the set that comes after a failed host. + type: object + ringHash: + description: |- + RingHash implements consistent hashing to upstream hosts. Each host is mapped + onto a circle (the “ring”) by hashing its address; each request is then routed + to a host by hashing some property of the request, and finding the nearest + corresponding host clockwise around the ring. 
+ properties: + hashFunction: + description: |- + HashFunction is a function used to hash hosts onto the ketama ring. + The value defaults to XX_HASH. Available values – XX_HASH, MURMUR_HASH_2. + enum: + - XXHash + - MurmurHash2 + type: string + hashPolicies: + description: |- + HashPolicies specify a list of request/connection properties that are used to calculate a hash. + These hash policies are executed in the specified order. If a hash policy has the “terminal” attribute + set to true, and there is already a hash generated, the hash is returned immediately, + ignoring the rest of the hash policy list. + items: + properties: + connection: + properties: + sourceIP: + description: Hash on source IP address. + type: boolean + type: object + cookie: + properties: + name: + description: The name of the cookie that + will be used to obtain the hash key. + minLength: 1 + type: string + path: + description: The name of the path for + the cookie. + type: string + ttl: + description: If specified, a cookie with + the TTL will be generated if the cookie + is not present. + type: string + required: + - name + type: object + filterState: + properties: + key: + description: |- + The name of the Object in the per-request filterState, which is + an Envoy::Hashable object. If there is no data associated with the key, + or the stored object is not Envoy::Hashable, no hash will be produced. + minLength: 1 + type: string + required: + - key + type: object + header: + properties: + name: + description: The name of the request header + that will be used to obtain the hash + key. + minLength: 1 + type: string + required: + - name + type: object + queryParameter: + properties: + name: + description: |- + The name of the URL query parameter that will be used to obtain the hash key. + If the parameter is not present, no hash will be produced. Query parameter names + are case-sensitive. + minLength: 1 + type: string + required: + - name + type: object + terminal: + description: |- + Terminal is a flag that short-circuits the hash computing. This field provides + a ‘fallback’ style of configuration: “if a terminal policy doesn’t work, fallback + to rest of the policy list”, it saves time when the terminal policy works. + If true, and there is already a hash computed, ignore rest of the list of hash polices. + type: boolean + type: + enum: + - Header + - Cookie + - SourceIP + - QueryParameter + - FilterState + type: string + required: + - type + type: object + type: array + maxRingSize: + description: |- + Maximum hash ring size. Defaults to 8M entries, and limited to 8M entries, + but can be lowered to further constrain resource use. + format: int32 + maximum: 8000000 + minimum: 1 + type: integer + minRingSize: + description: |- + Minimum hash ring size. The larger the ring is (that is, + the more hashes there are for each provided host) the better the request distribution + will reflect the desired weights. Defaults to 1024 entries, and limited to 8M entries. + format: int32 + maximum: 8000000 + minimum: 1 + type: integer + type: object + roundRobin: + description: |- + RoundRobin is a load balancing algorithm that distributes requests + across available upstream hosts in round-robin order. + type: object + type: + enum: + - RoundRobin + - LeastRequest + - RingHash + - Random + - Maglev + type: string + required: + - type + type: object + localityAwareness: + description: LocalityAwareness contains configuration for + locality aware load balancing. 
+ properties: + crossZone: + description: |- + CrossZone defines locality aware load balancing priorities when dataplane proxies inside local zone + are unavailable + properties: + failover: + description: Failover defines list of load balancing + rules in order of priority + items: + properties: + from: + description: From defines the list of zones + to which the rule applies + properties: + zones: + items: + type: string + type: array + required: + - zones + type: object + to: + description: To defines to which zones the + traffic should be load balanced + properties: + type: + description: Type defines how target zones + will be picked from available zones + enum: + - None + - Only + - Any + - AnyExcept + type: string + zones: + items: + type: string + type: array + required: + - type + type: object + required: + - to + type: object + type: array + failoverThreshold: + description: |- + FailoverThreshold defines the percentage of live destination dataplane proxies below which load balancing to the + next priority starts. + Example: If you configure failoverThreshold to 70, and you have deployed 10 destination dataplane proxies. + Load balancing to next priority will start when number of live destination dataplane proxies drops below 7. + Default 50 + properties: + percentage: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - percentage + type: object + type: object + disabled: + description: |- + Disabled allows to disable locality-aware load balancing. + When disabled requests are distributed across all endpoints regardless of locality. + type: boolean + localZone: + description: LocalZone defines locality aware load balancing + priorities between dataplane proxies inside a zone + properties: + affinityTags: + description: AffinityTags list of tags for local + zone load balancing. + items: + properties: + key: + description: Key defines tag for which affinity + is configured + type: string + weight: + description: |- + Weight of the tag used for load balancing. The bigger the weight the bigger the priority. + Percentage of local traffic load balanced to tag is computed by dividing weight by sum of weights from all tags. + For example with two affinity tags first with weight 80 and second with weight 20, + then 80% of traffic will be redirected to the first tag, and 20% of traffic will be redirected to second one. + Setting weights is not mandatory. When weights are not set control plane will compute default weight based on list order. + Default: If you do not specify weight we will adjust them so that 90% traffic goes to first tag, 9% to next, and 1% to third and so on. + format: int32 + type: integer + required: + - key + type: object + type: array + type: object + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. 
Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshmetrics.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshmetrics.yaml new file mode 100644 index 000000000..d244c2e04 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshmetrics.yaml @@ -0,0 +1,292 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshmetrics.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshMetric + listKind: MeshMetricList + plural: meshmetrics + singular: meshmetric + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshMetric resource. + properties: + default: + description: MeshMetric configuration. + properties: + applications: + description: Applications is a list of application that Dataplane + Proxy will scrape + items: + properties: + address: + description: Address on which an application listens. + type: string + name: + description: Name of the application to scrape + type: string + path: + default: /metrics/prometheus + description: Path on which an application expose HTTP endpoint + with metrics. + type: string + port: + description: Port on which an application expose HTTP endpoint + with metrics. + format: int32 + type: integer + required: + - port + type: object + type: array + backends: + description: Backends list that will be used to collect metrics. 
+ items: + properties: + openTelemetry: + description: OpenTelemetry backend configuration + properties: + endpoint: + description: Endpoint for OpenTelemetry collector + type: string + refreshInterval: + description: RefreshInterval defines how frequent metrics + should be pushed to collector + type: string + required: + - endpoint + type: object + prometheus: + description: Prometheus backend configuration. + properties: + clientId: + description: ClientId of the Prometheus backend. Needed + when using MADS for DP discovery. + type: string + path: + default: /metrics + description: Path on which a dataplane should expose + HTTP endpoint with Prometheus metrics. + type: string + port: + default: 5670 + description: Port on which a dataplane should expose + HTTP endpoint with Prometheus metrics. + format: int32 + type: integer + tls: + description: Configuration of TLS for prometheus listener. + properties: + mode: + default: Disabled + description: Configuration of TLS for Prometheus + listener. + enum: + - Disabled + - ProvidedTLS + - ActiveMTLSBackend + type: string + required: + - mode + type: object + required: + - path + - port + type: object + type: + description: Type of the backend that will be used to collect + metrics. At the moment only Prometheus backend is available. + enum: + - Prometheus + - OpenTelemetry + type: string + required: + - type + type: object + type: array + sidecar: + description: Sidecar metrics collection configuration + properties: + includeUnused: + default: false + description: |- + IncludeUnused if false will scrape only metrics that has been by sidecar (counters incremented + at least once, gauges changed at least once, and histograms added to at + least once). If true will scrape all metrics (even the ones with zeros). + type: boolean + profiles: + description: Profiles allows to customize which metrics are + published. + properties: + appendProfiles: + description: AppendProfiles allows to combine the metrics + from multiple predefined profiles. + items: + properties: + name: + description: 'Name of the predefined profile, one + of: all, basic, none' + enum: + - All + - Basic + - None + type: string + required: + - name + type: object + type: array + exclude: + description: |- + Exclude makes it possible to exclude groups of metrics from a resulting profile. + Exclude is subordinate to Include. + items: + properties: + match: + description: Match is the value used to match using + particular Type + type: string + type: + description: 'Type defined the type of selector, + one of: prefix, regex, exact' + enum: + - Prefix + - Regex + - Exact + - Contains + type: string + required: + - match + - type + type: object + type: array + include: + description: |- + Include makes it possible to include additional metrics in a selected profiles. + Include takes precedence over Exclude. + items: + properties: + match: + description: Match is the value used to match using + particular Type + type: string + type: + description: 'Type defined the type of selector, + one of: prefix, regex, exact' + enum: + - Prefix + - Regex + - Exact + - Contains + type: string + required: + - match + - type + type: object + type: array + type: object + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined in-place. 
+ properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshmultizoneservices.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshmultizoneservices.yaml new file mode 100644 index 000000000..4772b0cfb --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshmultizoneservices.yaml @@ -0,0 +1,199 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshmultizoneservices.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshMultiZoneService + listKind: MeshMultiZoneServiceList + plural: meshmultizoneservices + singular: meshmultizoneservice + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.addresses[0].hostname + name: Hostname + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshMultiZoneService + resource. + properties: + ports: + description: Ports is a list of ports from selected MeshServices + items: + properties: + appProtocol: + default: tcp + description: Protocol identifies a protocol supported by a service. 
+ type: string + name: + type: string + port: + format: int32 + type: integer + required: + - port + type: object + minItems: 1 + type: array + selector: + description: Selector is a way to select multiple MeshServices + properties: + meshService: + description: MeshService selects MeshServices + properties: + matchLabels: + additionalProperties: + type: string + description: MatchLabels matches multiple MeshServices by + labels + type: object + required: + - matchLabels + type: object + required: + - meshService + type: object + required: + - selector + type: object + status: + description: Status is the current status of the Kuma MeshMultiZoneService + resource. + properties: + addresses: + description: Addresses is a list of addresses generated by HostnameGenerator + items: + properties: + hostname: + type: string + hostnameGeneratorRef: + properties: + coreName: + type: string + required: + - coreName + type: object + origin: + type: string + type: object + type: array + hostnameGenerators: + description: Status of hostnames generator applied on this resource + items: + properties: + conditions: + description: Conditions is an array of hostname generator conditions. + items: + properties: + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, + Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + hostnameGeneratorRef: + properties: + coreName: + type: string + required: + - coreName + type: object + required: + - hostnameGeneratorRef + type: object + type: array + meshServices: + description: MeshServices is a list of matched MeshServices + items: + properties: + mesh: + type: string + name: + description: Name is a core name of MeshService + type: string + namespace: + type: string + zone: + type: string + required: + - mesh + - name + - namespace + - zone + type: object + type: array + vips: + description: VIPs is a list of assigned Kuma VIPs. 
+ items: + properties: + ip: + type: string + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshpassthroughs.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshpassthroughs.yaml new file mode 100644 index 000000000..9f5822b55 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshpassthroughs.yaml @@ -0,0 +1,164 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshpassthroughs.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshPassthrough + listKind: MeshPassthroughList + plural: meshpassthroughs + singular: meshpassthrough + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshPassthrough resource. + properties: + default: + description: MeshPassthrough configuration. + properties: + appendMatch: + description: AppendMatch is a list of destinations that should + be allowed through the sidecar. + items: + properties: + port: + description: Port defines the port to which a user makes + a request. + type: integer + protocol: + default: tcp + description: 'Protocol defines the communication protocol. + Possible values: `tcp`, `tls`, `grpc`, `http`, `http2`.' + enum: + - tcp + - tls + - grpc + - http + - http2 + type: string + type: + description: Type of the match, one of `Domain`, `IP` or + `CIDR` is available. + enum: + - Domain + - IP + - CIDR + type: string + value: + description: Value for the specified Type. + type: string + type: object + type: array + passthroughMode: + default: None + description: |- + Defines the passthrough behavior. Possible values: `All`, `None`, `Matched` + When `All` or `None` `appendMatch` has no effect. + enum: + - All + - Matched + - None + type: string + type: object + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined in-place. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. 
+ type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshproxypatches.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshproxypatches.yaml new file mode 100644 index 000000000..bf6342d25 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshproxypatches.yaml @@ -0,0 +1,550 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshproxypatches.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshProxyPatch + listKind: MeshProxyPatchList + plural: meshproxypatches + singular: meshproxypatch + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshProxyPatch resource. + properties: + default: + description: |- + Default is a configuration specific to the group of destinations + referenced in 'targetRef'. + properties: + appendModifications: + description: AppendModifications is a list of modifications applied + on the selected proxy. + items: + properties: + cluster: + description: Cluster is a modification of Envoy's Cluster + resource. + properties: + jsonPatches: + description: |- + JsonPatches specifies list of jsonpatches to apply to on Envoy's Cluster + resource + items: + description: JsonPatchBlock is one json patch operation + block. 
+ properties: + from: + description: From is a jsonpatch from string, + used by move and copy operations. + type: string + op: + description: Op is a jsonpatch operation string. + enum: + - add + - remove + - replace + - move + - copy + type: string + path: + description: Path is a jsonpatch path string. + type: string + value: + description: Value must be a valid json value + used by replace and add operations. + x-kubernetes-preserve-unknown-fields: true + required: + - op + - path + type: object + type: array + match: + description: Match is a set of conditions that have + to be matched for modification operation to happen. + properties: + name: + description: Name of the cluster to match. + type: string + origin: + description: |- + Origin is the name of the component or plugin that generated the resource. + + Here is the list of well-known origins: + inbound - resources generated for handling incoming traffic. + outbound - resources generated for handling outgoing traffic. + transparent - resources generated for transparent proxy functionality. + prometheus - resources generated when Prometheus metrics are enabled. + direct-access - resources generated for Direct Access functionality. + ingress - resources generated for Zone Ingress. + egress - resources generated for Zone Egress. + gateway - resources generated for MeshGateway. + + The list is not complete, because policy plugins can introduce new resources. + For example MeshTrace plugin can create Cluster with "mesh-trace" origin. + type: string + type: object + operation: + description: Operation to execute on matched cluster. + enum: + - Add + - Remove + - Patch + type: string + value: + description: Value of xDS resource in YAML format to + add or patch. + type: string + required: + - operation + type: object + httpFilter: + description: |- + HTTPFilter is a modification of Envoy HTTP Filter + available in HTTP Connection Manager in a Listener resource. + properties: + jsonPatches: + description: |- + JsonPatches specifies list of jsonpatches to apply to on Envoy's + HTTP Filter available in HTTP Connection Manager in a Listener resource. + items: + description: JsonPatchBlock is one json patch operation + block. + properties: + from: + description: From is a jsonpatch from string, + used by move and copy operations. + type: string + op: + description: Op is a jsonpatch operation string. + enum: + - add + - remove + - replace + - move + - copy + type: string + path: + description: Path is a jsonpatch path string. + type: string + value: + description: Value must be a valid json value + used by replace and add operations. + x-kubernetes-preserve-unknown-fields: true + required: + - op + - path + type: object + type: array + match: + description: Match is a set of conditions that have + to be matched for modification operation to happen. + properties: + listenerName: + description: Name of the listener to match. + type: string + listenerTags: + additionalProperties: + type: string + description: Listener tags available in Listener#Metadata#FilterMetadata[io.kuma.tags] + type: object + name: + description: Name of the HTTP filter. For example + "envoy.filters.http.local_ratelimit" + type: string + origin: + description: |- + Origin is the name of the component or plugin that generated the resource. + + Here is the list of well-known origins: + inbound - resources generated for handling incoming traffic. + outbound - resources generated for handling outgoing traffic. 
+ transparent - resources generated for transparent proxy functionality. + prometheus - resources generated when Prometheus metrics are enabled. + direct-access - resources generated for Direct Access functionality. + ingress - resources generated for Zone Ingress. + egress - resources generated for Zone Egress. + gateway - resources generated for MeshGateway. + + The list is not complete, because policy plugins can introduce new resources. + For example MeshTrace plugin can create Cluster with "mesh-trace" origin. + type: string + type: object + operation: + description: Operation to execute on matched listener. + enum: + - Remove + - Patch + - AddFirst + - AddBefore + - AddAfter + - AddLast + type: string + value: + description: Value of xDS resource in YAML format to + add or patch. + type: string + required: + - operation + type: object + listener: + description: Listener is a modification of Envoy's Listener + resource. + properties: + jsonPatches: + description: |- + JsonPatches specifies list of jsonpatches to apply to on Envoy's Listener + resource + items: + description: JsonPatchBlock is one json patch operation + block. + properties: + from: + description: From is a jsonpatch from string, + used by move and copy operations. + type: string + op: + description: Op is a jsonpatch operation string. + enum: + - add + - remove + - replace + - move + - copy + type: string + path: + description: Path is a jsonpatch path string. + type: string + value: + description: Value must be a valid json value + used by replace and add operations. + x-kubernetes-preserve-unknown-fields: true + required: + - op + - path + type: object + type: array + match: + description: Match is a set of conditions that have + to be matched for modification operation to happen. + properties: + name: + description: Name of the listener to match. + type: string + origin: + description: |- + Origin is the name of the component or plugin that generated the resource. + + Here is the list of well-known origins: + inbound - resources generated for handling incoming traffic. + outbound - resources generated for handling outgoing traffic. + transparent - resources generated for transparent proxy functionality. + prometheus - resources generated when Prometheus metrics are enabled. + direct-access - resources generated for Direct Access functionality. + ingress - resources generated for Zone Ingress. + egress - resources generated for Zone Egress. + gateway - resources generated for MeshGateway. + + The list is not complete, because policy plugins can introduce new resources. + For example MeshTrace plugin can create Cluster with "mesh-trace" origin. + type: string + tags: + additionalProperties: + type: string + description: Tags available in Listener#Metadata#FilterMetadata[io.kuma.tags] + type: object + type: object + operation: + description: Operation to execute on matched listener. + enum: + - Add + - Remove + - Patch + type: string + value: + description: Value of xDS resource in YAML format to + add or patch. + type: string + required: + - operation + type: object + networkFilter: + description: NetworkFilter is a modification of Envoy Listener's + filter. + properties: + jsonPatches: + description: |- + JsonPatches specifies list of jsonpatches to apply to on Envoy Listener's + filter. + items: + description: JsonPatchBlock is one json patch operation + block. + properties: + from: + description: From is a jsonpatch from string, + used by move and copy operations. 
+ type: string + op: + description: Op is a jsonpatch operation string. + enum: + - add + - remove + - replace + - move + - copy + type: string + path: + description: Path is a jsonpatch path string. + type: string + value: + description: Value must be a valid json value + used by replace and add operations. + x-kubernetes-preserve-unknown-fields: true + required: + - op + - path + type: object + type: array + match: + description: Match is a set of conditions that have + to be matched for modification operation to happen. + properties: + listenerName: + description: Name of the listener to match. + type: string + listenerTags: + additionalProperties: + type: string + description: Listener tags available in Listener#Metadata#FilterMetadata[io.kuma.tags] + type: object + name: + description: Name of the network filter. For example + "envoy.filters.network.ratelimit" + type: string + origin: + description: |- + Origin is the name of the component or plugin that generated the resource. + + Here is the list of well-known origins: + inbound - resources generated for handling incoming traffic. + outbound - resources generated for handling outgoing traffic. + transparent - resources generated for transparent proxy functionality. + prometheus - resources generated when Prometheus metrics are enabled. + direct-access - resources generated for Direct Access functionality. + ingress - resources generated for Zone Ingress. + egress - resources generated for Zone Egress. + gateway - resources generated for MeshGateway. + + The list is not complete, because policy plugins can introduce new resources. + For example MeshTrace plugin can create Cluster with "mesh-trace" origin. + type: string + type: object + operation: + description: Operation to execute on matched listener. + enum: + - Remove + - Patch + - AddFirst + - AddBefore + - AddAfter + - AddLast + type: string + value: + description: Value of xDS resource in YAML format to + add or patch. + type: string + required: + - operation + type: object + virtualHost: + description: |- + VirtualHost is a modification of Envoy's VirtualHost + referenced in HTTP Connection Manager in a Listener resource. + properties: + jsonPatches: + description: |- + JsonPatches specifies list of jsonpatches to apply to on Envoy's + VirtualHost resource + items: + description: JsonPatchBlock is one json patch operation + block. + properties: + from: + description: From is a jsonpatch from string, + used by move and copy operations. + type: string + op: + description: Op is a jsonpatch operation string. + enum: + - add + - remove + - replace + - move + - copy + type: string + path: + description: Path is a jsonpatch path string. + type: string + value: + description: Value must be a valid json value + used by replace and add operations. + x-kubernetes-preserve-unknown-fields: true + required: + - op + - path + type: object + type: array + match: + description: Match is a set of conditions that have + to be matched for modification operation to happen. + properties: + name: + description: Name of the VirtualHost to match. + type: string + origin: + description: |- + Origin is the name of the component or plugin that generated the resource. + + Here is the list of well-known origins: + inbound - resources generated for handling incoming traffic. + outbound - resources generated for handling outgoing traffic. + transparent - resources generated for transparent proxy functionality. + prometheus - resources generated when Prometheus metrics are enabled. 
+ direct-access - resources generated for Direct Access functionality. + ingress - resources generated for Zone Ingress. + egress - resources generated for Zone Egress. + gateway - resources generated for MeshGateway. + + The list is not complete, because policy plugins can introduce new resources. + For example MeshTrace plugin can create Cluster with "mesh-trace" origin. + type: string + routeConfigurationName: + description: Name of the RouteConfiguration resource + to match. + type: string + type: object + operation: + description: Operation to execute on matched listener. + enum: + - Add + - Remove + - Patch + type: string + value: + description: Value of xDS resource in YAML format to + add or patch. + type: string + required: + - match + - operation + type: object + type: object + type: array + required: + - appendModifications + type: object + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. 
Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - default + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshratelimits.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshratelimits.yaml new file mode 100644 index 000000000..52424a985 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshratelimits.yaml @@ -0,0 +1,499 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshratelimits.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshRateLimit + listKind: MeshRateLimitList + plural: meshratelimits + singular: meshratelimit + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshRateLimit resource. + properties: + from: + description: From list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of clients referenced in + 'targetRef' + properties: + local: + description: LocalConf defines local http or/and tcp rate + limit configuration + properties: + http: + description: |- + LocalHTTP defines configuration of local HTTP rate limiting + https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/local_rate_limit_filter + properties: + disabled: + description: Define if rate limiting should be disabled. 
+ type: boolean + onRateLimit: + description: Describes the actions to take on a + rate limit event + properties: + headers: + description: The Headers to be added to the + HTTP response on a rate limit event + properties: + add: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + set: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + type: object + status: + description: The HTTP status code to be set + on a rate limit event + format: int32 + type: integer + type: object + requestRate: + description: Defines how many requests are allowed + per interval. + properties: + interval: + description: The interval the number of units + is accounted for. + type: string + num: + description: |- + Number of units per interval (depending on usage it can be a number of requests, + or a number of connections). + format: int32 + type: integer + required: + - interval + - num + type: object + type: object + tcp: + description: |- + LocalTCP defines confguration of local TCP rate limiting + https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/network_filters/local_rate_limit_filter + properties: + connectionRate: + description: Defines how many connections are allowed + per interval. + properties: + interval: + description: The interval the number of units + is accounted for. + type: string + num: + description: |- + Number of units per interval (depending on usage it can be a number of requests, + or a number of connections). + format: int32 + type: integer + required: + - interval + - num + type: object + disabled: + description: |- + Define if rate limiting should be disabled. + Default: false + type: boolean + type: object + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + clients. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. 
+ For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of clients referenced in + 'targetRef' + properties: + local: + description: LocalConf defines local http or/and tcp rate + limit configuration + properties: + http: + description: |- + LocalHTTP defines configuration of local HTTP rate limiting + https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/local_rate_limit_filter + properties: + disabled: + description: Define if rate limiting should be disabled. 
+ type: boolean + onRateLimit: + description: Describes the actions to take on a + rate limit event + properties: + headers: + description: The Headers to be added to the + HTTP response on a rate limit event + properties: + add: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + set: + items: + properties: + name: + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + value: + type: string + required: + - name + - value + type: object + maxItems: 16 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + type: object + status: + description: The HTTP status code to be set + on a rate limit event + format: int32 + type: integer + type: object + requestRate: + description: Defines how many requests are allowed + per interval. + properties: + interval: + description: The interval the number of units + is accounted for. + type: string + num: + description: |- + Number of units per interval (depending on usage it can be a number of requests, + or a number of connections). + format: int32 + type: integer + required: + - interval + - num + type: object + type: object + tcp: + description: |- + LocalTCP defines confguration of local TCP rate limiting + https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/network_filters/local_rate_limit_filter + properties: + connectionRate: + description: Defines how many connections are allowed + per interval. + properties: + interval: + description: The interval the number of units + is accounted for. + type: string + num: + description: |- + Number of units per interval (depending on usage it can be a number of requests, + or a number of connections). + format: int32 + type: integer + required: + - interval + - num + type: object + disabled: + description: |- + Define if rate limiting should be disabled. + Default: false + type: boolean + type: object + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + clients. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. 
+ For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshretries.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshretries.yaml new file mode 100644 index 000000000..f4337a105 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshretries.yaml @@ -0,0 +1,507 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshretries.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshRetry + listKind: MeshRetryList + plural: meshretries + singular: meshretry + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshRetry resource. + properties: + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. 
+ items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To list makes a match between the consumed services and + corresponding configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations referenced in + 'targetRef' + properties: + grpc: + description: GRPC defines a configuration of retries for + GRPC traffic + properties: + backOff: + description: |- + BackOff is a configuration of durations which will be used in an exponential + backoff strategy between retries. + properties: + baseInterval: + default: 25ms + description: |- + BaseInterval is an amount of time which should be taken between retries. + Must be greater than zero. Values less than 1 ms are rounded up to 1 ms. + type: string + maxInterval: + description: |- + MaxInterval is a maximal amount of time which will be taken between retries. + Default is 10 times the "BaseInterval". + type: string + type: object + numRetries: + description: |- + NumRetries is the number of attempts that will be made on failed (and + retriable) requests. If not set, the default value is 1. + format: int32 + type: integer + perTryTimeout: + description: |- + PerTryTimeout is the maximum amount of time each retry attempt can take + before it times out. If not set, the global request timeout for the route + will be used. Setting this value to 0 will disable the per-try timeout. + type: string + rateLimitedBackOff: + description: |- + RateLimitedBackOff is a configuration of backoff which will be used when + the upstream returns one of the headers configured. + properties: + maxInterval: + default: 300s + description: MaxInterval is a maximal amount of + time which will be taken between retries. + type: string + resetHeaders: + description: |- + ResetHeaders specifies the list of headers (like Retry-After or X-RateLimit-Reset) + to match against the response. Headers are tried in order, and matched + case-insensitive. The first header to be parsed successfully is used. + If no headers match the default exponential BackOff is used instead. + items: + properties: + format: + description: The format of the reset header. + enum: + - Seconds + - UnixTimestamp + type: string + name: + description: The Name of the reset header. + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + required: + - format + - name + type: object + type: array + type: object + retryOn: + description: RetryOn is a list of conditions which will + cause a retry. + example: + - Canceled + - DeadlineExceeded + - Internal + - ResourceExhausted + - Unavailable + items: + enum: + - Canceled + - DeadlineExceeded + - Internal + - ResourceExhausted + - Unavailable + type: string + type: array + type: object + http: + description: HTTP defines a configuration of retries for + HTTP traffic + properties: + backOff: + description: |- + BackOff is a configuration of durations which will be used in exponential + backoff strategy between retries. 
+ properties: + baseInterval: + default: 25ms + description: |- + BaseInterval is an amount of time which should be taken between retries. + Must be greater than zero. Values less than 1 ms are rounded up to 1 ms. + type: string + maxInterval: + description: |- + MaxInterval is a maximal amount of time which will be taken between retries. + Default is 10 times the "BaseInterval". + type: string + type: object + hostSelection: + description: |- + HostSelection is a list of predicates that dictate how hosts should be selected + when requests are retried. + items: + properties: + predicate: + description: Type is requested predicate mode. + enum: + - OmitPreviousHosts + - OmitHostsWithTags + - OmitPreviousPriorities + type: string + tags: + additionalProperties: + type: string + description: |- + Tags is a map of metadata to match against for selecting the omitted hosts. Required if Type is + OmitHostsWithTags + type: object + updateFrequency: + default: 2 + description: |- + UpdateFrequency is how often the priority load should be updated based on previously attempted priorities. + Used for OmitPreviousPriorities. + format: int32 + type: integer + required: + - predicate + type: object + type: array + hostSelectionMaxAttempts: + description: |- + HostSelectionMaxAttempts is the maximum number of times host selection will be + reattempted before giving up, at which point the host that was last selected will + be routed to. If unspecified, this will default to retrying once. + format: int64 + type: integer + numRetries: + description: |- + NumRetries is the number of attempts that will be made on failed (and + retriable) requests. If not set, the default value is 1. + format: int32 + type: integer + perTryTimeout: + description: |- + PerTryTimeout is the amount of time after which retry attempt should time out. + If left unspecified, the global route timeout for the request will be used. + Consequently, when using a 5xx based retry policy, a request that times out + will not be retried as the total timeout budget would have been exhausted. + Setting this timeout to 0 will disable it. + type: string + rateLimitedBackOff: + description: |- + RateLimitedBackOff is a configuration of backoff which will be used + when the upstream returns one of the headers configured. + properties: + maxInterval: + default: 300s + description: MaxInterval is a maximal amount of + time which will be taken between retries. + type: string + resetHeaders: + description: |- + ResetHeaders specifies the list of headers (like Retry-After or X-RateLimit-Reset) + to match against the response. Headers are tried in order, and matched + case-insensitive. The first header to be parsed successfully is used. + If no headers match the default exponential BackOff is used instead. + items: + properties: + format: + description: The format of the reset header. + enum: + - Seconds + - UnixTimestamp + type: string + name: + description: The Name of the reset header. + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + required: + - format + - name + type: object + type: array + type: object + retriableRequestHeaders: + description: |- + RetriableRequestHeaders is an HTTP headers which must be present in the request + for retries to be attempted. + items: + description: |- + HeaderMatch describes how to select an HTTP route by matching HTTP request + headers. + properties: + name: + description: |- + Name is the name of the HTTP Header to be matched. 
Name MUST be lower case + as they will be handled with case insensitivity (See https://tools.ietf.org/html/rfc7230#section-3.2). + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + type: + default: Exact + description: Type specifies how to match against + the value of the header. + enum: + - Exact + - Present + - RegularExpression + - Absent + - Prefix + type: string + value: + description: Value is the value of HTTP Header + to be matched. + type: string + required: + - name + type: object + type: array + retriableResponseHeaders: + description: |- + RetriableResponseHeaders is an HTTP response headers that trigger a retry + if present in the response. A retry will be triggered if any of the header + matches the upstream response headers. + items: + description: |- + HeaderMatch describes how to select an HTTP route by matching HTTP request + headers. + properties: + name: + description: |- + Name is the name of the HTTP Header to be matched. Name MUST be lower case + as they will be handled with case insensitivity (See https://tools.ietf.org/html/rfc7230#section-3.2). + maxLength: 256 + minLength: 1 + pattern: ^[a-z0-9!#$%&'*+\-.^_\x60|~]+$ + type: string + type: + default: Exact + description: Type specifies how to match against + the value of the header. + enum: + - Exact + - Present + - RegularExpression + - Absent + - Prefix + type: string + value: + description: Value is the value of HTTP Header + to be matched. + type: string + required: + - name + type: object + type: array + retryOn: + description: |- + RetryOn is a list of conditions which will cause a retry. Available values are: + [5XX, GatewayError, Reset, Retriable4xx, ConnectFailure, EnvoyRatelimited, + RefusedStream, Http3PostConnectFailure, HttpMethodConnect, HttpMethodDelete, + HttpMethodGet, HttpMethodHead, HttpMethodOptions, HttpMethodPatch, + HttpMethodPost, HttpMethodPut, HttpMethodTrace]. + Also, any HTTP status code (500, 503, etc.). + example: + - 5XX + - GatewayError + - Reset + - Retriable4xx + - ConnectFailure + - EnvoyRatelimited + - RefusedStream + - Http3PostConnectFailure + - HttpMethodConnect + - HttpMethodDelete + - HttpMethodGet + - HttpMethodHead + - HttpMethodOptions + - HttpMethodPatch + - HttpMethodPost + - HttpMethodPut + - HttpMethodTrace + - "500" + - "503" + items: + type: string + type: array + type: object + tcp: + description: TCP defines a configuration of retries for + TCP traffic + properties: + maxConnectAttempt: + description: |- + MaxConnectAttempt is a maximal amount of TCP connection attempts + which will be made before giving up + format: int32 + type: integer + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. 
Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshservices.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshservices.yaml new file mode 100644 index 000000000..5ac9cf40b --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshservices.yaml @@ -0,0 +1,218 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshservices.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshService + listKind: MeshServiceList + plural: meshservices + singular: meshservice + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.addresses[0].hostname + name: Hostname + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshService resource. + properties: + identities: + items: + properties: + type: + enum: + - ServiceTag + type: string + value: + type: string + required: + - type + - value + type: object + type: array + ports: + items: + properties: + appProtocol: + default: tcp + description: Protocol identifies a protocol supported by a service. + type: string + name: + type: string + port: + format: int32 + type: integer + targetPort: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + type: array + x-kubernetes-list-map-keys: + - port + - appProtocol + x-kubernetes-list-type: map + selector: + properties: + dataplaneRef: + properties: + name: + type: string + type: object + dataplaneTags: + additionalProperties: + type: string + type: object + type: object + state: + description: |- + State of MeshService. 
Available if there is at least one healthy endpoint. Otherwise, Unavailable. + It's used for cross zone communication to check if we should send traffic to it, when MeshService is aggregated into MeshMultiZoneService. + enum: + - Available + - Unavailable + type: string + type: object + status: + description: Status is the current status of the Kuma MeshService resource. + properties: + addresses: + items: + properties: + hostname: + type: string + hostnameGeneratorRef: + properties: + coreName: + type: string + required: + - coreName + type: object + origin: + type: string + type: object + type: array + dataplaneProxies: + description: Data plane proxies statistics selected by this MeshService. + properties: + connected: + description: Number of data plane proxies connected to the zone + control plane + type: integer + healthy: + description: Number of data plane proxies with all healthy inbounds + selected by this MeshService. + type: integer + total: + description: Total number of data plane proxies. + type: integer + type: object + hostnameGenerators: + items: + properties: + conditions: + description: Conditions is an array of hostname generator conditions. + items: + properties: + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, + Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. 
+ maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + hostnameGeneratorRef: + properties: + coreName: + type: string + required: + - coreName + type: object + required: + - hostnameGeneratorRef + type: object + type: array + tls: + properties: + status: + enum: + - Ready + - NotReady + type: string + type: object + vips: + items: + properties: + ip: + type: string + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtcproutes.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtcproutes.yaml new file mode 100644 index 000000000..9d1d0ad7e --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtcproutes.yaml @@ -0,0 +1,282 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshtcproutes.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshTCPRoute + listKind: MeshTCPRouteList + plural: meshtcproutes + singular: meshtcproute + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshTCPRoute resource. + properties: + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined in-place. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. 
+ type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: |- + To list makes a match between the consumed services and corresponding + configurations + items: + properties: + rules: + description: |- + Rules contains the routing rules applies to a combination of top-level + targetRef and the targetRef in this entry. + items: + properties: + default: + description: |- + Default holds routing rules that can be merged with rules from other + policies. + properties: + backendRefs: + items: + description: BackendRef defines where to forward + traffic. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use + to identify cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + port: + description: Port is only supported when this + ref refers to a real MeshService object + format: int32 + type: integer + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + weight: + default: 1 + minimum: 0 + type: integer + type: object + minItems: 1 + type: array + required: + - backendRefs + type: object + required: + - default + type: object + maxItems: 1 + type: array + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. 
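As a rough illustration of how the MeshTCPRoute schema above is consumed (again, editorial, not chart content), a weighted TCP traffic split could be expressed roughly as follows; the service names, namespace, and weights are hypothetical.

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTCPRoute
metadata:
  name: tcp-split             # placeholder name
  namespace: kuma-system      # assumed system namespace
spec:
  targetRef:
    kind: Mesh
  to:
    - targetRef:
        kind: MeshService
        name: backend         # hypothetical destination service
      rules:
        - default:
            backendRefs:
              - kind: MeshService
                name: backend-v1
                weight: 90
              - kind: MeshService
                name: backend-v2
                weight: 10
```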
+ properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + minItems: 1 + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtimeouts.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtimeouts.yaml new file mode 100644 index 000000000..330873a94 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtimeouts.yaml @@ -0,0 +1,363 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshtimeouts.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshTimeout + listKind: MeshTimeoutList + plural: meshtimeouts + singular: meshtimeout + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshTimeout resource. 
+ properties: + from: + description: From list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of clients referenced in + 'targetRef' + properties: + connectionTimeout: + description: |- + ConnectionTimeout specifies the amount of time proxy will wait for an TCP connection to be established. + Default value is 5 seconds. Cannot be set to 0. + type: string + http: + description: Http provides configuration for HTTP specific + timeouts + properties: + maxConnectionDuration: + description: |- + MaxConnectionDuration is the time after which a connection will be drained and/or closed, + starting from when it was first established. Setting this timeout to 0 will disable it. + Disabled by default. + type: string + maxStreamDuration: + description: |- + MaxStreamDuration is the maximum time that a stream’s lifetime will span. + Setting this timeout to 0 will disable it. Disabled by default. + type: string + requestHeadersTimeout: + description: |- + RequestHeadersTimeout The amount of time that proxy will wait for the request headers to be received. The timer is + activated when the first byte of the headers is received, and is disarmed when the last byte of + the headers has been received. If not specified or set to 0, this timeout is disabled. + Disabled by default. + type: string + requestTimeout: + description: |- + RequestTimeout The amount of time that proxy will wait for the entire request to be received. + The timer is activated when the request is initiated, and is disarmed when the last byte of the request is sent, + OR when the response is initiated. Setting this timeout to 0 will disable it. + Default is 15s. + type: string + streamIdleTimeout: + description: |- + StreamIdleTimeout is the amount of time that proxy will allow a stream to exist with no activity. + Setting this timeout to 0 will disable it. Default is 30m + type: string + type: object + idleTimeout: + description: |- + IdleTimeout is defined as the period in which there are no bytes sent or received on connection + Setting this timeout to 0 will disable it. Be cautious when disabling it because + it can lead to connection leaking. Default value is 1h. + type: string + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + clients. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. 
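For reference, a minimal MeshTimeout manifest using the `from` section described above might look like the following sketch; the name, namespace, and timeout values are placeholders chosen only to match the field types in this schema.

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTimeout
metadata:
  name: inbound-timeouts      # placeholder name
  namespace: kuma-system      # assumed system namespace
spec:
  targetRef:
    kind: Mesh
  from:
    - targetRef:
        kind: Mesh            # traffic from all clients
      default:
        connectionTimeout: 5s
        idleTimeout: 1h
        http:
          requestTimeout: 15s
          streamIdleTimeout: 30m
```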
+ items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + to: + description: To list makes a match between the consumed services and + corresponding configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of destinations referenced in + 'targetRef' + properties: + connectionTimeout: + description: |- + ConnectionTimeout specifies the amount of time proxy will wait for an TCP connection to be established. + Default value is 5 seconds. Cannot be set to 0. + type: string + http: + description: Http provides configuration for HTTP specific + timeouts + properties: + maxConnectionDuration: + description: |- + MaxConnectionDuration is the time after which a connection will be drained and/or closed, + starting from when it was first established. Setting this timeout to 0 will disable it. + Disabled by default. + type: string + maxStreamDuration: + description: |- + MaxStreamDuration is the maximum time that a stream’s lifetime will span. + Setting this timeout to 0 will disable it. Disabled by default. + type: string + requestHeadersTimeout: + description: |- + RequestHeadersTimeout The amount of time that proxy will wait for the request headers to be received. 
The timer is + activated when the first byte of the headers is received, and is disarmed when the last byte of + the headers has been received. If not specified or set to 0, this timeout is disabled. + Disabled by default. + type: string + requestTimeout: + description: |- + RequestTimeout The amount of time that proxy will wait for the entire request to be received. + The timer is activated when the request is initiated, and is disarmed when the last byte of the request is sent, + OR when the response is initiated. Setting this timeout to 0 will disable it. + Default is 15s. + type: string + streamIdleTimeout: + description: |- + StreamIdleTimeout is the amount of time that proxy will allow a stream to exist with no activity. + Setting this timeout to 0 will disable it. Default is 30m + type: string + type: object + idleTimeout: + description: |- + IdleTimeout is defined as the period in which there are no bytes sent or received on connection + Setting this timeout to 0 will disable it. Be cautious when disabling it because + it can lead to connection leaking. Default value is 1h. + type: string + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + destinations. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. 
Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtlses.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtlses.yaml new file mode 100644 index 000000000..4ddbfffcb --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtlses.yaml @@ -0,0 +1,239 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshtlses.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshTLS + listKind: MeshTLSList + plural: meshtlses + singular: meshtls + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshTLS resource. + properties: + from: + description: From list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of clients referenced in + 'targetRef' + properties: + mode: + description: Mode defines the behavior of inbound listeners + with regard to traffic encryption. + enum: + - Permissive + - Strict + type: string + tlsCiphers: + description: TlsCiphers section for providing ciphers specification. + items: + enum: + - ECDHE-ECDSA-AES128-GCM-SHA256 + - ECDHE-ECDSA-AES256-GCM-SHA384 + - ECDHE-ECDSA-CHACHA20-POLY1305 + - ECDHE-RSA-AES128-GCM-SHA256 + - ECDHE-RSA-AES256-GCM-SHA384 + - ECDHE-RSA-CHACHA20-POLY1305 + type: string + type: array + tlsVersion: + description: Version section for providing version specification. + properties: + max: + default: TLSAuto + description: Max defines maximum supported version. + One of `TLSAuto`, `TLS10`, `TLS11`, `TLS12`, `TLS13`. + enum: + - TLSAuto + - TLS10 + - TLS11 + - TLS12 + - TLS13 + type: string + min: + default: TLSAuto + description: Min defines minimum supported version. + One of `TLSAuto`, `TLS10`, `TLS11`, `TLS12`, `TLS13`. + enum: + - TLSAuto + - TLS10 + - TLS11 + - TLS12 + - TLS13 + type: string + type: object + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + clients. 
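A hedged sketch of a MeshTLS policy matching the `mode`, `tlsVersion`, and `tlsCiphers` fields defined above is shown below; the name and namespace are placeholders, and the cipher list is just one valid selection from the enum in this schema.

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTLS
metadata:
  name: strict-tls            # placeholder name
  namespace: kuma-system      # assumed system namespace
spec:
  targetRef:
    kind: Mesh
  from:
    - targetRef:
        kind: Mesh
      default:
        mode: Strict
        tlsVersion:
          min: TLS12
          max: TLS13
        tlsCiphers:
          - ECDHE-ECDSA-AES256-GCM-SHA384
          - ECDHE-RSA-AES256-GCM-SHA384
```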
+ properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined in-place. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. 
Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtraces.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtraces.yaml new file mode 100644 index 000000000..b16244ce6 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtraces.yaml @@ -0,0 +1,283 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshtraces.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshTrace + listKind: MeshTraceList + plural: meshtraces + singular: meshtrace + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshTrace resource. + properties: + default: + description: MeshTrace configuration. + properties: + backends: + description: |- + A one element array of backend definition. + Envoy allows configuring only 1 backend, so the natural way of + representing that would be just one object. Unfortunately due to the + reasons explained in MADR 009-tracing-policy this has to be a one element + array for now. + items: + description: Only one of zipkin, datadog or openTelemetry can + be used. + properties: + datadog: + description: Datadog backend configuration. + properties: + splitService: + default: false + description: |- + Determines if datadog service name should be split based on traffic + direction and destination. For example, with `splitService: true` and a + `backend` service that communicates with a couple of databases, you would + get service names like `backend_INBOUND`, `backend_OUTBOUND_db1`, and + `backend_OUTBOUND_db2` in Datadog. + type: boolean + url: + description: |- + Address of Datadog collector, only host and port are allowed (no paths, + fragments etc.) + type: string + required: + - url + type: object + openTelemetry: + description: OpenTelemetry backend configuration. + properties: + endpoint: + description: Address of OpenTelemetry collector. + example: otel-collector:4317 + minLength: 1 + type: string + required: + - endpoint + type: object + type: + enum: + - Zipkin + - Datadog + - OpenTelemetry + type: string + zipkin: + description: Zipkin backend configuration. + properties: + apiVersion: + default: httpJson + description: |- + Version of the API. 
+ https://github.com/envoyproxy/envoy/blob/v1.22.0/api/envoy/config/trace/v3/zipkin.proto#L66 + enum: + - httpJson + - httpProto + type: string + sharedSpanContext: + default: true + description: |- + Determines whether client and server spans will share the same span + context. + https://github.com/envoyproxy/envoy/blob/v1.22.0/api/envoy/config/trace/v3/zipkin.proto#L63 + type: boolean + traceId128bit: + default: false + description: Generate 128bit traces. + type: boolean + url: + description: Address of Zipkin collector. + type: string + required: + - url + type: object + required: + - type + type: object + maxItems: 1 + type: array + sampling: + description: |- + Sampling configuration. + Sampling is the process by which a decision is made on whether to + process/export a span or not. + properties: + client: + anyOf: + - type: integer + - type: string + default: 100 + description: |- + Target percentage of requests that will be force traced if the + 'x-client-trace-id' header is set. Mirror of client_sampling in Envoy + https://github.com/envoyproxy/envoy/blob/v1.22.0/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto#L127-L133 + Either int or decimal represented as string. + x-kubernetes-int-or-string: true + overall: + anyOf: + - type: integer + - type: string + default: 100 + description: |- + Target percentage of requests will be traced + after all other sampling checks have been applied (client, force tracing, + random sampling). This field functions as an upper limit on the total + configured sampling rate. For instance, setting client to 100 + but overall to 1 will result in only 1% of client requests with + the appropriate headers to be force traced. Mirror of + overall_sampling in Envoy + https://github.com/envoyproxy/envoy/blob/v1.22.0/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto#L142-L150 + Either int or decimal represented as string. + x-kubernetes-int-or-string: true + random: + anyOf: + - type: integer + - type: string + default: 100 + description: |- + Target percentage of requests that will be randomly selected for trace + generation, if not requested by the client or not forced. + Mirror of random_sampling in Envoy + https://github.com/envoyproxy/envoy/blob/v1.22.0/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto#L135-L140 + Either int or decimal represented as string. + x-kubernetes-int-or-string: true + type: object + tags: + description: |- + Custom tags configuration. You can add custom tags to traces based on + headers or literal values. + items: + description: |- + Custom tags configuration. + Only one of literal or header can be used. + properties: + header: + description: Tag taken from a header. + properties: + default: + description: |- + Default value to use if header is missing. + If the default is missing and there is no value the tag will not be + included. + type: string + name: + description: Name of the header. + type: string + required: + - name + type: object + literal: + description: Tag taken from literal value. + type: string + name: + description: Name of the tag. + type: string + required: + - name + type: object + type: array + type: object + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. 
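To make the backend and sampling fields above concrete, a MeshTrace policy pointing at an OpenTelemetry collector could look roughly like the sketch below; the collector address, sampling rate, and tag values are hypothetical.

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: trace-to-otel         # placeholder name
  namespace: kuma-system      # assumed system namespace
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - type: OpenTelemetry
        openTelemetry:
          endpoint: otel-collector:4317   # hypothetical collector address
    sampling:
      random: 10              # trace roughly 10% of requests
    tags:
      - name: team
        literal: backend
```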
+ properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtrafficpermissions.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtrafficpermissions.yaml new file mode 100644 index 000000000..3e38acc06 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_meshtrafficpermissions.yaml @@ -0,0 +1,203 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: meshtrafficpermissions.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: MeshTrafficPermission + listKind: MeshTrafficPermissionList + plural: meshtrafficpermissions + singular: meshtrafficpermission + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .spec.targetRef.kind + name: TargetRef Kind + type: string + - jsonPath: .spec.targetRef.name + name: TargetRef Name + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma MeshTrafficPermission + resource. 
+ properties: + from: + description: From list makes a match between clients and corresponding + configurations + items: + properties: + default: + description: |- + Default is a configuration specific to the group of clients referenced in + 'targetRef' + properties: + action: + description: 'Action defines a behavior for the specified + group of clients:' + enum: + - Allow + - Deny + - AllowWithShadowDeny + type: string + type: object + targetRef: + description: |- + TargetRef is a reference to the resource that represents a group of + clients. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify + cross mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. + items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + required: + - targetRef + type: object + type: array + targetRef: + description: |- + TargetRef is a reference to the resource the policy takes an effect on. + The resource could be either a real store object or virtual resource + defined inplace. + properties: + kind: + description: Kind of the referenced resource + enum: + - Mesh + - MeshSubset + - MeshGateway + - MeshService + - MeshExternalService + - MeshMultiZoneService + - MeshServiceSubset + - MeshHTTPRoute + type: string + labels: + additionalProperties: + type: string + description: |- + Labels are used to select group of MeshServices that match labels. Either Labels or + Name and Namespace can be used. + type: object + mesh: + description: Mesh is reserved for future use to identify cross + mesh resources. + type: string + name: + description: |- + Name of the referenced resource. Can only be used with kinds: `MeshService`, + `MeshServiceSubset` and `MeshGatewayRoute` + type: string + namespace: + description: |- + Namespace specifies the namespace of target resource. If empty only resources in policy namespace + will be targeted. + type: string + proxyTypes: + description: |- + ProxyTypes specifies the data plane types that are subject to the policy. When not specified, + all data plane types are targeted by the policy. 
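Finally, a minimal MeshTrafficPermission manifest using the `action` values defined above might look like the following; the `app: frontend` tag, name, and namespace are placeholders, and the intent (allow one client group, deny the rest) is only an illustration of the schema.

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  name: allow-frontend        # placeholder name
  namespace: kuma-system      # assumed system namespace
spec:
  targetRef:
    kind: Mesh
  from:
    - targetRef:
        kind: MeshSubset
        tags:
          app: frontend       # hypothetical client tag
      default:
        action: Allow
    - targetRef:
        kind: Mesh            # all other clients
      default:
        action: Deny
```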
+ items: + enum: + - Sidecar + - Gateway + type: string + minItems: 1 + type: array + sectionName: + description: |- + SectionName is used to target specific section of resource. + For example, you can target port from MeshService.ports[] by its name. Only traffic to this port will be affected. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags used to select a subset of proxies by tags. Can only be used with kinds + `MeshSubset` and `MeshServiceSubset` + type: object + type: object + type: object + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_proxytemplates.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_proxytemplates.yaml new file mode 100644 index 000000000..78b1d55e4 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_proxytemplates.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: proxytemplates.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ProxyTemplate + listKind: ProxyTemplateList + plural: proxytemplates + singular: proxytemplate + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ProxyTemplate resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_ratelimits.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_ratelimits.yaml new file mode 100644 index 000000000..85f1876eb --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_ratelimits.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: ratelimits.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: RateLimit + listKind: RateLimitList + plural: ratelimits + singular: ratelimit + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. 
+ Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma RateLimit resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_retries.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_retries.yaml new file mode 100644 index 000000000..10a4843e1 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_retries.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: retries.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: Retry + listKind: RetryList + plural: retries + singular: retry + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma Retry resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_serviceinsights.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_serviceinsights.yaml new file mode 100644 index 000000000..827ea521d --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_serviceinsights.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: serviceinsights.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ServiceInsight + listKind: ServiceInsightList + plural: serviceinsights + singular: serviceinsight + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ServiceInsight resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_timeouts.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_timeouts.yaml new file mode 100644 index 000000000..ba78d88c5 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_timeouts.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: timeouts.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: Timeout + listKind: TimeoutList + plural: timeouts + singular: timeout + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma Timeout resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficlogs.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficlogs.yaml new file mode 100644 index 000000000..ece8562e5 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficlogs.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: trafficlogs.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: TrafficLog + listKind: TrafficLogList + plural: trafficlogs + singular: trafficlog + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma TrafficLog resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficpermissions.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficpermissions.yaml new file mode 100644 index 000000000..9c79605af --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficpermissions.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: trafficpermissions.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: TrafficPermission + listKind: TrafficPermissionList + plural: trafficpermissions + singular: trafficpermission + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma TrafficPermission resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficroutes.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficroutes.yaml new file mode 100644 index 000000000..5bdd3ac85 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_trafficroutes.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: trafficroutes.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: TrafficRoute + listKind: TrafficRouteList + plural: trafficroutes + singular: trafficroute + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma TrafficRoute resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_traffictraces.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_traffictraces.yaml new file mode 100644 index 000000000..c224ea526 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_traffictraces.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: traffictraces.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: TrafficTrace + listKind: TrafficTraceList + plural: traffictraces + singular: traffictrace + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma TrafficTrace resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_virtualoutbounds.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_virtualoutbounds.yaml new file mode 100644 index 000000000..c4372dd0b --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_virtualoutbounds.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: virtualoutbounds.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: VirtualOutbound + listKind: VirtualOutboundList + plural: virtualoutbounds + singular: virtualoutbound + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma VirtualOutbound resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneegresses.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneegresses.yaml new file mode 100644 index 000000000..143aaafdb --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneegresses.yaml @@ -0,0 +1,56 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: zoneegresses.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ZoneEgress + listKind: ZoneEgressList + plural: zoneegresses + singular: zoneegress + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: Zone name + jsonPath: .spec.zone + name: zone + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ZoneEgress resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneegressinsights.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneegressinsights.yaml new file mode 100644 index 000000000..05746b39a --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneegressinsights.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: zoneegressinsights.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ZoneEgressInsight + listKind: ZoneEgressInsightList + plural: zoneegressinsights + singular: zoneegressinsight + scope: Namespaced + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. 
+ Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ZoneEgressInsight resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneingresses.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneingresses.yaml new file mode 100644 index 000000000..d02c5b35b --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneingresses.yaml @@ -0,0 +1,56 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: zoneingresses.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ZoneIngress + listKind: ZoneIngressList + plural: zoneingresses + singular: zoneingress + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: Zone name + jsonPath: .spec.zone + name: zone + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ZoneIngress resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true + subresources: {} diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneingressinsights.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneingressinsights.yaml new file mode 100644 index 000000000..ded86e6c2 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneingressinsights.yaml @@ -0,0 +1,51 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: zoneingressinsights.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ZoneIngressInsight + listKind: ZoneIngressInsightList + plural: zoneingressinsights + singular: zoneingressinsight + scope: Namespaced + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ZoneIngressInsight + resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneinsights.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneinsights.yaml new file mode 100644 index 000000000..aad82d4be --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_zoneinsights.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: zoneinsights.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: ZoneInsight + listKind: ZoneInsightList + plural: zoneinsights + singular: zoneinsight + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma ZoneInsight resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/crds/kuma.io_zones.yaml b/charts/kuma/kuma/2.9.0/crds/kuma.io_zones.yaml new file mode 100644 index 000000000..12022fce9 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/crds/kuma.io_zones.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.3 + name: zones.kuma.io +spec: + group: kuma.io + names: + categories: + - kuma + kind: Zone + listKind: ZoneList + plural: zones + singular: zone + scope: Cluster + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + mesh: + description: |- + Mesh is the name of the Kuma mesh this resource belongs to. + It may be omitted for cluster-scoped resources. + type: string + metadata: + type: object + spec: + description: Spec is the specification of the Kuma Zone resource. + x-kubernetes-preserve-unknown-fields: true + type: object + served: true + storage: true diff --git a/charts/kuma/kuma/2.9.0/templates/NOTES.txt b/charts/kuma/kuma/2.9.0/templates/NOTES.txt new file mode 100644 index 000000000..228ac26e7 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/NOTES.txt @@ -0,0 +1,42 @@ +{{ .Chart.Name }} has been installed! + +Your release is named '{{ .Release.Name }}'. + +You can access the control-plane via either the GUI, kubectl, the HTTP API, or the kumactl CLI. +{{- if .Values.noHelmHooks }} + +------------------------------------------------------------------------------- + + WARNING + + When the "noHelmHooks" value is provided, you will need to manually delete + the "ValidatingWebhookConfiguration" responsible for validating {{ include "kuma.name" . }} resources + before you can uninstall Helm release. This is because the validation provided + by the webhook is not necessary during the release removal and might potentially + even prevent you from doing it. You can do this by running the following command: + + kubectl delete ValidatingWebhookConfiguration {{ include "kuma.name" . }}-validating-webhook-configuration + + WARNING + + When the "noHelmHooks" value is set, Helm will not automatically update + the CustomResourceDefinitions (CRDs) when upgrading release. You must manually + update the CRDs if the new {{ include "kuma.name" . }} version has changes + to the CRDs. You can achieve this by calling the following command: + + kumactl install crds --no-config | kubectl apply -f + +{{- if and .Values.experimental.ebpf.enabled (not .Values.cni.enabled) }} + + WARNING + + When the "noHelmHooks" value is set, Helm will not automatically uninstall + the eBPF resources. You will need to manually uninstall these resources after + uninstalling Helm release. To do this, run the following command: + + kumactl uninstall ebpf --cleanup-image-registry {{ .Values.global.image.registry }} --cleanup-image-repository {{ .Values.dataPlane.initImage.repository }} + +{{- end }} + +------------------------------------------------------------------------------- +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/_helpers.tpl b/charts/kuma/kuma/2.9.0/templates/_helpers.tpl new file mode 100644 index 000000000..a33fa04dc --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/_helpers.tpl @@ -0,0 +1,432 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "kuma.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +This is the Kuma version the chart is intended to be used with. +*/}} +{{- define "kuma.appVersion" -}} +{{- .Chart.AppVersion -}} +{{- end }} + +{{/* +This is only used in the `kuma.formatImage` function below. 
+*/}} +{{- define "kuma.defaultRegistry" -}} +docker.io/kumahq +{{- end }} + +{{- define "kuma.product" -}} +Kuma +{{- end }} + +{{- define "kuma.tagPrefix" -}} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "kuma.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "kuma.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{- define "kuma.controlPlane.serviceName" -}} +{{- $defaultSvcName := printf "%s-control-plane" (include "kuma.name" .) -}} +{{ printf "%s" (default $defaultSvcName .Values.controlPlane.service.name) }} +{{- end }} + +{{- define "kuma.controlPlane.globalZoneSync.serviceName" -}} +{{- $defaultSvcName := printf "%s-global-zone-sync" (include "kuma.name" .) -}} +{{ printf "%s" (default $defaultSvcName .Values.controlPlane.globalZoneSyncService.name) }} +{{- end }} + +{{- define "kuma.ingress.serviceName" -}} +{{- $defaultSvcName := printf "%s-ingress" (include "kuma.name" .) -}} +{{ printf "%s" (default $defaultSvcName .Values.ingress.service.name) }} +{{- end }} + +{{- define "kuma.egress.serviceName" -}} +{{- $defaultSvcName := printf "%s-egress" (include "kuma.name" .) -}} +{{ printf "%s" (default $defaultSvcName .Values.egress.service.name) }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "kuma.labels" -}} +helm.sh/chart: {{ include "kuma.chart" . }} +{{ include "kuma.selectorLabels" . }} +{{- if (include "kuma.appVersion" .) }} +app.kubernetes.io/version: {{ (include "kuma.appVersion" .) | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "kuma.selectorLabels" -}} +app.kubernetes.io/name: {{ include "kuma.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +CNI labels +*/}} +{{- define "kuma.cniLabels" -}} +app: {{ include "kuma.name" . }}-cni +{{ include "kuma.labels" . }} +{{- end }} + +{{/* +control plane labels +*/}} +{{- define "kuma.cpLabels" -}} +app: {{ include "kuma.name" . }}-control-plane +{{- range $key, $value := $.Values.controlPlane.extraLabels }} +{{ $key | quote }}: {{ $value | quote }} +{{- end }} +{{ include "kuma.labels" . }} +{{- end }} + +{{/* +control plane deployment annotations +*/}} +{{- define "kuma.cpDeploymentAnnotations" -}} +{{- range $key, $value := $.Values.controlPlane.deploymentAnnotations }} +{{ $key | quote }}: {{ $value | quote }} +{{- end }} +{{- end }} + +{{/* +ingress labels +*/}} +{{- define "kuma.ingressLabels" -}} +app: {{ include "kuma.name" . }}-ingress +{{- range $key, $value := .Values.ingress.extraLabels }} +{{ $key | quote }}: {{ $value | quote }} +{{- end }} +{{ include "kuma.labels" . }} +{{- end }} + +{{/* +egress labels +*/}} +{{- define "kuma.egressLabels" -}} +app: {{ include "kuma.name" . 
}}-egress +{{ range $key, $value := .Values.egress.extraLabels }} +{{ $key | quote }}: {{ $value | quote }} +{{ end }} +{{- include "kuma.labels" . }} +{{- end }} + +{{/* +CNI selector labels +*/}} +{{- define "kuma.cniSelectorLabels" -}} +app: {{ include "kuma.name" . }}-cni +{{ include "kuma.selectorLabels" . }} +{{- end }} + +{{/* +params: { dns: { policy?, config: {nameservers?, searches?}} } +returns: formatted dnsConfig +*/}} +{{- define "kuma.dnsConfig" -}} +{{- $dns := .dns }} +{{- if $dns.policy }} +dnsPolicy: {{ $dns.policy }} +{{- end }} +{{- if or (gt (len $dns.config.nameservers) 0) (gt (len $dns.config.searches) 0) }} +dnsConfig: + {{- if gt (len $dns.config.nameservers) 0 }} + nameservers: + {{- range $nameserver := $dns.config.nameservers }} + - {{ $nameserver }} + {{- end }} + {{- end }} + {{- if gt (len $dns.config.searches) 0 }} + searches: + {{- range $search := $dns.config.searches }} + - {{ $search }} + {{- end }} + {{- end }} +{{- end }} +{{- end -}} + +{{/* +params: { image: { registry?, repository, tag? }, root: $ } +returns: formatted image string +*/}} +{{- define "kuma.formatImage" -}} +{{- $img := .image }} +{{- $root := .root }} +{{- $registry := ($img.registry | default $root.Values.global.image.registry) -}} +{{- $repo := ($img.repository | required "Must specify image repository") -}} +{{- $product := (include "kuma.product" .) }} +{{- $tagPrefix := (include "kuma.tagPrefix" .) }} +{{- $expectedVersion := (include "kuma.appVersion" $root) }} +{{- if + and + $root.Values.global.image.tag + (ne $root.Values.global.image.tag (include "kuma.appVersion" $root)) + (eq $root.Values.global.image.registry (include "kuma.defaultRegistry" .)) +-}} +{{- fail ( + printf "This chart only supports %s version %q but %sglobal.image.tag is set to %q. Set %sglobal.image.tag to %q or skip this check by setting %s*.image.tag for each individual component." + $product $expectedVersion $tagPrefix $root.Values.global.image.tag $tagPrefix $expectedVersion $tagPrefix +) -}} +{{- end -}} +{{- $defaultTag := ($root.Values.global.image.tag | default (include "kuma.appVersion" $root)) -}} +{{- $tag := ($img.tag | default $defaultTag) -}} +{{- printf "%s/%s:%s" $registry $repo $tag -}} +{{- end -}} + +{{- define "kuma.parentEnv" -}} +{{- end -}} + +{{- define "kuma.parentSecrets" -}} +{{- end -}} + +{{- define "kuma.pluginPoliciesEnabled" -}} +{{- $list := list -}} +{{- range $k, $v := .Values.plugins.policies -}} +{{- if $v -}} +{{- $list = append $list (printf "%s" $k) -}} +{{- end -}} +{{- end -}} +{{ join "," $list }} +{{- end -}} + +{{- define "kuma.transparentProxyConfigMapName" -}} +{{- if .Values.transparentProxy.configMap.name }} +{{- .Values.transparentProxy.configMap.name | trunc 253 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-transparent-proxy-config" .Chart.Name }} +{{- end }} +{{- end }} + +{{- define "kuma.defaultEnv" -}} +env: +{{ include "kuma.parentEnv" . }} +- name: KUMA_ENVIRONMENT + value: "kubernetes" +- name: KUMA_STORE_TYPE + value: "kubernetes" +- name: KUMA_STORE_KUBERNETES_SYSTEM_NAMESPACE + value: {{ .Release.Namespace | quote }} +- name: KUMA_RUNTIME_KUBERNETES_CONTROL_PLANE_SERVICE_NAME + value: {{ include "kuma.controlPlane.serviceName" . 
}} +- name: KUMA_GENERAL_TLS_CERT_FILE + value: /var/run/secrets/kuma.io/tls-cert/tls.crt +- name: KUMA_GENERAL_TLS_KEY_FILE + value: /var/run/secrets/kuma.io/tls-cert/tls.key +{{- if eq .Values.controlPlane.mode "zone" }} +- name: KUMA_MULTIZONE_ZONE_GLOBAL_ADDRESS + value: {{ .Values.controlPlane.kdsGlobalAddress }} +{{- end }} +- name: KUMA_DP_SERVER_HDS_ENABLED + value: "false" +- name: KUMA_API_SERVER_READ_ONLY + value: "true" +- name: KUMA_RUNTIME_KUBERNETES_ADMISSION_SERVER_PORT + value: {{ .Values.controlPlane.admissionServerPort | default "5443" | quote }} +- name: KUMA_RUNTIME_KUBERNETES_ADMISSION_SERVER_CERT_DIR + value: /var/run/secrets/kuma.io/tls-cert +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_CNI_ENABLED + value: {{ .Values.cni.enabled | quote }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_SIDECAR_CONTAINER_IMAGE + value: {{ include "kuma.formatImage" (dict "image" .Values.dataPlane.image "root" $) | quote }} +- name: KUMA_INJECTOR_INIT_CONTAINER_IMAGE + value: {{ include "kuma.formatImage" (dict "image" .Values.dataPlane.initImage "root" $) | quote }} +{{- if .Values.dataPlane.dnsLogging }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_BUILTIN_DNS_LOGGING + value: "true" +{{- end }} +{{- if and .Values.transparentProxy.configMap.enabled .Values.transparentProxy.configMap.config }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_TRANSPARENT_PROXY_CONFIGMAP_NAME + value: {{ include "kuma.transparentProxyConfigMapName" . | quote }} +{{- end }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_CA_CERT_FILE + value: /var/run/secrets/kuma.io/tls-cert/ca.crt +- name: KUMA_DEFAULTS_SKIP_MESH_CREATION + value: {{ .Values.controlPlane.defaults.skipMeshCreation | quote }} +- name: KUMA_MODE + value: {{ .Values.controlPlane.mode | quote }} +{{- if .Values.controlPlane.zone }} +- name: KUMA_MULTIZONE_ZONE_NAME + value: {{ .Values.controlPlane.zone | quote }} +{{- end }} +{{- if .Values.controlPlane.tls.apiServer.secretName }} +- name: KUMA_API_SERVER_HTTPS_TLS_CERT_FILE + value: /var/run/secrets/kuma.io/api-server-tls-cert/tls.crt +- name: KUMA_API_SERVER_HTTPS_TLS_KEY_FILE + value: /var/run/secrets/kuma.io/api-server-tls-cert/tls.key +{{- end }} +{{- if .Values.controlPlane.tls.apiServer.clientCertsSecretName }} +- name: KUMA_API_SERVER_AUTH_CLIENT_CERTS_DIR + value: /var/run/secrets/kuma.io/api-server-client-certs/ +{{- end }} +{{- if and (eq .Values.controlPlane.mode "global") (or .Values.controlPlane.tls.kdsGlobalServer.secretName .Values.controlPlane.tls.kdsGlobalServer.create) }} +- name: KUMA_MULTIZONE_GLOBAL_KDS_TLS_CERT_FILE + value: /var/run/secrets/kuma.io/kds-server-tls-cert/tls.crt +- name: KUMA_MULTIZONE_GLOBAL_KDS_TLS_KEY_FILE + value: /var/run/secrets/kuma.io/kds-server-tls-cert/tls.key +{{- end }} +{{- if and (eq .Values.controlPlane.mode "zone") (or .Values.controlPlane.tls.kdsZoneClient.secretName .Values.controlPlane.tls.kdsZoneClient.create) }} +- name: KUMA_MULTIZONE_ZONE_KDS_ROOT_CA_FILE + value: /var/run/secrets/kuma.io/kds-client-tls-cert/ca.crt +{{- end }} +- name: KUMA_API_SERVER_AUTHN_LOCALHOST_IS_ADMIN + value: "false" +- name: KUMA_RUNTIME_KUBERNETES_ALLOWED_USERS + value: "system:serviceaccount:{{ .Release.Namespace }}:{{ include "kuma.name" . 
}}-control-plane" +{{- if .Values.experimental.sidecarContainers }} +- name: KUMA_EXPERIMENTAL_SIDECAR_CONTAINERS + value: "true" +{{- end }} +{{- if .Values.cni.enabled }} +- name: KUMA_RUNTIME_KUBERNETES_NODE_TAINT_CONTROLLER_ENABLED + value: "true" +- name: KUMA_RUNTIME_KUBERNETES_NODE_TAINT_CONTROLLER_CNI_APP + value: "{{ include "kuma.name" . }}-cni" +- name: KUMA_RUNTIME_KUBERNETES_NODE_TAINT_CONTROLLER_CNI_NAMESPACE + value: {{ .Values.cni.namespace }} +{{- end }} +{{- if .Values.experimental.ebpf.enabled }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_EBPF_ENABLED + value: "true" +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_EBPF_INSTANCE_IP_ENV_VAR_NAME + value: {{ .Values.experimental.ebpf.instanceIPEnvVarName }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_EBPF_BPFFS_PATH + value: {{ .Values.experimental.ebpf.bpffsPath }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_EBPF_CGROUP_PATH + value: {{ .Values.experimental.ebpf.cgroupPath }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_EBPF_TC_ATTACH_IFACE + value: {{ .Values.experimental.ebpf.tcAttachIface }} +- name: KUMA_RUNTIME_KUBERNETES_INJECTOR_EBPF_PROGRAMS_SOURCE_PATH + value: {{ .Values.experimental.ebpf.programsSourcePath }} +{{- end }} +{{- if .Values.controlPlane.tls.kdsZoneClient.skipVerify }} +- name: KUMA_MULTIZONE_ZONE_KDS_TLS_SKIP_VERIFY + value: "true" +{{- end }} +- name: KUMA_PLUGIN_POLICIES_ENABLED + value: {{ include "kuma.pluginPoliciesEnabled" . | quote }} +{{- if .Values.controlPlane.supportGatewaySecretsInAllNamespaces }} +- name: KUMA_RUNTIME_KUBERNETES_SUPPORT_GATEWAY_SECRETS_IN_ALL_NAMESPACES + value: true +{{- end }} +{{- end }} + +{{- define "kuma.controlPlane.tls.general.caSecretName" -}} +{{ .Values.controlPlane.tls.general.caSecretName | default .Values.controlPlane.tls.general.secretName | default (printf "%s-tls-cert" (include "kuma.name" .)) | quote }} +{{- end }} + +{{- define "kuma.universal.defaultEnv" -}} +{{ if eq .Values.controlPlane.mode "zone" }} + {{ if .Values.ingress.enabled }} + {{ fail "Can't have ingress.enabled when running controlPlane.mode=='universal'" }} + {{ end }} + {{ if .Values.egress.enabled }} + {{ fail "Can't have egress.enabled when running controlPlane.mode=='universal'" }} + {{ end }} +{{ end }} + +env: +- name: KUMA_PLUGIN_POLICIES_ENABLED + value: {{ include "kuma.pluginPoliciesEnabled" . 
| quote }} +- name: KUMA_GENERAL_WORK_DIR + value: "/tmp/kuma" +- name: KUMA_ENVIRONMENT + value: "universal" +- name: KUMA_STORE_TYPE + value: "postgres" +- name: KUMA_STORE_POSTGRES_PORT + value: "{{ .Values.postgres.port }}" +- name: KUMA_DEFAULTS_SKIP_MESH_CREATION + value: {{ .Values.controlPlane.defaults.skipMeshCreation | quote }} +{{ if and (eq .Values.controlPlane.mode "zone") .Values.controlPlane.tls.general.secretName }} +- name: KUMA_GENERAL_TLS_CERT_FILE + value: /var/run/secrets/kuma.io/tls-cert/tls.crt +- name: KUMA_GENERAL_TLS_KEY_FILE + value: /var/run/secrets/kuma.io/tls-cert/tls.key +{{ end }} +- name: KUMA_MODE + value: {{ .Values.controlPlane.mode | quote }} +{{- if eq .Values.controlPlane.mode "zone" }} +- name: KUMA_MULTIZONE_ZONE_GLOBAL_ADDRESS + value: {{ .Values.controlPlane.kdsGlobalAddress }} +{{- end }} +{{- if .Values.controlPlane.zone }} +- name: KUMA_MULTIZONE_ZONE_NAME + value: {{ .Values.controlPlane.zone | quote }} +{{- end }} +{{- if and (eq .Values.controlPlane.mode "zone") (or .Values.controlPlane.tls.kdsZoneClient.secretName .Values.controlPlane.tls.kdsZoneClient.create) }} +- name: KUMA_MULTIZONE_ZONE_KDS_ROOT_CA_FILE + value: /var/run/secrets/kuma.io/kds-client-tls-cert/ca.crt +{{- end }} +{{- if .Values.controlPlane.tls.kdsZoneClient.skipVerify }} +- name: KUMA_MULTIZONE_ZONE_KDS_TLS_SKIP_VERIFY + value: "true" +{{- end }} +{{- if .Values.controlPlane.tls.apiServer.secretName }} +- name: KUMA_API_SERVER_HTTPS_TLS_CERT_FILE + value: /var/run/secrets/kuma.io/api-server-tls-cert/tls.crt +- name: KUMA_API_SERVER_HTTPS_TLS_KEY_FILE + value: /var/run/secrets/kuma.io/api-server-tls-cert/tls.key +{{- end }} +{{- if .Values.controlPlane.tls.apiServer.clientCertsSecretName }} +- name: KUMA_API_SERVER_AUTH_CLIENT_CERTS_DIR + value: /var/run/secrets/kuma.io/api-server-client-certs/ +{{- end }} +{{- if .Values.controlPlane.tls.kdsGlobalServer.secretName }} +- name: KUMA_MULTIZONE_GLOBAL_KDS_TLS_CERT_FILE + value: /var/run/secrets/kuma.io/kds-server-tls-cert/tls.crt +- name: KUMA_MULTIZONE_GLOBAL_KDS_TLS_KEY_FILE + value: /var/run/secrets/kuma.io/kds-server-tls-cert/tls.key +{{- end }} +- name: KUMA_STORE_POSTGRES_TLS_MODE + value: {{ .Values.postgres.tls.mode }} +{{- if or (eq .Values.postgres.tls.mode "verifyCa") (eq .Values.postgres.tls.mode "verifyFull") }} +{{- if empty .Values.postgres.tls.caSecretName }} +{{ fail "if mode is 'verifyCa' or 'verifyFull' then you must provide .Values.postgres.tls.caSecretName" }} +{{- end }} +{{- if .Values.postgres.tls.secretName }} +- name: KUMA_STORE_POSTGRES_TLS_CERT_PATH + value: /var/run/secrets/kuma.io/postgres-tls-cert/tls.crt +- name: KUMA_STORE_POSTGRES_TLS_KEY_PATH + value: /var/run/secrets/kuma.io/postgres-tls-cert/tls.key +{{- end }} +{{- if .Values.postgres.tls.caSecretName }} +- name: KUMA_STORE_POSTGRES_TLS_CA_PATH + value: /var/run/secrets/kuma.io/postgres-tls-cert/ca.crt +{{- end }} +{{- if .Values.postgres.tls.disableSSLSNI }} +- name: KUMA_STORE_POSTGRES_TLS_DISABLE_SSLSNI + value: {{ .Values.postgres.tls.disableSSLSNI }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cni-configmap.yaml b/charts/kuma/kuma/2.9.0/templates/cni-configmap.yaml new file mode 100644 index 000000000..8d27de9ef --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cni-configmap.yaml @@ -0,0 +1,22 @@ +{{- if and .Values.cni.enabled (not .Values.experimental.ebpf.enabled) }} +kind: ConfigMap +apiVersion: v1 +metadata: + name: {{ include "kuma.name" . 
}}-cni-config + namespace: {{ .Values.cni.namespace }} + labels: {{ include "kuma.cniLabels" . | nindent 4 }} +data: + # The CNI network configuration to add to the plugin chain on each node. + cni_network_config: |- + { + "cniVersion": "0.3.1", + "name": "kuma-cni", + "type": "kuma-cni", + "log_level": "{{ .Values.cni.logLevel }}", + "kubernetes": { + "kubeconfig": "__KUBECONFIG_FILEPATH__", + "cni_bin_dir": "{{ .Values.cni.binDir }}", + "exclude_namespaces": [ "kube-system" ] + } + } + {{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cni-daemonset.yaml b/charts/kuma/kuma/2.9.0/templates/cni-daemonset.yaml new file mode 100644 index 000000000..b5d8db761 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cni-daemonset.yaml @@ -0,0 +1,152 @@ +{{- if .Values.cni.enabled }} +kind: DaemonSet +apiVersion: apps/v1 +metadata: + name: {{ include "kuma.name" . }}-cni-node + namespace: {{ .Values.cni.namespace }} + annotations: + ignore-check.kube-linter.io/run-as-non-root: "The container installs a CNI plugin" + labels: {{- include "kuma.cniLabels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "kuma.cniSelectorLabels" . | nindent 6 }} + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + template: + metadata: + labels: + {{- include "kuma.cniSelectorLabels" . | nindent 8 }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/cni-configmap.yaml") . | sha256sum }} + {{- range $key, $value := .Values.cni.podAnnotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + spec: + # This, along with the CriticalAddonsOnly toleration below, + # marks the pod as a critical add-on, ensuring it gets + # priority scheduling and that its resources are reserved + # if it ever gets evicted. + priorityClassName: system-node-critical + {{- with .Values.cni.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.cni.tolerations }} + tolerations: + {{ toYaml . | nindent 8 }} + {{- end }} + tolerations: + # Make sure kuma-cni-node gets scheduled on all nodes. + - effect: NoSchedule + operator: Exists + # Mark the pod as a critical add-on for rescheduling. + - key: CriticalAddonsOnly + operator: Exists + - effect: NoExecute + operator: Exists + serviceAccountName: {{ include "kuma.name" . }}-cni + # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force + # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. 
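+      # Note: the short grace period below is generally safe for this DaemonSet,
+      # since the CNI binary and config are copied onto the host via the hostPath
+      # volumes further down, so node networking keeps working while the installer
+      # pod is replaced.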
+ terminationGracePeriodSeconds: 5 + securityContext: + {{- toYaml .Values.cni.podSecurityContext | trim | nindent 8 }} + containers: + - name: install-cni + imagePullPolicy: {{ .Values.cni.image.imagePullPolicy }} + {{- if not .Values.experimental.ebpf.enabled }} + image: {{ include "kuma.formatImage" (dict "image" .Values.cni.image "root" $) | quote }} + readinessProbe: + initialDelaySeconds: {{ .Values.cni.delayStartupSeconds }} + exec: + command: + - cat + - /tmp/ready + command: [ "sh", "-c", "--" ] + args: [ "sleep {{.Values.cni.delayStartupSeconds}} && exec /install-cni" ] + {{- else }} + {{- with .Values.cni.experimental.imageEbpf }} + image: {{ printf "%s/%s:%s" .registry .repository .tag | quote }} + {{- end }} + args: + - /app/mbctl + - --mode=kuma + - --use-reconnect=true + - --cni-mode=true + {{- if eq .Values.cni.logLevel "debug" }} + - --debug=true + {{- end }} + lifecycle: + preStop: + exec: + command: + - make + - --keep-going + - clean + {{- end }} + securityContext: + {{- toYaml .Values.cni.containerSecurityContext | trim | nindent 12 }} + {{- if .Values.experimental.ebpf.enabled }} + privileged: true + {{- end }} + {{- if not .Values.experimental.ebpf.enabled }} + env: + # Name of the CNI config file to create. + - name: CNI_CONF_NAME + value: "{{ .Values.cni.confName }}" + # The CNI network config to install on each node. + - name: CNI_NETWORK_CONFIG + valueFrom: + configMapKeyRef: + name: {{ include "kuma.name" . }}-cni-config + key: cni_network_config + - name: CNI_NET_DIR + value: "{{ .Values.cni.netDir }}" + # If true, deploy as a chained CNI plugin, otherwise deploy as a standalone CNI + - name: CHAINED_CNI_PLUGIN + value: "{{ .Values.cni.chained }}" + - name: CNI_LOG_LEVEL + value: "{{ .Values.cni.logLevel }}" + {{- end }} + resources: + {{- toYaml .Values.cni.resources | trim | nindent 12 }} + volumeMounts: + - mountPath: /host/opt/cni/bin + name: cni-bin-dir + - mountPath: /host/etc/cni/net.d + name: cni-net-dir + {{- if .Values.experimental.ebpf.enabled }} + - mountPath: /sys/fs/cgroup + name: sys-fs-cgroup + - mountPath: /host/proc + name: host-proc + - mountPath: /host/var/run + name: host-var-run + mountPropagation: Bidirectional + {{- end }} + - name: tmp + mountPath: /tmp + volumes: + # Used to install CNI. + - name: cni-bin-dir + hostPath: + path: {{ .Values.cni.binDir }} + - name: cni-net-dir + hostPath: + path: {{ .Values.cni.netDir }} + {{- if .Values.experimental.ebpf.enabled }} + - hostPath: + path: /var/run + name: host-var-run + - hostPath: + path: /sys/fs/cgroup + name: sys-fs-cgroup + - hostPath: + path: /proc + name: host-proc + {{- end }} + - name: tmp + emptyDir: {} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cni-rbac.yaml b/charts/kuma/kuma/2.9.0/templates/cni-rbac.yaml new file mode 100644 index 000000000..07af2b215 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cni-rbac.yaml @@ -0,0 +1,51 @@ +{{- if .Values.cni.enabled }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "kuma.name" . }}-cni + namespace: {{ .Values.cni.namespace }} + labels: {{ include "kuma.cniLabels" . | nindent 4 }} +{{- with .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- range . }} + - name: {{ . | quote }} + {{- end }} +{{- end }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "kuma.name" . }}-cni + labels: + {{ include "kuma.cniLabels" . 
| nindent 4 }} +rules: + - apiGroups: [""] + resources: + - nodes + verbs: + - get + - apiGroups: [""] + resources: + - pods + verbs: + - get + {{- if .Values.experimental.ebpf.enabled }} + - list + - watch + {{- end }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "kuma.name" . }}-cni + labels: + {{ include "kuma.cniLabels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "kuma.name" . }}-cni +subjects: + - kind: ServiceAccount + name: {{ include "kuma.name" . }}-cni + namespace: {{ .Values.cni.namespace }} + {{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-configmap.yaml b/charts/kuma/kuma/2.9.0/templates/cp-configmap.yaml new file mode 100644 index 000000000..b2c94ed4d --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-configmap.yaml @@ -0,0 +1,46 @@ +{{ $kumaCpLabels := include "kuma.cpLabels" . }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "kuma.name" . }}-control-plane-config + namespace: {{ .Release.Namespace }} + labels: {{ $kumaCpLabels | nindent 4 }} +data: + config.yaml: | + # use this file to override default configuration of `kuma-cp` + # + # see conf/kuma-cp.conf.yml for available settings + {{ if .Values.controlPlane.config }} + {{ .Values.controlPlane.config | nindent 4 }} + {{ end }} + +{{- $releaseNamespace := .Release.Namespace}} +{{- range $extraConfigMap := .Values.controlPlane.extraConfigMaps }} +{{- if $extraConfigMap.values }} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ $extraConfigMap.name }} + namespace: {{ $releaseNamespace }} + labels: {{ $kumaCpLabels | nindent 4 }} +data: + {{- range $fileName, $fileContents := $extraConfigMap.values }} + {{- $fileName | nindent 2 }}: | + {{- $fileContents | nindent 4 }} + {{- end }} +{{- end }} +{{- end }} +{{- if and .Values.transparentProxy.configMap.enabled .Values.transparentProxy.configMap.config }} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "kuma.transparentProxyConfigMapName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- $kumaCpLabels | nindent 4 }} +data: + config.yaml: | + {{- .Values.transparentProxy.configMap.config | toYaml | nindent 4 }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-deployment.yaml b/charts/kuma/kuma/2.9.0/templates/cp-deployment.yaml new file mode 100644 index 000000000..1111b149b --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-deployment.yaml @@ -0,0 +1,412 @@ +{{ $kdsGlobalServerTLSSecretName := "" }} +{{ if eq .Values.controlPlane.mode "global" }} + {{ $kdsGlobalServerTLSSecretName = .Values.controlPlane.tls.kdsGlobalServer.secretName }} + {{ if and .Values.controlPlane.tls.kdsGlobalServer.create (not $kdsGlobalServerTLSSecretName) }} + {{ $kdsGlobalServerTLSSecretName = print (include "kuma.name" .) "-kds-global-server-tls" }} + {{ end }} +{{ end }} + +{{ $kdsZoneClientTLSSecretName := "" }} +{{ if eq .Values.controlPlane.mode "zone" }} + {{ $kdsZoneClientTLSSecretName = .Values.controlPlane.tls.kdsZoneClient.secretName }} + {{ if and .Values.controlPlane.tls.kdsZoneClient.create (not $kdsZoneClientTLSSecretName) }} + {{ $kdsZoneClientTLSSecretName = print (include "kuma.name" .) 
"-kds-zone-client-tls" }} + {{ end }} +{{ end }} + +{{ if not (or (eq .Values.controlPlane.mode "zone") (eq .Values.controlPlane.mode "global") (eq .Values.controlPlane.mode "standalone")) }} + {{ $msg := printf "controlPlane.mode invalid got:'%s' supported values: global,zone,standalone" .Values.controlPlane.mode }} + {{ fail $msg }} +{{ end }} +{{ if eq .Values.controlPlane.mode "zone" }} + {{ if not (empty .Values.controlPlane.zone) }} + {{ if gt (len .Values.controlPlane.zone) 253 }} + {{ fail "controlPlane.zone must be no more than 253 characters" }} + {{ else }} + {{ if not (regexMatch "^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$" .Values.controlPlane.zone) }} + {{ fail "controlPlane.zone must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character" }} + {{ end }} + {{ end }} + {{ end }} + {{ if not (empty .Values.controlPlane.kdsGlobalAddress) }} + {{ $url := urlParse .Values.controlPlane.kdsGlobalAddress }} + {{ if not (or (eq $url.scheme "grpcs") (eq $url.scheme "grpc")) }} + {{ $msg := printf "controlPlane.kdsGlobalAddress must be a url with scheme grpcs:// or grpc:// got:'%s'" .Values.controlPlane.kdsGlobalAddress }} + {{ fail $msg }} + {{ end }} + {{ end }} +{{ else }} + {{ if not (empty .Values.controlPlane.zone) }} + {{ fail "Can't specify a controlPlane.zone when controlPlane.mode!='zone'" }} + {{ end }} + {{ if not (empty .Values.controlPlane.kdsGlobalAddress) }} + {{ fail "Can't specify a controlPlane.kdsGlobalAddress when controlPlane.mode!='zone'" }} + {{ end }} +{{ end }} + +{{- $defaultEnv := include "kuma.defaultEnv" . | fromYaml | pluck "env" | first }} +{{- if eq .Values.controlPlane.environment "universal" }} +{{- $defaultEnv = include "kuma.universal.defaultEnv" . | fromYaml | pluck "env" | first }} +{{- end }} +{{- $defaultEnvDict := dict }} +{{- range $index, $item := $defaultEnv }} +{{- $name := $item.name | upper }} +{{- $defaultEnvDict := set $defaultEnvDict $name $item.value }} +{{- end }} +{{- $envVarsCopy := deepCopy .Values.controlPlane.envVars }} +{{- $mergedEnv := merge $envVarsCopy $defaultEnvDict }} +{{- $defaultSecrets := include "kuma.parentSecrets" . | fromYaml }} +{{- $extraSecrets := .Values.controlPlane.extraSecrets }} +{{- $mergedSecrets := merge $extraSecrets $defaultSecrets }} + +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} + annotations: {{ include "kuma.cpDeploymentAnnotations" . | nindent 4 }} +spec: + {{- if not .Values.controlPlane.autoscaling.enabled }} + replicas: {{ .Values.controlPlane.replicas }} + {{- end }} + minReadySeconds: {{ .Values.controlPlane.minReadySeconds }} + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + selector: + matchLabels: + {{- include "kuma.selectorLabels" . | nindent 6 }} + app: {{ include "kuma.name" . }}-control-plane + template: + metadata: + annotations: + checksum/config: {{ include (print $.Template.BasePath "/cp-configmap.yaml") . | sha256sum }} + {{- if .Values.restartOnSecretChange }} + checksum/tls-secrets: {{ include (print $.Template.BasePath "/cp-webhooks-and-secrets.yaml") . | sha256sum }} + {{- end }} + {{- range $key, $value := $.Values.controlPlane.podAnnotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: {{ include "kuma.cpLabels" . | nindent 8 }} + spec: + {{- with .Values.controlPlane.affinity }} + affinity: {{ tpl (toYaml . 
| nindent 8) $ }} + {{- end }} + {{- with .Values.controlPlane.topologySpreadConstraints }} + topologySpreadConstraints: {{ tpl (toYaml . | nindent 8) $ }} + {{- end }} + securityContext: + {{- toYaml .Values.controlPlane.podSecurityContext | trim | nindent 8 }} + serviceAccountName: {{ include "kuma.name" . }}-control-plane + automountServiceAccountToken: {{ .Values.controlPlane.automountServiceAccountToken }} + {{- with .Values.controlPlane.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.controlPlane.tolerations }} + tolerations: + {{ toYaml . | nindent 8 }} + {{- end }} + hostNetwork: {{ .Values.controlPlane.hostNetwork }} + terminationGracePeriodSeconds: {{ .Values.controlPlane.terminationGracePeriodSeconds }} + {{ include "kuma.dnsConfig" (dict "dns" .Values.controlPlane.dns) | nindent 6 | trim }} + {{- if (eq .Values.controlPlane.environment "universal") }} + initContainers: + - name: migration + image: {{ include "kuma.formatImage" (dict "image" .Values.controlPlane.image "root" $) | quote }} + imagePullPolicy: {{ .Values.controlPlane.image.pullPolicy }} + securityContext: + {{- toYaml .Values.controlPlane.containerSecurityContext | trim | nindent 12 }} + env: + {{- range $key, $value := $mergedEnv }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- range $element := .Values.controlPlane.secrets }} + - name: {{ $element.Env }} + valueFrom: + secretKeyRef: + name: {{ $element.Secret }} + key: {{ $element.Key }} + {{- end }} + args: + - migrate + - up + - --log-level=info + - --config-file=/etc/kuma.io/kuma-control-plane/config.yaml + resources: + {{- if .Values.controlPlane.resources }} + {{- .Values.controlPlane.resources | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.postgres.tls.caSecretName }} + - name: postgres-tls-cert-ca + subPath: ca.crt + mountPath: /var/run/secrets/kuma.io/postgres-tls-cert/ca.crt + readOnly: true + {{- end }} + {{- if .Values.postgres.tls.secretName }} + - name: postgres-tls-cert + subPath: tls.crt + mountPath: /var/run/secrets/kuma.io/postgres-tls-cert/tls.crt + readOnly: true + - name: postgres-tls-cert + subPath: tls.key + mountPath: /var/run/secrets/kuma.io/postgres-tls-cert/tls.key + readOnly: true + {{- end }} + - name: {{ include "kuma.name" . 
}}-control-plane-config + mountPath: /etc/kuma.io/kuma-control-plane + readOnly: true + {{- end }} + containers: + - name: control-plane + image: {{ include "kuma.formatImage" (dict "image" .Values.controlPlane.image "root" $) | quote }} + imagePullPolicy: {{ .Values.controlPlane.image.pullPolicy }} + securityContext: + {{- toYaml .Values.controlPlane.containerSecurityContext | trim | nindent 12 }} + env: + {{- if .Values.controlPlane.envVarEntries }} + {{- .Values.controlPlane.envVarEntries | toYaml | nindent 12 }} + {{- end }} + {{- range $key, $value := $mergedEnv }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- range $element := .Values.controlPlane.secrets }} + - name: {{ $element.Env }} + valueFrom: + secretKeyRef: + name: {{ $element.Secret }} + key: {{ $element.Key }} + {{- end }} + - name: KUMA_INTER_CP_CATALOG_INSTANCE_ADDRESS + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: GOMEMLIMIT + valueFrom: + resourceFieldRef: + containerName: control-plane + resource: limits.memory + - name: GOMAXPROCS + valueFrom: + resourceFieldRef: + containerName: control-plane + resource: limits.cpu + args: + - run + - --log-level={{ .Values.controlPlane.logLevel }} + - --log-output-path={{ .Values.controlPlane.logOutputPath }} + - --config-file=/etc/kuma.io/kuma-control-plane/config.yaml + ports: + - containerPort: 5680 + name: diagnostics + protocol: TCP + - containerPort: 5681 + - containerPort: 5682 + - containerPort: {{ .Values.controlPlane.admissionServerPort | default "5443" }} + {{- if ne .Values.controlPlane.mode "global" }} + - containerPort: 5678 + {{- end }} + livenessProbe: + timeoutSeconds: 10 + httpGet: + path: /healthy + port: 5680 + readinessProbe: + timeoutSeconds: 10 + httpGet: + path: /ready + port: 5680 + resources: + {{- if .Values.controlPlane.resources }} + {{- .Values.controlPlane.resources | toYaml | nindent 12 }} + {{- end }} + {{ with .Values.controlPlane.lifecycle }} + lifecycle: {{ . | toYaml | nindent 14 }} + {{ end }} + volumeMounts: + {{- if eq .Values.controlPlane.environment "kubernetes" }} + {{- if not .Values.controlPlane.automountServiceAccountToken }} + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: serviceaccount-token + readOnly: true + {{- end }} + - name: general-tls-cert + mountPath: /var/run/secrets/kuma.io/tls-cert/tls.crt + subPath: tls.crt + readOnly: true + - name: general-tls-cert + mountPath: /var/run/secrets/kuma.io/tls-cert/tls.key + subPath: tls.key + readOnly: true + - name: general-tls-cert{{- if .Values.controlPlane.tls.general.caSecretName }}-ca{{- end }} + mountPath: /var/run/secrets/kuma.io/tls-cert/ca.crt + subPath: ca.crt + readOnly: true + {{- end }} + {{- if and (eq .Values.controlPlane.environment "universal") (eq .Values.controlPlane.mode "zone") }} + {{- if .Values.controlPlane.tls.general.secretName }} + - name: general-tls-cert + mountPath: /var/run/secrets/kuma.io/tls-cert/tls.crt + subPath: tls.crt + readOnly: true + - name: general-tls-cert + mountPath: /var/run/secrets/kuma.io/tls-cert/tls.key + subPath: tls.key + readOnly: true + - name: general-tls-cert{{- if .Values.controlPlane.tls.general.caSecretName }}-ca{{- end }} + mountPath: /var/run/secrets/kuma.io/tls-cert/ca.crt + subPath: ca.crt + readOnly: true + {{- end }} + {{- end }} + - name: {{ include "kuma.name" . 
}}-control-plane-config + mountPath: /etc/kuma.io/kuma-control-plane + readOnly: true + {{- if .Values.controlPlane.tls.apiServer.secretName }} + - name: api-server-tls-cert + mountPath: /var/run/secrets/kuma.io/api-server-tls-cert + readOnly: true + {{- end }} + {{- if .Values.postgres.tls.caSecretName }} + - name: postgres-tls-cert-ca + subPath: ca.crt + mountPath: /var/run/secrets/kuma.io/postgres-tls-cert/ca.crt + readOnly: true + {{- end }} + {{- if .Values.postgres.tls.secretName }} + - name: postgres-tls-cert + subPath: tls.crt + mountPath: /var/run/secrets/kuma.io/postgres-tls-cert/tls.crt + readOnly: true + - name: postgres-tls-cert + subPath: tls.key + mountPath: /var/run/secrets/kuma.io/postgres-tls-cert/tls.key + readOnly: true + {{- end }} + {{- if .Values.controlPlane.tls.apiServer.clientCertsSecretName }} + - name: api-server-client-certs + mountPath: /var/run/secrets/kuma.io/api-server-client-certs + readOnly: true + {{- end }} + {{- if $kdsGlobalServerTLSSecretName }} + - name: kds-server-tls-cert + mountPath: /var/run/secrets/kuma.io/kds-server-tls-cert + readOnly: true + {{- end }} + {{- if $kdsZoneClientTLSSecretName }} + - name: kds-client-tls-cert + mountPath: /var/run/secrets/kuma.io/kds-client-tls-cert + readOnly: true + {{- end }} + {{- range $extraConfigMap := .Values.controlPlane.extraConfigMaps }} + - name: {{ $extraConfigMap.name }} + mountPath: {{ $extraConfigMap.mountPath }} + readOnly: {{ $extraConfigMap.readOnly }} + {{- end }} + {{- range $mergedSecret := $mergedSecrets }} + - name: {{ $mergedSecret.name }} + mountPath: {{ $mergedSecret.mountPath }} + subPath: {{ $mergedSecret.subPath }} + readOnly: {{ $mergedSecret.readOnly }} + {{- end }} + - name: tmp + mountPath: /tmp + volumes: + {{- if eq .Values.controlPlane.environment "kubernetes" }} + {{- if not .Values.controlPlane.automountServiceAccountToken }} + - name: serviceaccount-token + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3600 + path: token + - configMap: + name: kube-root-ca.crt + items: + - key: ca.crt + path: ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace + {{- end }} + {{- if .Values.controlPlane.tls.general.secretName }} + - name: general-tls-cert + secret: + secretName: {{ .Values.controlPlane.tls.general.secretName }} + {{- else }} + - name: general-tls-cert + secret: + secretName: {{ include "kuma.name" . 
}}-tls-cert + {{- end }} + {{- if .Values.controlPlane.tls.general.caSecretName }} + - name: general-tls-cert-ca + secret: + secretName: {{ .Values.controlPlane.tls.general.caSecretName }} + {{- end }} + {{- end }} + {{- if and (eq .Values.controlPlane.environment "universal") (eq .Values.controlPlane.mode "zone") }} + {{- if .Values.controlPlane.tls.general.secretName }} + - name: general-tls-cert + secret: + secretName: {{ .Values.controlPlane.tls.general.secretName }} + {{- end }} + {{- if .Values.controlPlane.tls.general.caSecretName }} + - name: general-tls-cert-ca + secret: + secretName: {{ .Values.controlPlane.tls.general.caSecretName }} + {{- end }} + {{- end }} + {{- if .Values.controlPlane.tls.apiServer.secretName }} + - name: api-server-tls-cert + secret: + secretName: {{ .Values.controlPlane.tls.apiServer.secretName }} + {{- end }} + {{- if .Values.postgres.tls.caSecretName }} + - name: postgres-tls-cert-ca + secret: + secretName: {{ .Values.postgres.tls.caSecretName }} + {{- end }} + {{- if .Values.postgres.tls.secretName }} + - name: postgres-tls-cert + secret: + secretName: {{ .Values.postgres.tls.secretName }} + {{- end }} + {{- if .Values.controlPlane.tls.apiServer.clientCertsSecretName }} + - name: api-server-client-certs + secret: + secretName: {{ .Values.controlPlane.tls.apiServer.clientCertsSecretName }} + {{- end }} + {{- if $kdsGlobalServerTLSSecretName }} + - name: kds-server-tls-cert + secret: + secretName: {{ $kdsGlobalServerTLSSecretName }} + {{- end }} + {{- if $kdsZoneClientTLSSecretName }} + - name: kds-client-tls-cert + secret: + secretName: {{ $kdsZoneClientTLSSecretName }} + {{- end }} + - name: {{ include "kuma.name" . }}-control-plane-config + configMap: + name: {{ include "kuma.name" . }}-control-plane-config + {{- range $extraConfigMap := .Values.controlPlane.extraConfigMaps }} + - name: {{ $extraConfigMap.name }} + configMap: + name: {{ $extraConfigMap.name }} + {{- end }} + {{- range $mergedSecret := $mergedSecrets }} + - name: {{ $mergedSecret.name }} + secret: + secretName: {{ $mergedSecret.name }} + {{- end }} + - name: tmp + emptyDir: {} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-global-sync-service.yaml b/charts/kuma/kuma/2.9.0/templates/cp-global-sync-service.yaml new file mode 100644 index 000000000..c5b3555a8 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-global-sync-service.yaml @@ -0,0 +1,33 @@ +{{- if and (eq .Values.controlPlane.mode "global") .Values.controlPlane.globalZoneSyncService.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "kuma.controlPlane.globalZoneSync.serviceName" . }} + namespace: {{ .Release.Namespace }} + annotations: + {{- range $key, $value := .Values.controlPlane.globalZoneSyncService.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: {{ include "kuma.cpLabels" . 
| nindent 4 }} +spec: + type: {{ .Values.controlPlane.globalZoneSyncService.type }} + {{- if .Values.controlPlane.globalZoneSyncService.loadBalancerIP }} + loadBalancerIP: {{ .Values.controlPlane.globalZoneSyncService.loadBalancerIP }} + {{- end }} + {{- if .Values.controlPlane.globalZoneSyncService.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range .Values.controlPlane.globalZoneSyncService.loadBalancerSourceRanges }} + - {{.}} + {{- end }} + {{- end }} + ports: + - port: {{ .Values.controlPlane.globalZoneSyncService.port }} + appProtocol: {{ .Values.controlPlane.globalZoneSyncService.protocol }} + {{- if and (eq .Values.controlPlane.globalZoneSyncService.type "NodePort") .Values.controlPlane.globalZoneSyncService.nodePort }} + nodePort: {{ .Values.controlPlane.globalZoneSyncService.nodePort }} + {{- end }} + name: global-zone-sync + selector: + app: {{ include "kuma.name" . }}-control-plane + {{ include "kuma.selectorLabels" . | nindent 4 }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-hpa.yaml b/charts/kuma/kuma/2.9.0/templates/cp-hpa.yaml new file mode 100644 index 000000000..dc4981020 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-hpa.yaml @@ -0,0 +1,24 @@ +{{- if .Values.controlPlane.autoscaling.enabled }} +{{ if .Capabilities.APIVersions.Has "autoscaling/v2" }} +apiVersion: "autoscaling/v2" +{{ else }} +apiVersion: "autoscaling/v1" +{{ end }} +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "kuma.name" . }}-control-plane + minReplicas: {{ .Values.controlPlane.autoscaling.minReplicas }} + maxReplicas: {{ .Values.controlPlane.autoscaling.maxReplicas }} + {{ if .Capabilities.APIVersions.Has "autoscaling/v2" }} + metrics: {{- toYaml .Values.controlPlane.autoscaling.metrics | nindent 4 }} + {{ else }} + targetCPUUtilizationPercentage: {{ .Values.controlPlane.autoscaling.targetCPUUtilizationPercentage }} + {{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-ingress.yaml b/charts/kuma/kuma/2.9.0/templates/cp-ingress.yaml new file mode 100644 index 000000000..8ceae01f8 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-ingress.yaml @@ -0,0 +1,25 @@ +{{- if .Values.controlPlane.ingress.enabled }} +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: {{ include "kuma.controlPlane.serviceName" . }} + namespace: {{ .Release.Namespace }} + {{- with .Values.controlPlane.ingress.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +spec: + ingressClassName: {{ .Values.controlPlane.ingress.ingressClassName }} + rules: + - host: {{ .Values.controlPlane.ingress.hostname }} + http: + paths: + - path: {{ .Values.controlPlane.ingress.path }} + pathType: {{ .Values.controlPlane.ingress.pathType }} + backend: + service: + name: {{ include "kuma.controlPlane.serviceName" . 
}} + port: + number: {{ .Values.controlPlane.ingress.servicePort }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-kds-global-server-secret.yaml b/charts/kuma/kuma/2.9.0/templates/cp-kds-global-server-secret.yaml new file mode 100644 index 000000000..5ea3314a3 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-kds-global-server-secret.yaml @@ -0,0 +1,15 @@ +{{ if and (eq .Values.controlPlane.mode "global") .Values.controlPlane.tls.kdsGlobalServer.create }} +apiVersion: v1 +kind: Secret +metadata: +{{ with .Values.controlPlane.tls.kdsGlobalServer.secretName }} + name: {{ . }} +{{ else }} + name: {{ include "kuma.name" . }}-kds-global-server-tls +{{ end }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +type: kubernetes.io/tls +stringData: + tls.crt: {{ required "you must provide a kds tls cert" .Values.controlPlane.tls.kdsGlobalServer.cert | quote }} + tls.key: {{ required "you must provide a kds tls key" .Values.controlPlane.tls.kdsGlobalServer.key | quote }} +{{ end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-kds-zone-client-tls-secret.yaml b/charts/kuma/kuma/2.9.0/templates/cp-kds-zone-client-tls-secret.yaml new file mode 100644 index 000000000..99b15c5bd --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-kds-zone-client-tls-secret.yaml @@ -0,0 +1,13 @@ +{{ if and (eq .Values.controlPlane.mode "zone") .Values.controlPlane.tls.kdsZoneClient.create }} +apiVersion: v1 +kind: Secret +metadata: +{{ with .Values.controlPlane.tls.kdsZoneClient.secretName }} + name: {{ . }} +{{ else }} + name: {{ include "kuma.name" . }}-kds-zone-client-tls +{{ end }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +stringData: + ca.crt: {{ required "you must provide a kds cert" .Values.controlPlane.tls.kdsZoneClient.cert | quote }} +{{ end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-pdb.yaml b/charts/kuma/kuma/2.9.0/templates/cp-pdb.yaml new file mode 100644 index 000000000..bb29bfd20 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-pdb.yaml @@ -0,0 +1,20 @@ +{{ if $.Values.controlPlane.podDisruptionBudget.enabled }} +{{ if .Capabilities.APIVersions.Has "policy/v1" }} +apiVersion: policy/v1 +{{ else if .Capabilities.APIVersions.Has "policy/v1beta1" }} +apiVersion: policy/v1beta1 +{{ else }} +{{ fail "pod disruption budgets are not supported by this version of kubernetes" }} +{{ end }} +kind: PodDisruptionBudget +metadata: + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +spec: + maxUnavailable: {{ .Values.controlPlane.podDisruptionBudget.maxUnavailable }} + selector: + matchLabels: + {{- include "kuma.selectorLabels" . | nindent 6 }} + app: {{ include "kuma.name" . }}-control-plane +{{ end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-rbac.yaml b/charts/kuma/kuma/2.9.0/templates/cp-rbac.yaml new file mode 100644 index 000000000..52ce1bfa8 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-rbac.yaml @@ -0,0 +1,320 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +{{- with .Values.controlPlane.serviceAccountAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} +{{- end }} +{{- with .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- range . }} + - name: {{ . 
| quote }} + {{- end }} +{{- end }} +{{- if (eq .Values.controlPlane.environment "kubernetes") }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "kuma.name" . }}-control-plane + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +rules: + - apiGroups: + - "" + resources: + - namespaces + - pods +{{- if not (and .Values.transparentProxy.configMap.enabled .Values.transparentProxy.configMap.config) }} + - configmaps +{{- end }} + - nodes +{{- if .Values.controlPlane.supportGatewaySecretsInAllNamespaces }} + - secrets +{{- end }} + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - secrets + verbs: + - list + - watch + - apiGroups: + - "discovery.k8s.io" + resources: + - endpointslices + verbs: + - get + - list + - watch + - apiGroups: + - "apps" + resources: + - deployments + - replicasets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch + - apiGroups: + - "batch" + resources: + - jobs + verbs: + - get + - list + - watch + - apiGroups: + - gateway.networking.k8s.io + resources: + - gatewayclasses + - gateways + - referencegrants + - httproutes + verbs: + - create + - delete + - get + - list + - patch + - update + - watch + - apiGroups: + - gateway.networking.k8s.io + resources: + - gatewayclasses/status + - gateways/status + - httproutes/status + verbs: + - get + - patch + - update + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - list + - watch + - create + - update + - patch + - delete + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - apiGroups: + - "" + resources: + - services +{{- if and .Values.transparentProxy.configMap.enabled .Values.transparentProxy.configMap.config }} + - configmaps +{{- end }} + verbs: + - get + - delete + - list + - watch + - create + - update + - patch + - apiGroups: + - "discovery.k8s.io" + resources: + - endpointslices + verbs: + - get + - list + - watch + - apiGroups: + - kuma.io + resources: + - dataplanes + - dataplaneinsights + - meshes + - zones + - zoneinsights + - zoneingresses + - zoneingressinsights + - zoneegresses + - zoneegressinsights + - meshinsights + - serviceinsights + - proxytemplates + - ratelimits + - trafficpermissions + - trafficroutes + - timeouts + - retries + - circuitbreakers + - virtualoutbounds + - containerpatches + - externalservices + - faultinjections + - healthchecks + - trafficlogs + - traffictraces + - meshgateways + - meshgatewayroutes + - meshgatewayinstances + - meshgatewayconfigs + {{- range $policy, $v := .Values.plugins.policies }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + {{- range $policy, $v := .Values.plugins.resources }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + verbs: + - get + - list + - watch + - create + - update + - patch + - delete + - apiGroups: + - kuma.io + resources: + - meshgatewayinstances/status + - meshgatewayinstances/finalizers + - meshes/finalizers + - dataplanes/finalizers + verbs: + - get + - patch + - update + - apiGroups: + - "" + resources: + - pods/finalizers + verbs: + - get + - patch + - update + {{- if .Values.cni.enabled }} + - apiGroups: + - k8s.cni.cncf.io + resources: + - network-attachment-definitions + verbs: + - get + - list + - watch + - create + - delete + - apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - nodes + verbs: + - update + - apiGroups: + - "pods" + resources: + - pods + 
verbs: + - list + {{- end }} + # validate k8s token before issuing mTLS cert + - apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "kuma.name" . }}-control-plane + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "kuma.name" . }}-control-plane +subjects: + - kind: ServiceAccount + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - list + - watch + - create + - update + - patch + - delete + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - list + - watch + - create + - update + - patch + - delete + # leader-for-life election deletes Pods in some circumstances + - apiGroups: + - "" + resources: + - pods + verbs: + - delete +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ include "kuma.name" . }}-control-plane +subjects: + - kind: ServiceAccount + name: {{ include "kuma.name" . }}-control-plane + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-service.yaml b/charts/kuma/kuma/2.9.0/templates/cp-service.yaml new file mode 100644 index 000000000..3b9c3e31f --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-service.yaml @@ -0,0 +1,49 @@ +{{ if .Values.controlPlane.service.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "kuma.controlPlane.serviceName" . }} + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} + annotations: + {{- range $key, $value := .Values.controlPlane.service.annotations }} + {{- if $value }} + {{ $key }}: {{ $value | quote }} + {{- end }} + {{- end }} +spec: + type: {{ .Values.controlPlane.service.type }} + ports: + - port: 5680 + name: diagnostics + appProtocol: http + - port: 5681 + name: http-api-server + appProtocol: http + {{- if and (eq .Values.controlPlane.service.type "NodePort") .Values.controlPlane.service.apiServer.http.nodePort }} + nodePort: {{ .Values.controlPlane.service.apiServer.http.nodePort }} + {{- end }} + - port: 5682 + name: https-api-server + appProtocol: https + {{- if and (eq .Values.controlPlane.service.type "NodePort") .Values.controlPlane.service.apiServer.https.nodePort }} + nodePort: {{ .Values.controlPlane.service.apiServer.https.nodePort }} + {{- end }} + {{- if ne .Values.controlPlane.environment "universal" }} + - port: 443 + name: https-admission-server + targetPort: {{ .Values.controlPlane.admissionServerPort | default "5443" }} + appProtocol: https + {{- end }} + {{- if ne .Values.controlPlane.mode "global" }} + - port: 5676 + name: mads-server + appProtocol: https + - port: 5678 + name: dp-server + appProtocol: https + {{- end }} + selector: + app: {{ include "kuma.name" . }}-control-plane + {{- include "kuma.selectorLabels" . 
| nindent 4 }} +{{ end }} diff --git a/charts/kuma/kuma/2.9.0/templates/cp-webhooks-and-secrets.yaml b/charts/kuma/kuma/2.9.0/templates/cp-webhooks-and-secrets.yaml new file mode 100644 index 000000000..15b38e0fd --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/cp-webhooks-and-secrets.yaml @@ -0,0 +1,346 @@ +{{- if not (eq (empty .Values.controlPlane.tls.general.caBundle) (empty .Values.controlPlane.tls.general.secretName)) }} + {{ fail "You need to send both or neither of controlPlane.tls.general.caBundle and controlPlane.tls.general.secretName"}} +{{- end }} +{{- $caBundle := .Values.controlPlane.tls.general.caBundle }} +{{/* +Generate certificates +see: https://masterminds.github.io/sprig/crypto.html +see: https://medium.com/nuvo-group-tech/move-your-certs-to-helm-4f5f61338aca +see: https://github.com/networkservicemesh/networkservicemesh/blob/804ad5026bb5dbd285c220f15395fe25e46f5edb/deployments/helm/nsm/charts/admission-webhook/templates/admission-webhook-secret.tpl + +We only autogenerate certs if user did not chose their own secret. +We only autogenerate certs if the cert is not yet generated. This way we keep the secrets between HELM upgrades. +*/}} + +{{- if eq .Values.controlPlane.tls.general.secretName "" -}} +{{- $cert := "" }} +{{- $key := "" }} +{{- $secretName := print (include "kuma.name" .) "-tls-cert" }} + +{{- $secret := (lookup "v1" "Secret" .Release.Namespace $secretName) -}} +{{- if $secret -}} + {{- $cert = index $secret.data "tls.crt" -}} + {{- $key = index $secret.data "tls.key" -}} + {{- $caBundle = index $secret.data "ca.crt" -}} +{{- else -}} + {{- $commonName := (include "kuma.controlPlane.serviceName" .) -}} + {{- $altNames := list (printf "%s.%s" $commonName .Release.Namespace) (printf "%s.%s.svc" $commonName .Release.Namespace) -}} + {{- $certTTL := 3650 -}} + {{- $ca := genCA "kuma-ca" $certTTL -}} + + {{- $genCert := genSignedCert $commonName nil $altNames $certTTL $ca -}} + {{- $cert = $genCert.Cert | b64enc -}} + {{- $key = $genCert.Key | b64enc -}} + {{ $caBundle = $ca.Cert | b64enc }} +{{- end -}} +--- +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: {{ $secretName }} + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +data: + tls.crt: {{ $cert }} + tls.key: {{ $key }} + ca.crt: {{ $caBundle }} +{{- end }} +{{- if (eq .Values.controlPlane.environment "kubernetes") }} +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + name: {{ include "kuma.name" . }}-admission-mutating-webhook-configuration + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . | nindent 4 }} +webhooks: + - name: mesh.defaulter.kuma-admission.kuma.io + admissionReviewVersions: ["v1"] + failurePolicy: Fail + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: ["kube-system"] + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . 
}} + path: /default-kuma-io-v1alpha1-mesh + rules: + - apiGroups: + - kuma.io + apiVersions: + - v1alpha1 + operations: + - CREATE + - UPDATE + resources: + - meshes + - dataplanes + - dataplaneinsights + - meshgateways + - zoneingresses + - zoneingressinsights + - zoneegresses + - zoneegressinsights + - serviceinsights + - zone + - zoneinsights + {{- range $policy, $v := .Values.plugins.policies }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + {{- range $policy, $v := .Values.plugins.resources }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + sideEffects: None + - name: owner-reference.kuma-admission.kuma.io + admissionReviewVersions: ["v1"] + failurePolicy: Fail + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: ["kube-system"] + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . }} + path: /owner-reference-kuma-io-v1alpha1 + rules: + - apiGroups: + - kuma.io + apiVersions: + - v1alpha1 + operations: + - CREATE + resources: + - circuitbreakers + - externalservices + - faultinjections + - healthchecks + - meshgateways + - meshgatewayroutes + - proxytemplates + - ratelimits + - retries + - timeouts + - trafficlogs + - trafficpermissions + - trafficroutes + - traffictraces + - virtualoutbounds + {{- range $policy, $v := .Values.plugins.policies }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + {{- range $policy, $v := .Values.plugins.resources }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + {{ .Values.controlPlane.webhooks.ownerReference.additionalRules | nindent 6 }} + sideEffects: None + {{- if ne .Values.controlPlane.mode "global" }} + - name: namespace-kuma-injector.kuma.io + admissionReviewVersions: ["v1"] + failurePolicy: {{ .Values.controlPlane.injectorFailurePolicy }} + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: ["kube-system"] + - key: kuma.io/sidecar-injection + operator: In + values: ["enabled", "true"] + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . }} + path: /inject-sidecar + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + resources: + - pods + sideEffects: None + - name: pods-kuma-injector.kuma.io + admissionReviewVersions: ["v1"] + failurePolicy: {{ .Values.controlPlane.injectorFailurePolicy }} + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: ["kube-system"] + objectSelector: + matchLabels: + kuma.io/sidecar-injection: enabled + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . }} + path: /inject-sidecar + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + resources: + - pods + sideEffects: None + {{- end }} +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: ValidatingWebhookConfiguration +metadata: + name: {{ include "kuma.name" . }}-validating-webhook-configuration + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.cpLabels" . 
| nindent 4 }} +webhooks: + - name: validator.kuma-admission.kuma.io + admissionReviewVersions: ["v1"] + failurePolicy: Fail + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: ["kube-system"] + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . }} + path: /validate-kuma-io-v1alpha1 + rules: + - apiGroups: + - kuma.io + apiVersions: + - v1alpha1 + operations: + - CREATE + - UPDATE + - DELETE + resources: + - circuitbreakers + - dataplanes + - externalservices + - faultinjections + - meshgatewayinstances + - healthchecks + - meshes + - meshgateways + - meshgatewayroutes + - proxytemplates + - ratelimits + - retries + - trafficlogs + - trafficpermissions + - trafficroutes + - traffictraces + - virtualoutbounds + - zones + - containerpatches + {{- range $policy, $v := .Values.plugins.policies }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + {{- range $policy, $v := .Values.plugins.resources }} + {{- if $v }} + - {{ $policy }} + {{- end}} + {{- end}} + {{ .Values.controlPlane.webhooks.validator.additionalRules | nindent 6 }} + sideEffects: None + {{- if ne .Values.controlPlane.mode "global" }} + - name: service.validator.kuma-admission.kuma.io + admissionReviewVersions: ["v1"] + failurePolicy: Ignore + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: ["kube-system"] + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . }} + path: /validate-v1-service + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + resources: + - services + sideEffects: None + {{- end }} + - name: secret.validator.kuma-admission.kuma.io + admissionReviewVersions: ["v1"] + namespaceSelector: + matchLabels: + kuma.io/system-namespace: "true" + failurePolicy: Ignore + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . }} + path: /validate-v1-secret + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + - DELETE + resources: + - secrets + sideEffects: None + - name: gateway.validator.kuma-admission.kuma.io + admissionReviewVersions: ["v1"] + failurePolicy: Ignore + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: ["kube-system"] + clientConfig: + caBundle: {{ $caBundle }} + service: + namespace: {{ .Release.Namespace }} + name: {{ include "kuma.controlPlane.serviceName" . }} + path: /validate-gatewayclass + rules: + - apiGroups: + - "gateway.networking.k8s.io" + apiVersions: + - v1beta1 + operations: + - CREATE + resources: + - gatewayclasses + sideEffects: None +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/egress-deployment.yaml b/charts/kuma/kuma/2.9.0/templates/egress-deployment.yaml new file mode 100644 index 000000000..3b6617eee --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/egress-deployment.yaml @@ -0,0 +1,138 @@ +{{- if .Values.egress.enabled }} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "kuma.name" . }}-egress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.egressLabels" . 
| nindent 4 }} +spec: + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + {{- if not .Values.egress.autoscaling.enabled }} + replicas: {{ .Values.egress.replicas }} + {{- end }} + selector: + matchLabels: + {{- include "kuma.selectorLabels" . | nindent 6 }} + app: {{ include "kuma.name" . }}-egress + template: + metadata: + annotations: + kuma.io/egress: enabled + {{- range $key, $value := merge .Values.egress.podAnnotations .Values.egress.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: + {{- include "kuma.egressLabels" . | nindent 8 }} + spec: + {{- with .Values.egress.affinity }} + affinity: {{ tpl (toYaml . | nindent 8) $ }} + {{- end }} + {{- with .Values.egress.topologySpreadConstraints }} + topologySpreadConstraints: {{ tpl (toYaml . | nindent 8) $ }} + {{- end }} + securityContext: + {{- toYaml .Values.egress.podSecurityContext | trim | nindent 8 }} + serviceAccountName: {{ include "kuma.name" . }}-egress + automountServiceAccountToken: {{ .Values.egress.automountServiceAccountToken }} + {{- with .Values.egress.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.egress.tolerations }} + tolerations: + {{ toYaml . | nindent 8 }} + {{- end }} + {{ include "kuma.dnsConfig" (dict "dns" .Values.egress.dns) | nindent 6 | trim }} + containers: + - name: egress + image: {{ include "kuma.formatImage" (dict "image" .Values.dataPlane.image "root" $) | quote }} + imagePullPolicy: {{ .Values.dataPlane.image.pullPolicy }} + securityContext: + {{- toYaml .Values.egress.containerSecurityContext | trim | nindent 12 }} + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: KUMA_CONTROL_PLANE_URL + value: "https://{{ include "kuma.controlPlane.serviceName" . 
}}.{{ .Release.Namespace }}:5678" + - name: KUMA_CONTROL_PLANE_CA_CERT_FILE + value: /var/run/secrets/kuma.io/cp-ca/ca.crt + - name: KUMA_DATAPLANE_DRAIN_TIME + value: {{ .Values.egress.drainTime }} + - name: KUMA_DATAPLANE_RUNTIME_TOKEN_PATH + value: /var/run/secrets/kubernetes.io/serviceaccount/token + - name: KUMA_DATAPLANE_PROXY_TYPE + value: "egress" + args: + - run + - --log-level={{ .Values.egress.logLevel | default "info" }} + ports: + - containerPort: 10002 + livenessProbe: + httpGet: + path: "/ready" + port: 9901 + failureThreshold: 12 + initialDelaySeconds: 60 + periodSeconds: 5 + successThreshold: 1 + timeoutSeconds: 3 + readinessProbe: + httpGet: + path: "/ready" + port: 9901 + failureThreshold: 12 + initialDelaySeconds: 1 + periodSeconds: 5 + successThreshold: 1 + timeoutSeconds: 3 + resources: {{ toYaml .Values.egress.resources | nindent 12 }} + volumeMounts: +{{- if not .Values.egress.automountServiceAccountToken }} + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: serviceaccount-token + readOnly: true +{{- end }} + - name: control-plane-ca + mountPath: /var/run/secrets/kuma.io/cp-ca + readOnly: true + - name: tmp + mountPath: /tmp + volumes: +{{- if not .Values.egress.automountServiceAccountToken }} + - name: serviceaccount-token + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3600 + path: token + - configMap: + name: kube-root-ca.crt + items: + - key: ca.crt + path: ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +{{- end }} + - name: control-plane-ca + secret: + secretName: {{ include "kuma.controlPlane.tls.general.caSecretName" . }} + items: + - key: ca.crt + path: ca.crt + - name: tmp + emptyDir: {} + {{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/egress-hpa.yaml b/charts/kuma/kuma/2.9.0/templates/egress-hpa.yaml new file mode 100644 index 000000000..8d4284f41 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/egress-hpa.yaml @@ -0,0 +1,24 @@ +{{- if .Values.egress.autoscaling.enabled }} +{{ if .Capabilities.APIVersions.Has "autoscaling/v2" }} +apiVersion: "autoscaling/v2" +{{ else }} +apiVersion: "autoscaling/v1" +{{ end }} +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "kuma.name" . }}-egress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.egressLabels" . | nindent 4 }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "kuma.name" . }}-egress + minReplicas: {{ .Values.egress.autoscaling.minReplicas }} + maxReplicas: {{ .Values.egress.autoscaling.maxReplicas }} + {{ if .Capabilities.APIVersions.Has "autoscaling/v2" }} + metrics: {{- toYaml .Values.egress.autoscaling.metrics | nindent 4 }} + {{ else }} + targetCPUUtilizationPercentage: {{ .Values.egress.autoscaling.targetCPUUtilizationPercentage }} + {{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/egress-pdb.yaml b/charts/kuma/kuma/2.9.0/templates/egress-pdb.yaml new file mode 100644 index 000000000..ee599003b --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/egress-pdb.yaml @@ -0,0 +1,20 @@ +{{ if $.Values.egress.podDisruptionBudget.enabled }} +{{ if .Capabilities.APIVersions.Has "policy/v1" }} +apiVersion: policy/v1 +{{ else if .Capabilities.APIVersions.Has "policy/v1beta1" }} +apiVersion: policy/v1beta1 +{{ else }} +{{ fail "pod disruption budgets are not supported by this version of kubernetes" }} +{{ end }} +kind: PodDisruptionBudget +metadata: + name: {{ include "kuma.name" . 
}}-egress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.egressLabels" . | nindent 4 }} +spec: + maxUnavailable: {{ .Values.egress.podDisruptionBudget.maxUnavailable }} + selector: + matchLabels: + {{- include "kuma.selectorLabels" . | nindent 6 }} + app: {{ include "kuma.name" . }}-egress +{{ end }} diff --git a/charts/kuma/kuma/2.9.0/templates/egress-rbac.yaml b/charts/kuma/kuma/2.9.0/templates/egress-rbac.yaml new file mode 100644 index 000000000..1b4326fdb --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/egress-rbac.yaml @@ -0,0 +1,18 @@ +{{- if .Values.egress.enabled }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "kuma.name" . }}-egress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.egressLabels" . | nindent 4 }} +{{- with .Values.egress.serviceAccountAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} +{{- end }} +{{- with .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- range . }} + - name: {{ . | quote }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/egress-service.yaml b/charts/kuma/kuma/2.9.0/templates/egress-service.yaml new file mode 100644 index 000000000..2127811fe --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/egress-service.yaml @@ -0,0 +1,32 @@ +{{- if .Values.egress.enabled }} +{{- if eq .Values.controlPlane.mode "global" }} +{{ fail "You shouldn't run zoneEgress when running the CP in global" }} +{{- end }} +{{- end }} +{{- if and .Values.egress.enabled .Values.egress.service.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "kuma.egress.serviceName" . }} + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.egressLabels" . | nindent 4 }} + annotations: + {{- range $key, $value := .Values.egress.service.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} +spec: + type: {{ .Values.egress.service.type }} + {{- if .Values.egress.service.loadBalancerIP }} + loadBalancerIP: {{ .Values.egress.service.loadBalancerIP }} + {{- end }} + ports: + - port: {{ .Values.egress.service.port }} + protocol: TCP + targetPort: 10002 + {{- if and (eq .Values.egress.service.type "NodePort") .Values.egress.service.nodePort }} + nodePort: {{ .Values.egress.service.nodePort }} + {{- end }} + selector: + app: {{ include "kuma.name" . }}-egress + {{- include "kuma.selectorLabels" . 
| nindent 4 }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/gateway-class.yaml b/charts/kuma/kuma/2.9.0/templates/gateway-class.yaml new file mode 100644 index 000000000..cf1ae305d --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/gateway-class.yaml @@ -0,0 +1,19 @@ +{{- if and (eq .Values.controlPlane.environment "kubernetes") (eq .Values.controlPlane.mode "zone") }} +{{- if .Capabilities.APIVersions.Has "gateway.networking.k8s.io/v1/GatewayClass" }} +--- +apiVersion: gateway.networking.k8s.io/v1 +kind: GatewayClass +metadata: + name: kuma +spec: + controllerName: "gateways.kuma.io/controller" +{{- else if .Capabilities.APIVersions.Has "gateway.networking.k8s.io/v1beta1/GatewayClass" }} +--- +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: GatewayClass +metadata: + name: kuma +spec: + controllerName: "gateways.kuma.io/controller" +{{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/ingress-deployment.yaml b/charts/kuma/kuma/2.9.0/templates/ingress-deployment.yaml new file mode 100644 index 000000000..fcefeaac6 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/ingress-deployment.yaml @@ -0,0 +1,142 @@ +{{- if .Values.ingress.enabled }} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "kuma.name" . }}-ingress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.ingressLabels" . | nindent 4 }} +spec: + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + {{- if not .Values.ingress.autoscaling.enabled }} + replicas: {{ .Values.ingress.replicas }} + {{- end }} + selector: + matchLabels: + {{- include "kuma.selectorLabels" . | nindent 6 }} + app: {{ include "kuma.name" . }}-ingress + template: + metadata: + annotations: + kuma.io/ingress: enabled + {{- range $key, $value := merge .Values.ingress.podAnnotations .Values.ingress.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: + {{- include "kuma.ingressLabels" . | nindent 8 }} + spec: + {{- with .Values.ingress.affinity }} + affinity: {{ tpl (toYaml . | nindent 8) $ }} + {{- end }} + {{- with .Values.ingress.topologySpreadConstraints }} + topologySpreadConstraints: {{ tpl (toYaml . | nindent 8) $ }} + {{- end }} + securityContext: + {{- toYaml .Values.ingress.podSecurityContext | trim | nindent 8 }} + serviceAccountName: {{ include "kuma.name" . }}-ingress + automountServiceAccountToken: {{ .Values.ingress.automountServiceAccountToken }} + {{- with .Values.ingress.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.ingress.tolerations }} + tolerations: + {{ toYaml . | nindent 8 }} + {{- end }} + terminationGracePeriodSeconds: {{ .Values.ingress.terminationGracePeriodSeconds }} + {{ include "kuma.dnsConfig" (dict "dns" .Values.ingress.dns) | nindent 6 | trim }} + containers: + - name: ingress + image: {{ include "kuma.formatImage" (dict "image" .Values.dataPlane.image "root" $) | quote }} + imagePullPolicy: {{ .Values.dataPlane.image.pullPolicy }} + securityContext: + {{- toYaml .Values.ingress.containerSecurityContext | trim | nindent 12 }} + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: KUMA_CONTROL_PLANE_URL + value: "https://{{ include "kuma.controlPlane.serviceName" . 
}}.{{ .Release.Namespace }}:5678" + - name: KUMA_CONTROL_PLANE_CA_CERT_FILE + value: /var/run/secrets/kuma.io/cp-ca/ca.crt + - name: KUMA_DATAPLANE_DRAIN_TIME + value: {{ .Values.ingress.drainTime }} + - name: KUMA_DATAPLANE_RUNTIME_TOKEN_PATH + value: /var/run/secrets/kubernetes.io/serviceaccount/token + - name: KUMA_DATAPLANE_PROXY_TYPE + value: "ingress" + args: + - run + - --log-level={{ .Values.ingress.logLevel | default "info" }} + ports: + - containerPort: 10001 + livenessProbe: + httpGet: + path: "/ready" + port: 9901 + failureThreshold: 12 + initialDelaySeconds: 60 + periodSeconds: 5 + successThreshold: 1 + timeoutSeconds: 3 + readinessProbe: + httpGet: + path: "/ready" + port: 9901 + failureThreshold: 12 + initialDelaySeconds: 1 + periodSeconds: 5 + successThreshold: 1 + timeoutSeconds: 3 + resources: {{ toYaml .Values.ingress.resources | nindent 12 }} + {{ with .Values.ingress.lifecycle}} + lifecycle: {{ . | toYaml | nindent 12 }} + {{ end }} + volumeMounts: +{{- if not .Values.ingress.automountServiceAccountToken }} + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: serviceaccount-token + readOnly: true +{{- end }} + - name: control-plane-ca + mountPath: /var/run/secrets/kuma.io/cp-ca + readOnly: true + - name: tmp + mountPath: /tmp + volumes: +{{- if not .Values.ingress.automountServiceAccountToken }} + - name: serviceaccount-token + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3600 + path: token + - configMap: + name: kube-root-ca.crt + items: + - key: ca.crt + path: ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +{{- end }} + - name: control-plane-ca + secret: + secretName: {{ include "kuma.controlPlane.tls.general.caSecretName" . }} + items: + - key: ca.crt + path: ca.crt + - name: tmp + emptyDir: {} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/ingress-hpa.yaml b/charts/kuma/kuma/2.9.0/templates/ingress-hpa.yaml new file mode 100644 index 000000000..4aaeabe67 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/ingress-hpa.yaml @@ -0,0 +1,24 @@ +{{- if .Values.ingress.autoscaling.enabled }} +{{ if .Capabilities.APIVersions.Has "autoscaling/v2" }} +apiVersion: "autoscaling/v2" +{{ else }} +apiVersion: "autoscaling/v1" +{{ end }} +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "kuma.name" . }}-ingress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.ingressLabels" . | nindent 4 }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "kuma.name" . 
}}-ingress + minReplicas: {{ .Values.ingress.autoscaling.minReplicas }} + maxReplicas: {{ .Values.ingress.autoscaling.maxReplicas }} + {{ if .Capabilities.APIVersions.Has "autoscaling/v2" }} + metrics: {{- toYaml .Values.ingress.autoscaling.metrics | nindent 4 }} + {{ else }} + targetCPUUtilizationPercentage: {{ .Values.ingress.autoscaling.targetCPUUtilizationPercentage }} + {{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/ingress-pdb.yaml b/charts/kuma/kuma/2.9.0/templates/ingress-pdb.yaml new file mode 100644 index 000000000..639d1b574 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/ingress-pdb.yaml @@ -0,0 +1,20 @@ +{{ if $.Values.ingress.podDisruptionBudget.enabled }} +{{ if .Capabilities.APIVersions.Has "policy/v1" }} +apiVersion: policy/v1 +{{ else if .Capabilities.APIVersions.Has "policy/v1beta1" }} +apiVersion: policy/v1beta1 +{{ else }} +{{ fail "pod disruption budgets are not supported by this version of kubernetes" }} +{{ end }} +kind: PodDisruptionBudget +metadata: + name: {{ include "kuma.name" . }}-ingress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.ingressLabels" . | nindent 4 }} +spec: + maxUnavailable: {{ .Values.ingress.podDisruptionBudget.maxUnavailable }} + selector: + matchLabels: + {{- include "kuma.selectorLabels" . | nindent 6 }} + app: {{ include "kuma.name" . }}-ingress +{{ end }} diff --git a/charts/kuma/kuma/2.9.0/templates/ingress-rbac.yaml b/charts/kuma/kuma/2.9.0/templates/ingress-rbac.yaml new file mode 100644 index 000000000..e4e1d61ce --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/ingress-rbac.yaml @@ -0,0 +1,18 @@ +{{- if .Values.ingress.enabled }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "kuma.name" . }}-ingress + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.ingressLabels" . | nindent 4 }} +{{- with .Values.ingress.serviceAccountAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} +{{- end }} +{{- with .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- range . }} + - name: {{ . | quote }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/ingress-service.yaml b/charts/kuma/kuma/2.9.0/templates/ingress-service.yaml new file mode 100644 index 000000000..74a4dde90 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/ingress-service.yaml @@ -0,0 +1,32 @@ +{{- if .Values.ingress.enabled }} +{{- if or (eq .Values.controlPlane.mode "global") (eq .Values.controlPlane.mode "standalone") }} +{{ fail "You shouldn't run zoneIngress when running the CP in global or standalone" }} +{{- end }} +{{- end }} +{{- if and .Values.ingress.enabled .Values.ingress.service.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "kuma.ingress.serviceName" . }} + namespace: {{ .Release.Namespace }} + labels: {{ include "kuma.ingressLabels" . | nindent 4 }} + annotations: + {{- range $key, $value := .Values.ingress.service.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} +spec: + type: {{ .Values.ingress.service.type }} + {{- if .Values.ingress.service.loadBalancerIP }} + loadBalancerIP: {{ .Values.ingress.service.loadBalancerIP }} + {{- end }} + ports: + - port: {{ .Values.ingress.service.port }} + protocol: TCP + targetPort: 10001 + {{- if and (eq .Values.ingress.service.type "NodePort") .Values.ingress.service.nodePort }} + nodePort: {{ .Values.ingress.service.nodePort }} + {{- end }} + selector: + app: {{ include "kuma.name" . }}-ingress + {{- include "kuma.selectorLabels" . 
| nindent 4 }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/post-delete-cleanup-ebpf-job.yaml b/charts/kuma/kuma/2.9.0/templates/post-delete-cleanup-ebpf-job.yaml new file mode 100644 index 000000000..aaa3166ff --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/post-delete-cleanup-ebpf-job.yaml @@ -0,0 +1,126 @@ +{{- if and (.Values.experimental.ebpf.enabled) (and (not .Values.cni.enabled) (not .Values.noHelmHooks) (eq .Values.controlPlane.environment "kubernetes")) }} + {{- $serviceAccountName := printf "%s-cleanup-node-ebpf-job" (include "kuma.name" .) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "post-delete" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +{{- with .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- range . }} + - name: {{ . | quote }} + {{- end }} +{{- end }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "kuma.name" . }}-cleanup-node-ebpf-job + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "post-delete" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +rules: + - apiGroups: [""] + resources: + - nodes + verbs: + - list + - apiGroups: [""] + resources: + - pods + verbs: + - watch + - delete + - deletecollection + - apiGroups: ["batch"] + resources: + - jobs + verbs: + - watch + - create + - delete + - deletecollection +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "kuma.name" . }}-cleanup-node-ebpf-job + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "post-delete" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "kuma.name" . }}-cleanup-node-ebpf-job +subjects: + - kind: ServiceAccount + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "kuma.name" . }}-cleanup-node-ebpf-job + namespace: {{ .Release.Namespace }} + labels: + {{ include "kuma.labels" . | nindent 4 }} + annotations: + "helm.sh/hook": "post-delete" + {{/* Ensure the job is created after the RBAC resources */}} + "helm.sh/hook-weight": "5" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" +spec: + template: + metadata: + name: {{ template "kuma.name" . }}-cleanup-node-ebpf-job + labels: + {{ include "kuma.labels" . | nindent 8 }} + spec: + serviceAccountName: {{ $serviceAccountName }} + {{- with .Values.hooks.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.hooks.tolerations }} + tolerations: + {{ toYaml . 
| nindent 8 }} + {{- end }} + restartPolicy: OnFailure + {{- if .Values.hooks.ebpfCleanup.podSecurityContext }} + securityContext: + {{ toYaml .Values.hooks.ebpfCleanup.podSecurityContext | trim | nindent 8 }} + {{- end }} + containers: + - name: post-delete-job + image: {{ include "kuma.formatImage" (dict "image" .Values.dataPlane.initImage "root" $) | quote }} + {{- if .Values.hooks.ebpfCleanup.containerSecurityContext }} + securityContext: + {{ toYaml .Values.hooks.ebpfCleanup.containerSecurityContext | trim | nindent 12 }} + {{- end }} + resources: + requests: + cpu: "20m" + memory: "20Mi" + limits: + cpu: "40m" + memory: "40Mi" + command: + - 'kumactl' + - 'uninstall' + - 'ebpf' + - '--cleanup-image-registry' + - {{ .Values.global.image.registry }} + - '--cleanup-image-repository' + - {{ .Values.dataPlane.initImage.repository }} + {{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/pre-delete-webhooks.yaml b/charts/kuma/kuma/2.9.0/templates/pre-delete-webhooks.yaml new file mode 100644 index 000000000..e6948af2f --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/pre-delete-webhooks.yaml @@ -0,0 +1,109 @@ +{{- if and (eq .Values.controlPlane.environment "kubernetes") (not .Values.noHelmHooks) }} +# HELM first deletes RBAC of Kuma, then it tries to delete Secrets. We've got validating webhook on Secrets. +# But even that the policy of this webhook is Ignore, it fails because Kuma does not have permission to access Secrets anymore. +# Therefore we first need to delete webhook so we can delete the rest of the deployment +{{- $serviceAccountName := printf "%s-pre-delete-job" (include "kuma.name" .) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "pre-delete" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +{{- with .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- range . }} + - name: {{ . | quote }} + {{- end }} +{{- end }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "kuma.name" . }}-pre-delete-job + annotations: + "helm.sh/hook": "pre-delete" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +rules: + - apiGroups: + - admissionregistration.k8s.io + resources: + - validatingwebhookconfigurations + resourceNames: + - {{ include "kuma.name" . }}-validating-webhook-configuration + verbs: + - delete +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "kuma.name" . }}-pre-delete-job + annotations: + "helm.sh/hook": "pre-delete" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "kuma.name" . }}-pre-delete-job +subjects: + - kind: ServiceAccount + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "kuma.name" . }}-delete-webhook + namespace: {{ .Release.Namespace }} + labels: + {{ include "kuma.labels" . 
| nindent 4 }} + annotations: + "helm.sh/hook": "pre-delete" + {{/* Ensure the job is created after the RBAC resources */}} + "helm.sh/hook-weight": "5" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" +spec: + template: + metadata: + name: {{ template "kuma.name" . }}-delete-webhook + labels: + {{ include "kuma.labels" . | nindent 8 }} + spec: + serviceAccountName: {{ $serviceAccountName }} + {{- with .Values.hooks.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.hooks.tolerations }} + tolerations: + {{ toYaml . | nindent 8 }} + {{- end }} + restartPolicy: OnFailure + securityContext: + {{- toYaml .Values.hooks.podSecurityContext | trim | nindent 8 }} + containers: + - name: pre-delete-job + image: "{{ .Values.kubectl.image.registry }}/{{ .Values.kubectl.image.repository }}:{{ .Values.kubectl.image.tag }}" + command: + - 'kubectl' + - 'delete' + - 'ValidatingWebhookConfiguration' + - '--ignore-not-found' + - {{ include "kuma.name" . }}-validating-webhook-configuration + securityContext: + {{- toYaml (mergeOverwrite (dict "runAsUser" 65534) .Values.hooks.containerSecurityContext) | trim | nindent 12 }} + resources: + requests: + cpu: "100m" + memory: "256Mi" + limits: + cpu: "100m" + memory: "256Mi" +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/pre-install-patch-namespace-job.yaml b/charts/kuma/kuma/2.9.0/templates/pre-install-patch-namespace-job.yaml new file mode 100644 index 000000000..a84d7accf --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/pre-install-patch-namespace-job.yaml @@ -0,0 +1,124 @@ +{{- if and ( .Values.noHelmHooks ) (eq .Values.controlPlane.environment "kubernetes") }} + {{- $errorMessage := ".Values.noHelmHooks is set. You must manually create and label the system namespace with kuma.io/system-namespace: \"true\" before installing or upgrading the chart" }} + {{- $systemNamespace := (lookup "v1" "Namespace" "" .Release.Namespace) }} + {{- if not $systemNamespace }} + {{- fail $errorMessage }} + {{- end }} + {{- $systemNamespaceLabels := ($systemNamespace).metadata.labels }} + {{- if ne (get $systemNamespaceLabels "kuma.io/system-namespace") "true" }} + {{- fail $errorMessage }} + {{- end }} +{{- else}} + {{- if .Values.patchSystemNamespace }} + {{- $serviceAccountName := printf "%s-patch-ns-job" (include "kuma.name" .) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "pre-install" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +{{- with .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- range . }} + - name: {{ . | quote }} + {{- end }} +{{- end }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "kuma.name" . }}-patch-ns-job + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "pre-install" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +rules: + - apiGroups: + - "" + resources: + - namespaces + resourceNames: + - {{ .Release.Namespace }} + verbs: + - get + - patch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "kuma.name" . 
}}-patch-ns-job + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "pre-install" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "kuma.name" . }}-patch-ns-job +subjects: + - kind: ServiceAccount + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "kuma.name" . }}-patch-ns + namespace: {{ .Release.Namespace }} + labels: + {{ include "kuma.labels" . | nindent 4 }} + annotations: + "helm.sh/hook": "pre-install" + {{/* Ensure the job is created after the RBAC resources */}} + "helm.sh/hook-weight": "5" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" +spec: + template: + metadata: + name: {{ template "kuma.name" . }}-patch-ns-script + labels: + {{ include "kuma.labels" . | nindent 8 }} + spec: + serviceAccountName: {{ $serviceAccountName }} + {{- with .Values.hooks.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.hooks.tolerations }} + tolerations: + {{ toYaml . | nindent 8 }} + {{- end }} + restartPolicy: OnFailure + securityContext: + {{- toYaml .Values.hooks.podSecurityContext | trim | nindent 8 }} + containers: + - name: pre-install-job + image: "{{ .Values.kubectl.image.registry }}/{{ .Values.kubectl.image.repository }}:{{ .Values.kubectl.image.tag }}" + securityContext: + {{- toYaml (mergeOverwrite (dict "runAsUser" 65534) .Values.hooks.containerSecurityContext) | trim | nindent 12 }} + resources: + requests: + cpu: "100m" + memory: "256Mi" + limits: + cpu: "100m" + memory: "256Mi" + command: + - 'kubectl' + - 'patch' + - 'namespace' + - {{ .Release.Namespace | quote }} + - '--type' + - 'merge' + - '--patch' + - '{ "metadata": { "labels": { "kuma.io/system-namespace": "true" } } }' + {{- end }} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/templates/pre-upgrade-install-crds-job.yaml b/charts/kuma/kuma/2.9.0/templates/pre-upgrade-install-crds-job.yaml new file mode 100644 index 000000000..8fadf1722 --- /dev/null +++ b/charts/kuma/kuma/2.9.0/templates/pre-upgrade-install-crds-job.yaml @@ -0,0 +1,171 @@ +{{- if (and .Values.installCrdsOnUpgrade.enabled (and (not .Values.noHelmHooks) (eq .Values.controlPlane.environment "kubernetes"))) }} + {{ $hook := "pre-upgrade,pre-install" }} + {{- $serviceAccountName := printf "%s-install-crds" (include "kuma.name" .) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "{{ $hook }}" + "helm.sh/hook-weight": "-1" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +{{- with concat .Values.installCrdsOnUpgrade.imagePullSecrets .Values.global.imagePullSecrets | uniq }} +imagePullSecrets: + {{- range . }} + - name: {{ . | quote }} + {{- end }} +{{- end }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "kuma.name" . }}-install-crds + annotations: + "helm.sh/hook": "{{ $hook }}" + "helm.sh/hook-weight": "-1" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . 
| nindent 4 }} +rules: + - apiGroups: + - "apiextensions.k8s.io" + resources: + - customresourcedefinitions + verbs: + - create + - patch + - update + - list + - get +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "kuma.name" . }}-install-crds + annotations: + "helm.sh/hook": "{{ $hook }}" + "helm.sh/hook-weight": "-1" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed" + labels: + {{- include "kuma.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "kuma.name" . }}-install-crds +subjects: + - kind: ServiceAccount + name: {{ $serviceAccountName }} + namespace: {{ .Release.Namespace }} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "kuma.name" . }}-install-crds-scripts + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": "{{ $hook }}" + "helm.sh/hook-weight": "-1" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" + labels: + {{- include "kuma.labels" . | nindent 4 }} +data: + install_crds.sh: | + #!/usr/bin/env sh + set -e + + if [ -s /kuma/crds/crds.yaml ]; then + echo "/kuma/crds/crds.yaml found and is not empty, adding crds" + kubectl apply -f /kuma/crds/crds.yaml + else + echo "/kuma/crds/crds.yaml not found or empty, it looks like there is no crds to install" + fi + save_crds.sh: | + set -e + + crds="$(kumactl install crds --no-config)" + + if [ -n "${crds}" ]; then + echo "found crds - saving to /kuma/crds/crds.yaml" + echo "${crds}" > /kuma/crds/crds.yaml + fi +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "kuma.name" . }}-install-crds + namespace: {{ .Release.Namespace }} + labels: + {{ include "kuma.labels" . | nindent 4 }} + annotations: + "helm.sh/hook": "{{ $hook }}" + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" +spec: + template: + metadata: + name: {{ template "kuma.name" . }}-install-crds-job + labels: + {{ include "kuma.labels" . | nindent 8 }} + spec: + serviceAccountName: {{ $serviceAccountName }} + {{- with .Values.hooks.nodeSelector }} + nodeSelector: + {{ toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.hooks.tolerations }} + tolerations: + {{ toYaml . 
| nindent 8 }} + {{- end }} + restartPolicy: OnFailure + securityContext: + {{- toYaml .Values.hooks.podSecurityContext | trim | nindent 8 }} + containers: + - name: pre-upgrade-job + image: "{{ .Values.kubectl.image.registry }}/{{ .Values.kubectl.image.repository }}:{{ .Values.kubectl.image.tag }}" + securityContext: + {{- toYaml (mergeOverwrite (dict "runAsUser" 65534) .Values.hooks.containerSecurityContext) | trim | nindent 12 }} + resources: + requests: + cpu: "100m" + memory: "256Mi" + limits: + cpu: "100m" + memory: "256Mi" + command: ["/kuma/scripts/install_crds.sh"] + volumeMounts: + - mountPath: /kuma/crds + name: crds + readOnly: true + - mountPath: /kuma/scripts + name: scripts + readOnly: true + initContainers: + - name: pre-upgrade-job-init + image: {{ include "kuma.formatImage" (dict "image" .Values.kumactl.image "root" $) | quote }} + securityContext: + {{- toYaml .Values.hooks.containerSecurityContext | trim | nindent 12 }} + resources: + requests: + cpu: "100m" + memory: "256Mi" + limits: + cpu: "100m" + memory: "256Mi" + volumeMounts: + - mountPath: /kuma/crds + name: crds + - mountPath: /kuma/scripts + name: scripts + readOnly: true + command: ["sh", "-c"] + args: ["/kuma/scripts/save_crds.sh"] + volumes: + - name: scripts + configMap: + name: {{ include "kuma.name" . }}-install-crds-scripts + defaultMode: 0755 + - name: crds + emptyDir: {} +{{- end }} diff --git a/charts/kuma/kuma/2.9.0/values.yaml b/charts/kuma/kuma/2.9.0/values.yaml new file mode 100644 index 000000000..766792e5b --- /dev/null +++ b/charts/kuma/kuma/2.9.0/values.yaml @@ -0,0 +1,903 @@ +global: + image: + # -- Default registry for all Kuma Images + registry: "docker.io/kumahq" + # -- The default tag for all Kuma images, which itself defaults to .Chart.AppVersion + tag: + # -- Add `imagePullSecrets` to all the service accounts used for Kuma components + imagePullSecrets: [] + +# -- Whether to patch the target namespace with the system label +patchSystemNamespace: true + +installCrdsOnUpgrade: + # -- Whether install new CRDs before upgrade (if any were introduced with the new version of Kuma) + enabled: true + # -- The `imagePullSecrets` to attach to the Service Account running CRD installation. + # This field will be deprecated in a future release, please use .global.imagePullSecrets + imagePullSecrets: [] + +# -- Whether to disable all helm hooks +noHelmHooks: false + +# -- Whether to restart control-plane by calculating a new checksum for the secret +restartOnSecretChange: true + +controlPlane: + # -- Environment that control plane is run in, useful when running universal global control plane on k8s + environment: "kubernetes" + + # -- Labels to add to resources in addition to default labels + extraLabels: {} + + # -- Kuma CP log level: one of off,info,debug + logLevel: "info" + + # -- Kuma CP log output path: Defaults to /dev/stdout + logOutputPath: "" + + # -- Kuma CP modes: one of zone,global + mode: "zone" + + # -- (string) Kuma CP zone, if running multizone + zone: + + # -- Only used in `zone` mode + kdsGlobalAddress: "" + + # -- Number of replicas of the Kuma CP. Ignored when autoscaling is enabled + replicas: 1 + + # -- Minimum number of seconds for which a newly created pod should be ready for it to be considered available. 
+ minReadySeconds: 0 + + # -- Annotations applied only to the `Deployment` resource + deploymentAnnotations: {} + + # -- Annotations applied only to the `Pod` resource + podAnnotations: {} + + # Horizontal Pod Autoscaling configuration + autoscaling: + # -- Whether to enable Horizontal Pod Autoscaling, which requires the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) in the cluster + enabled: false + + # -- The minimum CP pods to allow + minReplicas: 2 + # -- The max CP pods to scale to + maxReplicas: 5 + + # -- For clusters that don't support autoscaling/v2, autoscaling/v1 is used + targetCPUUtilizationPercentage: 80 + # -- For clusters that do support autoscaling/v2, use metrics + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 80 + + # -- Node selector for the Kuma Control Plane pods + nodeSelector: + kubernetes.io/os: linux + + # -- Tolerations for the Kuma Control Plane pods + tolerations: [] + + podDisruptionBudget: + # -- Whether to create a pod disruption budget + enabled: false + # -- The maximum number of unavailable pods allowed by the budget + maxUnavailable: 1 + + # -- Affinity placement rule for the Kuma Control Plane pods. + # This is rendered as a template, so you can reference other helm variables or includes. + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + # These match the selector labels used on the deployment. + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: + - '{{ include "kuma.name" . }}' + - key: app.kubernetes.io/instance + operator: In + values: + - '{{ .Release.Name }}' + - key: app + operator: In + values: + - '{{ include "kuma.name" . }}-control-plane' + topologyKey: kubernetes.io/hostname + + # -- Topology spread constraints rule for the Kuma Control Plane pods. + # This is rendered as a template, so you can use variables to generate match labels. + topologySpreadConstraints: + + # -- Failure policy of the mutating webhook implemented by the Kuma Injector component + injectorFailurePolicy: Fail + + service: + apiServer: + http: + # -- Port on which Http api server Service is exposed on Node for service of type NodePort + nodePort: 30681 + https: + # -- Port on which Https api server Service is exposed on Node for service of type NodePort + nodePort: 30682 + + # -- Whether to create a service resource. + enabled: true + + # -- (string) Optionally override of the Kuma Control Plane Service's name + name: + + # -- Service type of the Kuma Control Plane + type: ClusterIP + + # -- Annotations to put on the Kuma Control Plane + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "5680" + + # Kuma API and GUI ingress settings. Useful if you want to expose the + # API and GUI of Kuma outside the k8s cluster. + ingress: + # -- Install K8s Ingress resource that exposes GUI and API + enabled: false + # -- IngressClass defines which controller will implement the resource + ingressClassName: + # -- Ingress hostname + hostname: + # -- Map of ingress annotations. + annotations: {} + # -- Ingress path. + path: / + # -- Each path in an Ingress is required to have a corresponding path type. (ImplementationSpecific/Exact/Prefix) + pathType: ImplementationSpecific + # -- Port from kuma-cp to use to expose API and GUI. Switch to 5682 to expose TLS port + servicePort: 5681 + + globalZoneSyncService: + # -- Whether to create a k8s service for the global zone sync + # service. 
It will only be created when enabled and deploying the global + # control plane. + enabled: true + # -- Service type of the Global-zone sync + type: LoadBalancer + # -- (string) Optionally specify IP to be used by cloud provider when configuring load balancer + loadBalancerIP: + # -- Optionally specify allowed source ranges that can access the load balancer + loadBalancerSourceRanges: [] + # -- Additional annotations to put on the Global Zone Sync Service + annotations: { } + # -- Port on which Global Zone Sync Service is exposed on Node for service of type NodePort + nodePort: 30685 + # -- Port on which Global Zone Sync Service is exposed + port: 5685 + # -- Protocol of the Global Zone Sync service port + protocol: grpc + + defaults: + # -- Whether to skip creating the default Mesh + skipMeshCreation: false + + # -- Whether to automountServiceAccountToken for cp. Optionally set to false + automountServiceAccountToken: true + + # -- Optionally override the resource spec + resources: + requests: + cpu: 500m + memory: 256Mi + limits: + memory: 256Mi + + # -- Pod lifecycle settings (useful for adding a preStop hook, when + # using AWS ALB or NLB) + lifecycle: {} + + # -- Number of seconds to wait before force killing the pod. Make sure to + # update this if you add a preStop hook. + terminationGracePeriodSeconds: 30 + + # TLS for various servers + tls: + general: + # -- Secret that contains tls.crt, tls.key [and ca.crt when no + # controlPlane.tls.general.caSecretName specified] for protecting + # Kuma in-cluster communication + secretName: "" + # -- Secret that contains ca.crt that was used to sign cert for protecting + # Kuma in-cluster communication (ca.crt present in this secret + # have precedence over the one provided in the controlPlane.tls.general.secretName) + caSecretName: "" + # -- Base64 encoded CA certificate (the same as in controlPlane.tls.general.secret#ca.crt) + caBundle: "" + apiServer: + # -- Secret that contains tls.crt, tls.key for protecting Kuma API on HTTPS + secretName: "" + # -- Secret that contains list of .pem certificates that can access admin endpoints of Kuma API on HTTPS + clientCertsSecretName: "" + # - if not creating the global control plane, then do nothing + # - if secretName is empty and create is false, then do nothing + # - if secretName is non-empty and create is false, then use the secret made outside of helm with the name secretName + # - if secretName is empty and create is true, then create a secret with a default name and use it + # - if secretName is non-empty and create is true, then create the secret using the provided name + kdsGlobalServer: + # -- Name of the K8s TLS Secret resource. If you set this and don't set + # create=true, you have to create the secret manually. + secretName: "" + # -- Whether to create the TLS secret in helm. + create: false + # -- The TLS certificate to offer. + cert: "" + # -- The TLS key to use. + key: "" + # - if not creating the zonal control plane, then do nothing + # - if secretName is empty and create is false, then do nothing + # - if secretName is non-empty and create is false, then use the secret made outside of helm with the name secretName + # - if secretName is empty and create is true, then create a secret with a default name and use it + # - if secretName is non-empty and create is true, then create the secret using the provided name + kdsZoneClient: + # -- Name of the K8s Secret resource that contains ca.crt which was + # used to sign the certificate of KDS Global Server. 
If you set this + # and don't set create=true, you have to create the secret manually. + secretName: "" + # -- Whether to create the TLS secret in helm. + create: false + # -- CA bundle that was used to sign the certificate of KDS Global Server. + cert: "" + # -- If true, TLS cert of the server is not verified. + skipVerify: false + + # -- Annotations to add for Control Plane's Service Account + serviceAccountAnnotations: { } + + image: + # -- Kuma CP ImagePullPolicy + pullPolicy: IfNotPresent + # -- Kuma CP image repository + repository: "kuma-cp" + # -- Kuma CP Image tag. When not specified, the value is copied from global.tag + tag: + + # -- (object with { Env: string, Secret: string, Key: string }) Secrets to add as environment variables, + # where `Env` is the name of the env variable, + # `Secret` is the name of the Secret, + # and `Key` is the key of the Secret value to use + secrets: + # someSecret: + # Secret: some-secret + # Key: secret_key + # Env: SOME_SECRET + + # -- Additional environment variables that will be passed to the control plane + envVars: { } + + # -- Additional environment variables that will be passed to the control plane. Can be used with Kubernetes downward API + envVarEntries: + # - name: MY_NODE_NAME + # valueFrom: + # fieldRef: + # fieldPath: spec.nodeName + + # -- Additional config maps to mount into the control plane, with optional inline values + extraConfigMaps: [ ] +# - name: extra-config +# mountPath: /etc/extra-config +# readOnly: true +# values: +# extra-config-key: | +# extra-config-value + + # -- (object with { name: string, mountPath: string, readOnly: string }) Additional secrets to mount into the control plane, + # where `Env` is the name of the env variable, + # `Secret` is the name of the Secret, + # and `Key` is the key of the Secret value to use + extraSecrets: + # extraConfig: + # name: extra-config + # mountPath: /etc/extra-config + # readOnly: true + + webhooks: + validator: + # -- Additional rules to apply on Kuma validator webhook. Useful when building custom policy on top of Kuma. + additionalRules: "" + ownerReference: + # -- Additional rules to apply on Kuma owner reference webhook. Useful when building custom policy on top of Kuma. + additionalRules: "" + + # -- Specifies if the deployment should be started in hostNetwork mode. + hostNetwork: false + # -- Define a new server port for the admission controller. Recommended to set in combination with + # hostNetwork to prevent multiple port bindings on the same port (like Calico in AWS EKS). + admissionServerPort: 5443 + + # -- Security context at the pod level for control plane. + podSecurityContext: + runAsNonRoot: true + + # -- Security context at the container level for control plane. + containerSecurityContext: + readOnlyRootFilesystem: true + + # -- If true, then control plane can support TLS secrets for builtin gateway outside of mesh system namespace. + # The downside is that control plane requires permission to read Secrets in all namespaces. + supportGatewaySecretsInAllNamespaces: false + # -- DNS configuration for the control-plane pod. + # This is equivalent to the [Kubernetes DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy). + dns: + # -- Defines how DNS resolution is configured for that Pod. + policy: "" + # -- Optional dns configuration, required when policy is 'None' + config: + # -- A list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. 
+ nameservers: [] + # -- A list of DNS search domains for hostname lookup in the Pod. + searches: [] + +cni: + # -- Install Kuma with CNI instead of proxy init container + enabled: false + # -- Install CNI in chained mode + chained: false + # -- Set the CNI install directory + netDir: /etc/cni/multus/net.d + # -- Set the CNI bin directory + binDir: /var/lib/cni/bin + # -- Set the CNI configuration name + confName: kuma-cni.conf + # -- CNI log level: one of off,info,debug + logLevel: info + # -- Node Selector for the CNI pods + nodeSelector: + kubernetes.io/os: linux + # -- Tolerations for the CNI pods + tolerations: [] + # -- Additional pod annotations + podAnnotations: { } + # -- Set the CNI namespace + namespace: kube-system + + image: + # -- CNI image repository + repository: "kuma-cni" + # -- CNI image tag - defaults to .Chart.AppVersion + tag: + # -- CNI image pull policy + imagePullPolicy: IfNotPresent + + # -- it's only useful in tests to trigger a possible race condition + delayStartupSeconds: 0 + + # -- use new CNI (experimental) + experimental: + imageEbpf: + # -- CNI experimental eBPF image registry + registry: "docker.io/kumahq" + # -- CNI experimental eBPF image repository + repository: "merbridge" + # -- CNI experimental eBPF image tag + tag: "0.8.5" + + resources: + requests: + cpu: 100m + memory: 100Mi + limits: + memory: 100Mi + + # -- Security context at the pod level for cni + podSecurityContext: {} + + # -- Security context at the container level for cni + containerSecurityContext: + readOnlyRootFilesystem: true + runAsNonRoot: false + runAsUser: 0 + runAsGroup: 0 + +dataPlane: + # -- If true, then turn on CoreDNS query logging + dnsLogging: false + image: + # -- The Kuma DP image repository + repository: "kuma-dp" + # -- Kuma DP ImagePullPolicy + pullPolicy: IfNotPresent + # -- Kuma DP Image Tag. When not specified, the value is copied from global.tag + tag: + + initImage: + # -- The Kuma DP init image repository + repository: "kuma-init" + # -- Kuma DP init image tag When not specified, the value is copied from global.tag + tag: + +ingress: + # -- If true, it deploys Ingress for cross cluster communication + enabled: false + + # -- Labels to add to resources, in addition to default labels + extraLabels: {} + + # -- Time for which old listener will still be active as draining + drainTime: 30s + + # -- Number of replicas of the Ingress. Ignored when autoscaling is enabled. + replicas: 1 + + # -- Log level for ingress (available values: off|info|debug) + logLevel: info + + # -- Define the resources to allocate to mesh ingress + resources: + requests: + cpu: 50m + memory: 64Mi + limits: + cpu: 1000m + memory: 512Mi + + # -- Pod lifecycle settings (useful for adding a preStop hook, when + # using AWS ALB or NLB) + lifecycle: {} + + # -- Number of seconds to wait before force killing the pod. Make sure to + # update this if you add a preStop hook. 
+ terminationGracePeriodSeconds: 40 + + # Horizontal Pod Autoscaling configuration + autoscaling: + # -- Whether to enable Horizontal Pod Autoscaling, which requires the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) in the cluster + enabled: false + + # -- The minimum CP pods to allow + minReplicas: 2 + # -- The max CP pods to scale to + maxReplicas: 5 + + # -- For clusters that don't support autoscaling/v2, autoscaling/v1 is used + targetCPUUtilizationPercentage: 80 + # -- For clusters that do support autoscaling/v2, use metrics + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 80 + + service: + # -- Whether to create a Service resource. + enabled: true + # -- Service type of the Ingress + type: LoadBalancer + # -- (string) Optionally specify IP to be used by cloud provider when configuring load balancer + loadBalancerIP: + # -- Additional annotations to put on the Ingress service + annotations: { } + # -- Port on which Ingress is exposed + port: 10001 + # -- Port on which service is exposed on Node for service of type NodePort + nodePort: + # -- Additional pod annotations (deprecated favor `podAnnotations`) + annotations: { } + # -- Additional pod annotations + podAnnotations: { } + # -- Node Selector for the Ingress pods + nodeSelector: + kubernetes.io/os: linux + # -- Tolerations for the Ingress pods + tolerations: [] + podDisruptionBudget: + # -- Whether to create a pod disruption budget + enabled: false + # -- The maximum number of unavailable pods allowed by the budget + maxUnavailable: 1 + + # -- Affinity placement rule for the Kuma Ingress pods + # This is rendered as a template, so you can reference other helm variables + # or includes. + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + # These match the selector labels used on the deployment. + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: + - '{{ include "kuma.name" . }}' + - key: app.kubernetes.io/instance + operator: In + values: + - '{{ .Release.Name }}' + - key: app + operator: In + values: + - kuma-ingress + topologyKey: kubernetes.io/hostname + + # -- Topology spread constraints rule for the Kuma Mesh Ingress pods. + # This is rendered as a template, so you can use variables to generate match labels. + topologySpreadConstraints: + + # -- Security context at the pod level for ingress + podSecurityContext: + runAsNonRoot: true + runAsUser: 5678 + runAsGroup: 5678 + + # -- Security context at the container level for ingress + containerSecurityContext: + readOnlyRootFilesystem: true + + # -- Annotations to add for Control Plane's Service Account + serviceAccountAnnotations: { } + # -- Whether to automountServiceAccountToken for cp. Optionally set to false + automountServiceAccountToken: true + # -- DNS configuration for the ingress pod. + # This is equivalent to the [Kubernetes DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy). + dns: + # -- Defines how DNS resolution is configured for that Pod. + policy: "" + # -- Optional dns configuration, required when policy is 'None' + config: + # -- A list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. + nameservers: [] + # -- A list of DNS search domains for hostname lookup in the Pod. 
+ searches: [] + +egress: + # -- If true, it deploys Egress for cross cluster communication + enabled: false + # -- Labels to add to resources, in addition to the default labels. + extraLabels: {} + # -- Time for which old listener will still be active as draining + drainTime: 30s + # -- Number of replicas of the Egress. Ignored when autoscaling is enabled. + replicas: 1 + + # -- Log level for egress (available values: off|info|debug) + logLevel: info + + # Horizontal Pod Autoscaling configuration + autoscaling: + # -- Whether to enable Horizontal Pod Autoscaling, which requires the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) in the cluster + enabled: false + + # -- The minimum CP pods to allow + minReplicas: 2 + # -- The max CP pods to scale to + maxReplicas: 5 + + # -- For clusters that don't support autoscaling/v2, autoscaling/v1 is used + targetCPUUtilizationPercentage: 80 + # -- For clusters that do support autoscaling/v2, use metrics + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 80 + resources: + requests: + cpu: 50m + memory: 64Mi + limits: + cpu: 1000m + memory: 512Mi + + service: + # -- Whether to create the service object + enabled: true + # -- Service type of the Egress + type: ClusterIP + # -- (string) Optionally specify IP to be used by cloud provider when configuring load balancer + loadBalancerIP: + # -- Additional annotations to put on the Egress service + annotations: { } + # -- Port on which Egress is exposed + port: 10002 + # -- Port on which service is exposed on Node for service of type NodePort + nodePort: + # -- Additional pod annotations (deprecated favor `podAnnotations`) + annotations: { } + # -- Additional pod annotations + podAnnotations: { } + # -- Node Selector for the Egress pods + nodeSelector: + kubernetes.io/os: linux + # -- Tolerations for the Egress pods + tolerations: [] + podDisruptionBudget: + # -- Whether to create a pod disruption budget + enabled: false + # -- The maximum number of unavailable pods allowed by the budget + maxUnavailable: 1 + + # -- Affinity placement rule for the Kuma Egress pods. + # This is rendered as a template, so you can reference other helm variables or includes. + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + # These match the selector labels used on the deployment. + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: + - '{{ include "kuma.name" . }}' + - key: app.kubernetes.io/instance + operator: In + values: + - '{{ .Release.Name }}' + - key: app + operator: In + values: + - kuma-egress + topologyKey: kubernetes.io/hostname + + # -- Topology spread constraints rule for the Kuma Egress pods. + # This is rendered as a template, so you can use variables to generate match labels. + topologySpreadConstraints: + + # -- Security context at the pod level for egress + podSecurityContext: + runAsNonRoot: true + runAsUser: 5678 + runAsGroup: 5678 + + # -- Security context at the container level for egress + containerSecurityContext: + readOnlyRootFilesystem: true + + # -- Annotations to add for Control Plane's Service Account + serviceAccountAnnotations: { } + # -- Whether to automountServiceAccountToken for cp. Optionally set to false + automountServiceAccountToken: true + # -- DNS configuration for the egress pod. 
+ # This is equivalent to the [Kubernetes DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy). + dns: + # -- Defines how DNS resolution is configured for that Pod. + policy: "" + # -- Optional dns configuration, required when policy is 'None' + config: + # -- A list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. + nameservers: [] + # -- A list of DNS search domains for hostname lookup in the Pod. + searches: [] + +kumactl: + image: + # -- The kumactl image repository + repository: kumactl + # -- The kumactl image tag. When not specified, the value is copied from global.tag + tag: + +kubectl: + image: + # -- The kubectl image registry + registry: docker.io + # -- The kubectl image repository + repository: bitnami/kubectl + # -- The kubectl image tag + tag: "1.27.5" +hooks: + # -- Node selector for the HELM hooks + nodeSelector: + kubernetes.io/os: linux + # -- Tolerations for the HELM hooks + tolerations: [] + # -- Security context at the pod level for crd/webhook/ns + podSecurityContext: + runAsNonRoot: true + + # -- Security context at the container level for crd/webhook/ns + containerSecurityContext: + readOnlyRootFilesystem: true + + # -- ebpf-cleanup hook needs write access to the root filesystem to clean ebpf programs + # Changing below values will potentially break ebpf cleanup completely, + # so be cautious when doing so. + ebpfCleanup: + # -- Security context at the pod level for crd/webhook/cleanup-ebpf + podSecurityContext: + runAsNonRoot: false + # -- Security context at the container level for crd/webhook/cleanup-ebpf + containerSecurityContext: + readOnlyRootFilesystem: false + +transparentProxy: + configMap: + # -- If true, enables the use of a ConfigMap to manage transparent proxy configuration + # instead of directly configuring it within the Kuma system + enabled: false + # -- The name of the ConfigMap used to store the transparent proxy configuration + name: kuma-transparent-proxy-config + config: + # -- The username or UID of the user that will run kuma-dp. 
If not provided, the system will + # use the default UID ("5678") or the default username ("kuma-dp") + kumaDPUser: "5678" + # -- The IP family mode used for configuring traffic redirection in the transparent proxy + # Supports "dualstack" (for both IPv4 and IPv6) and "ipv4" modes + ipFamilyMode: dualstack + redirect: + dns: + # -- Enables DNS redirection in the transparent proxy + enabled: true + # -- Redirect all DNS queries + captureAll: true + # -- The port on which the DNS server listens + port: 15053 + # -- Path to the system's resolv.conf file + resolvConfigPath: /etc/resolv.conf + # -- Disables conntrack zone splitting, which can prevent potential DNS issues + skipConntrackZoneSplit: false + inbound: + # -- Enables inbound traffic redirection + enabled: true + # -- Port used for redirecting inbound traffic + port: 15006 + # -- List of ports to exclude from inbound traffic redirection + excludePorts: [] + # -- List of IP addresses to exclude from inbound traffic redirection for specific ports + excludePortsForIPs: [] + # -- List of UIDs to exclude from inbound traffic redirection for specific ports + excludePortsForUIDs: [] + # -- List of ports to include in inbound traffic redirection + includePorts: [] + # -- Inserts the redirection rule at the beginning of the chain instead of appending it + insertRedirectInsteadOfAppend: false + outbound: + # -- Enables outbound traffic redirection + enabled: true + # -- Port used for redirecting outbound traffic + port: 15001 + # -- List of ports to exclude from outbound traffic redirection + excludePorts: [] + # -- List of IP addresses to exclude from outbound traffic redirection for specific ports + excludePortsForIPs: [] + # -- List of UIDs to exclude from outbound traffic redirection for specific ports + excludePortsForUIDs: [] + # -- List of ports to include in outbound traffic redirection + includePorts: [] + # -- Inserts the redirection rule at the beginning of the chain instead of appending it + insertRedirectInsteadOfAppend: false + vnet: + # -- Specifies virtual networks using the format interfaceName:CIDR + # Allows matching traffic on specific network interfaces + # Examples: + # - "docker0:172.17.0.0/16" + # - "br+:172.18.0.0/16" (matches any interface starting with "br") + # - "iface:::1/64" (for IPv6) + networks: [] + ebpf: + # -- Enables eBPF support for handling traffic redirection in the transparent proxy + enabled: false + # -- The path of the BPF filesystem + bpffsPath: /run/kuma/bpf + # -- The path of cgroup2 + cgroupPath: /sys/fs/cgroup + # -- The name of the environment variable containing the IP address of the instance (pod/vm) + # where transparent proxy will be installed + instanceIPEnvVarName: "" + # -- Path where compiled eBPF programs and other necessary files for eBPF mode can be found + programsSourcePath: /tmp/kuma-ebpf + # -- The network interface for TC eBPF programs to bind to. 
If not provided, it will be + # automatically determined + tcAttachIface: "" + retry: + # -- The maximum number of retry attempts for operations + maxRetries: 4 + # -- The time duration to wait between retry attempts + sleepBetweenRetries: 2s + iptablesExecutables: + # -- Custom path for the iptables executable (IPv4) + iptables: "" + # -- Custom path for the iptables-save executable (IPv4) + iptables-save: "" + # -- Custom path for the iptables-restore executable (IPv4) + iptables-restore: "" + # -- Custom path for the ip6tables executable (IPv6) + ip6tables: "" + # -- Custom path for the ip6tables-save executable (IPv6) + ip6tables-save: "" + # -- Custom path for the ip6tables-restore executable (IPv6) + ip6tables-restore: "" + log: + # -- Enables logging of iptables rules for diagnostics and monitoring + enabled: false + comments: + # -- Disables comments in the generated iptables rules + disabled: false + # -- Time in seconds to wait for acquiring the xtables lock before failing + # Value 0 means wait indefinitely + wait: 5 + # -- Time interval between retries to acquire the xtables lock in seconds + waitInterval: 0 + # -- Drops invalid packets to avoid connection resets in high-throughput scenarios + dropInvalidPackets: false + # -- Enables firewalld support to store iptables rules + storeFirewalld: false + # -- Enables verbose mode with longer argument/flag names and additional comments + verbose: false + +experimental: + # Configuration for the experimental ebpf mode for transparent proxy + ebpf: + # -- If true, ebpf will be used instead of using iptables to install/configure transparent proxy + enabled: false + # -- Name of the environmental variable which will contain the IP address of a pod + instanceIPEnvVarName: INSTANCE_IP + # -- Path where BPF file system should be mounted + bpffsPath: /sys/fs/bpf + # -- Host's cgroup2 path + cgroupPath: /sys/fs/cgroup + # -- Name of the network interface which TC programs should be attached to, we'll try to automatically determine it if empty + tcAttachIface: "" + # -- Path where compiled eBPF programs which will be installed can be found + programsSourcePath: /tmp/kuma-ebpf + # -- If true, enable native Kubernetes sidecars. This requires at least + # Kubernetes v1.29 + sidecarContainers: false + +# Postgres' settings for universal control plane on k8s +postgres: + # -- Postgres port, password should be provided as a secret reference in "controlPlane.secrets" + # with the Env value "KUMA_STORE_POSTGRES_PASSWORD". + # Example: + # controlPlane: + # secrets: + # - Secret: postgres-postgresql + # Key: postgresql-password + # Env: KUMA_STORE_POSTGRES_PASSWORD + port: "5432" + # TLS settings + tls: + # -- Mode of TLS connection. Available values are: "disable", "verifyNone", "verifyCa", "verifyFull" + mode: disable # ENV: KUMA_STORE_POSTGRES_TLS_MODE + # -- Whether to disable SNI the postgres `sslsni` option. 
+ disableSSLSNI: false # ENV: KUMA_STORE_POSTGRES_TLS_DISABLE_SSLSNI + # -- Secret name that contains the ca.crt + caSecretName: + # -- Secret name that contains the client tls.crt, tls.key + secretName: + +# @ignored for helm-docs +plugins: + resources: + hostnamegenerators: true + meshexternalservices: true + meshmultizoneservices: true + meshservices: true + policies: + meshaccesslogs: true + meshcircuitbreakers: true + meshfaultinjections: true + meshhealthchecks: true + meshhttproutes: true + meshloadbalancingstrategies: true + meshmetrics: true + meshpassthroughs: true + meshproxypatches: true + meshratelimits: true + meshretries: true + meshtcproutes: true + meshtimeouts: true + meshtlses: true + meshtraces: true + meshtrafficpermissions: true diff --git a/charts/nats/nats/1.2.6/.helmignore b/charts/nats/nats/1.2.6/.helmignore new file mode 100644 index 000000000..240dfde2a --- /dev/null +++ b/charts/nats/nats/1.2.6/.helmignore @@ -0,0 +1,26 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ + +# template tests +/test diff --git a/charts/nats/nats/1.2.6/Chart.yaml b/charts/nats/nats/1.2.6/Chart.yaml new file mode 100644 index 000000000..46235bc37 --- /dev/null +++ b/charts/nats/nats/1.2.6/Chart.yaml @@ -0,0 +1,22 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: NATS Server + catalog.cattle.io/kube-version: '>=1.16-0' + catalog.cattle.io/release-name: nats +apiVersion: v2 +appVersion: 2.10.22 +description: A Helm chart for the NATS.io High Speed Cloud Native Distributed Communications + Technology. +home: http://github.com/nats-io/k8s +icon: file://assets/icons/nats.png +keywords: +- nats +- messaging +- cncf +kubeVersion: '>=1.16-0' +maintainers: +- email: info@nats.io + name: The NATS Authors + url: https://github.com/nats-io +name: nats +version: 1.2.6 diff --git a/charts/nats/nats/1.2.6/README.md b/charts/nats/nats/1.2.6/README.md new file mode 100644 index 000000000..0916999df --- /dev/null +++ b/charts/nats/nats/1.2.6/README.md @@ -0,0 +1,329 @@ +# NATS Server + +--- + +[NATS](https://nats.io) is a simple, secure and performant communications system for digital systems, services and devices. +NATS is part of the Cloud Native Computing Foundation ([CNCF](https://cncf.io)). +NATS has over [30 client language implementations](https://nats.io/download/), and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. +NATS can secure and simplify design and operation of modern distributed systems. + +```shell +helm repo add nats https://nats-io.github.io/k8s/helm/charts/ +helm upgrade --install nats nats/nats +``` + +## Upgrade Nodes + +- **Upgrading from 0.x**: The `values.yaml` schema changed significantly from 0.x to 1.x. Read [UPGRADING.md](UPGRADING.md) for instructions on upgrading a 0.x release to 1.x. + +## Values + +There are a handful of explicitly defined options which are documented with comments in the [values.yaml](values.yaml) file. 
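+
+ For example, a couple of those documented options can be set directly at install time (an illustrative sketch; `config.cluster.enabled` and `config.cluster.replicas` are the same keys used in the clustering examples below):
+
+ ```shell
+ # enable clustering with 3 servers at install/upgrade time
+ helm upgrade --install nats nats/nats \
+   --set config.cluster.enabled=true \
+   --set config.cluster.replicas=3
+ ```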
+ +Everything in the NATS Config or Kubernetes Resources can be overridden by `merge` and `patch`, which is supported for the following values: + +| key | type | enabled by default | +|----------------------------------|-----------------------------------------------------------------------------------------------------------------------------|-----------------------------------------| +| `config` | [NATS Config](https://docs.nats.io/running-a-nats-service/configuration) | yes | +| `config.cluster` | [NATS Cluster](https://docs.nats.io/running-a-nats-service/configuration/clustering/cluster_config) | no | +| `config.cluster.tls` | [NATS TLS](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) | no | +| `config.jetstream` | [NATS JetStream](https://docs.nats.io/running-a-nats-service/configuration#jetstream) | no | +| `config.jetstream.fileStore.pvc` | [k8s PVC](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core) | yes, when `config.jetstream` is enabled | +| `config.nats.tls` | [NATS TLS](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) | no | +| `config.leafnodes` | [NATS LeafNodes](https://docs.nats.io/running-a-nats-service/configuration/leafnodes/leafnode_conf) | no | +| `config.leafnodes.tls` | [NATS TLS](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) | no | +| `config.websocket` | [NATS WebSocket](https://docs.nats.io/running-a-nats-service/configuration/websocket/websocket_conf) | no | +| `config.websocket.tls` | [NATS TLS](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) | no | +| `config.websocket.ingress` | [k8s Ingress](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#ingress-v1-networking-k8s-io) | no | +| `config.mqtt` | [NATS MQTT](https://docs.nats.io/running-a-nats-service/configuration/mqtt/mqtt_config) | no | +| `config.mqtt.tls` | [NATS TLS](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) | no | +| `config.gateway` | [NATS Gateway](https://docs.nats.io/running-a-nats-service/configuration/gateways/gateway#gateway-configuration-block) | no | +| `config.gateway.tls` | [NATS TLS](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) | no | +| `config.resolver` | [NATS Resolver](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/jwt/resolver) | no | +| `config.resolver.pvc` | [k8s PVC](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core) | yes, when `config.resolver` is enabled | +| `container` | nats [k8s Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core) | yes | +| `reloader` | config reloader [k8s Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core) | yes | +| `promExporter` | prometheus exporter [k8s Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core) | no | +| `promExporter.podMonitor` | [prometheus PodMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitor) | no | +| `service` | [k8s Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core) | yes | +| `statefulSet` | [k8s StatefulSet](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#statefulset-v1-apps) | yes | +| `podTemplate` | [k8s 
PodTemplate](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core) | yes | +| `headlessService` | [k8s Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core) | yes | +| `configMap` | [k8s ConfigMap](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmap-v1-core) | yes | +| `natsBox.contexts.default` | [NATS Context](https://docs.nats.io/using-nats/nats-tools/nats_cli#nats-contexts) | yes | +| `natsBox.contexts.[name]` | [NATS Context](https://docs.nats.io/using-nats/nats-tools/nats_cli#nats-contexts) | no | +| `natsBox.container` | nats-box [k8s Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core) | yes | +| `natsBox.deployment` | [k8s Deployment](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#deployment-v1-apps) | yes | +| `natsBox.podTemplate` | [k8s PodTemplate](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core) | yes | +| `natsBox.contextsSecret` | [k8s Secret](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secret-v1-core) | yes | +| `natsBox.contentsSecret` | [k8s Secret](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secret-v1-core) | yes | + +### Merge + +Merging is performed using the Helm `merge` function. Example - add NATS accounts and container resources: + +```yaml +config: + merge: + accounts: + A: + users: + - {user: a, password: a} + B: + users: + - {user: b, password: b} +natsBox: + contexts: + a: + merge: {user: a, password: a} + b: + merge: {user: b, password: b} + defaultContextName: a +``` + +## Patch + +Patching is performed using [JSON Patch](https://jsonpatch.com/). Example - add additional route to end of route list: + +```yaml +config: + cluster: + enabled: true + patch: + - op: add + path: /routes/- + value: nats://demo.nats.io:6222 +``` + +## Common Configurations + +### JetStream Cluster on 3 separate hosts + +```yaml +config: + cluster: + enabled: true + replicas: 3 + jetstream: + enabled: true + fileStore: + pvc: + size: 10Gi + +podTemplate: + topologySpreadConstraints: + kubernetes.io/hostname: + maxSkew: 1 + whenUnsatisfiable: DoNotSchedule +``` + +### NATS Container Resources + +```yaml +container: + env: + # different from k8s units, suffix must be B, KiB, MiB, GiB, or TiB + # should be ~90% of memory limit + GOMEMLIMIT: 7GiB + merge: + # recommended limit is at least 2 CPU cores and 8Gi Memory for production JetStream clusters + resources: + requests: + cpu: "2" + memory: 8Gi + limits: + cpu: "2" + memory: 8Gi +``` + +### Specify Image Version + +```yaml +container: + image: + tag: x.y.z-alpine +``` + +### Operator Mode with NATS Resolver + +Run `nsc generate config --nats-resolver` and replace the `OPERATOR_JWT`, `SYS_ACCOUNT_ID`, and `SYS_ACCOUNT_JWT` with your values. +Make sure that you do not include the trailing `,` in the `SYS_ACCOUNT_JWT`. + +``` +config: + resolver: + enabled: true + merge: + type: full + interval: 2m + timeout: 1.9s + merge: + operator: OPERATOR_JWT + system_account: SYS_ACCOUNT_ID + resolver_preload: + SYS_ACCOUNT_ID: SYS_ACCOUNT_JWT +``` + + +## Accessing NATS + +The chart contains 2 services by default, `service` and `headlessService`. + +### `service` + +The `service` is intended to be accessed by NATS Clients. It is a `ClusterIP` service by default, however it can easily be changed to a different service type. 
+ +The `nats`, `websocket`, `leafnodes`, and `mqtt` ports will be exposed through this service by default if they are enabled. + +Example: change this service type to a `LoadBalancer`: + +```yaml +service: + merge: + spec: + type: LoadBalancer +``` + +### `headlessService` + +The `headlessService` is used for NATS Servers in the Stateful Set to discover one another. It is primarily intended to be used for Cluster Route connections. + +### TLS Considerations + +The TLS Certificate used for Client Connections should have a SAN covering DNS Name that clients access the `service` at. + +The TLS Certificate used for Cluster Route Connections should have a SAN covering the DNS Name that routes access each other on the `headlessService` at. This is `*.` by default. + +## Advanced Features + +### Templating Values + +Anything in `values.yaml` can be templated: + +- maps matching the following syntax will be templated and parsed as YAML: + ```yaml + $tplYaml: | + yaml template + ``` +- maps matching the follow syntax will be templated, parsed as YAML, and spread into the parent map/slice + ```yaml + $tplYamlSpread: | + yaml template + ``` + +Example - change service name: + +```yaml +service: + name: + $tplYaml: >- + {{ include "nats.fullname" . }}-svc +``` + +### NATS Config Units and Variables + +NATS configuration extends JSON, and can represent Units and Variables. They must be wrapped in `<< >>` in order to template correctly. Example: + +```yaml +config: + merge: + authorization: + # variable + token: << $TOKEN >> + # units + max_payload: << 2MB >> +``` + +templates to the `nats.conf`: + +``` +{ + "authorization": { + "token": $TOKEN + }, + "max_payload": 2MB, + "port": 4222, + ... +} +``` + +### NATS Config Includes + +Any NATS Config key ending in `$include` will be replaced with an include directive. Included files should be in paths relative to `/etc/nats-config`. Multiple `$include` keys are supported by using a prefix, and will be sorted alphabetically. Example: + +```yaml +config: + merge: + 00$include: auth.conf + 01$include: params.conf +configMap: + merge: + data: + auth.conf: | + accounts: { + A: { + users: [ + {user: a, password: a} + ] + }, + B: { + users: [ + {user: b, password: b} + ] + }, + } + params.conf: | + max_payload: 2MB +``` + +templates to the `nats.conf`: + +``` +include auth.conf; +"port": 4222, +... +include params.conf; +``` + +### Extra Resources + +Enables adding additional arbitrary resources. Example - expose WebSocket via VirtualService in Istio: + +```yaml +config: + websocket: + enabled: true +extraResources: +- apiVersion: networking.istio.io/v1beta1 + kind: VirtualService + metadata: + namespace: + $tplYamlSpread: > + {{ include "nats.metadataNamespace" $ }} + name: + $tplYaml: > + {{ include "nats.fullname" $ | quote }} + labels: + $tplYaml: | + {{ include "nats.labels" $ }} + spec: + hosts: + - demo.nats.io + gateways: + - my-gateway + http: + - name: default + match: + - name: root + uri: + exact: / + route: + - destination: + host: + $tplYaml: > + {{ .Values.service.name | quote }} + port: + number: + $tplYaml: > + {{ .Values.config.websocket.port }} +``` diff --git a/charts/nats/nats/1.2.6/UPGRADING.md b/charts/nats/nats/1.2.6/UPGRADING.md new file mode 100644 index 000000000..9cc177991 --- /dev/null +++ b/charts/nats/nats/1.2.6/UPGRADING.md @@ -0,0 +1,155 @@ +# Upgrading from 0.x to 1.x + +Instructions for upgrading an existing `nats` 0.x release to 1.x. 
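+
+ Before changing anything, it can help to capture the values currently deployed with the 0.x release so they can be migrated section by section (a minimal sketch; the release name `nats` and the file name `values-old.yaml` match the examples used below):
+
+ ```sh
+ # save the user-supplied values of the existing 0.x release
+ helm get values nats -o yaml > values-old.yaml
+ ```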
+ +## Rename Immutable Fields + +There are a number of immutable fields in the NATS Stateful Set and NATS Box deployment. All 1.x `values.yaml` files targeting an existing 0.x release will require some or all of these settings: + +```yaml +config: + # required if using JetStream file storage + jetstream: + # uncomment the next line if using JetStream file storage + # enabled: true + fileStore: + pvc: + name: + $tplYaml: >- + {{ include "nats.fullname" . }}-js-pvc + # set other PVC options here to make it match 0.x, refer to values.yaml for schema + + # required if using a full or cache resolver + resolver: + # uncomment the next line if using a full or cache resolver + # enabled: true + pvc: + name: nats-jwt-pvc + # set other PVC options here to make it match 0.x, refer to values.yaml for schema + +# required +statefulSet: + patch: + - op: remove + path: /spec/selector/matchLabels/app.kubernetes.io~1component + - $tplYamlSpread: |- + {{- if and + .Values.config.jetstream.enabled + .Values.config.jetstream.fileStore.enabled + .Values.config.jetstream.fileStore.pvc.enabled + .Values.config.resolver.enabled + .Values.config.resolver.pvc.enabled + }} + - op: move + from: /spec/volumeClaimTemplates/0 + path: /spec/volumeClaimTemplates/1 + {{- else}} + [] + {{- end }} + +# required +headlessService: + name: + $tplYaml: >- + {{ include "nats.fullname" . }} + +# required unless 0.x values explicitly set nats.serviceAccount.create=false +serviceAccount: + enabled: true + +# required to use new ClusterIP service for Clients accessing NATS +# if using TLS, this may require adding another SAN +service: + # uncomment the next line to disable the new ClusterIP service + # enabled: false + name: + $tplYaml: >- + {{ include "nats.fullname" . }}-svc + +# required if using NatsBox +natsBox: + deployment: + patch: + - op: replace + path: /spec/selector/matchLabels + value: + app: nats-box + - op: add + path: /spec/template/metadata/labels/app + value: nats-box +``` + +## Update NATS Config to new values.yaml schema + +Most values that control the NATS Config have changed and moved under the `config` key. Refer to the 1.x Chart's [values.yaml](values.yaml) for the complete schema. + +After migrating to the new values schema, ensure that changes you expect in the NATS Config files match by templating the old and new config files. + +Template your old 0.x Config Map, this example uses a file called `values-old.yaml`: + +```sh +helm template \ + --version "0.x" \ + -f values-old.yaml \ + -s templates/configmap.yaml \ + nats \ + nats/nats +``` + +Template your new 1.x Config Map, this example uses a file called `values.yaml`: + +```sh +helm template \ + --version "^1-beta" \ + -f values.yaml \ + -s templates/config-map.yaml \ + nats \ + nats/nats +``` + +## Update Kubernetes Resources to new values.yaml schema + +Most values that control Kubernetes Resources have been changed. Refer to the 1.x Chart's [values.yaml](values.yaml) for the complete schema. + +After migrating to the new values schema, ensure that changes you expect in resources match by templating the old and new resources. 
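+
+ Writing both renderings to files makes the comparison direct, whether for the Config Map above or any of the resources listed below (a sketch; output file names are illustrative):
+
+ ```sh
+ helm template --version "0.x" -f values-old.yaml -s templates/configmap.yaml nats nats/nats > rendered-0x.yaml
+ helm template --version "^1-beta" -f values.yaml -s templates/config-map.yaml nats nats/nats > rendered-1x.yaml
+ diff -u rendered-0x.yaml rendered-1x.yaml
+ ```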
+ +| Resource | 0.x Template File | 1.x Template File | +|-------------------------|---------------------------------|-------------------------------------------| +| Config Map | `templates/configmap.yaml` | `templates/config-map.yaml` | +| Stateful Set | `templates/statefulset.yaml` | `templates/stateful-set.yaml` | +| Headless Service | `templates/service.yaml` | `templates/headless-service.yaml` | +| ClusterIP Service | N/A | `templates/service.yaml` | +| Network Policy | `templates/networkpolicy.yaml` | N/A | +| Pod Disruption Budget | `templates/pdb.yaml` | `templates/pod-disruption-budget.yaml` | +| Service Account | `templates/rbac.yaml` | `templates/service-account.yaml` | +| Resource | `templates/` | `templates/` | +| Resource | `templates/` | `templates/` | +| Prometheus Monitor | `templates/serviceMonitor.yaml` | `templates/pod-monitor.yaml` | +| NatsBox Deployment | `templates/nats-box.yaml` | `templates/nats-box/deployment.yaml` | +| NatsBox Service Account | N/A | `templates/nats-box/service-account.yaml` | +| NatsBox Contents Secret | N/A | `templates/nats-box/contents-secret.yaml` | +| NatsBox Contexts Secret | N/A | `templates/nats-box/contexts-secret.yaml` | + +For example, to check that the Stateful Set matches: + +Template your old 0.x Stateful Set, this example uses a file called `values-old.yaml`: + +```sh +helm template \ + --version "0.x" \ + -f values-old.yaml \ + -s templates/statefulset.yaml \ + nats \ + nats/nats +``` + +Template your new 1.x Stateful Set, this example uses a file called `values.yaml`: + +```sh +helm template \ + --version "^1-beta" \ + -f values.yaml \ + -s templates/stateful-set.yaml \ + nats \ + nats/nats +``` diff --git a/charts/nats/nats/1.2.6/app-readme.md b/charts/nats/nats/1.2.6/app-readme.md new file mode 100644 index 000000000..b4511f4d5 --- /dev/null +++ b/charts/nats/nats/1.2.6/app-readme.md @@ -0,0 +1,3 @@ +# NATS Server + + [NATS](https://nats.io) is a simple, secure and performant communications system for digital systems, services and devices. NATS is part of the Cloud Native Computing Foundation ([CNCF](https://cncf.io)). NATS has over [30 client language implementations](https://nats.io/download/), and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems. 
diff --git a/charts/nats/nats/1.2.6/files/config-map.yaml b/charts/nats/nats/1.2.6/files/config-map.yaml new file mode 100644 index 000000000..89ee3c281 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config-map.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.configMap.name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} +data: + nats.conf: | + {{- include "nats.formatConfig" .config | nindent 4 }} diff --git a/charts/nats/nats/1.2.6/files/config/cluster.yaml b/charts/nats/nats/1.2.6/files/config/cluster.yaml new file mode 100644 index 000000000..719cb8ade --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/cluster.yaml @@ -0,0 +1,32 @@ +{{- with .Values.config.cluster }} +name: {{ $.Values.statefulSet.name }} +port: {{ .port }} +no_advertise: true +routes: +{{- $proto := ternary "tls" "nats" .tls.enabled }} +{{- $auth := "" }} +{{- if and .routeURLs.user .routeURLs.password }} + {{- $auth = printf "%s:%s@" (urlquery .routeURLs.user) (urlquery .routeURLs.password) -}} +{{- end }} +{{- $domain := $.Values.headlessService.name }} +{{- if .routeURLs.useFQDN }} + {{- $domain = printf "%s.%s.svc.%s" $domain (include "nats.namespace" $) .routeURLs.k8sClusterDomain }} +{{- end }} +{{- $port := (int .port) }} +{{- range $i, $_ := until (int .replicas) }} +- {{ printf "%s://%s%s-%d.%s:%d" $proto $auth $.Values.statefulSet.name $i $domain $port }} +{{- end }} + +{{- if and .routeURLs.user .routeURLs.password }} +authorization: + user: {{ .routeURLs.user | quote }} + password: {{ .routeURLs.password | quote }} +{{- end }} + +{{- with .tls }} +{{- if .enabled }} +tls: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/tls.yaml" "ctx" (merge (dict "tls" .) $)) .) | nindent 2 }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/config.yaml b/charts/nats/nats/1.2.6/files/config/config.yaml new file mode 100644 index 000000000..92fd96f1a --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/config.yaml @@ -0,0 +1,114 @@ +{{- with .Values.config }} + +server_name: << $SERVER_NAME >> +lame_duck_grace_period: 10s +lame_duck_duration: 30s +pid_file: /var/run/nats/nats.pid + +######################################## +# NATS +######################################## +{{- with .nats }} +port: {{ .port }} + +{{- with .tls }} +{{- if .enabled }} +tls: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/tls.yaml" "ctx" (merge (dict "tls" .) $)) .) | nindent 2 }} +{{- end }} +{{- end }} +{{- end }} + +######################################## +# leafnodes +######################################## +{{- with .leafnodes }} +{{- if .enabled }} +leafnodes: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/leafnodes.yaml" "ctx" $) .) | nindent 2 }} +{{- end }} +{{- end }} + +######################################## +# websocket +######################################## +{{- with .websocket }} +{{- if .enabled }} +websocket: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/websocket.yaml" "ctx" $) .) | nindent 2 }} +{{- end }} +{{- end }} + +######################################## +# MQTT +######################################## +{{- with .mqtt }} +{{- if .enabled }} +mqtt: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/mqtt.yaml" "ctx" $) .) 
| nindent 2 }} +{{- end }} +{{- end }} + +######################################## +# cluster +######################################## +{{- with .cluster }} +{{- if .enabled }} +cluster: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/cluster.yaml" "ctx" $) .) | nindent 2 }} +{{- end }} +{{- end }} + +######################################## +# gateway +######################################## +{{- with .gateway }} +{{- if .enabled }} +gateway: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/gateway.yaml" "ctx" $) .) | nindent 2 }} +{{- end }} +{{- end }} + +######################################## +# monitor +######################################## +{{- with .monitor }} +{{- if .enabled }} +{{- if .tls.enabled }} +https_port: {{ .port }} +{{- else }} +http_port: {{ .port }} +{{- end }} +{{- end }} +{{- end }} + +######################################## +# profiling +######################################## +{{- with .profiling }} +{{- if .enabled }} +prof_port: {{ .port }} +{{- end }} +{{- end }} + +######################################## +# jetstream +######################################## +{{- with $.Values.config.jetstream -}} +{{- if .enabled }} +jetstream: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/jetstream.yaml" "ctx" $) .) | nindent 2 }} +{{- end }} +{{- end }} + +######################################## +# resolver +######################################## +{{- with $.Values.config.resolver -}} +{{- if .enabled }} +resolver: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/resolver.yaml" "ctx" $) .) | nindent 2 }} +{{- end }} +{{- end }} + +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/gateway.yaml b/charts/nats/nats/1.2.6/files/config/gateway.yaml new file mode 100644 index 000000000..32d4ed9f7 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/gateway.yaml @@ -0,0 +1,11 @@ +{{- with .Values.config.gateway }} +name: {{ $.Values.statefulSet.name }} +port: {{ .port }} + +{{- with .tls }} +{{- if .enabled }} +tls: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/tls.yaml" "ctx" (merge (dict "tls" .) $)) .) | nindent 2 }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/jetstream.yaml b/charts/nats/nats/1.2.6/files/config/jetstream.yaml new file mode 100644 index 000000000..17262f643 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/jetstream.yaml @@ -0,0 +1,23 @@ +{{- with .Values.config.jetstream }} +{{- with .memoryStore }} +{{- if .enabled }} +{{- with .maxSize }} +max_memory_store: << {{ . }} >> +{{- end }} +{{- else }} +max_memory_store: 0 +{{- end }} +{{- end }} +{{- with .fileStore }} +{{- if .enabled }} +store_dir: {{ .dir }} +{{- if .maxSize }} +max_file_store: << {{ .maxSize }} >> +{{- else if .pvc.enabled }} +max_file_store: << {{ .pvc.size }} >> +{{- end }} +{{- else }} +max_file_store: 0 +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/leafnodes.yaml b/charts/nats/nats/1.2.6/files/config/leafnodes.yaml new file mode 100644 index 000000000..3a1d9a14a --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/leafnodes.yaml @@ -0,0 +1,11 @@ +{{- with .Values.config.leafnodes }} +port: {{ .port }} +no_advertise: true + +{{- with .tls }} +{{- if .enabled }} +tls: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/tls.yaml" "ctx" (merge (dict "tls" .) $)) .) 
| nindent 2 }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/mqtt.yaml b/charts/nats/nats/1.2.6/files/config/mqtt.yaml new file mode 100644 index 000000000..e25d8a3e0 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/mqtt.yaml @@ -0,0 +1,10 @@ +{{- with .Values.config.mqtt }} +port: {{ .port }} + +{{- with .tls }} +{{- if .enabled }} +tls: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/tls.yaml" "ctx" (merge (dict "tls" .) $)) .) | nindent 2 }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/protocol.yaml b/charts/nats/nats/1.2.6/files/config/protocol.yaml new file mode 100644 index 000000000..288c80d75 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/protocol.yaml @@ -0,0 +1,10 @@ +{{- with .protocol }} +port: {{ .port }} + +{{- with .tls }} +{{- if .enabled }} +tls: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/tls.yaml" "ctx" (merge (dict "tls" .) $)) .) | nindent 2 }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/resolver.yaml b/charts/nats/nats/1.2.6/files/config/resolver.yaml new file mode 100644 index 000000000..a6761c403 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/resolver.yaml @@ -0,0 +1,3 @@ +{{- with .Values.config.resolver }} +dir: {{ .dir }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/tls.yaml b/charts/nats/nats/1.2.6/files/config/tls.yaml new file mode 100644 index 000000000..26aee0155 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/tls.yaml @@ -0,0 +1,16 @@ +# tls +{{- with .tls }} +{{- if .secretName }} +{{- $dir := trimSuffix "/" .dir }} +cert_file: {{ printf "%s/%s" $dir (.cert | default "tls.crt") | quote }} +key_file: {{ printf "%s/%s" $dir (.key | default "tls.key") | quote }} +{{- end }} +{{- end }} + +# tlsCA +{{- with $.Values.tlsCA }} +{{- if and .enabled (or .configMapName .secretName) }} +{{- $dir := trimSuffix "/" .dir }} +ca_file: {{ printf "%s/%s" $dir (.key | default "ca.crt") | quote }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/config/websocket.yaml b/charts/nats/nats/1.2.6/files/config/websocket.yaml new file mode 100644 index 000000000..afcd178a7 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/config/websocket.yaml @@ -0,0 +1,12 @@ +{{- with .Values.config.websocket }} +port: {{ .port }} + +{{- if .tls.enabled }} +{{- with .tls }} +tls: + {{- include "nats.loadMergePatch" (merge (dict "file" "config/tls.yaml" "ctx" (merge (dict "tls" .) $)) .) 
| nindent 2 }} +{{- end }} +{{- else }} +no_tls: true +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/headless-service.yaml b/charts/nats/nats/1.2.6/files/headless-service.yaml new file mode 100644 index 000000000..da6552b37 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/headless-service.yaml @@ -0,0 +1,24 @@ +apiVersion: v1 +kind: Service +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.headlessService.name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} +spec: + selector: + {{- include "nats.selectorLabels" $ | nindent 4 }} + clusterIP: None + publishNotReadyAddresses: true + ports: + {{- range $protocol := list "nats" "leafnodes" "websocket" "mqtt" "cluster" "gateway" "monitor" "profiling" }} + {{- $configProtocol := get $.Values.config $protocol }} + {{- if or (eq $protocol "nats") $configProtocol.enabled }} + {{- $tlsEnabled := false }} + {{- if hasKey $configProtocol "tls" }} + {{- $tlsEnabled = $configProtocol.tls.enabled }} + {{- end }} + {{- $appProtocol := or (eq $protocol "websocket") (eq $protocol "monitor") | ternary ($tlsEnabled | ternary "https" "http") ($tlsEnabled | ternary "tls" "tcp") }} + - {{ dict "name" $protocol "port" $configProtocol.port "targetPort" $protocol "appProtocol" $appProtocol | toYaml | nindent 4 }} + {{- end }} + {{- end }} diff --git a/charts/nats/nats/1.2.6/files/ingress.yaml b/charts/nats/nats/1.2.6/files/ingress.yaml new file mode 100644 index 000000000..b59f0fa5f --- /dev/null +++ b/charts/nats/nats/1.2.6/files/ingress.yaml @@ -0,0 +1,34 @@ +{{- with .Values.config.websocket.ingress }} +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} +spec: + {{- with .className }} + ingressClassName: {{ . | quote }} + {{- end }} + rules: + {{- $path := .path }} + {{- $pathType := .pathType }} + {{- range .hosts }} + - host: {{ . 
| quote }} + http: + paths: + - path: {{ $path | quote }} + pathType: {{ $pathType | quote }} + backend: + service: + name: {{ $.Values.service.name }} + port: + name: websocket + {{- end }} + {{- if .tlsSecretName }} + tls: + - secretName: {{ .tlsSecretName | quote }} + hosts: + {{- toYaml .hosts | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/nats-box/contents-secret.yaml b/charts/nats/nats/1.2.6/files/nats-box/contents-secret.yaml new file mode 100644 index 000000000..6e8fdb26f --- /dev/null +++ b/charts/nats/nats/1.2.6/files/nats-box/contents-secret.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Secret +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.natsBox.contentsSecret.name }} + labels: + {{- include "natsBox.labels" $ | nindent 4 }} +type: Opaque +stringData: + {{- range $ctxKey, $ctxVal := .Values.natsBox.contexts }} + {{- range $secretKey, $secretVal := dict "creds" "creds" "nkey" "nk" }} + {{- $secret := get $ctxVal $secretKey }} + {{- if and $secret $secret.contents }} + "{{ $ctxKey }}.{{ $secretVal }}": {{ $secret.contents | quote }} + {{- end }} + {{- end }} + {{- end }} diff --git a/charts/nats/nats/1.2.6/files/nats-box/contexts-secret/context.yaml b/charts/nats/nats/1.2.6/files/nats-box/contexts-secret/context.yaml new file mode 100644 index 000000000..54480eac9 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/nats-box/contexts-secret/context.yaml @@ -0,0 +1,51 @@ +{{- $contextName := .contextName }} + +# url +{{- if .Values.service.enabled }} +url: nats://{{ .Values.service.name }} +{{- else }} +url: nats://{{ .Values.headlessService.name }} +{{- end }} + +{{- with .context }} + +# creds +{{- with .creds}} +{{- if .contents }} +creds: /etc/nats-contents/{{ $contextName }}.creds +{{- else if .secretName }} +{{- $dir := trimSuffix "/" .dir }} +creds: {{ printf "%s/%s" $dir (.key | default "nats.creds") | quote }} +{{- end }} +{{- end }} + +# nkey +{{- with .nkey}} +{{- if .contents }} +nkey: /etc/nats-contents/{{ $contextName }}.nk +{{- else if .secretName }} +{{- $dir := trimSuffix "/" .dir }} +nkey: {{ printf "%s/%s" $dir (.key | default "nats.nk") | quote }} +{{- end }} +{{- end }} + +# tls +{{- with .tls }} +{{- if .secretName }} +{{- $dir := trimSuffix "/" .dir }} +cert: {{ printf "%s/%s" $dir (.cert | default "tls.crt") | quote }} +key: {{ printf "%s/%s" $dir (.key | default "tls.key") | quote }} +{{- end }} +{{- end }} + +# tlsCA +{{- if $.Values.config.nats.tls.enabled }} +{{- with $.Values.tlsCA }} +{{- if and .enabled (or .configMapName .secretName) }} +{{- $dir := trimSuffix "/" .dir }} +ca: {{ printf "%s/%s" $dir (.key | default "ca.crt") | quote }} +{{- end }} +{{- end }} +{{- end }} + +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/nats-box/contexts-secret/contexts-secret.yaml b/charts/nats/nats/1.2.6/files/nats-box/contexts-secret/contexts-secret.yaml new file mode 100644 index 000000000..0ce8d1d87 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/nats-box/contexts-secret/contexts-secret.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Secret +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.natsBox.contextsSecret.name }} + labels: + {{- include "natsBox.labels" $ | nindent 4 }} +type: Opaque +stringData: +{{- range $ctxKey, $ctxVal := .Values.natsBox.contexts }} + "{{ $ctxKey }}.json": | + {{- include "toPrettyRawJson" (include "nats.loadMergePatch" (dict "file" "nats-box/contexts-secret/context.yaml" "merge" (.merge | default dict) "patch" (.patch | 
default list) "ctx" (merge (dict "contextName" $ctxKey "context" $ctxVal) $)) | fromYaml) | nindent 4 }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/nats-box/deployment/container.yaml b/charts/nats/nats/1.2.6/files/nats-box/deployment/container.yaml new file mode 100644 index 000000000..aa1753b4b --- /dev/null +++ b/charts/nats/nats/1.2.6/files/nats-box/deployment/container.yaml @@ -0,0 +1,46 @@ +name: nats-box +{{ include "nats.image" (merge (pick $.Values "global") .Values.natsBox.container.image) }} + +{{- with .Values.natsBox.container.env }} +env: +{{- include "nats.env" . }} +{{- end }} + +command: +- sh +- -ec +- | + work_dir="$(pwd)" + mkdir -p "$XDG_CONFIG_HOME/nats" + cd "$XDG_CONFIG_HOME/nats" + if ! [ -s context ]; then + ln -s /etc/nats-contexts context + fi + {{- if .Values.natsBox.defaultContextName }} + if ! [ -f context.txt ]; then + echo -n {{ .Values.natsBox.defaultContextName | quote }} > context.txt + fi + {{- end }} + cd "$work_dir" + exec /entrypoint.sh "$@" +- -- +args: +- sh +- -ec +- trap true INT TERM; sleep infinity & wait +volumeMounts: +# contexts secret +- name: contexts + mountPath: /etc/nats-contexts +# contents secret +{{- if .hasContentsSecret }} +- name: contents + mountPath: /etc/nats-contents +{{- end }} +# tlsCA +{{- include "nats.tlsCAVolumeMount" $ }} +# secrets +{{- range (include "natsBox.secretNames" $ | fromJson).secretNames }} +- name: {{ .name | quote }} + mountPath: {{ .dir | quote }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/nats-box/deployment/deployment.yaml b/charts/nats/nats/1.2.6/files/nats-box/deployment/deployment.yaml new file mode 100644 index 000000000..bf39dd8d5 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/nats-box/deployment/deployment.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.natsBox.deployment.name }} + labels: + {{- include "natsBox.labels" $ | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "natsBox.selectorLabels" $ | nindent 6 }} + replicas: 1 + template: + {{- with .Values.natsBox.podTemplate }} + {{ include "nats.loadMergePatch" (merge (dict "file" "nats-box/deployment/pod-template.yaml" "ctx" $) .) | nindent 4 }} + {{- end }} diff --git a/charts/nats/nats/1.2.6/files/nats-box/deployment/pod-template.yaml b/charts/nats/nats/1.2.6/files/nats-box/deployment/pod-template.yaml new file mode 100644 index 000000000..71056bfb6 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/nats-box/deployment/pod-template.yaml @@ -0,0 +1,44 @@ +metadata: + labels: + {{- include "natsBox.labels" $ | nindent 4 }} +spec: + containers: + {{- with .Values.natsBox.container }} + - {{ include "nats.loadMergePatch" (merge (dict "file" "nats-box/deployment/container.yaml" "ctx" $) .) | nindent 4 }} + {{- end }} + + # service discovery uses DNS; don't need service env vars + enableServiceLinks: false + + {{- with .Values.global.image.pullSecretNames }} + imagePullSecrets: + {{- range . }} + - name: {{ . 
| quote }} + {{- end }} + {{- end }} + + {{- with .Values.natsBox.serviceAccount }} + {{- if .enabled }} + serviceAccountName: {{ .name | quote }} + {{- end }} + {{- end }} + + volumes: + # contexts secret + - name: contexts + secret: + secretName: {{ .Values.natsBox.contextsSecret.name }} + # contents secret + {{- if .hasContentsSecret }} + - name: contents + secret: + secretName: {{ .Values.natsBox.contentsSecret.name }} + {{- end }} + # tlsCA + {{- include "nats.tlsCAVolume" $ | nindent 2 }} + # secrets + {{- range (include "natsBox.secretNames" $ | fromJson).secretNames }} + - name: {{ .name | quote }} + secret: + secretName: {{ .secretName | quote }} + {{- end }} diff --git a/charts/nats/nats/1.2.6/files/nats-box/service-account.yaml b/charts/nats/nats/1.2.6/files/nats-box/service-account.yaml new file mode 100644 index 000000000..c31e52f18 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/nats-box/service-account.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.natsBox.serviceAccount.name }} + labels: + {{- include "natsBox.labels" $ | nindent 4 }} diff --git a/charts/nats/nats/1.2.6/files/pod-disruption-budget.yaml b/charts/nats/nats/1.2.6/files/pod-disruption-budget.yaml new file mode 100644 index 000000000..fd1fdead5 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/pod-disruption-budget.yaml @@ -0,0 +1,12 @@ +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.podDisruptionBudget.name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} +spec: + maxUnavailable: 1 + selector: + matchLabels: + {{- include "nats.selectorLabels" $ | nindent 6 }} diff --git a/charts/nats/nats/1.2.6/files/pod-monitor.yaml b/charts/nats/nats/1.2.6/files/pod-monitor.yaml new file mode 100644 index 000000000..c6c8eae06 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/pod-monitor.yaml @@ -0,0 +1,13 @@ +apiVersion: monitoring.coreos.com/v1 +kind: PodMonitor +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.promExporter.podMonitor.name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "nats.selectorLabels" $ | nindent 6 }} + podMetricsEndpoints: + - port: prom-metrics diff --git a/charts/nats/nats/1.2.6/files/service-account.yaml b/charts/nats/nats/1.2.6/files/service-account.yaml new file mode 100644 index 000000000..22c18cc70 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/service-account.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.serviceAccount.name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} diff --git a/charts/nats/nats/1.2.6/files/service.yaml b/charts/nats/nats/1.2.6/files/service.yaml new file mode 100644 index 000000000..db08fe5b5 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/service.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: Service +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.service.name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} +spec: + selector: + {{- include "nats.selectorLabels" $ | nindent 4 }} + ports: + {{- range $protocol := list "nats" "leafnodes" "websocket" "mqtt" "cluster" "gateway" "monitor" "profiling" }} + {{- $configProtocol := get $.Values.config $protocol }} + {{- $servicePort := get $.Values.service.ports $protocol }} + {{- if and 
(or (eq $protocol "nats") $configProtocol.enabled) $servicePort.enabled }} + {{- $tlsEnabled := false }} + {{- if hasKey $configProtocol "tls" }} + {{- $tlsEnabled = $configProtocol.tls.enabled }} + {{- end }} + {{- $appProtocol := or (eq $protocol "websocket") (eq $protocol "monitor") | ternary ($tlsEnabled | ternary "https" "http") ($tlsEnabled | ternary "tls" "tcp") }} + - {{ merge (dict "name" $protocol "targetPort" $protocol "appProtocol" $appProtocol) (omit $servicePort "enabled") (dict "port" $configProtocol.port) | toYaml | nindent 4 }} + {{- end }} + {{- end }} diff --git a/charts/nats/nats/1.2.6/files/stateful-set/jetstream-pvc.yaml b/charts/nats/nats/1.2.6/files/stateful-set/jetstream-pvc.yaml new file mode 100644 index 000000000..a43f20059 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/stateful-set/jetstream-pvc.yaml @@ -0,0 +1,13 @@ +{{- with .Values.config.jetstream.fileStore.pvc }} +metadata: + name: {{ .name }} +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ .size | quote }} + {{- with .storageClassName }} + storageClassName: {{ . | quote }} + {{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/stateful-set/nats-container.yaml b/charts/nats/nats/1.2.6/files/stateful-set/nats-container.yaml new file mode 100644 index 000000000..c5402efea --- /dev/null +++ b/charts/nats/nats/1.2.6/files/stateful-set/nats-container.yaml @@ -0,0 +1,106 @@ +name: nats +{{ include "nats.image" (merge (pick $.Values "global") .Values.container.image) }} + +ports: +{{- range $protocol := list "nats" "leafnodes" "websocket" "mqtt" "cluster" "gateway" "monitor" "profiling" }} +{{- $configProtocol := get $.Values.config $protocol }} +{{- $containerPort := get $.Values.container.ports $protocol }} +{{- if or (eq $protocol "nats") $configProtocol.enabled }} +- {{ merge (dict "name" $protocol "containerPort" $configProtocol.port) $containerPort | toYaml | nindent 2 }} +{{- end }} +{{- end }} + +args: +- --config +- /etc/nats-config/nats.conf + +env: +- name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name +- name: SERVER_NAME + value: {{ printf "%s$(POD_NAME)" .Values.config.serverNamePrefix | quote }} +{{- with .Values.container.env }} +{{- include "nats.env" . 
}} +{{- end }} + +lifecycle: + preStop: + exec: + # send the lame duck shutdown signal to trigger a graceful shutdown + command: + - nats-server + - -sl=ldm=/var/run/nats/nats.pid + +{{- with .Values.config.monitor }} +{{- if .enabled }} +startupProbe: + httpGet: + path: /healthz + port: monitor + {{- if .tls.enabled }} + scheme: HTTPS + {{- end}} + initialDelaySeconds: 10 + timeoutSeconds: 5 + periodSeconds: 10 + successThreshold: 1 + failureThreshold: 90 +readinessProbe: + httpGet: + path: /healthz?js-server-only=true + port: monitor + {{- if .tls.enabled }} + scheme: HTTPS + {{- end}} + initialDelaySeconds: 10 + timeoutSeconds: 5 + periodSeconds: 10 + successThreshold: 1 + failureThreshold: 3 +livenessProbe: + httpGet: + path: /healthz?js-enabled-only=true + port: monitor + {{- if .tls.enabled }} + scheme: HTTPS + {{- end}} + initialDelaySeconds: 10 + timeoutSeconds: 5 + periodSeconds: 30 + successThreshold: 1 + failureThreshold: 3 +{{- end }} +{{- end }} + +volumeMounts: +# nats config +- name: config + mountPath: /etc/nats-config +# PID volume +- name: pid + mountPath: /var/run/nats +# JetStream PVC +{{- with .Values.config.jetstream }} +{{- if and .enabled .fileStore.enabled .fileStore.pvc.enabled }} +{{- with .fileStore }} +- name: {{ .pvc.name }} + mountPath: {{ .dir | quote }} +{{- end }} +{{- end }} +{{- end }} +# resolver PVC +{{- with .Values.config.resolver }} +{{- if and .enabled .pvc.enabled }} +- name: {{ .pvc.name }} + mountPath: {{ .dir | quote }} +{{- end }} +{{- end }} +# tlsCA +{{- include "nats.tlsCAVolumeMount" $ }} +# secrets +{{- range (include "nats.secretNames" $ | fromJson).secretNames }} +- name: {{ .name | quote }} + mountPath: {{ .dir | quote }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/stateful-set/pod-template.yaml b/charts/nats/nats/1.2.6/files/stateful-set/pod-template.yaml new file mode 100644 index 000000000..bb1d8d7be --- /dev/null +++ b/charts/nats/nats/1.2.6/files/stateful-set/pod-template.yaml @@ -0,0 +1,71 @@ +metadata: + labels: + {{- include "nats.labels" $ | nindent 4 }} + annotations: + {{- if .Values.podTemplate.configChecksumAnnotation }} + {{- $configMap := include "nats.loadMergePatch" (merge (dict "file" "config-map.yaml" "ctx" $) $.Values.configMap) }} + checksum/config: {{ sha256sum $configMap }} + {{- end }} +spec: + containers: + # nats + {{- $nats := dict }} + {{- with .Values.container }} + {{- $nats = include "nats.loadMergePatch" (merge (dict "file" "stateful-set/nats-container.yaml" "ctx" $) .) | fromYaml }} + - {{ toYaml $nats | nindent 4 }} + {{- end }} + # reloader + {{- with .Values.reloader }} + {{- if .enabled }} + - {{ include "nats.loadMergePatch" (merge (dict "file" "stateful-set/reloader-container.yaml" "ctx" (merge (dict "natsVolumeMounts" $nats.volumeMounts) $)) .) | nindent 4 }} + {{- end }} + {{- end }} + {{- with .Values.promExporter }} + {{- if .enabled }} + - {{ include "nats.loadMergePatch" (merge (dict "file" "stateful-set/prom-exporter-container.yaml" "ctx" $) .) | nindent 4 }} + {{- end }} + {{- end }} + + # service discovery uses DNS; don't need service env vars + enableServiceLinks: false + + {{- with .Values.global.image.pullSecretNames }} + imagePullSecrets: + {{- range . }} + - name: {{ . 
| quote }} + {{- end }} + {{- end }} + + {{- with .Values.serviceAccount }} + {{- if .enabled }} + serviceAccountName: {{ .name | quote }} + {{- end }} + {{- end }} + + {{- if .Values.reloader.enabled }} + shareProcessNamespace: true + {{- end }} + + volumes: + # nats config + - name: config + configMap: + name: {{ .Values.configMap.name }} + # PID volume + - name: pid + emptyDir: {} + # tlsCA + {{- include "nats.tlsCAVolume" $ | nindent 2 }} + # secrets + {{- range (include "nats.secretNames" $ | fromJson).secretNames }} + - name: {{ .name | quote }} + secret: + secretName: {{ .secretName | quote }} + {{- end }} + + {{- with .Values.podTemplate.topologySpreadConstraints }} + topologySpreadConstraints: + {{- range $k, $v := . }} + - {{ merge (dict "topologyKey" $k "labelSelector" (dict "matchLabels" (include "nats.selectorLabels" $ | fromYaml))) $v | toYaml | nindent 4 }} + {{- end }} + {{- end}} diff --git a/charts/nats/nats/1.2.6/files/stateful-set/prom-exporter-container.yaml b/charts/nats/nats/1.2.6/files/stateful-set/prom-exporter-container.yaml new file mode 100644 index 000000000..c3e1b6fbe --- /dev/null +++ b/charts/nats/nats/1.2.6/files/stateful-set/prom-exporter-container.yaml @@ -0,0 +1,30 @@ +name: prom-exporter +{{ include "nats.image" (merge (pick $.Values "global") .Values.promExporter.image) }} + +ports: +- name: prom-metrics + containerPort: {{ .Values.promExporter.port }} + +{{- with .Values.promExporter.env }} +env: +{{- include "nats.env" . }} +{{- end }} + +args: +- -port={{ .Values.promExporter.port }} +- -connz +- -routez +- -subz +- -varz +- -prefix=nats +- -use_internal_server_id +{{- if .Values.config.jetstream.enabled }} +- -jsz=all +{{- end }} +{{- if .Values.config.leafnodes.enabled }} +- -leafz +{{- end }} +{{- if .Values.config.gateway.enabled }} +- -gatewayz +{{- end }} +- http://localhost:{{ .Values.config.monitor.port }}/ diff --git a/charts/nats/nats/1.2.6/files/stateful-set/reloader-container.yaml b/charts/nats/nats/1.2.6/files/stateful-set/reloader-container.yaml new file mode 100644 index 000000000..96722045f --- /dev/null +++ b/charts/nats/nats/1.2.6/files/stateful-set/reloader-container.yaml @@ -0,0 +1,27 @@ +name: reloader +{{ include "nats.image" (merge (pick $.Values "global") .Values.reloader.image) }} + +{{- with .Values.reloader.env }} +env: +{{- include "nats.env" . }} +{{- end }} + +args: +- -pid +- /var/run/nats/nats.pid +- -config +- /etc/nats-config/nats.conf +{{ include "nats.reloaderConfig" (dict "config" .config "dir" "/etc/nats-config") }} + +volumeMounts: +- name: pid + mountPath: /var/run/nats +{{- range $mnt := .natsVolumeMounts }} +{{- $found := false }} +{{- range $.Values.reloader.natsVolumeMountPrefixes }} +{{- if and (not $found) (hasPrefix . $mnt.mountPath) }} +{{- $found = true }} +- {{ toYaml $mnt | nindent 2}} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/stateful-set/resolver-pvc.yaml b/charts/nats/nats/1.2.6/files/stateful-set/resolver-pvc.yaml new file mode 100644 index 000000000..3634cd826 --- /dev/null +++ b/charts/nats/nats/1.2.6/files/stateful-set/resolver-pvc.yaml @@ -0,0 +1,13 @@ +{{- with .Values.config.resolver.pvc }} +metadata: + name: {{ .name }} +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ .size | quote }} + {{- with .storageClassName }} + storageClassName: {{ . 
| quote }} + {{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/files/stateful-set/stateful-set.yaml b/charts/nats/nats/1.2.6/files/stateful-set/stateful-set.yaml new file mode 100644 index 000000000..cd8082cbb --- /dev/null +++ b/charts/nats/nats/1.2.6/files/stateful-set/stateful-set.yaml @@ -0,0 +1,37 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + {{- include "nats.metadataNamespace" $ | nindent 2 }} + name: {{ .Values.statefulSet.name }} + labels: + {{- include "nats.labels" $ | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "nats.selectorLabels" $ | nindent 6 }} + {{- if .Values.config.cluster.enabled }} + replicas: {{ .Values.config.cluster.replicas }} + {{- else }} + replicas: 1 + {{- end }} + serviceName: {{ .Values.headlessService.name }} + podManagementPolicy: Parallel + template: + {{- with .Values.podTemplate }} + {{ include "nats.loadMergePatch" (merge (dict "file" "stateful-set/pod-template.yaml" "ctx" $) .) | nindent 4 }} + {{- end }} + volumeClaimTemplates: + {{- with .Values.config.jetstream }} + {{- if and .enabled .fileStore.enabled .fileStore.pvc.enabled }} + {{- with .fileStore.pvc }} + - {{ include "nats.loadMergePatch" (merge (dict "file" "stateful-set/jetstream-pvc.yaml" "ctx" $) .) | nindent 4 }} + {{- end }} + {{- end }} + {{- end }} + {{- with .Values.config.resolver }} + {{- if and .enabled .pvc.enabled }} + {{- with .pvc }} + - {{ include "nats.loadMergePatch" (merge (dict "file" "stateful-set/resolver-pvc.yaml" "ctx" $) .) | nindent 4 }} + {{- end }} + {{- end }} + {{- end }} diff --git a/charts/nats/nats/1.2.6/questions.yaml b/charts/nats/nats/1.2.6/questions.yaml new file mode 100644 index 000000000..a476e440d --- /dev/null +++ b/charts/nats/nats/1.2.6/questions.yaml @@ -0,0 +1,12 @@ +questions: +- variable: cluster.enabled + default: false + type: boolean + label: Enable Cluster + group: "Cluster Settings" + show_subquestion_if: "true" + subquestions: + - variable: cluster.replicas + default: 3 + type: int + label: Replicas diff --git a/charts/nats/nats/1.2.6/templates/_helpers.tpl b/charts/nats/nats/1.2.6/templates/_helpers.tpl new file mode 100644 index 000000000..ba831397d --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/_helpers.tpl @@ -0,0 +1,281 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "nats.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "nats.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. 
+*/}} +{{- define "nats.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Print the namespace +*/}} +{{- define "nats.namespace" -}} +{{- default .Release.Namespace .Values.namespaceOverride }} +{{- end }} + +{{/* +Print the namespace for the metadata section +*/}} +{{- define "nats.metadataNamespace" -}} +{{- with .Values.namespaceOverride }} +namespace: {{ . | quote }} +{{- end }} +{{- end }} + +{{/* +Set default values. +*/}} +{{- define "nats.defaultValues" }} +{{- if not .defaultValuesSet }} + {{- $name := include "nats.fullname" . }} + {{- with .Values }} + {{- $_ := set .config.jetstream.fileStore.pvc "name" (.config.jetstream.fileStore.pvc.name | default (printf "%s-js" $name)) }} + {{- $_ := set .config.resolver.pvc "name" (.config.resolver.pvc.name | default (printf "%s-resolver" $name)) }} + {{- $_ := set .config.websocket.ingress "name" (.config.websocket.ingress.name | default (printf "%s-ws" $name)) }} + {{- $_ := set .configMap "name" (.configMap.name | default (printf "%s-config" $name)) }} + {{- $_ := set .headlessService "name" (.headlessService.name | default (printf "%s-headless" $name)) }} + {{- $_ := set .natsBox.contentsSecret "name" (.natsBox.contentsSecret.name | default (printf "%s-box-contents" $name)) }} + {{- $_ := set .natsBox.contextsSecret "name" (.natsBox.contextsSecret.name | default (printf "%s-box-contexts" $name)) }} + {{- $_ := set .natsBox.deployment "name" (.natsBox.deployment.name | default (printf "%s-box" $name)) }} + {{- $_ := set .natsBox.serviceAccount "name" (.natsBox.serviceAccount.name | default (printf "%s-box" $name)) }} + {{- $_ := set .podDisruptionBudget "name" (.podDisruptionBudget.name | default $name) }} + {{- $_ := set .service "name" (.service.name | default $name) }} + {{- $_ := set .serviceAccount "name" (.serviceAccount.name | default $name) }} + {{- $_ := set .statefulSet "name" (.statefulSet.name | default $name) }} + {{- $_ := set .promExporter.podMonitor "name" (.promExporter.podMonitor.name | default $name) }} + {{- end }} + + {{- $values := get (include "tplYaml" (dict "doc" .Values "ctx" $) | fromJson) "doc" }} + {{- $_ := set . "Values" $values }} + + {{- $hasContentsSecret := false }} + {{- range $ctxKey, $ctxVal := .Values.natsBox.contexts }} + {{- range $secretKey, $secretVal := dict "creds" "nats-creds" "nkey" "nats-nkeys" "tls" "nats-certs" }} + {{- $secret := get $ctxVal $secretKey }} + {{- if $secret }} + {{- $_ := set $secret "dir" ($secret.dir | default (printf "/etc/%s/%s" $secretVal $ctxKey)) }} + {{- if and (ne $secretKey "tls") $secret.contents }} + {{- $hasContentsSecret = true }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- $_ := set $ "hasContentsSecret" $hasContentsSecret }} + + {{- with .Values.config }} + {{- $config := include "nats.loadMergePatch" (merge (dict "file" "config/config.yaml" "ctx" $) .) | fromYaml }} + {{- $_ := set $ "config" $config }} + {{- end }} + + {{- $_ := set . "defaultValuesSet" true }} +{{- end }} +{{- end }} + +{{/* +NATS labels +*/}} +{{- define "nats.labels" -}} +{{- with .Values.global.labels -}} +{{ toYaml . }} +{{ end -}} +helm.sh/chart: {{ include "nats.chart" . }} +{{ include "nats.selectorLabels" . 
}} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +NATS selector labels +*/}} +{{- define "nats.selectorLabels" -}} +app.kubernetes.io/name: {{ include "nats.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/component: nats +{{- end }} + +{{/* +NATS Box labels +*/}} +{{- define "natsBox.labels" -}} +{{- with .Values.global.labels -}} +{{ toYaml . }} +{{ end -}} +helm.sh/chart: {{ include "nats.chart" . }} +{{ include "natsBox.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +NATS Box selector labels +*/}} +{{- define "natsBox.selectorLabels" -}} +app.kubernetes.io/name: {{ include "nats.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/component: nats-box +{{- end }} + +{{/* +Print the image +*/}} +{{- define "nats.image" }} +{{- $image := printf "%s:%s" .repository .tag }} +{{- if or .registry .global.image.registry }} +{{- $image = printf "%s/%s" (.registry | default .global.image.registry) $image }} +{{- end -}} +image: {{ $image }} +{{- if or .pullPolicy .global.image.pullPolicy }} +imagePullPolicy: {{ .pullPolicy | default .global.image.pullPolicy }} +{{- end }} +{{- end }} + +{{- define "nats.secretNames" -}} +{{- $secrets := list }} +{{- range $protocol := list "nats" "leafnodes" "websocket" "mqtt" "cluster" "gateway" }} + {{- $configProtocol := get $.Values.config $protocol }} + {{- if and (or (eq $protocol "nats") $configProtocol.enabled) $configProtocol.tls.enabled $configProtocol.tls.secretName }} + {{- $secrets = append $secrets (merge (dict "name" (printf "%s-tls" $protocol)) $configProtocol.tls) }} + {{- end }} +{{- end }} +{{- toJson (dict "secretNames" $secrets) }} +{{- end }} + +{{- define "natsBox.secretNames" -}} +{{- $secrets := list }} +{{- range $ctxKey, $ctxVal := .Values.natsBox.contexts }} +{{- range $secretKey, $secretVal := dict "creds" "nats-creds" "nkey" "nats-nkeys" "tls" "nats-certs" }} + {{- $secret := get $ctxVal $secretKey }} + {{- if and $secret $secret.secretName }} + {{- $secrets = append $secrets (merge (dict "name" (printf "ctx-%s-%s" $ctxKey $secretKey)) $secret) }} + {{- end }} + {{- end }} +{{- end }} +{{- toJson (dict "secretNames" $secrets) }} +{{- end }} + +{{- define "nats.tlsCAVolume" -}} +{{- with .Values.tlsCA }} +{{- if and .enabled (or .configMapName .secretName) }} +- name: tls-ca +{{- if .configMapName }} + configMap: + name: {{ .configMapName | quote }} +{{- else if .secretName }} + secret: + secretName: {{ .secretName | quote }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} + +{{- define "nats.tlsCAVolumeMount" -}} +{{- with .Values.tlsCA }} +{{- if and .enabled (or .configMapName .secretName) }} +- name: tls-ca + mountPath: {{ .dir | quote }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +translates env var map to list +*/}} +{{- define "nats.env" -}} +{{- range $k, $v := . 
}} +{{- if kindIs "string" $v }} +- name: {{ $k | quote }} + value: {{ $v | quote }} +{{- else if kindIs "map" $v }} +- {{ merge (dict "name" $k) $v | toYaml | nindent 2 }} +{{- else }} +{{- fail (cat "env var" $k "must be string or map, got" (kindOf $v)) }} +{{- end }} +{{- end }} +{{- end }} + +{{- /* +nats.loadMergePatch +input: map with 4 keys: +- file: name of file to load +- ctx: context to pass to tpl +- merge: interface{} to merge +- patch: []interface{} valid JSON Patch document +output: JSON encoded map with 1 key: +- doc: interface{} patched json result +*/}} +{{- define "nats.loadMergePatch" -}} +{{- $doc := tpl (.ctx.Files.Get (printf "files/%s" .file)) .ctx | fromYaml | default dict -}} +{{- $doc = mergeOverwrite $doc (deepCopy (.merge | default dict)) -}} +{{- get (include "jsonpatch" (dict "doc" $doc "patch" (.patch | default list)) | fromJson ) "doc" | toYaml -}} +{{- end }} + + +{{- /* +nats.reloaderConfig +input: map with 2 keys: +- config: interface{} nats config +- dir: dir config file is in +output: YAML list of reloader config files +*/}} +{{- define "nats.reloaderConfig" -}} + {{- $dir := trimSuffix "/" .dir -}} + {{- with .config -}} + {{- if kindIs "map" . -}} + {{- range $k, $v := . -}} + {{- if or (eq $k "cert_file") (eq $k "key_file") (eq $k "ca_file") }} +- -config +- {{ $v }} + {{- else if hasSuffix "$include" $k }} +- -config +- {{ clean (printf "%s/%s" $dir $v) }} + {{- else }} + {{- include "nats.reloaderConfig" (dict "config" $v "dir" $dir) }} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + + +{{- /* +nats.formatConfig +input: map[string]interface{} +output: string with following format rules +1. keys ending in $natsRaw are unquoted +2. keys ending in $natsInclude are converted to include directives +*/}} +{{- define "nats.formatConfig" -}} + {{- + (regexReplaceAll "\"<<\\s+(.*)\\s+>>\"" + (regexReplaceAll "\".*\\$include\": \"(.*)\",?" (include "toPrettyRawJson" .) "include ${1};") + "${1}") + -}} +{{- end -}} diff --git a/charts/nats/nats/1.2.6/templates/_jsonpatch.tpl b/charts/nats/nats/1.2.6/templates/_jsonpatch.tpl new file mode 100644 index 000000000..cd42c3bbc --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/_jsonpatch.tpl @@ -0,0 +1,219 @@ +{{- /* +jsonpatch +input: map with 2 keys: +- doc: interface{} valid JSON document +- patch: []interface{} valid JSON Patch document +output: JSON encoded map with 1 key: +- doc: interface{} patched json result +*/}} +{{- define "jsonpatch" -}} + {{- $params := fromJson (toJson .) 
-}} + {{- $patches := $params.patch -}} + {{- $docContainer := pick $params "doc" -}} + + {{- range $patch := $patches -}} + {{- if not (hasKey $patch "op") -}} + {{- fail "patch is missing op key" -}} + {{- end -}} + {{- if and (ne $patch.op "add") (ne $patch.op "remove") (ne $patch.op "replace") (ne $patch.op "copy") (ne $patch.op "move") (ne $patch.op "test") -}} + {{- fail (cat "patch has invalid op" $patch.op) -}} + {{- end -}} + {{- if not (hasKey $patch "path") -}} + {{- fail "patch is missing path key" -}} + {{- end -}} + {{- if and (or (eq $patch.op "add") (eq $patch.op "replace") (eq $patch.op "test")) (not (hasKey $patch "value")) -}} + {{- fail (cat "patch with op" $patch.op "is missing value key") -}} + {{- end -}} + {{- if and (or (eq $patch.op "copy") (eq $patch.op "move")) (not (hasKey $patch "from")) -}} + {{- fail (cat "patch with op" $patch.op "is missing from key") -}} + {{- end -}} + + {{- $opPathKeys := list "path" -}} + {{- if or (eq $patch.op "copy") (eq $patch.op "move") -}} + {{- $opPathKeys = append $opPathKeys "from" -}} + {{- end -}} + {{- $reSlice := list -}} + + {{- range $opPathKey := $opPathKeys -}} + {{- $obj := $docContainer -}} + {{- if and (eq $patch.op "copy") (eq $opPathKey "from") -}} + {{- $obj = (fromJson (toJson $docContainer)) -}} + {{- end -}} + {{- $key := "doc" -}} + {{- $lastMap := dict "root" $obj -}} + {{- $lastKey := "root" -}} + {{- $paths := (splitList "/" (get $patch $opPathKey)) -}} + {{- $firstPath := index $paths 0 -}} + {{- if ne (index $paths 0) "" -}} + {{- fail (cat "invalid" $opPathKey (get $patch $opPathKey) "must be empty string or start with /") -}} + {{- end -}} + {{- $paths = slice $paths 1 -}} + + {{- range $path := $paths -}} + {{- $path = replace "~1" "/" $path -}} + {{- $path = replace "~0" "~" $path -}} + + {{- if kindIs "slice" $obj -}} + {{- $mapObj := dict -}} + {{- range $i, $v := $obj -}} + {{- $_ := set $mapObj (toString $i) $v -}} + {{- end -}} + {{- $obj = $mapObj -}} + {{- $_ := set $lastMap $lastKey $obj -}} + {{- $reSlice = prepend $reSlice (dict "lastMap" $lastMap "lastKey" $lastKey "mapObj" $obj) -}} + {{- end -}} + + {{- if kindIs "map" $obj -}} + {{- if not (hasKey $obj $key) -}} + {{- fail (cat "key" $key "does not exist") -}} + {{- end -}} + {{- $lastKey = $key -}} + {{- $lastMap = $obj -}} + {{- $obj = index $obj $key -}} + {{- $key = $path -}} + {{- else -}} + {{- fail (cat "cannot iterate into path" $key "on type" (kindOf $obj)) -}} + {{- end -}} + {{- end -}} + + {{- $_ := set $patch (printf "%sKey" $opPathKey) $key -}} + {{- $_ := set $patch (printf "%sLastKey" $opPathKey) $lastKey -}} + {{- $_ = set $patch (printf "%sLastMap" $opPathKey) $lastMap -}} + {{- end -}} + + {{- if eq $patch.op "move" }} + {{- if and (ne $patch.path $patch.from) (hasPrefix (printf "%s/" $patch.path) (printf "%s/" $patch.from)) -}} + {{- fail (cat "from" $patch.from "may not be a child of path" $patch.path) -}} + {{- end -}} + {{- end -}} + + {{- if or (eq $patch.op "move") (eq $patch.op "copy") (eq $patch.op "test") }} + {{- $key := $patch.fromKey -}} + {{- $lastMap := $patch.fromLastMap -}} + {{- $lastKey := $patch.fromLastKey -}} + {{- $setKey := "value" -}} + {{- if eq $patch.op "test" }} + {{- $key = $patch.pathKey -}} + {{- $lastMap = $patch.pathLastMap -}} + {{- $lastKey = $patch.pathLastKey -}} + {{- $setKey = "testValue" -}} + {{- end -}} + {{- $obj := index $lastMap $lastKey -}} + + {{- if kindIs "map" $obj -}} + {{- if not (hasKey $obj $key) -}} + {{- fail (cat $key "does not exist") -}} + {{- end -}} + {{- $_ 
:= set $patch $setKey (index $obj $key) -}} + + {{- else if kindIs "slice" $obj -}} + {{- $i := atoi $key -}} + {{- if ne $key (toString $i) -}} + {{- fail (cat "cannot convert" $key "to int") -}} + {{- end -}} + {{- if lt $i 0 -}} + {{- fail "slice index <0" -}} + {{- else if lt $i (len $obj) -}} + {{- $_ := set $patch $setKey (index $obj $i) -}} + {{- else -}} + {{- fail "slice index >= slice length" -}} + {{- end -}} + + {{- else -}} + {{- fail (cat "cannot" $patch.op $key "on type" (kindOf $obj)) -}} + {{- end -}} + {{- end -}} + + {{- if or (eq $patch.op "remove") (eq $patch.op "replace") (eq $patch.op "move") }} + {{- $key := $patch.pathKey -}} + {{- $lastMap := $patch.pathLastMap -}} + {{- $lastKey := $patch.pathLastKey -}} + {{- if eq $patch.op "move" }} + {{- $key = $patch.fromKey -}} + {{- $lastMap = $patch.fromLastMap -}} + {{- $lastKey = $patch.fromLastKey -}} + {{- end -}} + {{- $obj := index $lastMap $lastKey -}} + + {{- if kindIs "map" $obj -}} + {{- if not (hasKey $obj $key) -}} + {{- fail (cat $key "does not exist") -}} + {{- end -}} + {{- $_ := unset $obj $key -}} + + {{- else if kindIs "slice" $obj -}} + {{- $i := atoi $key -}} + {{- if ne $key (toString $i) -}} + {{- fail (cat "cannot convert" $key "to int") -}} + {{- end -}} + {{- if lt $i 0 -}} + {{- fail "slice index <0" -}} + {{- else if eq $i 0 -}} + {{- $_ := set $lastMap $lastKey (slice $obj 1) -}} + {{- else if lt $i (sub (len $obj) 1) -}} + {{- $_ := set $lastMap $lastKey (concat (slice $obj 0 $i) (slice $obj (add $i 1) (len $obj))) -}} + {{- else if eq $i (sub (len $obj) 1) -}} + {{- $_ := set $lastMap $lastKey (slice $obj 0 (sub (len $obj) 1)) -}} + {{- else -}} + {{- fail "slice index >= slice length" -}} + {{- end -}} + + {{- else -}} + {{- fail (cat "cannot" $patch.op $key "on type" (kindOf $obj)) -}} + {{- end -}} + {{- end -}} + + {{- if or (eq $patch.op "add") (eq $patch.op "replace") (eq $patch.op "move") (eq $patch.op "copy") }} + {{- $key := $patch.pathKey -}} + {{- $lastMap := $patch.pathLastMap -}} + {{- $lastKey := $patch.pathLastKey -}} + {{- $value := $patch.value -}} + {{- $obj := index $lastMap $lastKey -}} + + {{- if kindIs "map" $obj -}} + {{- $_ := set $obj $key $value -}} + + {{- else if kindIs "slice" $obj -}} + {{- $i := 0 -}} + {{- if eq $key "-" -}} + {{- $i = len $obj -}} + {{- else -}} + {{- $i = atoi $key -}} + {{- if ne $key (toString $i) -}} + {{- fail (cat "cannot convert" $key "to int") -}} + {{- end -}} + {{- end -}} + {{- if lt $i 0 -}} + {{- fail "slice index <0" -}} + {{- else if eq $i 0 -}} + {{- $_ := set $lastMap $lastKey (prepend $obj $value) -}} + {{- else if lt $i (len $obj) -}} + {{- $_ := set $lastMap $lastKey (concat (append (slice $obj 0 $i) $value) (slice $obj $i)) -}} + {{- else if eq $i (len $obj) -}} + {{- $_ := set $lastMap $lastKey (append $obj $value) -}} + {{- else -}} + {{- fail "slice index > slice length" -}} + {{- end -}} + + {{- else -}} + {{- fail (cat "cannot" $patch.op $key "on type" (kindOf $obj)) -}} + {{- end -}} + {{- end -}} + + {{- if eq $patch.op "test" }} + {{- if not (deepEqual $patch.value $patch.testValue) }} + {{- fail (cat "test failed, expected" (toJson $patch.value) "but got" (toJson $patch.testValue)) -}} + {{- end -}} + {{- end -}} + + {{- range $reSliceOp := $reSlice -}} + {{- $sliceObj := list -}} + {{- range $i := until (len $reSliceOp.mapObj) -}} + {{- $sliceObj = append $sliceObj (index $reSliceOp.mapObj (toString $i)) -}} + {{- end -}} + {{- $_ := set $reSliceOp.lastMap $reSliceOp.lastKey $sliceObj -}} + {{- end -}} + + {{- 
end -}} + {{- toJson $docContainer -}} +{{- end -}} diff --git a/charts/nats/nats/1.2.6/templates/_toPrettyRawJson.tpl b/charts/nats/nats/1.2.6/templates/_toPrettyRawJson.tpl new file mode 100644 index 000000000..612a62f9c --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/_toPrettyRawJson.tpl @@ -0,0 +1,28 @@ +{{- /* +toPrettyRawJson +input: interface{} valid JSON document +output: pretty raw JSON string +*/}} +{{- define "toPrettyRawJson" -}} + {{- include "toPrettyRawJsonStr" (toPrettyJson .) -}} +{{- end -}} + +{{- /* +toPrettyRawJsonStr +input: pretty JSON string +output: pretty raw JSON string +*/}} +{{- define "toPrettyRawJsonStr" -}} + {{- $s := + (regexReplaceAll "([^\\\\](?:\\\\\\\\)*)\\\\u003e" + (regexReplaceAll "([^\\\\](?:\\\\\\\\)*)\\\\u003c" + (regexReplaceAll "([^\\\\](?:\\\\\\\\)*)\\\\u0026" . "${1}&") + "${1}<") + "${1}>") + -}} + {{- if regexMatch "([^\\\\](?:\\\\\\\\)*)\\\\u00(26|3c|3e)" $s -}} + {{- include "toPrettyRawJsonStr" $s -}} + {{- else -}} + {{- $s -}} + {{- end -}} +{{- end -}} diff --git a/charts/nats/nats/1.2.6/templates/_tplYaml.tpl b/charts/nats/nats/1.2.6/templates/_tplYaml.tpl new file mode 100644 index 000000000..f42b9c168 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/_tplYaml.tpl @@ -0,0 +1,114 @@ +{{- /* +tplYaml +input: map with 2 keys: +- doc: interface{} +- ctx: context to pass to tpl function +output: JSON encoded map with 1 key: +- doc: interface{} with any keys called tpl or tplSpread values templated and replaced + +maps matching the following syntax will be templated and parsed as YAML +{ + $tplYaml: string +} + +maps matching the follow syntax will be templated, parsed as YAML, and spread into the parent map/slice +{ + $tplYamlSpread: string +} +*/}} +{{- define "tplYaml" -}} + {{- $patch := get (include "tplYamlItr" (dict "ctx" .ctx "parentKind" "" "parentPath" "" "path" "/" "value" .doc) | fromJson) "patch" -}} + {{- include "jsonpatch" (dict "doc" .doc "patch" $patch) -}} +{{- end -}} + +{{- /* +tplYamlItr +input: map with 4 keys: +- path: string JSONPath to current element +- parentKind: string kind of parent element +- parentPath: string JSONPath to parent element +- value: interface{} +- ctx: context to pass to tpl function +output: JSON encoded map with 1 key: +- patch: list of patches to apply in order to template +*/}} +{{- define "tplYamlItr" -}} + {{- $params := . 
-}} + {{- $kind := kindOf $params.value -}} + {{- $patch := list -}} + {{- $joinPath := $params.path -}} + {{- if eq $params.path "/" -}} + {{- $joinPath = "" -}} + {{- end -}} + {{- $joinParentPath := $params.parentPath -}} + {{- if eq $params.parentPath "/" -}} + {{- $joinParentPath = "" -}} + {{- end -}} + + {{- if eq $kind "slice" -}} + {{- $iAdj := 0 -}} + {{- range $i, $v := $params.value -}} + {{- $iPath := printf "%s/%d" $joinPath (add $i $iAdj) -}} + {{- $itrPatch := get (include "tplYamlItr" (dict "ctx" $params.ctx "parentKind" $kind "parentPath" $params.path "path" $iPath "value" $v) | fromJson) "patch" -}} + {{- $itrLen := len $itrPatch -}} + {{- if gt $itrLen 0 -}} + {{- $patch = concat $patch $itrPatch -}} + {{- if eq (get (index $itrPatch 0) "op") "remove" -}} + {{- $iAdj = add $iAdj (sub $itrLen 2) -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- else if eq $kind "map" -}} + {{- if and (eq (len $params.value) 1) (or (hasKey $params.value "$tplYaml") (hasKey $params.value "$tplYamlSpread")) -}} + {{- $tpl := get $params.value "$tplYaml" -}} + {{- $spread := false -}} + {{- if hasKey $params.value "$tplYamlSpread" -}} + {{- if eq $params.path "/" -}} + {{- fail "cannot $tplYamlSpread on root object" -}} + {{- end -}} + {{- $tpl = get $params.value "$tplYamlSpread" -}} + {{- $spread = true -}} + {{- end -}} + + {{- $res := tpl $tpl $params.ctx -}} + {{- $res = get (fromYaml (tpl "tpl: {{ nindent 2 .res }}" (merge (dict "res" $res) $params.ctx))) "tpl" -}} + + {{- if eq $spread false -}} + {{- $patch = append $patch (dict "op" "replace" "path" $params.path "value" $res) -}} + {{- else -}} + {{- $resKind := kindOf $res -}} + {{- if and (ne $resKind "invalid") (ne $resKind $params.parentKind) -}} + {{- fail (cat "can only $tplYamlSpread slice onto a slice or map onto a map; attempted to spread" $resKind "on" $params.parentKind "at path" $params.path) -}} + {{- end -}} + {{- $patch = append $patch (dict "op" "remove" "path" $params.path) -}} + {{- if eq $resKind "invalid" -}} + {{- /* no-op */ -}} + {{- else if eq $resKind "slice" -}} + {{- range $v := reverse $res -}} + {{- $patch = append $patch (dict "op" "add" "path" $params.path "value" $v) -}} + {{- end -}} + {{- else -}} + {{- range $k, $v := $res -}} + {{- $kPath := replace "~" "~0" $k -}} + {{- $kPath = replace "/" "~1" $kPath -}} + {{- $kPath = printf "%s/%s" $joinParentPath $kPath -}} + {{- $patch = append $patch (dict "op" "add" "path" $kPath "value" $v) -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- else -}} + {{- range $k, $v := $params.value -}} + {{- $kPath := replace "~" "~0" $k -}} + {{- $kPath = replace "/" "~1" $kPath -}} + {{- $kPath = printf "%s/%s" $joinPath $kPath -}} + {{- $itrPatch := get (include "tplYamlItr" (dict "ctx" $params.ctx "parentKind" $kind "parentPath" $params.path "path" $kPath "value" $v) | fromJson) "patch" -}} + {{- if gt (len $itrPatch) 0 -}} + {{- $patch = concat $patch $itrPatch -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- toJson (dict "patch" $patch) -}} +{{- end -}} diff --git a/charts/nats/nats/1.2.6/templates/config-map.yaml b/charts/nats/nats/1.2.6/templates/config-map.yaml new file mode 100644 index 000000000..b95afda20 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/config-map.yaml @@ -0,0 +1,4 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.configMap }} +{{- include "nats.loadMergePatch" (merge (dict "file" "config-map.yaml" "ctx" $) .) 
}} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/extra-resources.yaml b/charts/nats/nats/1.2.6/templates/extra-resources.yaml new file mode 100644 index 000000000..c11f0085e --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/extra-resources.yaml @@ -0,0 +1,5 @@ +{{- include "nats.defaultValues" . }} +{{- range .Values.extraResources }} +--- +{{ . | toYaml }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/headless-service.yaml b/charts/nats/nats/1.2.6/templates/headless-service.yaml new file mode 100644 index 000000000..f11a83d13 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/headless-service.yaml @@ -0,0 +1,4 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.headlessService }} +{{- include "nats.loadMergePatch" (merge (dict "file" "headless-service.yaml" "ctx" $) .) }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/ingress.yaml b/charts/nats/nats/1.2.6/templates/ingress.yaml new file mode 100644 index 000000000..eccd73ffd --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/ingress.yaml @@ -0,0 +1,6 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.config.websocket.ingress }} +{{- if and .enabled .hosts $.Values.config.websocket.enabled $.Values.service.enabled $.Values.service.ports.websocket.enabled }} +{{- include "nats.loadMergePatch" (merge (dict "file" "ingress.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/nats-box/contents-secret.yaml b/charts/nats/nats/1.2.6/templates/nats-box/contents-secret.yaml new file mode 100644 index 000000000..db629bf7b --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/nats-box/contents-secret.yaml @@ -0,0 +1,10 @@ +{{- include "nats.defaultValues" . }} +{{- if .hasContentsSecret }} +{{- with .Values.natsBox }} +{{- if .enabled }} +{{- with .contentsSecret}} +{{- include "nats.loadMergePatch" (merge (dict "file" "nats-box/contents-secret.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/nats-box/contexts-secret.yaml b/charts/nats/nats/1.2.6/templates/nats-box/contexts-secret.yaml new file mode 100644 index 000000000..5ae20f45a --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/nats-box/contexts-secret.yaml @@ -0,0 +1,8 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.natsBox }} +{{- if .enabled }} +{{- with .contextsSecret}} +{{- include "nats.loadMergePatch" (merge (dict "file" "nats-box/contexts-secret/contexts-secret.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/nats-box/deployment.yaml b/charts/nats/nats/1.2.6/templates/nats-box/deployment.yaml new file mode 100644 index 000000000..a063332a2 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/nats-box/deployment.yaml @@ -0,0 +1,8 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.natsBox }} +{{- if .enabled }} +{{- with .deployment }} +{{- include "nats.loadMergePatch" (merge (dict "file" "nats-box/deployment/deployment.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/nats-box/service-account.yaml b/charts/nats/nats/1.2.6/templates/nats-box/service-account.yaml new file mode 100644 index 000000000..e11bdd363 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/nats-box/service-account.yaml @@ -0,0 +1,8 @@ +{{- include "nats.defaultValues" . 
}} +{{- if .Values.natsBox.enabled }} +{{- with .Values.natsBox.serviceAccount }} +{{- if .enabled }} +{{- include "nats.loadMergePatch" (merge (dict "file" "nats-box/service-account.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/pod-disruption-budget.yaml b/charts/nats/nats/1.2.6/templates/pod-disruption-budget.yaml new file mode 100644 index 000000000..911722629 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/pod-disruption-budget.yaml @@ -0,0 +1,6 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.podDisruptionBudget }} +{{- if .enabled }} +{{- include "nats.loadMergePatch" (merge (dict "file" "pod-disruption-budget.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/pod-monitor.yaml b/charts/nats/nats/1.2.6/templates/pod-monitor.yaml new file mode 100644 index 000000000..0e42a43a5 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/pod-monitor.yaml @@ -0,0 +1,8 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.promExporter }} +{{- if and .enabled .podMonitor.enabled }} +{{- with .podMonitor }} +{{- include "nats.loadMergePatch" (merge (dict "file" "pod-monitor.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/service-account.yaml b/charts/nats/nats/1.2.6/templates/service-account.yaml new file mode 100644 index 000000000..6c763bd3e --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/service-account.yaml @@ -0,0 +1,6 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.serviceAccount }} +{{- if .enabled }} +{{- include "nats.loadMergePatch" (merge (dict "file" "service-account.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/service.yaml b/charts/nats/nats/1.2.6/templates/service.yaml new file mode 100644 index 000000000..04b0b37e7 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/service.yaml @@ -0,0 +1,6 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.service }} +{{- if .enabled }} +{{- include "nats.loadMergePatch" (merge (dict "file" "service.yaml" "ctx" $) .) }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/stateful-set.yaml b/charts/nats/nats/1.2.6/templates/stateful-set.yaml new file mode 100644 index 000000000..bb198323e --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/stateful-set.yaml @@ -0,0 +1,4 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.statefulSet }} +{{- include "nats.loadMergePatch" (merge (dict "file" "stateful-set/stateful-set.yaml" "ctx" $) .) }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/templates/tests/request-reply.yaml b/charts/nats/nats/1.2.6/templates/tests/request-reply.yaml new file mode 100644 index 000000000..3e06edc08 --- /dev/null +++ b/charts/nats/nats/1.2.6/templates/tests/request-reply.yaml @@ -0,0 +1,37 @@ +{{- include "nats.defaultValues" . }} +{{- with .Values.natsBox | deepCopy }} +{{- $natsBox := . }} +{{- if .enabled -}} +apiVersion: v1 +kind: Pod +{{- with .container }} +{{- $_ := set . "merge" (dict + "args" (list + "sh" + "-ec" + "nats reply --echo echo & pid=\"$!\"; sleep 1; nats request echo hi > /tmp/resp; kill \"$pid\"; wait; grep -qF hi /tmp/resp" + ) +) }} +{{- $_ := set . "patch" list }} +{{- end }} +{{- with .podTemplate }} +{{- $_ := set . 
"merge" (dict + "metadata" (dict + "name" (printf "%s-test-request-reply" $.Values.statefulSet.name) + "labels" (dict + "app.kubernetes.io/component" "test-request-reply" + ) + "annotations" (dict + "helm.sh/hook" "test" + "helm.sh/hook-delete-policy" "before-hook-creation,hook-succeeded" + ) + ) + "spec" (dict + "restartPolicy" "Never" + ) +) }} +{{- $_ := set . "patch" list }} +{{ include "nats.loadMergePatch" (merge (dict "file" "nats-box/deployment/pod-template.yaml" "ctx" (merge (dict "Values" (dict "natsBox" $natsBox)) $)) .) }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/nats/nats/1.2.6/values.yaml b/charts/nats/nats/1.2.6/values.yaml new file mode 100644 index 000000000..6f19c84cb --- /dev/null +++ b/charts/nats/nats/1.2.6/values.yaml @@ -0,0 +1,669 @@ +################################################################################ +# Global options +################################################################################ +global: + image: + # global image pull policy to use for all container images in the chart + # can be overridden by individual image pullPolicy + pullPolicy: + # global list of secret names to use as image pull secrets for all pod specs in the chart + # secrets must exist in the same namespace + # https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + pullSecretNames: [] + # global registry to use for all container images in the chart + # can be overridden by individual image registry + registry: + + # global labels will be applied to all resources deployed by the chart + labels: {} + +################################################################################ +# Common options +################################################################################ +# override name of the chart +nameOverride: +# override full name of the chart+release +fullnameOverride: +# override the namespace that resources are installed into +namespaceOverride: + +# reference a common CA Certificate or Bundle in all nats config `tls` blocks and nats-box contexts +# note: `tls.verify` still must be set in the appropriate nats config `tls` blocks to require mTLS +tlsCA: + enabled: false + # set configMapName in order to mount an existing configMap to dir + configMapName: + # set secretName in order to mount an existing secretName to dir + secretName: + # directory to mount the configMap or secret to + dir: /etc/nats-ca-cert + # key in the configMap or secret that contains the CA Certificate or Bundle + key: ca.crt + +################################################################################ +# NATS Stateful Set and associated resources +################################################################################ + +############################################################ +# NATS config +############################################################ +config: + cluster: + enabled: false + port: 6222 + # must be 2 or higher when jetstream is enabled + replicas: 3 + + # apply to generated route URLs that connect to other pods in the StatefulSet + routeURLs: + # if both user and password are set, they will be added to route URLs + # and the cluster authorization block + user: + password: + # set to true to use FQDN in route URLs + useFQDN: false + k8sClusterDomain: cluster.local + + tls: + enabled: false + # set secretName in order to mount an existing secret to dir + secretName: + dir: /etc/nats-certs/cluster + cert: tls.crt + key: tls.key + # merge or patch the tls config + # 
https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls + merge: {} + patch: [] + + # merge or patch the cluster config + # https://docs.nats.io/running-a-nats-service/configuration/clustering/cluster_config + merge: {} + patch: [] + + jetstream: + enabled: false + + fileStore: + enabled: true + dir: /data + + ############################################################ + # stateful set -> volume claim templates -> jetstream pvc + ############################################################ + pvc: + enabled: true + size: 10Gi + storageClassName: + + # merge or patch the jetstream pvc + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-js" + name: + + # defaults to the PVC size + maxSize: + + memoryStore: + enabled: false + # ensure that container has a sufficient memory limit greater than maxSize + maxSize: 1Gi + + # merge or patch the jetstream config + # https://docs.nats.io/running-a-nats-service/configuration#jetstream + merge: {} + patch: [] + + nats: + port: 4222 + tls: + enabled: false + # set secretName in order to mount an existing secret to dir + secretName: + dir: /etc/nats-certs/nats + cert: tls.crt + key: tls.key + # merge or patch the tls config + # https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls + merge: {} + patch: [] + + leafnodes: + enabled: false + port: 7422 + tls: + enabled: false + # set secretName in order to mount an existing secret to dir + secretName: + dir: /etc/nats-certs/leafnodes + cert: tls.crt + key: tls.key + # merge or patch the tls config + # https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls + merge: {} + patch: [] + + # merge or patch the leafnodes config + # https://docs.nats.io/running-a-nats-service/configuration/leafnodes/leafnode_conf + merge: {} + patch: [] + + websocket: + enabled: false + port: 8080 + tls: + enabled: false + # set secretName in order to mount an existing secret to dir + secretName: + dir: /etc/nats-certs/websocket + cert: tls.crt + key: tls.key + # merge or patch the tls config + # https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls + merge: {} + patch: [] + + ############################################################ + # ingress + ############################################################ + # service must be enabled also + ingress: + enabled: false + # must contain at least 1 host otherwise ingress will not be created + hosts: [] + path: / + pathType: Exact + # sets to the ingress class name + className: + # set to an existing secret name to enable TLS on the ingress; applies to all hosts + tlsSecretName: + + # merge or patch the ingress + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#ingress-v1-networking-k8s-io + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-ws" + name: + + # merge or patch the websocket config + # https://docs.nats.io/running-a-nats-service/configuration/websocket/websocket_conf + merge: {} + patch: [] + + mqtt: + enabled: false + port: 1883 + tls: + enabled: false + # set secretName in order to mount an existing secret to dir + secretName: + dir: /etc/nats-certs/mqtt + cert: tls.crt + key: tls.key + # merge or patch the tls config + # https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls + merge: {} + patch: [] + + # merge or patch the mqtt config + # 
https://docs.nats.io/running-a-nats-service/configuration/mqtt/mqtt_config + merge: {} + patch: [] + + gateway: + enabled: false + port: 7222 + tls: + enabled: false + # set secretName in order to mount an existing secret to dir + secretName: + dir: /etc/nats-certs/gateway + cert: tls.crt + key: tls.key + # merge or patch the tls config + # https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls + merge: {} + patch: [] + + # merge or patch the gateway config + # https://docs.nats.io/running-a-nats-service/configuration/gateways/gateway#gateway-configuration-block + merge: {} + patch: [] + + monitor: + enabled: true + port: 8222 + tls: + # config.nats.tls must be enabled also + # when enabled, monitoring port will use HTTPS with the options from config.nats.tls + enabled: false + + profiling: + enabled: false + port: 65432 + + resolver: + enabled: false + dir: /data/resolver + + ############################################################ + # stateful set -> volume claim templates -> resolver pvc + ############################################################ + pvc: + enabled: true + size: 1Gi + storageClassName: + + # merge or patch the pvc + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-resolver" + name: + + # merge or patch the resolver + # https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/jwt/resolver + merge: {} + patch: [] + + # adds a prefix to the server name, which defaults to the pod name + # helpful for ensuring server name is unique in a super cluster + serverNamePrefix: "" + + # merge or patch the nats config + # https://docs.nats.io/running-a-nats-service/configuration + # following special rules apply + # 1. strings that start with << and end with >> will be unquoted + # use this for variables and numbers with units + # 2. 
keys ending in $include will be switched to include directives + # keys are sorted alphabetically, use prefix before $includes to control includes ordering + # paths should be relative to /etc/nats-config/nats.conf + # example: + # + # merge: + # $include: ./my-config.conf + # zzz$include: ./my-config-last.conf + # server_name: nats + # authorization: + # token: << $TOKEN >> + # jetstream: + # max_memory_store: << 1GB >> + # + # will yield the config: + # { + # include ./my-config.conf; + # "authorization": { + # "token": $TOKEN + # }, + # "jetstream": { + # "max_memory_store": 1GB + # }, + # "server_name": "nats", + # include ./my-config-last.conf; + # } + merge: {} + patch: [] + +############################################################ +# stateful set -> pod template -> nats container +############################################################ +container: + image: + repository: nats + tag: 2.10.22-alpine + pullPolicy: + registry: + + # container port options + # must be enabled in the config section also + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core + ports: + nats: {} + leafnodes: {} + websocket: {} + mqtt: {} + cluster: {} + gateway: {} + monitor: {} + profiling: {} + + # map with key as env var name, value can be string or map + # example: + # + # env: + # GOMEMLIMIT: 7GiB + # TOKEN: + # valueFrom: + # secretKeyRef: + # name: nats-auth + # key: token + env: {} + + # merge or patch the container + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core + merge: {} + patch: [] + +############################################################ +# stateful set -> pod template -> reloader container +############################################################ +reloader: + enabled: true + image: + repository: natsio/nats-server-config-reloader + tag: 0.16.0 + pullPolicy: + registry: + + # env var map, see nats.env for an example + env: {} + + # all nats container volume mounts with the following prefixes + # will be mounted into the reloader container + natsVolumeMountPrefixes: + - /etc/ + + # merge or patch the container + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core + merge: {} + patch: [] + +############################################################ +# stateful set -> pod template -> prom-exporter container +############################################################ +# config.monitor must be enabled +promExporter: + enabled: false + image: + repository: natsio/prometheus-nats-exporter + tag: 0.15.0 + pullPolicy: + registry: + + port: 7777 + # env var map, see nats.env for an example + env: {} + + # merge or patch the container + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core + merge: {} + patch: [] + + ############################################################ + # prometheus pod monitor + ############################################################ + podMonitor: + enabled: false + + # merge or patch the pod monitor + # https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitor + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}" + name: + + +############################################################ +# service +############################################################ +service: + enabled: true + + # service port options + # additional boolean field enable to control whether port is exposed in the service + # must be enabled in the config section also + # 
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#serviceport-v1-core + ports: + nats: + enabled: true + leafnodes: + enabled: true + websocket: + enabled: true + mqtt: + enabled: true + cluster: + enabled: false + gateway: + enabled: false + monitor: + enabled: false + profiling: + enabled: false + + # merge or patch the service + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}" + name: + +############################################################ +# other nats extension points +############################################################ + +# stateful set +statefulSet: + # merge or patch the stateful set + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#statefulset-v1-apps + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}" + name: + +# stateful set -> pod template +podTemplate: + # adds a hash of the ConfigMap as a pod annotation + # this will cause the StatefulSet to roll when the ConfigMap is updated + configChecksumAnnotation: true + + # map of topologyKey: topologySpreadConstraint + # labelSelector will be added to match StatefulSet pods + # + # topologySpreadConstraints: + # kubernetes.io/hostname: + # maxSkew: 1 + # + topologySpreadConstraints: {} + + # merge or patch the pod template + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core + merge: {} + patch: [] + +# headless service +headlessService: + # merge or patch the headless service + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-headless" + name: + +# config map +configMap: + # merge or patch the config map + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmap-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-config" + name: + +# pod disruption budget +podDisruptionBudget: + enabled: true + # merge or patch the pod disruption budget + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#poddisruptionbudget-v1-policy + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}" + name: + +# service account +serviceAccount: + enabled: false + # merge or patch the service account + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#serviceaccount-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}" + name: + + +############################################################ +# natsBox +# +# NATS Box Deployment and associated resources +############################################################ +natsBox: + enabled: true + + ############################################################ + # NATS contexts + ############################################################ + contexts: + default: + creds: + # set contents in order to create a secret with the creds file contents + contents: + # set secretName in order to mount an existing secret to dir + secretName: + # defaults to /etc/nats-creds/ + dir: + key: nats.creds + nkey: + # set contents in order to create a secret with the nkey file contents + contents: + # set secretName in order to mount an existing secret to dir + secretName: + # defaults to /etc/nats-nkeys/ + dir: + key: nats.nk + # used to connect with client certificates + tls: + # set secretName in order to mount an existing secret to dir + secretName: + # defaults to 
/etc/nats-certs/ + dir: + cert: tls.crt + key: tls.key + + # merge or patch the context + # https://docs.nats.io/using-nats/nats-tools/nats_cli#nats-contexts + merge: {} + patch: [] + + # name of context to select by default + defaultContextName: default + + ############################################################ + # deployment -> pod template -> nats-box container + ############################################################ + container: + image: + repository: natsio/nats-box + tag: 0.14.5 + pullPolicy: + registry: + + # env var map, see nats.env for an example + env: {} + + # merge or patch the container + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core + merge: {} + patch: [] + + ############################################################ + # other nats-box extension points + ############################################################ + + # deployment + deployment: + # merge or patch the deployment + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#deployment-v1-apps + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-box" + name: + + # deployment -> pod template + podTemplate: + # merge or patch the pod template + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core + merge: {} + patch: [] + + # contexts secret + contextsSecret: + # merge or patch the context secret + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secret-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-box-contexts" + name: + + # contents secret + contentsSecret: + # merge or patch the contents secret + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secret-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-box-contents" + name: + + # service account + serviceAccount: + enabled: false + # merge or patch the service account + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#serviceaccount-v1-core + merge: {} + patch: [] + # defaults to "{{ include "nats.fullname" $ }}-box" + name: + + +################################################################################ +# Extra user-defined resources +################################################################################ +# +# add arbitrary user-generated resources +# example: +# +# config: +# websocket: +# enabled: true +# extraResources: +# - apiVersion: networking.istio.io/v1beta1 +# kind: VirtualService +# metadata: +# name: +# $tplYaml: > +# {{ include "nats.fullname" $ | quote }} +# labels: +# $tplYaml: | +# {{ include "nats.labels" $ }} +# spec: +# hosts: +# - demo.nats.io +# gateways: +# - my-gateway +# http: +# - name: default +# match: +# - name: root +# uri: +# exact: / +# route: +# - destination: +# host: +# $tplYaml: > +# {{ .Values.service.name | quote }} +# port: +# number: +# $tplYaml: > +# {{ .Values.config.websocket.port }} +# +extraResources: [] diff --git a/charts/speedscale/speedscale-operator/2.2.567/.helmignore b/charts/speedscale/speedscale-operator/2.2.567/.helmignore new file mode 100644 index 000000000..0e8a0eb36 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/speedscale/speedscale-operator/2.2.567/Chart.yaml b/charts/speedscale/speedscale-operator/2.2.567/Chart.yaml new file mode 100644 index 000000000..65b15888a --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/Chart.yaml @@ -0,0 +1,27 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: Speedscale Operator + catalog.cattle.io/kube-version: '>= 1.17.0-0' + catalog.cattle.io/release-name: speedscale-operator +apiVersion: v1 +appVersion: 2.2.567 +description: Stress test your APIs with real world scenarios. Collect and replay + traffic without scripting. +home: https://speedscale.com +icon: file://assets/icons/speedscale-operator.png +keywords: +- speedscale +- test +- testing +- regression +- reliability +- load +- replay +- network +- traffic +kubeVersion: '>= 1.17.0-0' +maintainers: +- email: support@speedscale.com + name: Speedscale Support +name: speedscale-operator +version: 2.2.567 diff --git a/charts/speedscale/speedscale-operator/2.2.567/LICENSE b/charts/speedscale/speedscale-operator/2.2.567/LICENSE new file mode 100644 index 000000000..b78723d62 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. 
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2021 Speedscale + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/charts/speedscale/speedscale-operator/2.2.567/README.md b/charts/speedscale/speedscale-operator/2.2.567/README.md new file mode 100644 index 000000000..6ca25eed9 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/README.md @@ -0,0 +1,111 @@ +![GitHub Tag](https://img.shields.io/github/v/tag/speedscale/operator-helm) + + +# Speedscale Operator + +The [Speedscale](https://www.speedscale.com) Operator is a [Kubernetes operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) +that watches for deployments to be applied to the cluster and takes action based on annotations. The operator +can inject a proxy to capture traffic into or out of applications, or setup an isolation test environment around +a deployment for testing. The operator itself is a deployment that will be always present on the cluster once +the helm chart is installed. 
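+
+For example (illustrative only; see https://docs.speedscale.com/setup/sidecar/install/
+for the authoritative annotation reference), traffic capture is typically enabled by
+annotating a workload so the operator's webhook injects the sidecar:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-app                                # hypothetical workload name
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: my-app
+  template:
+    metadata:
+      labels:
+        app: my-app
+      annotations:
+        # assumed annotation key; confirm against the sidecar install docs
+        sidecar.speedscale.com/inject: "true"
+    spec:
+      containers:
+      - name: my-app
+        image: nginx:1.27                     # placeholder image
+```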
+
+## Prerequisites
+
+- Kubernetes 1.20+
+- Helm 3+
+- Appropriate [network and firewall configuration](https://docs.speedscale.com/reference/networking) for Speedscale cloud and webhook traffic
+
+## Get Repo Info
+
+```bash
+helm repo add speedscale https://speedscale.github.io/operator-helm/
+helm repo update
+```
+
+_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
+
+## Install Chart
+
+An API key is required. Sign up for a [free Speedscale trial](https://speedscale.com/free-trial/) if you do not have one.
+
+```bash
+helm install speedscale-operator speedscale/speedscale-operator \
+    -n speedscale \
+    --create-namespace \
+    --set apiKey=<your-api-key> \
+    --set clusterName=<your-cluster-name>
+```
+
+_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
+
+### Pre-install job failure
+
+We use a pre-install job to check the provided API key and provision some of the required resources.
+
+If the job fails during installation, you'll see the following error:
+
+```
+Error: INSTALLATION FAILED: failed pre-install: job failed: BackoffLimitExceeded
+```
+
+You can inspect the logs using this command:
+
+```bash
+kubectl -n speedscale logs job/speedscale-operator-pre-install
+```
+
+After fixing the error, uninstall the Helm release, delete the failed job
+and try installing again:
+
+```bash
+helm -n speedscale uninstall speedscale-operator
+kubectl -n speedscale delete job speedscale-operator-pre-install
+```
+
+## Uninstall Chart
+
+```bash
+helm -n speedscale uninstall speedscale-operator
+```
+
+This removes all the Kubernetes components associated with the chart and deletes the release.
+
+_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
+
+CRDs created by this chart are not removed by default and should be manually cleaned up:
+
+```bash
+kubectl delete crd trafficreplays.speedscale.com
+```
+
+## Upgrading Chart
+
+```bash
+helm repo update
+helm -n speedscale upgrade speedscale-operator speedscale/speedscale-operator
+```
+
+Resources capturing traffic will need to be rolled to pick up the latest
+Speedscale sidecar. Use the rollout restart command for each namespace and
+resource type:
+
+```bash
+kubectl -n <namespace> rollout restart deployment
+```
+
+With Helm v3, CRDs created by this chart are not updated by default
+and should be manually updated.
+Consult also the [Helm Documentation on CRDs](https://helm.sh/docs/chart_best_practices/custom_resource_definitions).
+
+_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
+
+### Upgrading an existing Release to a new version
+
+A major chart version change (like v1.2.3 -> v2.0.0) indicates an
+incompatible, breaking change that requires manual action.
+
+
+## Help
+
+Speedscale docs are available at [docs.speedscale.com](https://docs.speedscale.com), or join us
+on the [Speedscale community Slack](https://join.slack.com/t/speedscalecommunity/shared_invite/zt-x5rcrzn4-XHG1QqcHNXIM~4yozRrz8A)!
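+
+## Installing with a values file
+
+The `apiKey` and `clusterName` keys shown in the install command above can also be
+kept in a values file (the file name and placeholder values here are illustrative):
+
+```yaml
+# speedscale-values.yaml
+apiKey: "<your-api-key>"
+clusterName: "<your-cluster-name>"
+```
+
+```bash
+helm install speedscale-operator speedscale/speedscale-operator \
+    -n speedscale \
+    --create-namespace \
+    -f speedscale-values.yaml
+```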
diff --git a/charts/speedscale/speedscale-operator/2.2.567/app-readme.md b/charts/speedscale/speedscale-operator/2.2.567/app-readme.md
new file mode 100644
index 000000000..6ca25eed9
--- /dev/null
+++ b/charts/speedscale/speedscale-operator/2.2.567/app-readme.md
@@ -0,0 +1,111 @@
+![GitHub Tag](https://img.shields.io/github/v/tag/speedscale/operator-helm)
+
+
+# Speedscale Operator
+
+The [Speedscale](https://www.speedscale.com) Operator is a [Kubernetes operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
+that watches for deployments to be applied to the cluster and takes action based on annotations. The operator
+can inject a proxy to capture traffic into or out of applications, or setup an isolation test environment around
+a deployment for testing. The operator itself is a deployment that will be always present on the cluster once
+the helm chart is installed.
+
+## Prerequisites
+
+- Kubernetes 1.20+
+- Helm 3+
+- Appropriate [network and firewall configuration](https://docs.speedscale.com/reference/networking) for Speedscale cloud and webhook traffic
+
+## Get Repo Info
+
+```bash
+helm repo add speedscale https://speedscale.github.io/operator-helm/
+helm repo update
+```
+
+_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
+
+## Install Chart
+
+An API key is required. Sign up for a [free Speedscale trial](https://speedscale.com/free-trial/) if you do not have one.
+
+```bash
+helm install speedscale-operator speedscale/speedscale-operator \
+    -n speedscale \
+    --create-namespace \
+    --set apiKey=<your-api-key> \
+    --set clusterName=<your-cluster-name>
+```
+
+_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
+
+### Pre-install job failure
+
+We use a pre-install job to check the provided API key and provision some of the required resources.
+
+If the job fails during installation, you'll see the following error:
+
+```
+Error: INSTALLATION FAILED: failed pre-install: job failed: BackoffLimitExceeded
+```
+
+You can inspect the logs using this command:
+
+```bash
+kubectl -n speedscale logs job/speedscale-operator-pre-install
+```
+
+After fixing the error, uninstall the Helm release, delete the failed job
+and try installing again:
+
+```bash
+helm -n speedscale uninstall speedscale-operator
+kubectl -n speedscale delete job speedscale-operator-pre-install
+```
+
+## Uninstall Chart
+
+```bash
+helm -n speedscale uninstall speedscale-operator
+```
+
+This removes all the Kubernetes components associated with the chart and deletes the release.
+
+_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
+
+CRDs created by this chart are not removed by default and should be manually cleaned up:
+
+```bash
+kubectl delete crd trafficreplays.speedscale.com
+```
+
+## Upgrading Chart
+
+```bash
+helm repo update
+helm -n speedscale upgrade speedscale-operator speedscale/speedscale-operator
+```
+
+Resources capturing traffic will need to be rolled to pick up the latest
+Speedscale sidecar. Use the rollout restart command for each namespace and
+resource type:
+
+```bash
+kubectl -n <namespace> rollout restart deployment
+```
+
+With Helm v3, CRDs created by this chart are not updated by default
+and should be manually updated.
+Consult also the [Helm Documentation on CRDs](https://helm.sh/docs/chart_best_practices/custom_resource_definitions).
+
+_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
+
+### Upgrading an existing Release to a new version
+
+A major chart version change (like v1.2.3 -> v2.0.0) indicates an
+incompatible, breaking change that requires manual action.
+
+
+## Help
+
+Speedscale docs are available at [docs.speedscale.com](https://docs.speedscale.com), or join us
+on the [Speedscale community Slack](https://join.slack.com/t/speedscalecommunity/shared_invite/zt-x5rcrzn4-XHG1QqcHNXIM~4yozRrz8A)!
diff --git a/charts/speedscale/speedscale-operator/2.2.567/questions.yaml b/charts/speedscale/speedscale-operator/2.2.567/questions.yaml
new file mode 100644
index 000000000..29aee3895
--- /dev/null
+++ b/charts/speedscale/speedscale-operator/2.2.567/questions.yaml
@@ -0,0 +1,9 @@
+questions:
+- variable: apiKey
+  default: "fffffffffffffffffffffffffffffffffffffffffffff"
+  description: "An API key is required to connect to the Speedscale cloud."
+  required: true
+  type: string
+  label: API Key
+  group: Authentication
+
diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/NOTES.txt b/charts/speedscale/speedscale-operator/2.2.567/templates/NOTES.txt
new file mode 100644
index 000000000..cabb59b17
--- /dev/null
+++ b/charts/speedscale/speedscale-operator/2.2.567/templates/NOTES.txt
@@ -0,0 +1,12 @@
+Thank you for installing the Speedscale Operator!
+
+Next you'll need to add the Speedscale Proxy Sidecar to your deployments.
+See https://docs.speedscale.com/setup/sidecar/install/
+
+If upgrading, use the rollout restart command for each namespace and resource
+type to ensure Speedscale sidecars are updated:
+
+    kubectl -n <namespace> rollout restart deployment
+
+Once your deployment is running the sidecar, your service will show up on
+https://app.speedscale.com/.
diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/admission.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/admission.yaml new file mode 100644 index 000000000..301748a61 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/admission.yaml @@ -0,0 +1,209 @@ +{{- $cacrt := "" -}} +{{- $crt := "" -}} +{{- $key := "" -}} +{{- $s := (lookup "v1" "Secret" .Release.Namespace "speedscale-webhook-certs") -}} +{{- if $s -}} +{{- $cacrt = index $s.data "ca.crt" | default (index $s.data "tls.crt") | b64dec -}} +{{- $crt = index $s.data "tls.crt" | b64dec -}} +{{- $key = index $s.data "tls.key" | b64dec -}} +{{ else }} +{{- $altNames := list ( printf "speedscale-operator.%s" .Release.Namespace ) ( printf "speedscale-operator.%s.svc" .Release.Namespace ) -}} +{{- $ca := genCA "speedscale-operator" 3650 -}} +{{- $cert := genSignedCert "speedscale-operator" nil $altNames 3650 $ca -}} +{{- $cacrt = $ca.Cert -}} +{{- $crt = $cert.Cert -}} +{{- $key = $cert.Key -}} +{{- end -}} +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + creationTimestamp: null + name: speedscale-operator + annotations: + argocd.argoproj.io/hook: PreSync + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} +webhooks: +- admissionReviewVersions: + - v1 + clientConfig: + caBundle: {{ $cacrt | b64enc }} + service: + name: speedscale-operator + namespace: {{ .Release.Namespace }} + path: /mutate + failurePolicy: Ignore + name: sidecar.speedscale.com + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: "NotIn" + values: + - kube-system + - kube-node-lease + {{- if .Values.namespaceSelector }} + - key: kubernetes.io/metadata.name + operator: "In" + values: + {{- range .Values.namespaceSelector }} + - {{ . | quote }} + {{- end }} + {{- end }} + reinvocationPolicy: IfNeeded + rules: + - apiGroups: + - apps + - batch + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + - DELETE + resources: + - deployments + - statefulsets + - daemonsets + - jobs + - replicasets + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + - DELETE + resources: + - pods + - apiGroups: + - argoproj.io + apiVersions: + - "*" + operations: + - CREATE + - UPDATE + - DELETE + resources: + - rollouts + sideEffects: None + timeoutSeconds: 10 +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + creationTimestamp: null + name: speedscale-operator-replay + annotations: + argocd.argoproj.io/hook: PreSync + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} +webhooks: +- admissionReviewVersions: + - v1 + clientConfig: + caBundle: {{ $cacrt | b64enc }} + service: + name: speedscale-operator + namespace: {{ .Release.Namespace }} + path: /mutate-speedscale-com-v1-trafficreplay + failurePolicy: Fail + name: replay.speedscale.com + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: "NotIn" + values: + - kube-system + - kube-node-lease + {{- if .Values.namespaceSelector }} + - key: kubernetes.io/metadata.name + operator: "In" + values: + {{- range .Values.namespaceSelector }} + - {{ . 
| quote }} + {{- end }} + {{- end }} + rules: + - apiGroups: + - speedscale.com + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + resources: + - trafficreplays + sideEffects: None + timeoutSeconds: 10 +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: ValidatingWebhookConfiguration +metadata: + creationTimestamp: null + name: speedscale-operator-replay + annotations: + argocd.argoproj.io/hook: PreSync + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} +webhooks: +- admissionReviewVersions: + - v1 + clientConfig: + caBundle: {{ $cacrt | b64enc }} + service: + name: speedscale-operator + namespace: {{ .Release.Namespace }} + path: /validate-speedscale-com-v1-trafficreplay + failurePolicy: Fail + name: replay.speedscale.com + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: "NotIn" + values: + - kube-system + - kube-node-lease + {{- if .Values.namespaceSelector }} + - key: kubernetes.io/metadata.name + operator: "In" + values: + {{- range .Values.namespaceSelector }} + - {{ . | quote }} + {{- end }} + {{- end }} + rules: + - apiGroups: + - speedscale.com + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + - DELETE + resources: + - trafficreplays + sideEffects: None + timeoutSeconds: 10 +--- +apiVersion: v1 +kind: Secret +metadata: + annotations: + helm.sh/hook: pre-install + helm.sh/hook-delete-policy: before-hook-creation + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} + creationTimestamp: null + name: speedscale-webhook-certs + namespace: {{ .Release.Namespace }} +type: kubernetes.io/tls +data: + ca.crt: {{ $cacrt | b64enc }} + tls.crt: {{ $crt | b64enc }} + tls.key: {{ $key | b64enc }} diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/configmap.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/configmap.yaml new file mode 100644 index 000000000..04dfda91a --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/configmap.yaml @@ -0,0 +1,43 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: speedscale-operator + namespace: {{ .Release.Namespace }} + annotations: + argocd.argoproj.io/hook: PreSync + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} +data: + CLUSTER_NAME: {{ .Values.clusterName }} + IMAGE_PULL_POLICY: {{ .Values.image.pullPolicy }} + IMAGE_PULL_SECRETS: "" + IMAGE_REGISTRY: {{ .Values.image.registry }} + IMAGE_TAG: {{ .Values.image.tag }} + INSTANCE_ID: '{{- $cm := (lookup "v1" "ConfigMap" .Release.Namespace "speedscale-operator") -}}{{ if $cm }}{{ $cm.data.INSTANCE_ID }}{{ else }}{{ ( printf "%s-%s" .Values.clusterName uuidv4 ) }}{{ end }}' + LOG_LEVEL: {{ .Values.logLevel }} + SPEEDSCALE_DLP_CONFIG: {{ .Values.dlp.config }} + SPEEDSCALE_FILTER_RULE: {{ .Values.filterRule }} + TELEMETRY_INTERVAL: 1s + WITH_DLP: {{ .Values.dlp.enabled | quote }} + WITH_INSPECTOR: {{ .Values.dashboardAccess | quote }} + API_KEY_SECRET_NAME: {{ .Values.apiKeySecret | quote }} + DEPLOY_DEMO: {{ .Values.deployDemo | quote }} + GLOBAL_ANNOTATIONS: {{ .Values.globalAnnotations | toJson | quote }} + GLOBAL_LABELS: {{ .Values.globalLabels | toJson | quote }} + {{- if .Values.http_proxy }} + HTTP_PROXY: {{ .Values.http_proxy }} + {{- end }} + {{- if .Values.https_proxy }} + HTTPS_PROXY: {{ .Values.https_proxy }} + {{- end }} + {{- if .Values.no_proxy }} + NO_PROXY: {{ .Values.no_proxy }} + {{- end }} + PRIVILEGED_SIDECARS: {{ 
.Values.privilegedSidecars | quote }} + DISABLE_SMARTDNS: {{ .Values.disableSidecarSmartReverseDNS | quote }} + SIDECAR_CONFIG: {{ .Values.sidecar | toJson | quote }} + FORWARDER_CONFIG: {{ .Values.forwarder | toJson | quote }} + TEST_PREP_TIMEOUT: {{ .Values.operator.test_prep_timeout }} + CONTROL_PLANE_TIMEOUT: {{ .Values.operator.control_plane_timeout }} diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/crds/trafficreplays.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/crds/trafficreplays.yaml new file mode 100644 index 000000000..aea331547 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/crds/trafficreplays.yaml @@ -0,0 +1,525 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.15.0 + creationTimestamp: null + name: trafficreplays.speedscale.com +spec: + group: speedscale.com + names: + kind: TrafficReplay + listKind: TrafficReplayList + plural: trafficreplays + shortNames: + - replay + singular: trafficreplay + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.active + name: Active + type: boolean + - jsonPath: .spec.mode + name: Mode + type: string + - jsonPath: .status.conditions[-1:].message + name: Status + type: string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1 + schema: + openAPIV3Schema: + description: TrafficReplay is the Schema for the trafficreplays API + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: TrafficReplaySpec defines the desired state of TrafficReplay + properties: + buildTag: + description: |- + BuildTag links a unique tag, build hash, etc. to the generated + traffic replay report. That way you can connect the report results to the + version of the code that was tested. + type: string + cleanup: + description: |- + Cleanup is the name of cleanup mode used for this TrafficReplay. Set to + "none" to leave resources in the state they were during the replay. The + default mode "inventory" will revert the environment to the state it was + before the replay. + enum: + - inventory + - all + - none + type: string + collectLogs: + description: |- + CollectLogs enables or disables log collection from target + workload. Defaults to true. + DEPRECATED: use TestReport.ActualConfig.Cluster.CollectLogs + type: boolean + configChecksum: + description: |- + ConfigChecksum, managed my the operator, is the SHA1 checksum of the + configuration. + type: string + customURL: + description: |- + CustomURL specifies a custom URL to send *ALL* traffic to. Use + Workload.CustomURI to send traffic to a specific URL for only that + workload. + type: string + generatorLowData: + description: |- + GeneratorLowData forces the generator into a high + efficiency/low data output mode. 
This is ideal for high volume + performance tests. Defaults to false. + DEPRECATED + type: boolean + mode: + description: Mode is the name of replay mode used for this TrafficReplay. + enum: + - full-replay + - responder-only + - generator-only + type: string + needsReport: + description: Indicates whether a responder-only replay needs a report. + type: boolean + proxyMode: + description: |- + ProxyMode defines proxy operational mode used with injected sidecar. + DEPRECATED + type: string + responderLowData: + description: |- + ResponderLowData forces the responder into a high + efficiency/low data output mode. This is ideal for high volume + performance tests. Defaults to false. + DEPRECATED + type: boolean + secretRefs: + description: |- + SecretRefs hold the references to the secrets which contain + various secrets like (e.g. short-lived JWTs to be used by the generator + for authorization with HTTP calls). + items: + description: |- + LocalObjectReference contains enough information to locate the referenced + Kubernetes resource object. + properties: + name: + description: Name of the referent. + type: string + required: + - name + type: object + type: array + sidecar: + description: |- + Sidecar defines sidecar specific configuration. + DEPRECATED: use Workloads + properties: + inject: + description: 'DEPRECATED: do not use' + type: boolean + patch: + description: Patch is .yaml file patch for the Workload + format: byte + type: string + tls: + properties: + in: + description: In provides configuration for sidecar inbound + TLS. + properties: + private: + description: Private is the filename of the TLS inbound + private key. + type: string + public: + description: Public is the filename of the TLS inbound + public key. + type: string + secret: + description: Secret is a secret with the TLS keys to use + for inbound traffic. + type: string + type: object + mutual: + description: Mutual provides configuration for sidecar mutual + TLS. + properties: + private: + description: Private is the filename of the mutual TLS + private key. + type: string + public: + description: Public is the filename of the mutual TLS + public key. + type: string + secret: + description: Secret is a secret with the mutual TLS keys. + type: string + type: object + out: + description: |- + Out enables or disables TLS out on the + sidecar during replay. + type: boolean + type: object + type: object + snapshotID: + description: |- + SnapshotID is the id of the traffic snapshot for this + TrafficReplay. + type: string + testConfigID: + description: |- + TestConfigID is the id of the replay configuration to be used + by the generator and responder for the TrafficReplay. + type: string + timeout: + description: |- + Timeout is the time to wait for replay test to finish. Defaults + to value of the `TIMEOUT` setting of the operator. + type: string + ttlAfterReady: + description: |- + TTLAfterReady provides a TTL (time to live) mechanism to limit + the lifetime of TrafficReplay object that have finished the execution and + reached its final state (either complete or failed). + type: string + workloadRef: + description: |- + WorkloadRef is the reference to the target workload (SUT) for + TrafficReplay. The operations will be performed in the namespace of the + target object. + DEPRECATED: use Workloads + properties: + apiVersion: + description: API version of the referenced object. + type: string + kind: + description: Kind of the referenced object. Defaults to "Deployment". 
+ type: string + name: + description: |- + Name of the referenced object. Required when defining for a test unless a + custom URI is provided. Always required when defining mocks. + type: string + namespace: + description: Namespace of the referenced object. Defaults to the + TrafficReplay namespace. + type: string + required: + - name + type: object + workloads: + description: |- + Workloads define target workloads (SUT) for a TrafficReplay. Many + workloads may be provided, or none. Workloads may be modified and + restarted during replay to configure communication with a responder. + items: + description: |- + Workload represents a Kubernetes workload to be targeted during replay and + associated settings. + properties: + customURI: + description: |- + CustomURI will be target of the traffic instead of directly targeting + workload. This is required if a Ref is not specified. + type: string + inTrafficKey: + description: 'DEPRECATED: use Tests' + type: string + inTrafficKeys: + description: 'DEPRECATED: use Tests' + items: + type: string + type: array + mocks: + description: |- + Mocks are strings used to identify slices of outbound snapshot traffic to + mock for this workload and maps directly to a snapshot's `OutTraffic` + field. Snapshot egress traffic can be split across multiple slices where + each slice contains part of the traffic. A workload may specify multiple + keys and multiple workloads may specify the same key. + + + Only the traffic slices defined here will be mocked. A workload with no + keys defined will not mock any traffic. Pass '*' to mock all traffic. + + + Mock strings may only match part of the snapshot's `OutTraffic` key if the + string matches exactly one key. For example, the test string + `foo.example.com` would match the `OutTraffic` key of + my-service:foo.example.com:8080, as long as no other keys would match + `foo.example.com`. Multiple mocks must be specified for multiple keys + unless using '*'. + items: + type: string + type: array + outTrafficKeys: + description: 'DEPRECATED: use Mocks' + items: + type: string + type: array + ref: + description: |- + Ref is a reference to a cluster workload, like a deployment or a + statefulset. This is required unless a CustomURI is specified. + properties: + apiVersion: + description: API version of the referenced object. + type: string + kind: + description: Kind of the referenced object. Defaults to + "Deployment". + type: string + name: + description: |- + Name of the referenced object. Required when defining for a test unless a + custom URI is provided. Always required when defining mocks. + type: string + namespace: + description: Namespace of the referenced object. Defaults + to the TrafficReplay namespace. + type: string + required: + - name + type: object + routing: + description: Routing configures how workloads route egress traffic + to responders + enum: + - hostalias + - nat + type: string + sidecar: + description: |- + TODO: this is not implemented, come back and replace deprecated Sidecar with workload specific settings + Sidecar defines sidecar specific configuration. + properties: + inject: + description: 'DEPRECATED: do not use' + type: boolean + patch: + description: Patch is .yaml file patch for the Workload + format: byte + type: string + tls: + properties: + in: + description: In provides configuration for sidecar inbound + TLS. + properties: + private: + description: Private is the filename of the TLS + inbound private key. 
+ type: string + public: + description: Public is the filename of the TLS inbound + public key. + type: string + secret: + description: Secret is a secret with the TLS keys + to use for inbound traffic. + type: string + type: object + mutual: + description: Mutual provides configuration for sidecar + mutual TLS. + properties: + private: + description: Private is the filename of the mutual + TLS private key. + type: string + public: + description: Public is the filename of the mutual + TLS public key. + type: string + secret: + description: Secret is a secret with the mutual + TLS keys. + type: string + type: object + out: + description: |- + Out enables or disables TLS out on the + sidecar during replay. + type: boolean + type: object + type: object + tests: + description: |- + Tests are strings used to identify slices of inbound snapshot traffic this + workload is targeting and maps directly to a snapshot's `InTraffic` field. + Snapshot ingress traffic can be split across multiple slices where each + slice contains part of the traffic. A key must only be specified once + across all workloads, but a workload may specify multiple keys. Pass '*' + to match all keys. + + + Test strings may only match part of the snapshot's `InTraffic` key if the + string matches exactly one key. For example, the test string + `foo.example.com` would match the `InTraffic` key of + my-service:foo.example.com:8080, as long as no other keys would match + `foo.example.com` + + + This field is optional in the spec to provide support for single-workload + and legacy replays, but must be specified for multi-workload replays in + order to provide deterministic replay configuration. + items: + type: string + type: array + type: object + type: array + required: + - snapshotID + - testConfigID + type: object + status: + default: + observedGeneration: -1 + description: TrafficReplayStatus defines the observed state of TrafficReplay + properties: + active: + description: Active indicates whether this traffic replay is currently + underway or not. + type: boolean + conditions: + items: + description: "Condition contains details for one aspect of the current + state of this API Resource.\n---\nThis struct is intended for + direct use as an array at the field path .status.conditions. For + example,\n\n\n\ttype FooStatus struct{\n\t // Represents the + observations of a foo's current state.\n\t // Known .status.conditions.type + are: \"Available\", \"Progressing\", and \"Degraded\"\n\t // + +patchMergeKey=type\n\t // +patchStrategy=merge\n\t // +listType=map\n\t + \ // +listMapKey=type\n\t Conditions []metav1.Condition `json:\"conditions,omitempty\" + patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`\n\n\n\t + \ // other fields\n\t}" + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. 
+ For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: |- + type of condition in CamelCase or in foo.example.com/CamelCase. + --- + Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be + useful (see .node.status.conditions), the ability to deconflict is important. + The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + finishedTime: + description: Information when the traffic replay has finished. + format: date-time + type: string + initializedTime: + description: Information when the test environment was successfully + prepared. + format: date-time + type: string + lastHeartbeatTime: + description: 'DEPRECATED: will not be set' + format: date-time + type: string + observedGeneration: + description: ObservedGeneration is the last observed generation. + format: int64 + type: integer + reconcileFailures: + description: |- + ReconcileFailures is the number of times the traffic replay controller + experienced an error during the reconciliation process. The traffic + replay will be deleted if too many errors occur. + format: int64 + type: integer + reportID: + description: The id of the traffic replay report created. + type: string + reportURL: + description: The url to the traffic replay report. + type: string + startedTime: + description: Information when the traffic replay has started. 
+ format: date-time + type: string + type: object + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/deployments.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/deployments.yaml new file mode 100644 index 000000000..e5f329257 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/deployments.yaml @@ -0,0 +1,132 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + operator.speedscale.com/ignore: "true" + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} + labels: + app: speedscale-operator + controlplane.speedscale.com/component: operator + {{- if .Values.globalLabels }} +{{ toYaml .Values.globalLabels | indent 4}} + {{- end }} + name: speedscale-operator + namespace: {{ .Release.Namespace }} +spec: + replicas: 1 + selector: + matchLabels: + app: speedscale-operator + controlplane.speedscale.com/component: operator + strategy: + type: Recreate + template: + metadata: + annotations: + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 8}} + {{- end }} + labels: + app: speedscale-operator + controlplane.speedscale.com/component: operator + {{- if .Values.globalLabels }} +{{ toYaml .Values.globalLabels | indent 8}} + {{- end }} + spec: + containers: + - command: + - /operator + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + envFrom: + - configMapRef: + name: speedscale-operator + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core + # When a key exists in multiple sources, the value associated with the last source will take precedence. + # Values defined by an Env with a duplicate key will take precedence. 
+ - configMapRef: + name: speedscale-operator-override + optional: true + - secretRef: + name: '{{ ne .Values.apiKeySecret "" | ternary .Values.apiKeySecret "speedscale-apikey" }}' + optional: false + image: '{{ .Values.image.registry }}/operator:{{ .Values.image.tag }}' + imagePullPolicy: {{ .Values.image.pullPolicy }} + livenessProbe: + failureThreshold: 5 + httpGet: + path: /healthz + port: health-check + scheme: HTTP + initialDelaySeconds: 30 + periodSeconds: 30 + successThreshold: 1 + timeoutSeconds: 5 + name: operator + ports: + - containerPort: 443 + name: webhook-server + - containerPort: 8081 + name: health-check + readinessProbe: + failureThreshold: 10 + httpGet: + path: /readyz + port: health-check + scheme: HTTP + initialDelaySeconds: 5 + periodSeconds: 5 + successThreshold: 1 + timeoutSeconds: 5 + resources: {{- toYaml .Values.operator.resources | nindent 10 }} + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: false + # Run as root to bind 443 https://github.com/kubernetes/kubernetes/issues/56374 + runAsUser: 0 + volumeMounts: + - mountPath: /tmp + name: tmp + - mountPath: /tmp/k8s-webhook-server/serving-certs + name: webhook-certs + readOnly: true + - mountPath: /etc/ssl/speedscale + name: speedscale-tls-out + readOnly: true + hostNetwork: {{ .Values.hostNetwork }} + securityContext: + runAsNonRoot: true + serviceAccountName: speedscale-operator + terminationGracePeriodSeconds: 10 + volumes: + - emptyDir: {} + name: tmp + - name: webhook-certs + secret: + secretName: speedscale-webhook-certs + - name: speedscale-tls-out + secret: + secretName: speedscale-certs + {{- if .Values.affinity }} + affinity: {{ toYaml .Values.affinity | nindent 8 }} + {{- end }} + {{- if .Values.tolerations }} + tolerations: {{ toYaml .Values.tolerations | nindent 8 }} + {{- end }} + {{- if .Values.nodeSelector }} + nodeSelector: {{ toYaml .Values.nodeSelector | nindent 8 }} + {{- end }} diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/hooks.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/hooks.yaml new file mode 100644 index 000000000..3e8231f19 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/hooks.yaml @@ -0,0 +1,73 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: pre-install + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded + helm.sh/hook-weight: "4" + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} + creationTimestamp: null + name: speedscale-operator-pre-install + namespace: {{ .Release.Namespace }} + labels: + {{- if .Values.globalLabels }} +{{ toYaml .Values.globalLabels | indent 4}} + {{- end }} +spec: + backoffLimit: 0 + ttlSecondsAfterFinished: 30 + template: + metadata: + annotations: + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 8}} + {{- end }} + creationTimestamp: null + labels: + {{- if .Values.globalLabels }} +{{ toYaml .Values.globalLabels | indent 8}} + {{- end }} + spec: + containers: + - args: + - |- + # ensure valid settings before the chart reports a successfull install + {{- if .Values.http_proxy }} + HTTP_PROXY={{ .Values.http_proxy | quote }} \ + {{- end }} + {{- if .Values.https_proxy }} + HTTPS_PROXY={{ .Values.https_proxy | quote }} \ + {{- end }} + {{- if .Values.no_proxy }} + NO_PROXY={{ .Values.no_proxy | quote }} \ + {{- end }} + speedctl init --overwrite --no-rcfile-update \ + --api-key 
$SPEEDSCALE_API_KEY \ + --app-url $SPEEDSCALE_APP_URL + + # in case we're in istio + curl -X POST http://127.0.0.1:15000/quitquitquit || true + command: + - sh + - -c + envFrom: + - secretRef: + name: '{{ ne .Values.apiKeySecret "" | ternary .Values.apiKeySecret "speedscale-apikey" }}' + optional: false + image: '{{ .Values.image.registry }}/speedscale-cli:{{ .Values.image.tag }}' + imagePullPolicy: {{ .Values.image.pullPolicy }} + name: speedscale-cli + resources: {} + restartPolicy: Never + {{- if .Values.affinity }} + affinity: {{ toYaml .Values.affinity | nindent 8 }} + {{- end }} + {{- if .Values.tolerations }} + tolerations: {{ toYaml .Values.tolerations | nindent 8 }} + {{- end }} + {{- if .Values.nodeSelector }} + nodeSelector: {{ toYaml .Values.nodeSelector | nindent 8 }} + {{- end }} diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/rbac.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/rbac.yaml new file mode 100644 index 000000000..e1ea42d99 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/rbac.yaml @@ -0,0 +1,244 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + creationTimestamp: null + name: speedscale-operator + {{- if .Values.globalAnnotations }} + annotations: {{ toYaml .Values.globalAnnotations | nindent 4 }} + {{- end }} +rules: +- apiGroups: + - apps + resources: + - deployments + - statefulsets + - daemonsets + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - apps + resources: + - replicasets + verbs: + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - get + - list +- apiGroups: + - admissionregistration.k8s.io + resources: + - mutatingwebhookconfigurations + - validatingwebhookconfigurations + verbs: + - get + - list +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + verbs: + - get + - list +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - configmaps + - secrets + - pods + - services + - serviceaccounts + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods/log + verbs: + - get + - list +- apiGroups: + - "" + resources: + - events + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get + - list + - watch +- apiGroups: + - metrics.k8s.io + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - rolebindings + - roles + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.istio.io + resources: + - envoyfilters + - sidecars + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - security.istio.io + resources: + - peerauthentications + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - 
speedscale.com + resources: + - trafficreplays + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - speedscale.com + resources: + - trafficreplays/status + verbs: + - get + - update + - patch +- apiGroups: + - argoproj.io + resources: + - rollouts + verbs: + - get + - list + - patch + - update + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: speedscale-operator + {{- if .Values.globalAnnotations }} + annotations: {{ toYaml .Values.globalAnnotations | nindent 4 }} + {{- end }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: speedscale-operator +subjects: +- kind: ServiceAccount + name: speedscale-operator + namespace: {{ .Release.Namespace }} +--- +apiVersion: v1 +automountServiceAccountToken: true +kind: ServiceAccount +metadata: + creationTimestamp: null + labels: + app: speedscale-operator + controlplane.speedscale.com/component: operator + name: speedscale-operator + namespace: {{ .Release.Namespace }} + {{- if .Values.globalAnnotations }} + annotations: {{ toYaml .Values.globalAnnotations | nindent 4 }} + {{- end }} diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/secrets.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/secrets.yaml new file mode 100644 index 000000000..1fb6999e4 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/secrets.yaml @@ -0,0 +1,18 @@ +--- +{{ if .Values.apiKey }} +apiVersion: v1 +kind: Secret +metadata: + name: speedscale-apikey + namespace: {{ .Release.Namespace }} + annotations: + helm.sh/hook: pre-install + helm.sh/hook-weight: "3" + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} +type: Opaque +data: + SPEEDSCALE_API_KEY: {{ .Values.apiKey | b64enc }} + SPEEDSCALE_APP_URL: {{ .Values.appUrl | b64enc }} +{{ end }} diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/services.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/services.yaml new file mode 100644 index 000000000..f9da2c25c --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/services.yaml @@ -0,0 +1,22 @@ +--- +apiVersion: v1 +kind: Service +metadata: + creationTimestamp: null + labels: + app: speedscale-operator + controlplane.speedscale.com/component: operator + name: speedscale-operator + namespace: {{ .Release.Namespace }} + {{- if .Values.globalAnnotations }} + annotations: {{ toYaml .Values.globalAnnotations | nindent 4 }} + {{- end }} +spec: + ports: + - port: 443 + protocol: TCP + selector: + app: speedscale-operator + controlplane.speedscale.com/component: operator +status: + loadBalancer: {} diff --git a/charts/speedscale/speedscale-operator/2.2.567/templates/tls.yaml b/charts/speedscale/speedscale-operator/2.2.567/templates/tls.yaml new file mode 100644 index 000000000..4a2456288 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/templates/tls.yaml @@ -0,0 +1,183 @@ +{{- $crt := "" -}} +{{- $key := "" -}} +{{- $s := (lookup "v1" "Secret" .Release.Namespace "speedscale-certs") -}} +{{- if $s -}} +{{- $crt = index $s.data "tls.crt" | b64dec -}} +{{- $key = index $s.data "tls.key" | b64dec -}} +{{ else }} +{{- $cert := genCA "Speedscale" 3650 -}} +{{- $crt = $cert.Cert -}} +{{- $key = $cert.Key -}} +{{- end -}} +--- +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: pre-install + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded + 
helm.sh/hook-weight: "5" + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} + creationTimestamp: null + name: speedscale-operator-create-jks + namespace: {{ .Release.Namespace }} + labels: + {{- if .Values.globalLabels }} +{{ toYaml .Values.globalLabels | indent 4}} + {{- end }} +spec: + backoffLimit: 0 + ttlSecondsAfterFinished: 30 + template: + metadata: + annotations: + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 8}} + {{- end }} + creationTimestamp: null + labels: + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 8}} + {{- end }} + spec: + containers: + - args: + - |- + keytool -keystore /usr/lib/jvm/jre/lib/security/cacerts -importcert -noprompt -trustcacerts -storepass changeit -alias speedscale -file /etc/ssl/speedscale/tls.crt + kubectl -n ${POD_NAMESPACE} delete secret speedscale-jks || true + kubectl -n ${POD_NAMESPACE} create secret generic speedscale-jks --from-file=cacerts.jks=/usr/lib/jvm/jre/lib/security/cacerts + + # in case we're in istio + curl -X POST http://127.0.0.1:15000/quitquitquit || true + command: + - sh + - -c + volumeMounts: + - mountPath: /etc/ssl/speedscale + name: speedscale-tls-out + readOnly: true + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + envFrom: + - secretRef: + name: '{{ ne .Values.apiKeySecret "" | ternary .Values.apiKeySecret "speedscale-apikey" }}' + optional: false + image: '{{ .Values.image.registry }}/amazoncorretto' + imagePullPolicy: {{ .Values.image.pullPolicy }} + name: create-jks + resources: {} + restartPolicy: Never + serviceAccountName: speedscale-operator-provisioning + volumes: + - name: speedscale-tls-out + secret: + secretName: speedscale-certs + {{- if .Values.affinity }} + affinity: {{ toYaml .Values.affinity | nindent 8 }} + {{- end }} + {{- if .Values.tolerations }} + tolerations: {{ toYaml .Values.tolerations | nindent 8 }} + {{- end }} + {{- if .Values.nodeSelector }} + nodeSelector: {{ toYaml .Values.nodeSelector | nindent 8 }} + {{- end }} +--- +apiVersion: v1 +automountServiceAccountToken: true +kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: pre-install + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded + helm.sh/hook-weight: "1" + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} + creationTimestamp: null + labels: + app: speedscale-operator + controlplane.speedscale.com/component: operator + name: speedscale-operator-provisioning + namespace: {{ .Release.Namespace }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + helm.sh/hook: pre-install + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded + helm.sh/hook-weight: "2" + creationTimestamp: null + name: speedscale-operator-provisioning +rules: +- apiGroups: + - "" + resources: + - secrets + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +- apiGroups: + - admissionregistration.k8s.io + resources: + - mutatingwebhookconfigurations + - validatingwebhookconfigurations + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - update + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: pre-install + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded + helm.sh/hook-weight: "3" + {{- if 
.Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} + creationTimestamp: null + name: speedscale-operator-provisioning +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: speedscale-operator-provisioning +subjects: +- kind: ServiceAccount + name: speedscale-operator-provisioning + namespace: {{ .Release.Namespace }} +--- +apiVersion: v1 +kind: Secret +metadata: + annotations: + helm.sh/hook: pre-install + helm.sh/hook-delete-policy: before-hook-creation + {{- if .Values.globalAnnotations }} +{{ toYaml .Values.globalAnnotations | indent 4}} + {{- end }} + creationTimestamp: null + name: speedscale-certs + namespace: {{ .Release.Namespace }} +type: kubernetes.io/tls +data: + tls.crt: {{ $crt | b64enc }} + tls.key: {{ $key | b64enc }} diff --git a/charts/speedscale/speedscale-operator/2.2.567/values.yaml b/charts/speedscale/speedscale-operator/2.2.567/values.yaml new file mode 100644 index 000000000..92bde05a6 --- /dev/null +++ b/charts/speedscale/speedscale-operator/2.2.567/values.yaml @@ -0,0 +1,138 @@ +# An API key is required to connect to the Speedscale cloud. +# If you need a key email support@speedscale.com. +apiKey: "" + +# A secret name can be referenced instead of the api key itself. +# The secret must be of the format: +# +# type: Opaque +# data: +# SPEEDSCALE_API_KEY: +# SPEEDSCALE_APP_URL: +apiKeySecret: "" + +# Speedscale domain to use. +appUrl: "app.speedscale.com" + +# The name of your cluster. +clusterName: "my-cluster" + +# Speedscale components image settings. +image: + registry: gcr.io/speedscale + tag: v2.2.567 + pullPolicy: Always + +# Log level for Speedscale components. +logLevel: "info" + +# Namespaces to be watched by Speedscale Operator as a list of names. +namespaceSelector: [] + +# Instructs operator to deploy resources necessary to interact with your cluster from the Speedscale dashboard. +dashboardAccess: true + +# Filter Rule to apply to the Speedscale Forwarder +filterRule: "standard" + +# Data Loss Prevention settings. +dlp: + # Instructs operator to enable data loss prevention features + enabled: false + + # Configuration for data loss prevention + config: "standard" + +# If the operator pod/webhooks need to be on the host network. +# This is only needed if the control plane cannot connect directly to a pod +# for eg. if Calico is used as EKS's default networking +# https://docs.tigera.io/calico/3.25/getting-started/kubernetes/managed-public-cloud/eks#install-eks-with-calico-networking +hostNetwork: false + +# A set of annotations to be applied to all Speedscale related deployments, +# services, jobs, pods, etc. +# +# Example: +# annotation.first: value +# annotation.second: value +globalAnnotations: {} + +# A set of labels to be applied to all Speedscale related deployments, +# services, jobs, pods, etc. +# +# Example: +# label1: value +# label2: value +globalLabels: {} + +# A full affinity object as detailed: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity +affinity: {} + +# The list of tolerations as detailed: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ +tolerations: [] + +# A nodeselector object as detailed: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/ +nodeSelector: {} + +# Deploy a demo app at startup. Set this to an empty string to not deploy. +# Valid values: ["java", ""] +deployDemo: "java" + +# Proxy connection settings if required by your network. 
These translate to standard proxy environment +# variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY +http_proxy: "" +https_proxy: "" +no_proxy: "" + +# control if sidecar init containers should run with privileged set +privilegedSidecars: false + +# control if the sidecar should enable/disable use of the smart dns lookup feature (requires NET_ADMIN) +disableSidecarSmartReverseDNS: false + +# Operator settings. These limits are recommended unless you have a cluster +# with a very large number of workloads (for eg. 10k+ deployments, replicasets, etc.). +operator: + resources: + limits: + cpu: 500m + memory: 512Mi + requests: + cpu: 100m + memory: 128Mi + # how long to wait for the SUT to become ready + test_prep_timeout: 10m + # timeout for deploying & upgrading control plane components + control_plane_timeout: 5m + + +# Default sidecar settings. Example: +# sidecar: +# resources: +# limits: +# cpu: 500m +# memory: 512Mi +# ephemeral-storage: 100Mi +# requests: +# cpu: 10m +# memory: 32Mi +# ephemeral-storage: 100Mi +# ignore_src_hosts: example.com, example.org +# ignore_src_ips: 8.8.8.8, 1.1.1.1 +# ignore_dst_hosts: example.com, example.org +# ignore_dst_ips: 8.8.8.8, 1.1.1.1 +# insert_init_first: false +# tls_out: false +# reinitialize_iptables: false +sidecar: {} + +# Forwarder settings +# forwarder: +# resources: +# limits: +# cpu: 500m +# memory: 500M +# requests: +# cpu: 300m +# memory: 250M +forwarder: {} diff --git a/index.yaml b/index.yaml index c8d68fa0a..1337a0f4d 100644 --- a/index.yaml +++ b/index.yaml @@ -241,6 +241,40 @@ entries: - assets/amd/amd-gpu-0.9.0.tgz version: 0.9.0 artifactory-ha: + - annotations: + artifactoryServiceVersion: 7.90.21 + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: JFrog Artifactory HA + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-ha + apiVersion: v2 + appVersion: 7.90.15 + created: "2024-10-22T00:36:02.2911763Z" + dependencies: + - condition: postgresql.enabled + name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 + description: Universal Repository Manager supporting all major packaging formats, + build tools and CI servers. 
+ digest: 93c715dd5678924eb2dbcc23bbafefbfb78dd0ac5899843bf1a5ee6892f8e3e3 + home: https://www.jfrog.com/artifactory/ + icon: file://assets/icons/artifactory-ha.png + keywords: + - artifactory + - jfrog + - devops + kubeVersion: '>= 1.19.0-0' + maintainers: + - email: installers@jfrog.com + name: Chart Maintainers at JFrog + name: artifactory-ha + sources: + - https://github.com/jfrog/charts + type: application + urls: + - assets/jfrog/artifactory-ha-107.90.15.tgz + version: 107.90.15 - annotations: artifactoryServiceVersion: 7.90.20 catalog.cattle.io/certified: partner @@ -1676,6 +1710,40 @@ entries: - assets/jfrog/artifactory-ha-107.55.14.tgz version: 107.55.14 artifactory-jcr: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: JFrog Container Registry + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-jcr + apiVersion: v2 + appVersion: 7.90.15 + created: "2024-10-22T00:36:02.687766343Z" + dependencies: + - name: artifactory + repository: file://charts/artifactory + version: 107.90.15 + description: JFrog Container Registry + digest: 8f452ac0e6fd38cd347be4151614c4aea59ca39af2457a735aa1d066f8a53802 + home: https://jfrog.com/container-registry/ + icon: file://assets/icons/artifactory-jcr.png + keywords: + - artifactory + - jfrog + - container + - registry + - devops + - jfrog-container-registry + kubeVersion: '>= 1.19.0-0' + maintainers: + - email: helm@jfrog.com + name: Chart Maintainers at JFrog + name: artifactory-jcr + sources: + - https://github.com/jfrog/charts + type: application + urls: + - assets/jfrog/artifactory-jcr-107.90.15.tgz + version: 107.90.15 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: JFrog Container Registry @@ -5715,6 +5783,28 @@ entries: - assets/cloudcasa/cloudcasa-3.4.1.tgz version: 3.4.1 cockroachdb: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: CockroachDB + catalog.cattle.io/kube-version: '>=1.8-0' + catalog.cattle.io/release-name: cockroachdb + apiVersion: v1 + appVersion: 24.2.4 + created: "2024-10-22T00:36:00.865919234Z" + description: CockroachDB is a scalable, survivable, strongly-consistent SQL database. 
+ digest: c14feb3a5dd9962e346d072ea07ed1502df3f94dfce0c348b4d7c9c9ec50b8ea + home: https://www.cockroachlabs.com + icon: file://assets/icons/cockroachdb.png + kubeVersion: '>=1.8-0' + maintainers: + - email: helm-charts@cockroachlabs.com + name: cockroachlabs + name: cockroachdb + sources: + - https://github.com/cockroachdb/cockroach + urls: + - assets/cockroach-labs/cockroachdb-14.0.5.tgz + version: 14.0.5 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: CockroachDB @@ -23780,6 +23870,36 @@ entries: - assets/avesha/kubeslice-worker-1.1.1.tgz version: 1.1.1 kuma: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: Kuma + catalog.cattle.io/namespace: kuma-system + catalog.cattle.io/release-name: kuma + apiVersion: v2 + appVersion: 2.9.0 + created: "2024-10-22T00:36:03.83706531Z" + description: A Helm chart for the Kuma Control Plane + digest: b098db3e77f384c3f4020a035ee904203de5f0e9a8a3a6c42275cf60b53f28af + home: https://github.com/kumahq/kuma + icon: file://assets/icons/kuma.svg + keywords: + - service mesh + - control plane + maintainers: + - email: jakub.dyszkiewicz@konghq.com + name: Jakub Dyszkiewicz + url: https://github.com/jakubdyszkiewicz + - email: charly.molter@konghq.com + name: Charly Molter + url: https://github.com/lahabana + - email: michael.beaumont@konghq.com + name: Mike Beaumont + url: https://github.com/michaelbeaumont + name: kuma + type: application + urls: + - assets/kuma/kuma-2.9.0.tgz + version: 2.9.0 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: Kuma @@ -28358,6 +28478,32 @@ entries: - assets/minio/minio-operator-5.0.6.tgz version: 5.0.6 nats: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: NATS Server + catalog.cattle.io/kube-version: '>=1.16-0' + catalog.cattle.io/release-name: nats + apiVersion: v2 + appVersion: 2.10.22 + created: "2024-10-22T00:36:04.147929557Z" + description: A Helm chart for the NATS.io High Speed Cloud Native Distributed + Communications Technology. + digest: cf8c2fdf8cef4f7d0c2880a0caeaeefdb4bcfccb13e7c038b8693e8d311dc692 + home: http://github.com/nats-io/k8s + icon: file://assets/icons/nats.png + keywords: + - nats + - messaging + - cncf + kubeVersion: '>=1.16-0' + maintainers: + - email: info@nats.io + name: The NATS Authors + url: https://github.com/nats-io + name: nats + urls: + - assets/nats/nats-1.2.6.tgz + version: 1.2.6 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: NATS Server @@ -38455,6 +38601,37 @@ entries: - assets/redpanda/redpanda-4.0.33.tgz version: 4.0.33 speedscale-operator: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: Speedscale Operator + catalog.cattle.io/kube-version: '>= 1.17.0-0' + catalog.cattle.io/release-name: speedscale-operator + apiVersion: v1 + appVersion: 2.2.567 + created: "2024-10-22T00:36:05.619858319Z" + description: Stress test your APIs with real world scenarios. Collect and replay + traffic without scripting. 
+ digest: 3a76a202d7896c1fd3652ad87d4e9ba62059cf641970fd50ffe4ec57228d81b4 + home: https://speedscale.com + icon: file://assets/icons/speedscale-operator.png + keywords: + - speedscale + - test + - testing + - regression + - reliability + - load + - replay + - network + - traffic + kubeVersion: '>= 1.17.0-0' + maintainers: + - email: support@speedscale.com + name: Speedscale Support + name: speedscale-operator + urls: + - assets/speedscale/speedscale-operator-2.2.567.tgz + version: 2.2.567 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: Speedscale Operator @@ -45562,4 +45739,4 @@ entries: urls: - assets/netfoundry/ziti-host-1.5.1.tgz version: 1.5.1 -generated: "2024-10-20T00:39:05.89499674Z" +generated: "2024-10-22T00:36:00.411622668Z"