Merge pull request #94 from aiyengar2/migrate_to_v0.2.1_live

[Live] Migrate to charts-build-scripts v0.2.1
Steven Crespo 2021-06-24 18:48:09 -04:00 committed by GitHub
commit b3d1a2c9fc
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1442 changed files with 59357 additions and 1142 deletions

.gitignore vendored Executable file

@@ -0,0 +1,3 @@
bin
*.DS_Store
.idea

Makefile Executable file

@@ -0,0 +1,10 @@
pull-scripts:
	./scripts/pull-scripts

TARGETS := prepare patch charts clean validate template

$(TARGETS):
	@./scripts/pull-scripts
	@./bin/charts-build-scripts $@

.PHONY: $(TARGETS)


@@ -1,3 +1,89 @@
## Live Branch
This branch contains generated assets that have been officially released on partner-charts.rancher.io.
The following directory structure is expected:
```text
assets/
  <package>/
    <chart>-<packageVersion>.tgz
    ...
charts/
  <package>/
    <chart>/
      <packageVersion>/
        # Unarchived Helm chart
```
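
As a rough illustration of how the two directories relate (a hedged sketch using standard `helm package`; the archives in this repository are actually generated by charts-build-scripts, and `<package>`, `<chart>`, and `<packageVersion>` are placeholders from the tree above):

```bash
# Package an unarchived chart into the matching archive under assets/.
helm package charts/<package>/<chart>/<packageVersion> -d assets/<package>/
```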
### Configuration
This repository branch contains a `configuration.yaml` file that specifies how this branch interacts with other repository branches.
### Cutting a Release
On the Live branch, cutting a release requires copying the contents of the Staging branch into your Live branch, which can be done with a simple Bash script such as the following.
```bash
# Assuming that your upstream remote (e.g. https://github.com/rancher/charts.git) is named `upstream`
# Replace the following environment variables
STAGING_BRANCH=dev-v2.x
LIVE_BRANCH=release-v2.x
FORKED_BRANCH=release-v2.x.y
git fetch upstream
git checkout upstream/${LIVE_BRANCH} -b ${FORKED_BRANCH}
git branch -u origin/${FORKED_BRANCH}
git checkout upstream/${STAGING_BRANCH} -- charts assets index.yaml
git add charts assets index.yaml
git commit -m "Releasing chart"
git push --set-upstream origin ${FORKED_BRANCH}
# Create your pull request!
```
Once complete, you should see the following:
- The `assets/` and `charts/` directories have been updated to match the Staging branch. All entries should be additions, not modifications.
- The `index.yaml` diff should only add new entries; it should not modify or remove existing ones.
No other changes are expected.
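
To sanity-check this before opening the pull request, the diff can be inspected with standard git commands (a hedged example; branch names as set in the script above):

```bash
# Summarize what changed relative to the live branch; expect only
# additions under assets/, charts/, and index.yaml.
git diff --stat upstream/${LIVE_BRANCH}

# Inspect the index.yaml diff directly; expect only added entries.
git diff upstream/${LIVE_BRANCH} -- index.yaml
```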
### Cutting an Out-Of-Band Chart Release
Similar to the steps above, cutting an out-of-band chart release involves porting the new chart over from the Staging branch via `git checkout`. However, you will need to manually regenerate the Helm index, since only the single new chart should be added to the `index.yaml` on the Live branch.
Use the following example Bash script to execute this change:
```bash
# Assuming that your upstream remote (e.g. https://github.com/rancher/charts.git) is named `upstream`
# Replace the following environment variables
STAGING_BRANCH=dev-v2.x
LIVE_BRANCH=release-v2.x
FORKED_BRANCH=release-v2.x.y
NEW_CHART_DIR=charts/rancher-monitoring/rancher-monitoring/X.Y.Z
NEW_ASSET_TGZ=assets/rancher-monitoring/rancher-monitoring-X.Y.Z.tgz
git fetch upstream
git checkout upstream/${LIVE_BRANCH} -b ${FORKED_BRANCH}
git branch -u origin/${FORKED_BRANCH}
git checkout upstream/${STAGING_BRANCH} -- ${NEW_CHART_DIR} ${NEW_ASSET_TGZ}
helm repo index --merge ./index.yaml --url assets assets # FYI: this will generate new 'created' timestamps across *all* charts.
mv assets/index.yaml index.yaml
git add ${NEW_CHART_DIR} ${NEW_ASSET_TGZ} index.yaml
git commit -m "Releasing out-of-band chart"
git push --set-upstream origin ${FORKED_BRANCH}
# Create your pull request!
```
Once complete, you should see the following:
- The new chart should exist in `assets` and `charts`. Existing charts should not be modified.
- The `index.yaml`'s diff should show an additional entry for your new chart.
- The `index.yaml`'s diff should show modified `created` timestamps across all charts (due to the behavior of `helm repo index`).
No other changes are expected.
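
Since `helm repo index` rewrites every `created` timestamp, one hedged way to verify that nothing else changed is to filter those lines out of the diff (plain git and grep; branch name as set in the script above):

```bash
# Show index.yaml changes, ignoring the expected 'created' timestamp churn;
# what remains should be only the new chart's entry.
git diff upstream/${LIVE_BRANCH} -- index.yaml \
  | grep -E '^[+-]' \
  | grep -vE '^(\+\+\+|---)' \
  | grep -v 'created:'
```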
### Makefile
#### Basic Commands
`make pull-scripts`: Pulls in the version of `charts-build-scripts` indicated in `scripts`.
`make validate`: Validates your current repository branch against all of the repository branches indicated in your `configuration.yaml`.
`make template`: Updates the current directory by applying the `configuration.yaml` on [upstream Go templates](https://github.com/rancher/charts-build-scripts/tree/master/templates/template) to pull in the most up-to-date docs, scripts, etc. from [rancher/charts-build-scripts](https://github.com/rancher/charts-build-scripts).
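
For example, a typical local run might look like this (target names taken from the Makefile above):

```bash
make pull-scripts  # fetch the pinned charts-build-scripts binary into ./bin
make validate      # check this branch against the branches in configuration.yaml
```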


@@ -1 +1 @@
exclude: [charts]

assets/README.md Executable file

@@ -0,0 +1,3 @@
## Assets
This folder contains Helm chart archives that are served from partner-charts.rancher.io.
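
As a usage sketch (standard Helm CLI; the repository URL is the one named above, and `partner-charts` is an arbitrary local alias):

```bash
# Add the served chart repository and browse its contents.
helm repo add partner-charts https://partner-charts.rancher.io
helm repo update
helm search repo partner-charts/
```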


@ -1,917 +0,0 @@
apiVersion: v1
entries:
ambassador:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Ambassador Edge Stack
catalog.cattle.io/release-name: ambassador
apiVersion: v1
appVersion: 1.13.8
created: "2021-06-16T21:03:22.6268011Z"
description: A Helm chart for Datawire Ambassador
digest: f56e602f017a6e48d2838033b31ce356a47db561fcd9c02e008d06b67be95b90
home: https://www.getambassador.io/
icon: https://www.getambassador.io/images/logo.png
keywords:
- api gateway
- ambassador
- datawire
- envoy
maintainers:
- email: markus@maga.se
name: flydiverny
- email: flynn@datawire.io
name: kflynn
- email: nkrause@datawire.io
name: nbkrause
- email: lukeshu@datawire.io
name: lukeshu
name: ambassador
sources:
- https://github.com/datawire/ambassador
- https://github.com/prometheus/statsd_exporter
urls:
- assets/ambassador/ambassador-6.7.1100.tgz
version: 6.7.1100
artifactory-ha:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: artifactory-ha
apiVersion: v1
appVersion: 7.17.5
created: "2021-04-30T00:22:55.074362275Z"
dependencies:
- condition: postgresql.enabled
name: postgresql
repository: https://charts.bitnami.com/bitnami
version: 9.3.4
description: Universal Repository Manager supporting all major packaging formats,
build tools and CI servers.
digest: 63b4083aaf16e3f8f46c01943a6113b11beebdab0b3bd9e6b482ad3e8cc4e56a
home: https://www.jfrog.com/artifactory/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory-ha/logo/artifactory-logo.png
keywords:
- artifactory
- jfrog
- devops
maintainers:
- email: installers@jfrog.com
name: Chart Maintainers at JFrog
name: artifactory-ha
sources:
- https://github.com/jfrog/charts
urls:
- assets/artifactory-ha/artifactory-ha-4.13.000.tgz
version: 4.13.000
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: artifactory-ha
apiVersion: v1
appVersion: 7.12.6
created: "2021-02-26T18:55:48.762534939Z"
dependencies:
- condition: postgresql.enabled
name: postgresql
repository: https://charts.bitnami.com/bitnami
version: 9.3.4
description: Universal Repository Manager supporting all major packaging formats,
build tools and CI servers.
digest: 6f13240e67c292e0a7229b1e0b1d8389991e10850d629fab7bac34b7f702fa3c
home: https://www.jfrog.com/artifactory/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory-ha/logo/artifactory-logo.png
keywords:
- artifactory
- jfrog
- devops
maintainers:
- email: installers@jfrog.com
name: Chart Maintainers at JFrog
name: artifactory-ha
sources:
- https://bintray.com/jfrog/product/JFrog-Artifactory-Pro/view
- https://github.com/jfrog/charts
urls:
- assets/artifactory-ha/artifactory-ha-4.7.600.tgz
version: 4.7.600
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: artifactory-ha
apiVersion: v1
appVersion: 7.6.3
created: "2020-10-19T19:35:34.216778028Z"
dependencies:
- condition: postgresql.enabled
name: postgresql
repository: https://charts.bitnami.com/bitnami
version: 8.7.3
description: Universal Repository Manager supporting all major packaging formats,
build tools and CI servers.
digest: cfe8c5e0fbf007f8f858b65ab788ad297cdece703364d94ff9d36beca395ca6a
home: https://www.jfrog.com/artifactory/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory-ha/logo/artifactory-logo.png
keywords:
- artifactory
- jfrog
- devops
maintainers:
- email: amithk@jfrog.com
name: amithins
- email: daniele@jfrog.com
name: danielezer
- email: eldada@jfrog.com
name: eldada
- email: ramc@jfrog.com
name: chukka
- email: rimasm@jfrog.com
name: rimusz
name: artifactory-ha
sources:
- https://bintray.com/jfrog/product/JFrog-Artifactory-Pro/view
- https://github.com/jfrog/charts
urls:
- assets/artifactory-ha/artifactory-ha-3.0.1400.tgz
version: 3.0.1400
artifactory-jcr:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: artifactory-jcr
apiVersion: v1
appVersion: 7.12.5
created: "2021-02-26T18:58:09.545552572Z"
dependencies:
- name: artifactory
repository: https://charts.jfrog.io/
version: 11.7.4
description: JFrog Container Registry
digest: 148af8042991b7d031770887a8d64e034268c2e1e3eb03f55e13310a40cb2a60
home: https://jfrog.com/container-registry/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory-jcr/logo/jcr-logo.png
keywords:
- artifactory
- jfrog
- container
- registry
- devops
- jfrog-container-registry
maintainers:
- email: helm@jfrog.com
name: Chart Maintainers at JFrog
name: artifactory-jcr
sources:
- https://github.com/jfrog/charts
urls:
- assets/artifactory-jcr/artifactory-jcr-3.4.000.tgz
version: 3.4.000
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: artifactory-jcr
apiVersion: v1
appVersion: 7.6.3
created: "2020-10-19T19:35:34.227503815Z"
dependencies:
- name: artifactory
repository: https://charts.jfrog.io/
version: 10.0.12
description: JFrog Container Registry
digest: 4f32c8460467e79492bfab5da99afbd5867f6e8dc305d96458790b6de083f4da
home: https://jfrog.com/container-registry/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory-jcr/logo/jcr-logo.png
keywords:
- artifactory
- jfrog
- container
- registry
- devops
- jfrog-container-registry
maintainers:
- email: amithk@jfrog.com
name: amithins
- email: daniele@jfrog.com
name: danielezer
- email: eldada@jfrog.com
name: eldada
- email: ramc@jfrog.com
name: chukka
- email: rimasm@jfrog.com
name: rimusz
- email: vinaya@jfrog.com
name: vinaya
name: artifactory-jcr
sources:
- https://github.com/jfrog/charts
urls:
- assets/artifactory-jcr/artifactory-jcr-2.5.100.tgz
version: 2.5.100
citrix-adc-istio-ingress-gateway:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: citrix-adc-istio-ingress-gateway
apiVersion: v1
appVersion: 1.2.1
created: "2020-10-19T19:35:34.229214465Z"
description: A Helm chart for Citrix ADC as Ingress Gateway installation in Istio
Service Mesh on Kubernetes platform
digest: 41121dad6ac7271f2ada14e5f8cbc7d398e1e656db95e1937ab1dc5bab563e4c
home: https://www.citrix.com
icon: https://raw.githubusercontent.com/citrix/citrix-helm-charts/gh-pages/icon.png
maintainers:
- email: dhiraj.gedam@citrix.com
name: dheerajng
- email: subash.dangol@citrix.com
name: subashd
name: citrix-adc-istio-ingress-gateway
sources:
- https://github.com/citrix/citrix-istio-adaptor
urls:
- assets/citrix-adc-istio-ingress-gateway/citrix-adc-istio-ingress-gateway-1.2.100.tgz
version: 1.2.100
citrix-cpx-with-ingress-controller:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: citrix-cpx-with-ingress-controller
apiVersion: v1
appVersion: 1.8.28
created: "2020-10-19T19:35:34.231058811Z"
description: A Helm chart for Citrix ADC CPX with Citrix ingress Controller running
as sidecar.
digest: 298c1472ff1afea8333346f2d67dc4bb6fb64779b4b90378b18e57180995286e
home: https://www.citrix.com
icon: https://raw.githubusercontent.com/citrix/citrix-helm-charts/gh-pages/icon.png
maintainers:
- email: priyanka.sharma@citrix.com
name: priyankash-citrix
- email: subash.dangol@citrix.com
name: subashd
name: citrix-cpx-with-ingress-controller
sources:
- https://github.com/citrix/citrix-k8s-ingress-controller
urls:
- assets/citrix-cpx-with-ingress-controller/citrix-cpx-with-ingress-controller-1.8.2800.tgz
version: 1.8.2800
citrix-k8s-cpx-ingress-controller:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/namespace: citrix-k8s-cpx-ingress-controller
catalog.cattle.io/release-name: citrix-k8s-cpx-ingress-controller
apiVersion: v1
appVersion: 1.8.28
created: "2020-09-10T18:19:56.040802801Z"
description: A Helm chart for Citrix ADC CPX with Citrix ingress Controller running
as sidecar.
digest: 0a54474018a40043d75aad6209bdc585a3ba2cd9d1fa6c2131536091ec99bfd0
home: https://www.citrix.com
icon: https://raw.githubusercontent.com/citrix/citrix-helm-charts/gh-pages/icon.png
maintainers:
- email: priyanka.sharma@citrix.com
name: priyankash-citrix
- email: subash.dangol@citrix.com
name: subashd
name: citrix-k8s-cpx-ingress-controller
sources:
- https://github.com/citrix/citrix-k8s-ingress-controller
urls:
- assets/citrix-k8s-cpx-ingress-controller/citrix-k8s-cpx-ingress-controller-1.8.2800.tgz
version: 1.8.2800
cloudcasa:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Cloudcasa
catalog.cattle.io/namespace: cloudcasa-io
catalog.cattle.io/release-name: cloudcasa
apiVersion: v2
appVersion: "1.0"
created: "2021-06-23T22:26:42.331564631Z"
description: CloudCasa backup service for Kubernetes and cloud native applications
digest: 9bb36abfa6db450688840c60a4181791da4f4d637f5a48e7aee93238f4d471c1
home: https://cloudcasa.io
icon: https://partner-charts.rancher.io/assets/logos/cloudcasa.png
keywords:
- backup
- Catalogic
- CloudCasa
kubeVersion: '>=1.13.0-0'
maintainers:
- email: info@catalogicsoftware.com
name: catalogicsoftware
name: cloudcasa
urls:
- assets/cloudcasa/cloudcasa-1.tgz
version: "1"
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/namespace: cloudcasa-io
catalog.cattle.io/release-name: cloudcasa
apiVersion: v2
appVersion: 0.1.0
created: "2021-03-09T00:13:50.362055098Z"
description: CloudCasa backup service for Kubernetes and cloud native applications
digest: be87ab1b0e0e9c74998d5d3e5041f75ac732174389ee3bf68a3d8016aace786f
home: https://cloudcasa.io
icon: https://partner-charts.rancher.io/assets/logos/cloudcasa.png
keywords:
- backup
- Catalogic
- CloudCasa
kubeVersion: '>=1.13.0-0'
maintainers:
- email: info@catalogicsoftware.com
name: catalogicsoftware
name: cloudcasa
urls:
- assets/cloudcasa/cloudcasa-0.1.000.tgz
version: 0.1.000
cockroachdb:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: cockroachdb
apiVersion: v1
appVersion: 20.1.3
created: "2020-10-19T19:35:34.23314065Z"
description: CockroachDB is a scalable, survivable, strongly-consistent SQL database.
digest: ba272eab2f61dd699854035f1bfdfafb15cd0b99eefc2b8486702dc990202bea
home: https://www.cockroachlabs.com
icon: https://raw.githubusercontent.com/cockroachdb/cockroach/master/docs/media/cockroach_db.png
maintainers:
- email: helm-charts@cockroachlabs.com
name: cockroachlabs
name: cockroachdb
sources:
- https://github.com/cockroachdb/cockroach
urls:
- assets/cockroachdb/cockroachdb-4.1.200.tgz
version: 4.1.200
control-agent:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: streamsets
apiVersion: v1
appVersion: 3.8.0
created: "2021-02-16T21:56:04.137818572Z"
description: Control Agent for managing StreamSets Control Hub Deployments
digest: 5289b93c60200cc9896b2e903c4143a8db1d312409ab86da5dc77df693bc395f
home: https://streamsets.com
icon: https://github.com/streamsets/datacollector/raw/master/basic-lib/src/main/resources/sdcipc.png
keywords:
- streamsets
- sdc
- sch
maintainers:
- email: thomas.ganka@streamsets.com
name: thomasganka
name: control-agent
sources:
- https://github.com/streamsets/helm-charts/tree/master/incubating/control-agent
urls:
- assets/streamsets/control-agent-2.0.100.tgz
version: 2.0.100
cost-analyzer:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: kubecost
apiVersion: v1
appVersion: 1.70.0
created: "2020-12-21T19:54:47.900259443Z"
description: A Helm chart that sets up Kubecost, Prometheus, and Grafana to monitor
cloud costs.
digest: b633966fdce3fa9d0c899ff38b6090ac687ac1070000c2742e6834b7430c9975
icon: https://kubecost.com/images/logo-white.png
name: cost-analyzer
urls:
- assets/kubecost/cost-analyzer-1.70.000.tgz
version: 1.70.000
csi-wekafsplugin:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: csi-wekafsplugin
apiVersion: v2
appVersion: 0.6.4
created: "2021-01-27T17:17:08.52377768Z"
description: Helm chart for Deployment of WekaIO Container Storage Interface (CSI)
plugin for WekaFS - the world fastest filesystem
digest: 78feaf34b9a8d8cb5ddcf3928e5782ecb6da0d4de94bdfc9b65162be3e357a7d
home: https://github.com/weka/csi-wekafs
icon: https://weka.github.io/csi-wekafs/logo.png
name: csi-wekafsplugin
sources:
- https://github.com/weka/csi-wekafs/tree/v0.6.4/deploy/helm/csi-wekafsplugin
type: application
urls:
- assets/csi-wekafs/csi-wekafsplugin-0.6.400.tgz
version: 0.6.400
datadog:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: datadog
apiVersion: v1
appVersion: "7"
created: "2020-12-21T19:54:47.888344836Z"
dependencies:
- condition: datadog.kubeStateMetricsEnabled
name: kube-state-metrics
repository: https://charts.helm.sh/stable
version: =2.8.11
description: Datadog Agent
digest: 5e05f58feb6bd16390bd3ed6f668d830bf134efee0dbec4a441f16f16e3a4122
home: https://www.datadoghq.com
icon: https://datadog-live.imgix.net/img/dd_logo_70x75.png
keywords:
- monitoring
- alerting
- metric
maintainers:
- email: support@datadoghq.com
name: Datadog
name: datadog
sources:
- https://app.datadoghq.com/account/settings#agent/kubernetes
- https://github.com/DataDog/datadog-agent
urls:
- assets/datadog/datadog-2.4.200.tgz
version: 2.4.200
dynatrace-oneagent-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: dynatrace-oneagent-operator
apiVersion: v2
appVersion: 0.8.0
created: "2020-10-19T19:35:34.240533633Z"
description: The Dynatrace OneAgent Operator Helm chart for Kubernetes and Openshift
digest: 7daf37239c0ca6f903d0e92bcb0ffff02584872f45e2e83822bb986dbf61ee58
home: https://www.dynatrace.com/
icon: https://assets.dynatrace.com/global/resources/Signet_Logo_RGB_CP_512x512px.png
maintainers:
- email: marco.mader@dynatrace.com
name: DTMad
- email: luis.garcia@dynatrace.com
name: lrgar
- email: michael.mayr@dynatrace.com
name: mmayr-at
name: dynatrace-oneagent-operator
sources:
- https://github.com/Dynatrace/helm-charts
type: application
urls:
- assets/dynatrace-oneagent-operator/dynatrace-oneagent-operator-0.8.000.tgz
version: 0.8.000
falcon-sensor:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: CrowdStrike Falcon Platform
catalog.cattle.io/release-name: falcon-helm
apiVersion: v2
appVersion: 0.9.3
created: "2021-06-16T21:03:13.596307526Z"
description: A Helm chart to deploy CrowdStrike Falcon sensors into Kubernetes
clusters.
digest: cb98b5a7e6020ed2d06db01575e76b4cfd3e94549805323e65f551a832a1254a
home: https://crowdstrike.com
icon: https://raw.githubusercontent.com/CrowdStrike/falcon-helm/main/images/crowdstrike-logo.svg
keywords:
- CrowdStrike
- Falcon
- EDR
- kubernetes
- security
- monitoring
- alerting
maintainers:
- name: CrowdStrike Solution Architecture
- email: gabriel.alford@crowdstrike.com
name: Gabe Alford
name: falcon-sensor
sources:
- https://github.com/CrowdStrike/falcon-helm
type: application
urls:
- assets/falcon-sensor/falcon-sensor-0.9.300.tgz
version: 0.9.300
federatorai:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Federator.ai
catalog.cattle.io/release-name: federatorai
apiVersion: v1
appVersion: 4.5.1-ga
created: "2021-06-17T21:55:05.472058889Z"
description: Federator.ai helps enterprises optimize cloud resources, maximize
application performance, and save significant cost without excessive over-provisioning
or under-provisioning of resources, meeting the service-level requirements of
their applications.
digest: bb91267948c7571fcc0ff6604bf950a17da2b4704d31c6a0033ce444d2c0399b
home: https://www.prophetstor.com
icon: https://raw.githubusercontent.com/prophetstor-ai/public/master/images/logo.png
keywords:
- AI
- Resource Orchestration
- NoOps
- AIOps
- Intelligent Workload Management
- Cost Optimization
maintainers:
- email: support@prophetstor.com
name: ProphetStor Data Services, Inc.
name: federatorai
sources:
- https://www.prophetstor.com
urls:
- assets/federatorai/federatorai-4.5.100.tgz
version: 4.5.100
haproxy:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: haproxy
apiVersion: v1
appVersion: 1.5.4
created: "2021-04-30T00:26:59.351246692Z"
description: A Helm chart for HAProxy Kubernetes Ingress Controller
digest: fd110caa557e3b385d407578a4e7693429d5bc722d233f51f19ca58840372ca7
home: https://github.com/haproxytech/helm-charts/tree/master/kubernetes-ingress
icon: http://www.haproxy.org/img/HAProxyCommunityEdition_60px.png
keywords:
- ingress
- haproxy
kubeVersion: '>=1.12.0-0'
maintainers:
- email: mmhedhbi@haproxy.com
name: Moemen Mhedhbi
- email: bassmann@haproxy.com
name: Baptiste Assmann
- email: dkorunic@haproxy.com
name: Dinko Korunic
name: haproxy
sources:
- https://github.com/haproxytech/kubernetes-ingress
urls:
- assets/haproxy/haproxy-1.12.500.tgz
version: 1.12.500
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: haproxy
apiVersion: v1
appVersion: 1.5.1
created: "2021-04-13T23:45:40.966157742Z"
description: A Helm chart for HAProxy Kubernetes Ingress Controller
digest: 29aa101f4851cac5b94d2de40c961d0f24c90bb361c0bf1bc17d3244ddf92046
home: https://github.com/haproxytech/helm-charts/tree/master/kubernetes-ingress
icon: http://www.haproxy.org/img/HAProxyCommunityEdition_60px.png
keywords:
- ingress
- haproxy
kubeVersion: '>=1.12.0-0'
maintainers:
- email: mmhedhbi@haproxy.com
name: Moemen Mhedhbi
- email: bassmann@haproxy.com
name: Baptiste Assmann
- email: dkorunic@haproxy.com
name: Dinko Korunic
name: haproxy
sources:
- https://github.com/haproxytech/kubernetes-ingress
urls:
- assets/haproxy/haproxy-1.12.100.tgz
version: 1.12.100
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: haproxy
apiVersion: v1
appVersion: 1.4.6
created: "2020-10-19T19:35:34.242056789Z"
description: A Helm chart for HAProxy Kubernetes Ingress Controller
digest: f4b11d983e29c3748e04fba10d626277cc4c35c977a2bda016925a326af38b54
home: https://github.com/haproxytech/helm-charts/tree/master/kubernetes-ingress
icon: http://www.haproxy.org/img/HAProxyCommunityEdition_60px.png
keywords:
- ingress
- haproxy
kubeVersion: '>=1.12.0-0'
maintainers:
- email: mmhedhbi@haproxy.com
name: Moemen Mhedhbi
- email: bassmann@haproxy.com
name: Baptiste Assmann
- email: dkorunic@haproxy.com
name: Dinko Korunic
name: haproxy
sources:
- https://github.com/haproxytech/kubernetes-ingress
urls:
- assets/haproxy/haproxy-1.4.300.tgz
version: 1.4.300
hpe-csi-driver:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: hpe-csi-driver
apiVersion: v1
appVersion: 1.4.0
created: "2021-02-25T22:46:37.811643961Z"
description: A Helm chart for installing the HPE CSI Driver for Kubernetes
digest: 487dca3d6bdf6961bf29425945b40974667a67723d2a8d9edbca87285e628793
home: https://hpe.com/storage/containers
icon: https://raw.githubusercontent.com/hpe-storage/co-deployments/master/docs/assets/hpedev.png
keywords:
- HPE
- Storage
- StorageClass
maintainers:
- email: hpe-containers-dev@hpe.com
name: raunakkumar
name: hpe-csi-driver
sources:
- https://scod.hpedev.io/csi_driver
urls:
- assets/hpe-csi-driver/hpe-csi-driver-1.4.200.tgz
version: 1.4.200
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: hpe-csi-driver
apiVersion: v1
appVersion: 1.3.0
created: "2020-10-19T19:35:34.242864765Z"
description: A Helm chart for installing the HPE CSI Driver for Kubernetes
digest: f5e5ce5e51d1b76ea667aca7e7689ccf9439825a30485fa2372ca0b9b86c7af0
home: https://hpe.com/storage/containers
icon: https://raw.githubusercontent.com/hpe-storage/co-deployments/master/docs/assets/hpedev.png
keywords:
- HPE
- Storage
- StorageClass
- CentOS
- Ubuntu
- RHEL
maintainers:
- email: hpe-containers-dev@hpe.com
name: shivamerla
name: hpe-csi-driver
sources:
- https://scod.hpedev.io/csi_driver
urls:
- assets/hpe-csi-driver/hpe-csi-driver-1.3.000.tgz
version: 1.3.000
hpe-flexvolume-driver:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: hpe-flexvolume-driver
apiVersion: v1
appVersion: "3.1"
created: "2020-10-19T19:35:34.243914234Z"
description: A Helm chart for installing the HPE Volume Driver for Kubernetes
FlexVolume plugin
digest: 50fc38e25308bf32156bed37bd549a5855309ac69945ff402fbe5ea809f88ddc
home: https://hpe.com/storage/containers
icon: https://raw.githubusercontent.com/hpe-storage/co-deployments/master/docs/assets/hpedev.png
keywords:
- HPE
- Storage
- StorageClass
- CentOS
- Ubuntu
- CloudVolumes
maintainers:
- email: hpe-containers-dev@hpe.com
name: shivamerla
name: hpe-flexvolume-driver
sources:
- https://github.com/hpe-storage/flexvolume-driver
urls:
- assets/hpe-flexvolume-driver/hpe-flexvolume-driver-3.1.000.tgz
version: 3.1.000
instana-agent:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: instana-agent
apiVersion: v1
appVersion: "1.1"
created: "2021-01-19T17:16:48.198654664Z"
description: Instana Agent for Kubernetes
digest: 164723f111d03fe67c775d916b0bdf29691b29005b8d93da7caa210cf43cab9c
home: https://www.instana.com/
icon: https://instana-management-assets.s3-eu-west-1.amazonaws.com/stan-logo-2020.png
maintainers:
- email: jon.brisbin@instana.com
name: jbrisbin
- email: william.james@instana.com
name: wiggzz
- email: jeroen.soeters@instana.com
name: JeroenSoeters
- email: fabian.staeber@instana.com
name: fstab
- email: miel.donkers@instana.com
name: mdonkers
- email: dahlia.bock@instana.com
name: dlbock
- email: nathan.fisher@instana.com
name: nfisher
name: instana-agent
sources:
- https://github.com/instana/instana-agent-docker
urls:
- assets/instana-agent/instana-agent-1.0.2900.tgz
version: 1.0.2900
k8s-triliovault-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: TrilioVault for Kubernetes Operator
catalog.cattle.io/release-name: k8s-triliovault-operator
apiVersion: v1
appVersion: v2.0.5
created: "2021-06-22T23:38:17.374903848Z"
description: K8s-TrilioVault-Operator is an operator designed to manage the K8s-TrilioVault
Application Lifecycle.
digest: e3272d943f70ec0c442c94920e4093fd0db9d1833711bcb8d23181f10098c000
home: https://github.com/trilioData/k8s-triliovault-operator
icon: https://www.trilio.io/wp-content/uploads/2021/01/Trilio-2020-logo-RGB-gray-green.png
maintainers:
- email: prafull.ladha@trilio.io
name: prafull11
name: k8s-triliovault-operator
sources:
- https://github.com/trilioData/k8s-triliovault-operator
urls:
- assets/k8s-triliovault-operator/k8s-triliovault-operator-2.0.500.tgz
version: 2.0.500
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: k8s-triliovault-operator
apiVersion: v1
appVersion: v2.0.2
created: "2021-02-09T10:02:00.467622094Z"
description: K8s-TrilioVault-Operator is an operator designed to manage the K8s-TrilioVault
Application Lifecycle.
digest: 24d6699876b92315e0b3ce5bd4f171f315ad2f963316e4d8d4e4f6993b3e9021
home: https://github.com/trilioData/k8s-triliovault-operator
icon: https://www.trilio.io/wp-content/uploads/2021/01/Trilio-2020-logo-RGB-gray-green.png
maintainers:
- email: prafull.ladha@trilio.io
name: prafull11
name: k8s-triliovault-operator
sources:
- https://github.com/trilioData/k8s-triliovault-operator
urls:
- assets/k8s-triliovault-operator/k8s-triliovault-operator-v2.0.200.tgz
version: v2.0.200
nutanix-csi-storage:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: nutanix-csi-storage
apiVersion: v1
appVersion: 2.3.1
created: "2021-02-25T20:17:31.426894175Z"
description: A Helm chart for installing Nutanix CSI Volume Driver
digest: 319009a424d1748dc5e7e32e3c0a424621f9555b3ccb0a583f204c561078ef29
home: https://github.com/nutanix/helm
icon: https://avatars2.githubusercontent.com/u/6165865?s=200&v=4
keywords:
- Nutanix
- Storage
- Volumes
- Files
- StorageClass
- CentOS
- Ubuntu
kubeVersion: '>= 1.13.0'
maintainers:
- name: tuxtof
name: nutanix-csi-storage
urls:
- assets/nutanix-csi-storage/nutanix-csi-storage-2.3.100.tgz
version: 2.3.100
openebs:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: openebs
apiVersion: v1
appVersion: 1.12.0
created: "2020-10-19T19:35:34.246561257Z"
description: Containerized Storage for Containers
digest: fa46a4405ad4ad523d246d175bb48fc237c556bf606bb3d0af724920dc166bf6
home: http://www.openebs.io/
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/openebs/icon/color/openebs-icon-color.png
keywords:
- cloud-native-storage
- block-storage
- iSCSI
- storage
maintainers:
- email: kiran.mova@openebs.io
name: kmova
- email: prateek.pandey@openebs.io
name: prateekpandey14
name: openebs
sources:
- https://github.com/openebs/openebs
urls:
- assets/openebs/openebs-1.12.300.tgz
version: 1.12.300
portshift-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: portshift-operator
apiVersion: v1
appVersion: v0.1.3
created: "2020-10-19T19:35:34.247083742Z"
description: 'Portshift cloud-native security platform is an agentless security
solution for containerized applications '
digest: e332d44b698d4327c96780453f7de16e32e7b905e9f8797b05c02ba268536ed0
home: https://www.portshift.io/
icon: https://www.portshift.io/wp-content/uploads/2019/10/portshift-logo-68.png
keywords:
- portshift
- operator
- monitoring
- security
- alerting
- metric
- troubleshooting
- run-time
maintainers:
- email: idan@portshift.io
name: idan
name: portshift-operator
urls:
- assets/portshift-operator/portshift-operator-0.1.000.tgz
version: 0.1.000
sysdig:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: sysdig
apiVersion: v1
appVersion: 10.3.0
created: "2020-10-19T19:35:34.248816191Z"
description: Sysdig Monitor and Secure agent
digest: 37cef38a742229947b02dfc764da69ec382b260e062372f2fb7cc3056a31790f
home: https://www.sysdig.com/
icon: https://478h5m1yrfsa3bbe262u7muv-wpengine.netdna-ssl.com/wp-content/uploads/2019/02/Shovel_600px.png
keywords:
- monitoring
- security
- alerting
- metric
- troubleshooting
- run-time
maintainers:
- email: lachlan@deis.com
name: lachie83
- email: jorge.salamero@sysdig.com
name: bencer
- email: nestor.salceda@sysdig.com
name: nestorsalceda
- email: alvaro.iradier@sysdig.com
name: airadier
- email: carlos.arilla@sysdig.com
name: carillan81
name: sysdig
sources:
- https://app.sysdigcloud.com/#/settings/user
- https://github.com/draios/sysdig
urls:
- assets/sysdig/sysdig-1.9.200.tgz
version: 1.9.200
universal-crossplane:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Upbound Universal Crossplane
catalog.cattle.io/release-name: universal-crossplane
apiVersion: v1
appVersion: 1.2.2001
created: "2021-06-16T21:03:04.625626369Z"
description: 'Upbound Universal Crossplane (UXP) is Upbound''s official enterprise-grade
distribution of Crossplane. It''s fully compatible with upstream Crossplane,
open source, capable of connecting to Upbound Cloud for real-time dashboard
visibility, and maintained by Upbound. It''s the easiest way for both individual
community members and enterprises to build their production control planes. '
digest: 3c1dfa0f7f6181ab4101f23c41aadddb330484a1d6f48efcbbd523c6cf92eec9
home: https://upbound.io
icon: https://raw.githubusercontent.com/upbound/universal-crossplane/66ce9eb2c5a0c3af8ed7d19551a2c4d743b933b9/docs/media/logo.png
keywords:
- cloud
- infrastructure
- services
- application
- database
- cache
- bucket
- infra
- app
- ops
- oam
- gcp
- azure
- aws
- alibaba
- cloudsql
- rds
- s3
- azuredatabase
- asparadb
- gke
- aks
- eks
maintainers:
- email: info@upbound.io
name: Upbound Inc.
name: universal-crossplane
urls:
- assets/universal-crossplane/universal-crossplane-1.2.200100.tgz
version: 1.2.200100
generated: "2021-06-23T22:26:42.329579117Z"

charts/README.md Executable file

@@ -0,0 +1,3 @@
## Charts
This folder contains unarchived Helm charts that are served from partner-charts.rancher.io.


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
OWNERS


@@ -1,7 +1,7 @@
annotations:
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: Ambassador Edge Stack
  catalog.cattle.io/release-name: ambassador
apiVersion: v1
appVersion: 1.13.8
description: A Helm chart for Datawire Ambassador


@@ -0,0 +1,830 @@
# JFrog Artifactory-ha Chart Changelog
All changes to this chart will be documented in this file.
## [3.0.14] - Jul 31, 2020
* Update the README section on Nginx SSL termination to reflect the actual YAML structure.
## [3.0.13] - Jul 30, 2020
* Added condition to disable the migration scripts.
## [3.0.12] - Jul 29, 2020
* Document Artifactory node affinity.
## [3.0.11] - Jul 28, 2020
* Added maxConnections for persistent storage type aws-s3-v3.
## [3.0.10] - Jul 28, 2020
* Bugfix / support for userPluginSecrets with Artifactory 7
## [3.0.9] - Jul 27, 2020
* Add tpl to external database secrets.
* Modified `scheme` to `artifactory-ha.scheme`
## [3.0.8] - Jul 23, 2020
* Added condition to disable the migration init container.
## [3.0.7] - Jul 21, 2020
* Updated Artifactory-ha Chart to add node and primary labels to pods and service objects.
## [3.0.6] - Jul 20, 2020
* Support custom CA and certificates
## [3.0.5] - Jul 13, 2020
* Updated Artifactory version to 7.6.3 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.3
* Fixed Mysql database jar path in `preStartCommand` in README
## [3.0.4] - Jul 8, 2020
* Move some postgresql values to where they should be according to the subchart
## [3.0.3] - Jul 8, 2020
* Set Artifactory access client connections to the same value as the access threads.
## [3.0.2] - Jul 6, 2020
* Updated Artifactory version to 7.6.2
* **IMPORTANT**
* Added ChartCenter Helm repository in README
## [3.0.1] - Jul 01, 2020
* Add dedicated ingress object for Replicator service when enabled
## [3.0.0] - Jun 30, 2020
* Update postgresql tag version to `10.13.0-debian-10-r38`
* Update alpine tag version to `3.12`
* Update busybox tag version to `1.31.1`
* **IMPORTANT**
* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**!
* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass postgresql.image.tag=9.6.18-debian-10-r7 and databaseUpgradeReady=true
## [2.6.0] - Jun 29, 2020
* Updated Artifactory version to 7.6.1 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.1
* Add tpl for external database secrets
## [2.5.8] - Jun 25, 2020
* Stop loading the Nginx stream module because it is now a core module
## [2.5.7] - Jun 18, 2020
* Fixes bootstrap configMap issue on member node
## [2.5.6] - Jun 11, 2020
* Support list of custom secrets
## [2.5.5] - Jun 11, 2020
* Fixed incorrect information in NOTES.txt
## [2.5.4] - Jun 12, 2020
* Updated Artifactory version to 7.5.7 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5.7
## [2.5.3] - Jun 8, 2020
* Statically setting primary service type to ClusterIP.
* Prevents primary service from being exposed publicly when using LoadBalancer type on cloud providers.
## [2.5.2] - Jun 8, 2020
* Readme update - configuring Artifactory with oracledb
## [2.5.1] - Jun 5, 2020
* Fixes broken PDB issue upgrading from 6.x to 7.x
## [2.5.0] - Jun 1, 2020
* Updated Artifactory version to 7.5.5 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5
* Fixes bootstrap configMap permission issue
* Update postgresql tag version to `9.6.18-debian-10-r7`
## [2.4.10] - May 27, 2020
* Added Tomcat maxThreads & acceptCount
## [2.4.9] - May 25, 2020
* Fixed postgresql README `image` Parameters
## [2.4.8] - May 24, 2020
* Fixed typo in README regarding migration timeout
## [2.4.7] - May 19, 2020
* Added metadata maxOpenConnections
## [2.4.6] - May 07, 2020
* Fix `installerInfo` string format
## [2.4.5] - Apr 27, 2020
* Updated Artifactory version to 7.4.3
## [2.4.4] - Apr 27, 2020
* Change customInitContainers order to run before the "migration-ha-artifactory" initContainer
## [2.4.3] - Apr 24, 2020
* Fix `artifactory.persistence.awsS3V3.useInstanceCredentials` incorrect conditional logic
* Bump postgresql tag version to `9.6.17-debian-10-r72` in values.yaml
## [2.4.2] - Apr 16, 2020
* Custom volume mounts in migration init container.
## [2.4.1] - Apr 16, 2020
* Fix broken support for gcpServiceAccount for googleStorage
## [2.4.0] - Apr 14, 2020
* Updated Artifactory version to 7.4.1
## [2.3.1] - April 13, 2020
* Update README with helm v3 commands
## [2.3.0] - April 10, 2020
* Use dependency charts from `https://charts.bitnami.com/bitnami`
* Bump postgresql chart version to `8.7.3` in requirements.yaml
* Bump postgresql tag version to `9.6.17-debian-10-r21` in values.yaml
## [2.2.11] - Apr 8, 2020
* Added recommended ingress annotation to avoid 413 errors
## [2.2.10] - Apr 8, 2020
* Moved migration scripts under `files` directory
* Support preStartCommand in migration Init container as `artifactory.migration.preStartCommand`
## [2.2.9] - Apr 01, 2020
* Support masterKey and joinKey as secrets
## [2.2.8] - Apr 01, 2020
* Ensure that the join key is also copied when provided by an external secret
* Migration container in primary and node statefulset now respects custom versions and the specified node/primary resources
## [2.2.7] - Apr 01, 2020
* Added cache-layer in chain definition of Google Cloud Storage template
* Fix readme use to `-hex 32` instead of `-hex 16`
## [2.2.6] - Mar 31, 2020
* Change the way the artifactory `command:` is set so it will properly pass a SIGTERM to java
## [2.2.5] - Mar 31, 2020
* Removed duplicate `artifactory-license` volume from primary node
## [2.2.4] - Mar 31, 2020
* Restore `artifactory-license` volume for the primary node
## [2.2.3] - Mar 29, 2020
* Add Nginx log options: stderr as logfile and log level
## [2.2.2] - Mar 30, 2020
* Apply initContainers.resources to `copy-system-yaml`, `prepare-custom-persistent-volume`, and `migration-artifactory-ha` containers
* Use the same defaulting mechanism used for the artifactory version used elsewhere in the chart
* Removed duplicate `artifactory-license` volume that prevented using an external secret
## [2.2.1] - Mar 29, 2020
* Fix loggers sidecars configurations to support new file system layout and new log names
## [2.2.0] - Mar 29, 2020
* Fix broken admin user bootstrap configuration
* **Breaking change:** renamed `artifactory.accessAdmin` to `artifactory.admin`
## [2.1.3] - Mar 24, 2020
* Use `postgresqlExtendedConf` for setting custom PostgreSQL configuration (instead of `postgresqlConfiguration`)
## [2.1.2] - Mar 21, 2020
* Support for SSL offload in the Nginx service (LoadBalancer) layer. Introduced the `nginx.service.ssloffload` field with boolean type.
## [2.1.1] - Mar 23, 2020
* Moved installer info to values.yaml so it is fully customizable
## [2.1.0] - Mar 23, 2020
* Updated Artifactory version to 7.3.2
## [2.0.36] - Mar 20, 2020
* Add support GCP credentials.json authentication
## [2.0.35] - Mar 20, 2020
* Add support for masterKey trim during 6.x to 7.x migration if 6.x masterKey is 32 hex (64 characters)
## [2.0.34] - Mar 19, 2020
* Add support for NFS directories `haBackupDir` and `haDataDir`
## [2.0.33] - Mar 18, 2020
* Increased Nginx proxy_buffers size
## [2.0.32] - Mar 17, 2020
* Changed all single quotes to double quotes in values files
* useInstanceCredentials variable was declared in S3 settings but not used in chart. Now it is being used.
## [2.0.31] - Mar 17, 2020
* Fix rendering of Service Account annotations
## [2.0.30] - Mar 16, 2020
* Add Unsupported message from 6.18 to 7.2.x (migration)
## [2.0.29] - Mar 11, 2020
* Upgrade Docs update
## [2.0.28] - Mar 11, 2020
* Unified charts public release
## [2.0.27] - Mar 8, 2020
* Add an optional wait for primary node to be ready with a proper test for http status
## [2.0.23] - Mar 6, 2020
* Fix path to `/artifactory_bootstrap`
* Add support for controlling the name of the ingress and allow to set more than one cname
## [2.0.22] - Mar 4, 2020
* Add support for disabling `consoleLog` in `system.yaml` file
## [2.0.21] - Feb 28, 2020
* Add support to process `valueFrom` for extraEnvironmentVariables
## [2.0.20] - Feb 26, 2020
* Store join key to secret
## [2.0.19] - Feb 26, 2020
* Updated Artifactory version to 7.2.1
## [2.0.12] - Feb 07, 2020
* Remove protection flag `databaseUpgradeReady` which was added to check internal postgres upgrade
## [2.0.0] - Feb 07, 2020
* Updated Artifactory version to 7.0.0
## [1.4.10] - Feb 13, 2020
* Add support for SSH authentication to Artifactory
## [1.4.9] - Feb 10, 2020
* Fix custom DB password indention
## [1.4.8] - Feb 9, 2020
* Add support for `tpl` in the `postStartCommand`
## [1.4.7] - Feb 4, 2020
* Support customisable Nginx kind
## [1.4.6] - Feb 2, 2020
* Add a comment stating that it is recommended to use an external PostgreSQL with a static password for production installations
## [1.4.5] - Feb 2, 2020
* Add support for primary or member node specific preStartCommand
## [1.4.4] - Jan 30, 2020
* Add the option to configure resources for the logger containers
## [1.4.3] - Jan 26, 2020
* Improve `database.user` and `database.password` logic in order to support more use cases and make the configuration less repetitive
## [1.4.2] - Jan 22, 2020
* Refined pod disruption budgets to separate nginx and Artifactory pods
## [1.4.1] - Jan 19, 2020
* Fix replicator port config in nginx replicator configmap
## [1.4.0] - Jan 19, 2020
* Updated Artifactory version to 6.17.0
## [1.3.8] - Jan 16, 2020
* Added example for external nginx-ingress
## [1.3.7] - Jan 07, 2020
* Add support for customizable `mountOptions` of NFS PVs
## [1.3.6] - Dec 30, 2019
* Fix for nginx probes failing when launched with http disabled
## [1.3.5] - Dec 24, 2019
* Better support for custom `artifactory.internalPort`
## [1.3.4] - Dec 23, 2019
* Mark empty map values with `{}`
## [1.3.3] - Dec 16, 2019
* Another fix for toggling nginx service ports
## [1.3.2] - Dec 12, 2019
* Fix for toggling nginx service ports
## [1.3.1] - Dec 10, 2019
* Add support for toggling nginx service ports
## [1.3.0] - Dec 1, 2019
* Updated Artifactory version to 6.16.0
## [1.2.4] - Nov 28, 2019
* Add support for using existing PriorityClass
## [1.2.3] - Nov 27, 2019
* Add support for PriorityClass
## [1.2.2] - Nov 20, 2019
* Update Artifactory logo
## [1.2.1] - Nov 18, 2019
* Add the option to provide service account annotations (in order to support stuff like https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html)
## [1.2.0] - Nov 18, 2019
* Updated Artifactory version to 6.15.0
## [1.1.12] - Nov 17, 2019
* Fix `README.md` format (broken table)
## [1.1.11] - Nov 17, 2019
* Update comment on Artifactory master key
## [1.1.10] - Nov 17, 2019
* Fix creation of double slash in nginx artifactory configuration
## [1.1.9] - Nov 14, 2019
* Set explicit `postgresql.postgresqlPassword=""` to avoid helm v3 error
## [1.1.8] - Nov 12, 2019
* Updated Artifactory version to 6.14.1
## [1.1.7] - Nov 11, 2019
* Additional documentation for masterKey
## [1.1.6] - Nov 10, 2019
* Update PostgreSQL chart version to 7.0.1
* Use formal PostgreSQL configuration format
## [1.1.5] - Nov 8, 2019
* Add support `artifactory.service.loadBalancerSourceRanges` for whitelisting when setting `artifactory.service.type=LoadBalancer`
## [1.1.4] - Nov 6, 2019
* Add support for any type of environment variable by using `extraEnvironmentVariables` as-is
## [1.1.3] - Nov 6, 2019
* Add nodeselector support for Postgresql
## [1.1.2] - Nov 5, 2019
* Add support for the aws-s3-v3 filestore, which adds support for pod IAM roles
## [1.1.1] - Nov 4, 2019
* When using `copyOnEveryStartup`, make sure that the target base directories are created before copying the files
## [1.1.0] - Nov 3, 2019
* Updated Artifactory version to 6.14.0
## [1.0.1] - Nov 3, 2019
* Make sure the artifactory pod exits when one of the pre-start stages fail
## [1.0.0] - Oct 27, 2019
**IMPORTANT - BREAKING CHANGES!**<br>
**DOWNTIME MIGHT BE REQUIRED FOR AN UPGRADE!**
* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**!
* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), must use the upgrade instructions in [UPGRADE_NOTES.md](UPGRADE_NOTES.md)!
* PostgreSQL sub chart was upgraded to version `6.5.x`. This version is **not backward compatible** with the old version (`0.9.5`)!
* Note the following **PostgreSQL** Helm chart changes
* The chart configuration has changed! See [values.yaml](values.yaml) for the new keys used
* **PostgreSQL** is deployed as a StatefulSet
* See [PostgreSQL helm chart](https://hub.helm.sh/charts/stable/postgresql) for all available configurations
## [0.17.3] - Oct 24, 2019
* Change the preStartCommand to support templating
## [0.17.2] - Oct 21, 2019
* Add support for setting `artifactory.primary.labels`
* Add support for setting `artifactory.node.labels`
* Add support for setting `nginx.labels`
## [0.17.1] - Oct 10, 2019
* Updated Artifactory version to 6.13.1
## [0.17.0] - Oct 7, 2019
* Updated Artifactory version to 6.13.0
## [0.16.7] - Sep 24, 2019
* Option to skip wait-for-db init container with '--set waitForDatabase=false'
## [0.16.6] - Sep 24, 2019
* Add support for setting `nginx.service.labels`
## [0.16.5] - Sep 23, 2019
* Add support for setting `artifactory.customInitContainersBegin`
## [0.16.4] - Sep 20, 2019
* Add support for setting `initContainers.resources`
## [0.16.3] - Sep 11, 2019
* Updated Artifactory version to 6.12.2
## [0.16.2] - Sep 9, 2019
* Updated Artifactory version to 6.12.1
## [0.16.1] - Aug 22, 2019
* Fix the nginx server_name directive used with ingress.hosts
## [0.16.0] - Aug 21, 2019
* Updated Artifactory version to 6.12.0
## [0.15.15] - Aug 18, 2019
* Fix existingSharedClaim permissions issue and example
## [0.15.14] - Aug 14, 2019
* Updated Artifactory version to 6.11.6
## [0.15.13] - Aug 11, 2019
* Fix Ingress routing and add an example
## [0.15.12] - Aug 6, 2019
* Do not mount `access/etc/bootstrap.creds` unless user specifies a custom password or secret (Access already generates a random password if not provided one)
* If custom `bootstrap.creds` is provided (using keys or custom secret), prepare it with an init container so the temp file does not persist
## [0.15.11] - Aug 5, 2019
* Improve binarystore config
1. Convert to a secret
2. Move config to values.yaml
3. Support an external secret
## [0.15.10] - Aug 5, 2019
* Don't create the nginx configmaps when nginx.enabled is false
## [0.15.9] - Aug 1, 2019
* Fix masterkey/masterKeySecretName not specified warning render logic in NOTES.txt
## [0.15.8] - Jul 28, 2019
* Simplify nginx setup and shorten initial wait for probes
## [0.15.7] - Jul 25, 2019
* Updated README about how to apply Artifactory licenses
## [0.15.6] - Jul 22, 2019
* Change Ingress API to be compatible with recent kubernetes versions
## [0.15.5] - Jul 22, 2019
* Updated Artifactory version to 6.11.3
## [0.15.4] - Jul 11, 2019
* Add `artifactory.customVolumeMounts` support to member node statefulset template
## [0.15.3] - Jul 11, 2019
* Add ingress.hosts to the Nginx server_name directive when ingress is enabled to help with Docker repository sub domain configuration
## [0.15.2] - Jul 3, 2019
* Add the option for changing nginx config using values.yaml and remove outdated reverse proxy documentation
## [0.15.1] - Jul 1, 2019
* Updated Artifactory version to 6.11.1
## [0.15.0] - Jun 27, 2019
* Updated Artifactory version to 6.11.0 and Restart Primary node when bootstrap.creds file has been modified in artifactory-ha
## [0.14.4] - Jun 24, 2019
* Add the option to provide an IP for the access-admin endpoints
## [0.14.3] - Jun 24, 2019
* Update chart maintainers
## [0.14.2] - Jun 24, 2019
* Change Nginx to point to the artifactory externalPort
## [0.14.1] - Jun 23, 2019
* Add values files for small, medium and large installations
## [0.14.0] - Jun 20, 2019
* Use ConfigMaps for nginx configuration and remove nginx postStart command
## [0.13.10] - Jun 19, 2019
* Updated Artifactory version to 6.10.4
## [0.13.9] - Jun 18, 2019
* Add the option to provide additional ingress rules
## [0.13.8] - Jun 14, 2019
* Updated readme with improved external database setup example
## [0.13.7] - Jun 6, 2019
* Updated Artifactory version to 6.10.3
* Updated installer-info template
## [0.13.6] - Jun 6, 2019
* Updated Google Cloud Storage API URL and https settings
## [0.13.5] - Jun 5, 2019
* Delete the db.properties file on Artifactory startup
## [0.13.4] - Jun 3, 2019
* Updated Artifactory version to 6.10.2
## [0.13.3] - May 21, 2019
* Updated Artifactory version to 6.10.1
## [0.13.2] - May 19, 2019
* Fix missing logger image tag
## [0.13.1] - May 15, 2019
* Support `artifactory.persistence.cacheProviderDir` for on-premise cluster
## [0.13.0] - May 7, 2019
* Updated Artifactory version to 6.10.0
## [0.12.23] - May 5, 2019
* Add support for setting `artifactory.async.corePoolSize`
## [0.12.22] - May 2, 2019
* Remove unused property `artifactory.releasebundle.feature.enabled`
## [0.12.21] - Apr 30, 2019
* Add support for JMX monitoring
## [0.12.20] - Apr 29, 2019
* Added support for headless services
## [0.12.19] - Apr 28, 2019
* Added support for `cacheProviderDir`
## [0.12.18] - Apr 18, 2019
* Changing API StatefulSet version to `v1` and permission fix for custom `artifactory.conf` for Nginx
## [0.12.17] - Apr 16, 2019
* Updated documentation for Reverse Proxy Configuration
## [0.12.16] - Apr 12, 2019
* Added support for `customVolumeMounts`
## [0.12.15] - Apr 12, 2019
* Added support for `bucketExists` flag for googleStorage
## [0.12.14] - Apr 11, 2019
* Replace `curl` examples with `wget` due to the new base image
## [0.12.13] - Apr 07, 2019
* Add support for providing the Artifactory license as a parameter
## [0.12.12] - Apr 10, 2019
* Updated Artifactory version to 6.9.1
## [0.12.11] - Apr 04, 2019
* Add support for templated extraEnvironmentVariables
## [0.12.10] - Apr 07, 2019
* Change network policy API group
## [0.12.9] - Apr 04, 2019
* Apply the existing PVC for members (in addition to primary)
## [0.12.8] - Apr 03, 2019
* Bugfix for userPluginSecrets
## [0.12.7] - Apr 4, 2019
* Add information about upgrading Artifactory with auto-generated postgres password
## [0.12.6] - Apr 03, 2019
* Added installer info
## [0.12.5] - Apr 03, 2019
* Allow secret names for user plugins to contain template language
## [0.12.4] - Apr 02, 2019
* Fix issue #253 (use existing PVC for data and backup storage)
## [0.12.3] - Apr 02, 2019
* Allow NetworkPolicy configurations (defaults to allow all)
## [0.12.2] - Apr 01, 2019
* Add support for user plugin secret
## [0.12.1] - Mar 26, 2019
* Add the option to copy a list of files to ARTIFACTORY_HOME on startup
## [0.12.0] - Mar 26, 2019
* Updated Artifactory version to 6.9.0
## [0.11.18] - Mar 25, 2019
* Add CI tests for persistence, ingress support and nginx
## [0.11.17] - Mar 22, 2019
* Add the option to change the default access-admin password
## [0.11.16] - Mar 22, 2019
* Added support for `<artifactory|nginx>.<readiness|liveness>Probe.path` to customise the paths used for health probes
## [0.11.15] - Mar 21, 2019
* Added support for `artifactory.customSidecarContainers` to create custom sidecar containers
* Added support for `artifactory.customVolumes` to create custom volumes
## [0.11.14] - Mar 21, 2019
* Make ingress path configurable
## [0.11.13] - Mar 19, 2019
* Move the copy of bootstrap config from postStart to preStart for Primary
## [0.11.12] - Mar 19, 2019
* Fix existingClaim example
## [0.11.11] - Mar 18, 2019
* Disable the option to use nginx PVC with more than one replica
## [0.11.10] - Mar 15, 2019
* Wait for nginx configuration file before using it
## [0.11.9] - Mar 15, 2019
* Revert securityContext changes since they were causing issues
## [0.11.8] - Mar 15, 2019
* Fix issue #247 (init container failing to run)
## [0.11.7] - Mar 14, 2019
* Updated Artifactory version to 6.8.7
## [0.11.6] - Mar 13, 2019
* Move securityContext to container level
## [0.11.5] - Mar 11, 2019
* Add the option to use existing volume claims for Artifactory storage
## [0.11.4] - Mar 11, 2019
* Updated Artifactory version to 6.8.6
## [0.11.3] - Mar 5, 2019
* Updated Artifactory version to 6.8.4
## [0.11.2] - Mar 4, 2019
* Add support for catalina logs sidecars
## [0.11.1] - Feb 27, 2019
* Updated Artifactory version to 6.8.3
## [0.11.0] - Feb 25, 2019
* Add nginx support for tail sidecars
## [0.10.3] - Feb 21, 2019
* Add s3AwsVersion option to awsS3 configuration for use with IAM roles
## [0.10.2] - Feb 19, 2019
* Updated Artifactory version to 6.8.2
## [0.10.1] - Feb 17, 2019
* Updated Artifactory version to 6.8.1
* Add example of `SERVER_XML_EXTRA_CONNECTOR` usage
## [0.10.0] - Feb 15, 2019
* Updated Artifactory version to 6.8.0
## [0.9.7] - Feb 13, 2019
* Updated Artifactory version to 6.7.3
## [0.9.6] - Feb 7, 2019
* Add support for tail sidecars to view logs from k8s api
## [0.9.5] - Feb 6, 2019
* Fix support for customizing statefulset `terminationGracePeriodSeconds`
## [0.9.4] - Feb 5, 2019
* Add support for customizing statefulset `terminationGracePeriodSeconds`
## [0.9.3] - Feb 5, 2019
* Remove the inactive server remove plugin
## [0.9.2] - Feb 3, 2019
* Updated Artifactory version to 6.7.2
## [0.9.1] - Jan 27, 2019
* Fix support for Azure Blob Storage Binary provider
## [0.9.0] - Jan 23, 2019
* Updated Artifactory version to 6.7.0
## [0.8.10] - Jan 22, 2019
* Added support for `artifactory.customInitContainers` to create custom init containers
## [0.8.9] - Jan 18, 2019
* Added support of values ingress.labels
## [0.8.8] - Jan 16, 2019
* Mount replicator.yaml (config) directly to /replicator_extra_conf
## [0.8.7] - Jan 15, 2019
* Add support for Azure Blob Storage Binary provider
## [0.8.6] - Jan 13, 2019
* Fix documentation about nginx group id
## [0.8.5] - Jan 13, 2019
* Updated Artifactory version to 6.6.5
## [0.8.4] - Jan 8, 2019
* Make artifactory.replicator.publicUrl required when the replicator is enabled
## [0.8.3] - Jan 1, 2019
* Updated Artifactory version to 6.6.3
* Add support for `artifactory.extraEnvironmentVariables` to pass more environment variables to Artifactory
## [0.8.2] - Dec 28, 2018
* Fix location `replicator.yaml` is copied to
## [0.8.1] - Dec 27, 2018
* Updated Artifactory version to 6.6.1
## [0.8.0] - Dec 20, 2018
* Updated Artifactory version to 6.6.0
## [0.7.17] - Dec 17, 2018
* Updated Artifactory version to 6.5.13
## [0.7.16] - Dec 12, 2018
* Fix documentation about Artifactory license setup using secret
## [0.7.15] - Dec 9, 2018
* AWS S3 add `roleName` for using IAM role
## [0.7.14] - Dec 6, 2018
* AWS S3 `identity` and `credential` are now added only if have a value to allow using IAM role
## [0.7.13] - Dec 5, 2018
* Remove Distribution certificates creation.
## [0.7.12] - Dec 2, 2018
* Remove Java option "-Dartifactory.locking.provider.type=db". This is already the default setting.
## [0.7.11] - Nov 30, 2018
* Updated Artifactory version to 6.5.9
## [0.7.10] - Nov 29, 2018
* Fixed the volumeMount for the replicator.yaml
## [0.7.9] - Nov 29, 2018
* Optionally include primary node into poddisruptionbudget
## [0.7.8] - Nov 29, 2018
* Updated postgresql version to 9.6.11
## [0.7.7] - Nov 27, 2018
* Updated Artifactory version to 6.5.8
## [0.7.6] - Nov 18, 2018
* Added support for configMap to use custom Reverse Proxy Configuration with Nginx
## [0.7.5] - Nov 14, 2018
* Updated Artifactory version to 6.5.3
## [0.7.4] - Nov 13, 2018
* Allow pod anti-affinity settings to include primary node
## [0.7.3] - Nov 12, 2018
* Support artifactory.preStartCommand for running command before entrypoint starts
## [0.7.2] - Nov 7, 2018
* Support database.url parameter (DB_URL)
## [0.7.1] - Oct 29, 2018
* Change probes port to 8040 (so they will not be blocked when all tomcat threads on 8081 are exhausted)
## [0.7.0] - Oct 28, 2018
* Update postgresql chart to version 0.9.5 to be able to use `postgresConfig` options
## [0.6.9] - Oct 23, 2018
* Fix providing external secret for database credentials
## [0.6.8] - Oct 22, 2018
* Allow user to configure externalTrafficPolicy for LoadBalancer
## [0.6.7] - Oct 22, 2018
* Updated ingress annotation support (with examples) to support docker registry v2
## [0.6.6] - Oct 21, 2018
* Updated Artifactory version to 6.5.2
## [0.6.5] - Oct 19, 2018
* Allow providing pre-existing secret containing master key
* Allow arbitrary annotations on primary and member node pods
* Enforce size limits when using local storage with `emptyDir`
* Allow `soft` or `hard` specification of member node anti-affinity
* Allow providing pre-existing secrets containing external database credentials
* Fix `s3` binary store provider to properly use the `cache-fs` provider
* Allow arbitrary properties when using the `s3` binary store provider
## [0.6.4] - Oct 18, 2018
* Updated Artifactory version to 6.5.1
## [0.6.3] - Oct 17, 2018
* Add Apache 2.0 license
## [0.6.2] - Oct 14, 2018
* Make S3 endpoint configurable (was hardcoded with `s3.amazonaws.com`)
## [0.6.1] - Oct 11, 2018
* Allows ingress default `backend` to be enabled or disabled (defaults to enabled)
## [0.6.0] - Oct 11, 2018
* Updated Artifactory version to 6.5.0
## [0.5.3] - Oct 9, 2018
* Quote ingress hosts to support wildcard names
## [0.5.2] - Oct 2, 2018
* Add `helm repo add jfrog https://charts.jfrog.io` to README
## [0.5.1] - Oct 2, 2018
* Set Artifactory to 6.4.1
## [0.5.0] - Sep 27, 2018
* Set Artifactory to 6.4.0
## [0.4.7] - Sep 26, 2018
* Add ci/test-values.yaml
## [0.4.6] - Sep 25, 2018
* Add PodDisruptionBudget for member nodes, defaulting to minAvailable of 1
## [0.4.4] - Sep 2, 2018
* Updated Artifactory version to 6.3.2
## [0.4.0] - Aug 22, 2018
* Added support to run as non-root
* Updated Artifactory version to 6.2.0
## [0.3.0] - Aug 22, 2018
* Enabled RBAC Support
* Added support for PostStartCommand (To download Database JDBC connector)
* Increased postgresql max_connections
* Added support for `nginx.conf` ConfigMap
* Updated Artifactory version to 6.1.0


@ -0,0 +1,29 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: artifactory-ha
apiVersion: v1
appVersion: 7.6.3
description: Universal Repository Manager supporting all major packaging formats,
build tools and CI servers.
home: https://www.jfrog.com/artifactory/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory-ha/logo/artifactory-logo.png
keywords:
- artifactory
- jfrog
- devops
maintainers:
- email: amithk@jfrog.com
name: amithins
- email: daniele@jfrog.com
name: danielezer
- email: eldada@jfrog.com
name: eldada
- email: ramc@jfrog.com
name: chukka
- email: rimasm@jfrog.com
name: rimusz
name: artifactory-ha
sources:
- https://bintray.com/jfrog/product/JFrog-Artifactory-Pro/view
- https://github.com/jfrog/charts
version: 3.0.1400

File diff suppressed because it is too large


@ -0,0 +1,38 @@
# JFrog Artifactory Chart Upgrade Notes
This file describes special upgrade notes needed at specific versions
## Upgrade from 1.X to 2.X (Chart Versions)
* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you!**
* To upgrade from a version prior to 1.x, you first need to upgrade to the latest version of 1.x as described in https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/CHANGELOG.md.
## Upgrade from 0.X to 1.X (Chart Versions)
**DOWNTIME IS REQUIRED FOR AN UPGRADE!**
* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you!**
* PostgreSQL sub chart was upgraded to version `6.5.x`. This version is not backward compatible with the old version (`0.9.5`)!
* Note the following **PostgreSQL** Helm chart changes
* The chart configuration has changed! See [values.yaml](values.yaml) for the new keys used
* **PostgreSQL** is deployed as a StatefulSet
* See [PostgreSQL helm chart](https://hub.helm.sh/charts/stable/postgresql) for all available configurations
* Upgrade
* Due to breaking changes in the **PostgreSQL** Helm chart, a migration of the database is needed from the old to the new database
* The recommended migration process is the [full system export and import](https://www.jfrog.com/confluence/display/RTF/Importing+and+Exporting)
* **NOTE:** To save time, export only metadata and configuration (check `Exclude Content` in the `System Import & Export`) since the Artifactory filestore is persisted
* Upgrade steps:
1. Block user access to Artifactory (do not shutdown)
a. Scale down the cluster to primary node only (`node.replicaCount=0`) so the exported db and configuration will be kept on one known node (the primary)
b. If your Artifactory HA K8s service is set to member nodes only (`service.pool=members`) you will need to access the primary node directly (use `kubectl port-forward`)
2. Perform `Export System` from the `Admin` -> `Import & Export` -> `System` -> `Export System`
a. Check `Exclude Content` to save export size (as Artifactory filestore will persist across upgrade)
b. Choose to save the export on the persisted Artifactory volume (`/var/opt/jfrog/artifactory/`)
c. Click `Export` (this can take some time)
3. Run the `helm upgrade` with the new version. Old PostgreSQL will be removed and new one deployed
a. You must pass an explicit "ready for upgrade" flag with `--set databaseUpgradeReady=yes`. Failing to provide this will block the upgrade! (A values-file sketch of this flag is shown after these steps)
4. Once ready, open Artifactory UI (you might need to re-enter a valid license). Skip all onboarding wizard steps
a. **NOTE:** Don't worry if you can't see the old config and files; it will all be restored by the system import in the next step
5. Perform `Import System` from the `Admin` -> `Import & Export` -> `System` -> `Import System`
a. Browse to where the export was saved on the persisted Artifactory volume (`/var/opt/jfrog/artifactory/<directory-you-set>`)
b. Click `Import` (this can take some time)
6. Restore access to Artifactory
a. Scale the cluster member nodes back to the original size
* Artifactory should now be ready to get back to normal operation
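
For reference, here is a minimal sketch of what an override values file for step 3 could look like; `databaseUpgradeReady` and `node.replicaCount` come from the steps above, and this is only an illustration, not a definitive upgrade file:

```yaml
# Sketch of an override values file for the `helm upgrade` in step 3;
# equivalent to passing --set databaseUpgradeReady=yes on the CLI.
databaseUpgradeReady: "yes"
# Keep the cluster scaled down to the primary only (step 1a) until the
# system import in step 5 completes, then scale back up (step 6a).
node:
  replicaCount: 0
```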


@ -1,7 +1,5 @@
-annotations:
-  category: Database
 apiVersion: v1
-appVersion: 11.9.0
+appVersion: 11.7.0
 description: Chart for PostgreSQL, an object-relational database management system
   (ORDBMS) with an emphasis on extensibility and on standards-compliance.
 home: https://www.postgresql.org/
@ -21,4 +19,4 @@ maintainers:
 name: postgresql
 sources:
 - https://github.com/bitnami/bitnami-docker-postgresql
-version: 9.3.4
+version: 8.7.3


@ -0,0 +1,576 @@
# PostgreSQL
[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance.
For HA, please see [this repo](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha)
## TL;DR;
```console
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/postgresql
```
## Introduction
This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
- Kubernetes 1.12+
- Helm 2.11+ or Helm 3.0-beta3+
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install my-release bitnami/postgresql
```
The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Parameters
The following table lists the configurable parameters of the PostgreSQL chart and their default values.
| Parameter | Description | Default |
|-----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------|
| `global.imageRegistry` | Global Docker Image registry | `nil` |
| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` |
| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` |
| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` |
| `global.postgresql.postgresqlPassword` | PostgreSQL admin password (overrides `postgresqlPassword`) | `nil` |
| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` |
| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `global.storageClass` | Global storage class for dynamic provisioning | `nil` |
| `image.registry` | PostgreSQL Image registry | `docker.io` |
| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` |
| `image.tag` | PostgreSQL Image tag | `{TAG_NAME}` |
| `image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `image.debug` | Specify if debug values should be set | `false` |
| `nameOverride` | String to partially override postgresql.fullname template with a string (will prepend the release name) | `nil` |
| `fullnameOverride` | String to fully override postgresql.fullname template with a string | `nil` |
| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/minideb` |
| `volumePermissions.image.tag` | Init container volume-permissions image tag | `buster` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` |
| `volumePermissions.securityContext.runAsUser` | User ID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | `0` |
| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` |
| `ldap.enabled` | Enable LDAP support | `false` |
| `ldap.existingSecret` | Name of existing secret to use for LDAP passwords | `nil` |
| `ldap.url` | LDAP URL beginning in the form `ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]]` | `nil` |
| `ldap.server` | IP address or name of the LDAP server. | `nil` |
| `ldap.port` | Port number on the LDAP server to connect to | `nil` |
| `ldap.scheme` | Set to `ldaps` to use LDAPS. | `nil` |
| `ldap.tls` | Set to `1` to use TLS encryption | `nil` |
| `ldap.prefix` | String to prepend to the user name when forming the DN to bind | `nil` |
| `ldap.suffix` | String to append to the user name when forming the DN to bind | `nil` |
| `ldap.search_attr`                            | Attribute to match against the user name in the search                                                                                                                      | `nil`                                                           |
| `ldap.search_filter` | The search filter to use when doing search+bind authentication | `nil` |
| `ldap.baseDN` | Root DN to begin the search for the user in | `nil` |
| `ldap.bindDN` | DN of user to bind to LDAP | `nil` |
| `ldap.bind_password` | Password for the user to bind to LDAP | `nil` |
| `replication.enabled` | Enable replication | `false` |
| `replication.user` | Replication user | `repl_user` |
| `replication.password` | Replication user password | `repl_password` |
| `replication.slaveReplicas`                   | Number of slave replicas                                                                                                                                                    | `1`                                                             |
| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` |
| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.slaveReplicas`. | `0` |
| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` |
| `existingSecret`                              | Name of existing secret to use for PostgreSQL passwords. The secret has to contain the keys `postgresql-postgres-password` which is the password for `postgresqlUsername` when it is different from `postgres`, `postgresql-password` which will override `postgresqlPassword`, `postgresql-replication-password` which will override `replication.password` and `postgresql-ldap-password` which will be used to authenticate on LDAP. The value is evaluated as a template. | `nil` |
| `postgresqlPostgresPassword` | PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`) | _random 10 character alphanumeric string_ |
| `postgresqlUsername` | PostgreSQL admin user | `postgres` |
| `postgresqlPassword` | PostgreSQL admin password | _random 10 character alphanumeric string_ |
| `postgresqlDatabase` | PostgreSQL database | `nil` |
| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) |
| `extraEnv` | Any extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `[]` |
| `extraEnvVarsCM` | Name of a Config Map containing extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `nil` |
| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` |
| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` |
| `postgresqlConfiguration` | Runtime Config Parameters | `nil` |
| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` |
| `pgHbaConfiguration` | Content of pg_hba.conf | `nil (do not create pg_hba.conf)` |
| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` |
| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` |
| `initdbScripts` | Dictionary of initdb scripts | `nil` |
| `initdbUser` | PostgreSQL user to execute the .sql and sql.gz scripts | `nil` |
| `initdbPassword` | Password for the user specified in `initdbUser` | `nil` |
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` |
| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). The value is evaluated as a template. | `nil` |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.port` | PostgreSQL port | `5432` |
| `service.nodePort` | Kubernetes Service nodePort | `nil` |
| `service.annotations` | Annotations for PostgreSQL service | `{}` (evaluated as a template) |
| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` |
| `service.loadBalancerSourceRanges`            | Addresses that are allowed when the service is LoadBalancer                                                                                                                 | `[]` (evaluated as a template)                                  |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `shmVolume.enabled` | Enable emptyDir volume for /dev/shm for master and slave(s) Pod(s) | `true` |
| `shmVolume.chmod.enabled` | Run at init chmod 777 of the /dev/shm (ignored if `volumePermissions.enabled` is `false`) | `true` |
| `persistence.enabled` | Enable persistence using PVC | `true` |
| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` |
| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` |
| `persistence.subPath` | Subdirectory of the volume to mount at | `""` |
| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` |
| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` |
| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` |
| `persistence.annotations` | Annotations for the PVC | `{}` |
| `master.nodeSelector` | Node labels for pod assignment (postgresql master) | `{}` |
| `master.affinity` | Affinity labels for pod assignment (postgresql master) | `{}` |
| `master.tolerations` | Toleration labels for pod assignment (postgresql master) | `[]` |
| `master.anotations` | Map of annotations to add to the statefulset (postgresql master) | `{}` |
| `master.labels` | Map of labels to add to the statefulset (postgresql master) | `{}` |
| `master.podAnnotations` | Map of annotations to add to the pods (postgresql master) | `{}` |
| `master.podLabels` | Map of labels to add to the pods (postgresql master) | `{}` |
| `master.priorityClassName` | Priority Class to use for each pod (postgresql master) | `nil` |
| `master.extraInitContainers` | Additional init containers to add to the pods (postgresql master) | `[]` |
| `master.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql master) | `[]` |
| `master.extraVolumes` | Additional volumes to add to the pods (postgresql master) | `[]` |
| `master.sidecars` | Add additional containers to the pod | `[]` |
| `master.service.type` | Allows using a different service type for Master | `nil` |
| `master.service.nodePort` | Allows using a different nodePort for Master | `nil` |
| `master.service.clusterIP` | Allows using a different clusterIP for Master | `nil` |
| `slave.nodeSelector` | Node labels for pod assignment (postgresql slave) | `{}` |
| `slave.affinity` | Affinity labels for pod assignment (postgresql slave) | `{}` |
| `slave.tolerations` | Toleration labels for pod assignment (postgresql slave) | `[]` |
| `slave.anotations` | Map of annotations to add to the statefulsets (postgresql slave) | `{}` |
| `slave.labels` | Map of labels to add to the statefulsets (postgresql slave) | `{}` |
| `slave.podAnnotations` | Map of annotations to add to the pods (postgresql slave) | `{}` |
| `slave.podLabels` | Map of labels to add to the pods (postgresql slave) | `{}` |
| `slave.priorityClassName` | Priority Class to use for each pod (postgresql slave) | `nil` |
| `slave.extraInitContainers` | Additional init containers to add to the pods (postgresql slave) | `[]` |
| `slave.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql slave) | `[]` |
| `slave.extraVolumes` | Additional volumes to add to the pods (postgresql slave) | `[]` |
| `slave.sidecars` | Add additional containers to the pod | `[]` |
| `slave.service.type` | Allows using a different service type for Slave | `nil` |
| `slave.service.nodePort` | Allows using a different nodePort for Slave | `nil` |
| `slave.service.clusterIP` | Allows using a different clusterIP for Slave | `nil` |
| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `serviceAccount.enabled` | Enable service account (Note: Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` |
| `serviceAccount.name`                         | Name of existing service account                                                                                                                                            | `nil`                                                           |
| `livenessProbe.enabled` | Would you like a livenessProbe to be enabled | `true` |
| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which ingress traffic could be allowed | `{}` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
| `livenessProbe.periodSeconds` | How often to perform the probe | 10 |
| `livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `readinessProbe.enabled`                      | Would you like a readinessProbe to be enabled                                                                                                                               | `true`                                                          |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 |
| `readinessProbe.periodSeconds` | How often to perform the probe | 10 |
| `readinessProbe.timeoutSeconds` | When the probe times out | 5 |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `metrics.enabled` | Start a prometheus exporter | `false` |
| `metrics.service.type` | Kubernetes Service type | `ClusterIP` |
| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` |
| `metrics.service.loadBalancerIP` | loadBalancerIP if redis metrics service type is `LoadBalancer` | `nil` |
| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.namespace` | Optional namespace in which to create ServiceMonitor | `nil` |
| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` |
| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as postgresql |
| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. | `[]` |
| `metrics.image.registry` | PostgreSQL Image registry | `docker.io` |
| `metrics.image.repository` | PostgreSQL Image name | `bitnami/postgres-exporter` |
| `metrics.image.tag` | PostgreSQL Image tag | `{TAG_NAME}` |
| `metrics.image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `metrics.customMetrics` | Additional custom metrics | `nil` |
| `metrics.securityContext.enabled` | Enable security context for metrics | `false` |
| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` |
| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 |
| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `metrics.readinessProbe.enabled`              | Would you like a readinessProbe to be enabled                                                                                                                               | `true`                                                          |
| `metrics.readinessProbe.initialDelaySeconds`  | Delay before readiness probe is initiated                                                                                                                                   | 5                                                               |
| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 |
| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 |
| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install my-release \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
```
The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install my-release -f values.yaml bitnami/postgresql
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Configuration and installation details
### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
### Production configuration and horizontal scaling
This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one.
- Enable replication:
```diff
- replication.enabled: false
+ replication.enabled: true
```
- Number of slaves replicas:
```diff
- replication.slaveReplicas: 1
+ replication.slaveReplicas: 2
```
- Set synchronous commit mode:
```diff
- replication.synchronousCommit: "off"
+ replication.synchronousCommit: "on"
```
- Number of replicas that will have synchronous replication:
```diff
- replication.numSynchronousReplicas: 0
+ replication.numSynchronousReplicas: 1
```
- Start a prometheus exporter:
```diff
- metrics.enabled: false
+ metrics.enabled: true
```
To horizontally scale this chart, you can adjust the number of nodes in your PostgreSQL deployment via the `replication.slaveReplicas` parameter. Also you can use the `values-production.yaml` file or modify the parameters shown above.
### Customizing Master and Slave services in a replicated configuration
At the top level, there is a service object which defines the services for both master and slave. For deeper customization, there are service objects for both the master and slave types individually. This allows you to override the values in the top level service object so that the master and slave can be of different service types and with different clusterIPs / nodePorts. Also in the case you want the master and slave to be of type nodePort, you will need to set the nodePorts to different values to prevent a collision. The values that are deeper in the master.service or slave.service objects will take precedence over the top level service object.
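For example, the following values sketch (using only keys from the parameters table above) keeps the top-level service as `ClusterIP` while exposing the master and slave as `NodePort` on non-colliding ports; the port numbers are arbitrary placeholders:

```yaml
service:
  type: ClusterIP      # top-level default for both roles
master:
  service:
    type: NodePort     # overrides the top-level service type for the master
    nodePort: 31001    # placeholder; must differ from the slave port
slave:
  service:
    type: NodePort
    nodePort: 31002    # placeholder; avoids a nodePort collision
```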
### Change PostgreSQL version
To modify the PostgreSQL version used in this chart you can specify a [valid image tag](https://hub.docker.com/r/bitnami/postgresql/tags/) using the `image.tag` parameter. For example, `image.tag=12.0.0`
### postgresql.conf / pg_hba.conf files as configMap
This Helm chart also supports customizing the whole configuration file.
Add your custom file to "files/postgresql.conf" in your working directory. This file will be mounted as configMap to the containers and it will be used for configuring the PostgreSQL server.
Alternatively, you can specify PostgreSQL configuration parameters using the `postgresqlConfiguration` parameter as a dict, using camelCase, e.g. `{"sharedBuffers": "500MB"}`.
In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the `configurationConfigMap` parameter. Note that this will override the two previous options.
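As a minimal values sketch of the inline option (`sharedBuffers` comes from the example above; `maxConnections` is a hypothetical second key following the same camelCase convention):

```yaml
postgresqlConfiguration:
  sharedBuffers: "500MB"   # maps to shared_buffers in postgresql.conf
  maxConnections: "200"    # hypothetical key, maps to max_connections
```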
### Allow settings to be loaded from files other than the default `postgresql.conf`
If you don't want to provide the whole PostgreSQL configuration file and only specify certain parameters, you can add your extended `.conf` files to "files/conf.d/" in your working directory.
Those files will be mounted as configMap to the containers adding/overwriting the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`.
Alternatively, you can also set an external ConfigMap with all the extra configuration files. This is done by setting the `extendedConfConfigMap` parameter. Note that this will override the previous option.
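A values sketch of the extended-configuration option, assuming `postgresqlExtendedConf` takes the same camelCase form as `postgresqlConfiguration` (the key shown is a hypothetical example):

```yaml
postgresqlExtendedConf:
  logMinDurationStatement: "1000"   # hypothetical key, appended via include_dir
```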
### Initialize a fresh instance
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
Alternatively, you can specify custom scripts using the `initdbScripts` parameter as dict.
In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter.
The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
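For instance, a minimal `initdbScripts` sketch; the script name and SQL below are hypothetical, and any `.sh`, `.sql` or `.sql.gz` entry follows the same pattern:

```yaml
initdbScripts:
  create_extra_db.sql: |
    -- hypothetical example: create an extra database on first boot
    CREATE DATABASE my_extra_database;
```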
### Sidecars
If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
```yaml
# For the PostgreSQL master
master:
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
# For the PostgreSQL replicas
slave:
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
### Metrics
The chart can optionally start a metrics exporter for [prometheus](https://prometheus.io). The metrics endpoint (port 9187) is not exposed, and it is expected that the metrics are collected from inside the k8s cluster using something similar to what is described in the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml).
The exporter allows creating custom metrics from additional SQL queries. See the Chart's `values.yaml` for an example and consult the [exporter's documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details.
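A rough sketch of a custom metric, following the postgres_exporter queries format from the linked documentation (the query and metric names are illustrative, not taken from the chart's values.yaml):

```yaml
metrics:
  enabled: true
  customMetrics:
    pg_database:
      query: "SELECT datname AS name, pg_database_size(datname) AS size_bytes FROM pg_database"
      metrics:
        - name:
            usage: "LABEL"
            description: "Name of the database"
        - size_bytes:
            usage: "GAUGE"
            description: "Size of the database in bytes"
```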
### Use of global variables
In more complex scenarios, we may have the following tree of dependencies
```
+--------------+
| |
+------------+ Chart 1 +-----------+
| | | |
| --------+------+ |
| | |
| | |
| | |
| | |
v v v
+-------+------+ +--------+------+ +--------+------+
| | | | | |
| PostgreSQL | | Sub-chart 1 | | Sub-chart 2 |
| | | | | |
+--------------+ +---------------+ +---------------+
```
The three charts below are dependencies of the parent chart, Chart 1. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, subcharts 1 and 2 need to know the PostgreSQL credentials, so one option could be to deploy Chart 1 with the following parameters:
```
postgresql.postgresqlPassword=testtest
subchart1.postgresql.postgresqlPassword=testtest
subchart2.postgresql.postgresqlPassword=testtest
postgresql.postgresqlDatabase=db1
subchart1.postgresql.postgresqlDatabase=db1
subchart2.postgresql.postgresqlDatabase=db1
```
If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows:
```
global.postgresql.postgresqlPassword=testtest
global.postgresql.postgresqlDatabase=db1
```
This way, the credentials will be available in all of the subcharts.
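Expressed as a values file instead of `--set` parameters, the same global credentials look like this:

```yaml
global:
  postgresql:
    postgresqlPassword: testtest
    postgresqlDatabase: db1
```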
## Persistence
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container.
Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
See the [Parameters](#parameters) section to configure the PVC or to disable persistence.
If the volume already contains data, synchronization to the standby nodes will fail for all commits; see [the code](https://github.com/bitnami/bitnami-docker-postgresql/blob/8725fe1d7d30ebe8d9a16e9175d05f7ad9260c93/9.6/debian-9/rootfs/libpostgresql.sh#L518-L556) for details. If you need to keep that data, convert it to SQL and import it after `helm install` has finished.
## NetworkPolicy
To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`.
For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace:
```console
$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
```
With NetworkPolicy enabled, traffic will be limited to just port 5432.
For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL.
This label will be displayed in the output of a successful install.
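Putting both settings together, a minimal values sketch for a locked-down policy (keys as listed in the parameters table):

```yaml
networkPolicy:
  enabled: true          # requires a CNI plugin implementing NetworkPolicy
  allowExternal: false   # only pods with the generated client label may connect
```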
## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image
- The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, this would be ignored. The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA. All the remaining environment variables are specific to the Bitnami PostgreSQL image.
- The Bitnami PostgreSQL image is non-root by default. This requires that you run the pod with `securityContext` and updates the permissions of the volume with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift.
- For OpenShift, one may either define the `runAsUser` and `fsGroup` accordingly, or try this more dynamic option: `volumePermissions.securityContext.runAsUser="auto",securityContext.enabled=false,shmVolume.chmod.enabled=false` (see the values sketch below)
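The same dynamic OpenShift option, written as a values sketch:

```yaml
volumePermissions:
  securityContext:
    runAsUser: "auto"   # let the init container adapt to the project's assigned UID
securityContext:
  enabled: false
shmVolume:
  chmod:
    enabled: false
```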
### Deploy chart using Docker Official PostgreSQL Image
From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image.
Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. Basically, the PostgreSQL data dir cannot be the mount point directly; it has to be a subdirectory.
```
image.repository=postgres
image.tag=10.6
postgresqlDataDir=/data/pgdata
persistence.mountPath=/data/
```
## Upgrade
It's necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid randomly generated passwords. Remember to specify the existing values of the `postgresqlPassword` and `replication.password` parameters when upgrading the chart:
```bash
$ helm upgrade my-release stable/postgresql \
--set postgresqlPassword=[POSTGRESQL_PASSWORD] \
--set replication.password=[REPLICATION_PASSWORD]
```
> Note: you need to substitute the placeholders _[POSTGRESQL_PASSWORD]_, and _[REPLICATION_PASSWORD]_ with the values obtained from instructions in the installation notes.
## 8.0.0
Prefixes the port names with their protocols to comply with Istio conventions.
If you depend on the port names in your setup, make sure to update them to reflect this change.
## 7.1.0
Adds support for LDAP configuration.
## 7.0.0
Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment). Also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly which has since been fixed to match the spec.
In https://github.com/helm/charts/pull/17281 the `apiVersion` of the statefulset resources was updated to `apps/v1` in line with the API deprecations, resulting in a compatibility break.
This major version bump signifies this change.
## 6.5.7
In this version, the chart uses PostgreSQL with the PostGIS extension included. The version used with PostgreSQL 10, 11 and 12 is PostGIS 2.5. It has been compiled with the following dependencies:
- protobuf
- protobuf-c
- json-c
- geos
- proj
## 5.0.0
In this version, the **chart is using PostgreSQL 11 instead of PostgreSQL 10**. You can find the main difference and notable changes in the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/).
For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. You may see errors like the following in the logs:
```console
Welcome to the Bitnami postgresql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
Send us your feedback at containers@bitnami.com
INFO ==> ** Starting PostgreSQL setup **
INFO ==> Validating settings in POSTGRESQL_* env vars..
INFO ==> Initializing PostgreSQL database...
INFO ==> postgresql.conf file not detected. Generating it...
INFO ==> pg_hba.conf file not detected. Generating it...
INFO ==> Deploying PostgreSQL with persisted data...
INFO ==> Configuring replication parameters
INFO ==> Loading custom scripts...
INFO ==> Enabling remote connections
INFO ==> Stopping PostgreSQL...
INFO ==> ** PostgreSQL setup finished! **
INFO ==> ** Starting PostgreSQL **
[1] FATAL: database files are incompatible with server
[1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3.
```
In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) from the official documentation. Basically, create a database dump in the old chart, move and restore it in the new one.
### 4.0.0
This chart will use by default the Bitnami PostgreSQL container starting from version `10.7.0-r68`. This version moves the initialization logic from node.js to bash. This new version of the chart requires setting the `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users from previous versions of the chart are advised to upgrade immediately.
IMPORTANT: If you do not want to upgrade the chart version then make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error
```
The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development
```
### 3.0.0
This release makes it possible to specify different nodeSelector, affinity and tolerations for master and slave pods.
It also fixes an issue with the `postgresql.master.fullname` helper template not obeying `fullnameOverride`.
#### Breaking changes
- `affinity` has been renamed to `master.affinity` and `slave.affinity` (see the values sketch after this list).
- `tolerations` has been renamed to `master.tolerations` and `slave.tolerations`.
- `nodeSelector` has been renamed to `master.nodeSelector` and `slave.nodeSelector`.
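Illustrated as a values sketch (the node label is a hypothetical example; before 3.0.0 the same three keys lived at the top level):

```yaml
master:
  nodeSelector:
    disktype: ssd   # hypothetical label
  affinity: {}
  tolerations: []
slave:
  nodeSelector:
    disktype: ssd   # hypothetical label
  affinity: {}
  tolerations: []
```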
### 2.0.0
In order to upgrade from the `0.X.X` branch to `1.X.X`, you should follow the below steps:
- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing postgresql chart. You can find the instructions to obtain the password in the NOTES.txt, the service name can be obtained by running
```console
$ kubectl get svc
```
- Install (not upgrade) the new version
```console
$ helm repo update
$ helm install my-release bitnami/postgresql
```
- Connect to the new pod (you can obtain the name by running `kubectl get pods`):
```console
$ kubectl exec -it NAME bash
```
- Once logged in, create a dump file from the previous database using `pg_dump`, for that we should connect to the previous postgresql chart:
```console
$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql
```
After running the above command you should be prompted for a password; this is the password of the previous chart (`OLD_PASSWORD`).
This operation could take some time depending on the database size.
- Once you have the backup file, you can restore it with a command like the one below:
```console
$ psql -U postgres DATABASE_NAME < /tmp/backup.sql
```
In this case, you are accessing the local PostgreSQL, so the password should be the new one (you can find it in NOTES.txt).
If you want to restore the database and the database schema does not exist, it is necessary to first follow the steps described below.
```console
$ psql -U postgres
postgres=# drop database DATABASE_NAME;
postgres=# create database DATABASE_NAME;
postgres=# create user USER_NAME;
postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD';
postgres=# grant all privileges on database DATABASE_NAME to USER_NAME;
postgres=# alter database DATABASE_NAME owner to USER_NAME;
```

Some files were not shown because too many files have changed in this diff.