Merge pull request #146 from CAcquaviva/kong-branch

Kong branch
pull/175/head
alex-isv 2021-09-23 14:22:42 -06:00 committed by GitHub
commit f151ace118
89 changed files with 11050 additions and 0 deletions

BIN
assets/kong/kong-2.3.1.tgz Normal file

Binary file not shown.


@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
OWNERS

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
annotations:
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: The Cloud-Native Ingress and API-management
  catalog.cattle.io/release-name: kong
apiVersion: v1
appVersion: "2.5"
description: The Cloud-Native Ingress and API-management
home: https://konghq.com/
icon: https://s3.amazonaws.com/downloads.kong/universe/assets/icon-kong-inc-large.png
kubeVersion: 1.18 - 1.21
maintainers:
- email: harry@konghq.com
  name: hbagdi
- email: traines@konghq.com
  name: rainest
name: kong
version: 2.3.1


@@ -0,0 +1,109 @@
# Frequently Asked Questions (FAQs)
#### Kong fails to start after `helm upgrade` when Postgres is used. What do I do?
You may be running into this issue: https://github.com/helm/charts/issues/12575.
This issue is caused by: https://github.com/helm/helm/issues/3053.
What happens is that the Postgres database keeps the old password while the
new Secret contains a different, randomly generated password, which Kong uses,
so password-based authentication fails.
The solution is to specify a password for the `postgresql` chart. This ensures
that the password is not generated randomly but is set to the same
user-provided value on each upgrade.
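As an illustration only, a minimal values.yaml sketch for pinning the password; the exact key depends on the bundled Bitnami Postgres sub-chart version (older versions use `postgresqlPassword`, newer ones use `auth.password`), and the username and password below are placeholders:
```yaml
postgresql:
  enabled: true
  postgresqlUsername: kong
  postgresqlPassword: my-fixed-password   # newer sub-chart versions use auth.password instead
env:
  database: postgres
```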
#### Kong fails to start on a fresh installation with Postgres. What do I do?
Please make sure that there are no `PersistentVolumes` present from a previous
release. If there are, it can lead to data or passwords being out of sync
and result in connection issues.
A simple way to find out is to use the following command:
```
kubectl get pv -n <your-namespace>
```
And then based on the `AGE` column, determine if you have an old volume.
If you do, then please delete the release, delete the volume, and then
do a fresh installation. PersistentVolumes can remain in the cluster even if
you delete the namespace itself (the namespace in which they were present).
#### Upgrading a release fails due to missing ServiceAccount
When upgrading a release, some configuration changes result in this error:
```
Error creating: pods "releasename-kong-pre-upgrade-migrations-" is forbidden: error looking up service account releasename-kong: serviceaccount "releasename-kong" not found
```
Enabling the ingress controller or PodSecurityPolicy requires that the Kong
chart also create a ServiceAccount. When upgrading from a configuration that
previously had neither of these features enabled, the pre-upgrade-migrations
Job attempts to use this ServiceAccount before it is created. It is [not
possible to easily handle this case automatically](https://github.com/Kong/charts/pull/31).
Users encountering this issue should temporarily modify their
[pre-upgrade-migrations template](https://github.com/Kong/charts/blob/main/charts/kong/templates/migrations-pre-upgrade.yaml),
adding the following at the bottom:
```
{{ if or .Values.podSecurityPolicy.enabled (and .Values.ingressController.enabled .Values.ingressController.serviceAccount.create) -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "kong.serviceAccountName" . }}
  namespace: {{ template "kong.namespace" . }}
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
  labels:
    {{- include "kong.metaLabels" . | nindent 4 }}
{{- end -}}
```
Upgrading with this in place will create a temporary service account before
creating the actual service account. After this initial upgrade, users must
revert to the original pre-upgrade migrations template, as leaving the
temporary ServiceAccount template in place will [cause permissions issues on
subsequent upgrades](https://github.com/Kong/charts/issues/30).
#### Running "helm upgrade" fails because of old init-migrations Job
When running `helm upgrade`, the upgrade fails and Helm reports an error
similar to the following:
```
Error: UPGRADE FAILED: cannot patch "RELEASE-NAME-kong-init-migrations" with
kind Job: Job.batch "RELEASE-NAME-kong-init-migrations" is invalid ... field
is immutable
```
This occurs if a `RELEASE-NAME-kong-init-migrations` Job is left over from a
previous `helm install` or `helm upgrade`. Deleting it with
`kubectl delete job RELEASE-NAME-kong-init-migrations` will allow the upgrade
to proceed. Chart versions greater than 1.5.0 delete the job automatically.
#### DB-backed instances do not start when deployed within a service mesh
Service meshes such as Istio and Kuma, when deployed in a mode that injects
a sidecar into Kong, do not make the mesh available to `InitContainer`s,
because the sidecar starts _after_ all `InitContainer`s finish.
By default, this chart uses init containers to ensure that the database is
online and has migrations applied before starting Kong. This provides for a
smoother startup, but isn't compatible with service mesh sidecar requirements
if Kong is to access the database through the mesh.
Setting `waitImage.enabled=false` in values.yaml disables these init containers
and resolves this issue. However, during the initial install, your Kong
Deployment will enter the CrashLoopBackOff state while waiting for migrations
to complete. It will eventually exit this state and enter Running as long as
there are no issues finishing migrations, usually within 2 minutes.
If your Deployment is stuck in CrashLoopBackOff for longer, check the init
migrations Job logs to see if it is unable to connect to the database or unable
to complete migrations for some other reason. Resolve any issues you find,
delete the release, and attempt to install again.
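A minimal values.yaml sketch for the mesh scenario described above:
```yaml
# Disable the database wait init containers so the sidecar-injected mesh
# does not block Kong's startup (expect a temporary CrashLoopBackOff).
waitImage:
  enabled: false
```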


@@ -0,0 +1,901 @@
## Kong for Kubernetes
[Kong for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller)
is an open-source Ingress Controller for Kubernetes that offers
API management capabilities with a plugin architecture.
This chart bootstraps all the components needed to run Kong on a
[Kubernetes](http://kubernetes.io) cluster using the
[Helm](https://helm.sh) package manager.
## TL;DR;
```bash
$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong/kong --generate-name
```
## Table of contents
- [Prerequisites](#prerequisites)
- [Install](#install)
- [Uninstall](#uninstall)
- [FAQs](#faqs)
- [Kong Enterprise](#kong-enterprise)
- [Deployment Options](#deployment-options)
- [Database](#database)
- [DB-less deployment](#db-less-deployment)
- [Using the Postgres sub-chart](#using-the-postgres-sub-chart)
- [Runtime package](#runtime-package)
- [Configuration method](#configuration-method)
- [Separate admin and proxy nodes](#separate-admin-and-proxy-nodes)
- [Standalone controller nodes](#standalone-controller-nodes)
- [Hybrid mode](#hybrid-mode)
- [Certificates](#certificates)
- [Control plane node configuration](#control-plane-node-configuration)
- [Data plane node configuration](#data-plane-node-configuration)
- [CRD management](#crd-management)
- [InitContainers](#initcontainers)
- [HostAliases](#hostaliases)
- [Sidecar Containers](#sidecar-containers)
- [User Defined Volumes](#user-defined-volumes)
- [User Defined Volume Mounts](#user-defined-volume-mounts)
- [Using a DaemonSet](#using-a-daemonset)
- [Example configurations](#example-configurations)
- [Configuration](#configuration)
- [Kong parameters](#kong-parameters)
- [Kong Service Parameters](#kong-service-parameters)
- [Stream listens](#stream-listens)
- [Ingress Controller Parameters](#ingress-controller-parameters)
- [General Parameters](#general-parameters)
- [The `env` section](#the-env-section)
- [The `extraLabels` section](#the-extralabels-section)
- [Kong Enterprise Parameters](#kong-enterprise-parameters)
- [Overview](#overview)
- [Prerequisites](#prerequisites-1)
- [Kong Enterprise License](#kong-enterprise-license)
- [Kong Enterprise Docker registry access](#kong-enterprise-docker-registry-access)
- [Service location hints](#service-location-hints)
- [RBAC](#rbac)
- [Sessions](#sessions)
- [Email/SMTP](#emailsmtp)
- [Prometheus Operator integration](#prometheus-operator-integration)
- [Changelog](https://github.com/Kong/charts/blob/main/charts/kong/CHANGELOG.md)
- [Upgrading](https://github.com/Kong/charts/blob/main/charts/kong/UPGRADE.md)
- [Seeking help](#seeking-help)
## Prerequisites
- Kubernetes 1.12+
- PV provisioner support in the underlying infrastructure if persistence
is needed for Kong datastore.
## Install
To install Kong:
```bash
$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong/kong --generate-name --set ingressController.installCRDs=false
```
## Uninstall
To uninstall/delete a Helm release `my-release`:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the
chart and deletes the release.
> **Tip**: List all releases using `helm list`
## FAQs
Please read the
[FAQs](https://github.com/Kong/charts/blob/main/charts/kong/FAQs.md)
document.
## Kong Enterprise
If using Kong Enterprise, several additional steps are necessary before
installing the chart:
- Set `enterprise.enabled` to `true` in the `values.yaml` file.
- Update values.yaml to use a Kong Enterprise image.
- Satisfy the two prerequisites below for
[Enterprise License](#kong-enterprise-license) and
[Enterprise Docker Registry](#kong-enterprise-docker-registry-access).
- (Optional) [set a `password` environment variable](#rbac) to create the
initial super-admin. Though not required, this is recommended for users that
wish to use RBAC, as it cannot be done after initial setup.
Once you have these set, it is possible to install Kong Enterprise.
Please read through
[Kong Enterprise considerations](#kong-enterprise-parameters)
to understand all settings that are enterprise specific.
## Deployment Options
Kong is a highly configurable piece of software that can be deployed
in a number of different ways, depending on your use-case.
All combinations of various runtimes, databases and configuration methods are
supported by this Helm chart.
The recommended approach is to use the Ingress Controller based configuration
along-with DB-less mode.
The following sections detail the various high-level architecture options available:
### Database
Kong can run with or without a database (DB-less). By default, this chart
installs Kong without a database.
You can set the database via the `env.database` parameter. For more details, please
read the [env](#the-env-section) section.
#### DB-less deployment
When deploying Kong in DB-less mode (`env.database: "off"`)
and without the Ingress Controller (`ingressController.enabled: false`),
you have to provide a declarative configuration for Kong to run.
The configuration can be provided using an existing ConfigMap
(`dblessConfig.configMap`), or the whole configuration can be put into the
`values.yaml` file for deployment itself, under the `dblessConfig.config`
parameter. See the example configuration in the default values.yaml
for more details.
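As an illustration only (the service and route names below are hypothetical), an inline declarative configuration can be embedded like this:
```yaml
dblessConfig:
  config: |
    _format_version: "2.1"
    services:
    - name: example-service          # hypothetical upstream service
      url: http://example.internal:80
      routes:
      - name: example-route
        paths:
        - /example
```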
#### Using the Postgres sub-chart
The chart can optionally spawn a Postgres instance using [Bitnami's Postgres
chart](https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md)
as a sub-chart. Set `postgresql.enabled=true` to enable the sub-chart. Enabling
this will auto-populate Postgres connection settings in Kong's environment.
The Postgres sub-chart is best used to quickly provision temporary environments
without installing and configuring your database separately. For longer-lived
environments, we recommend you manage your database outside the Kong Helm
release.
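A minimal sketch of enabling the sub-chart as described above (verify the exact keys against the default values.yaml):
```yaml
postgresql:
  enabled: true     # spawn a Postgres instance managed by the sub-chart
env:
  database: postgres  # tell Kong to use Postgres instead of DB-less mode
```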
### Runtime package
There are three different packages of Kong that are available:
- **Kong Gateway**\
This is the [Open-Source](https://github.com/kong/kong) offering. It is a
full-blown API Gateway and Ingress solution with a wide array of functionality.
When Kong Gateway is combined with the Ingress based configuration method,
you get Kong for Kubernetes. This is the default deployment for this Helm
Chart.
- **Kong Enterprise K8S**\
This package builds on top of the Open-Source Gateway and bundles in all
the Enterprise-only plugins as well.
When Kong Enterprise K8S is combined with the Ingress based
configuration method, you get Kong for Kubernetes Enterprise.
This package also comes with 24x7 support from Kong Inc.
- **Kong Enterprise**\
This is the full-blown Enterprise package, which bundles all of the
Enterprise functionality, such as Kong Manager, the Developer Portal, Vitals, etc.
This package can't be run in DB-less mode.
The package to run can be changed via `image.repository` and `image.tag`
parameters. If you would like to run the Enterprise package, please read
the [Kong Enterprise Parameters](#kong-enterprise-parameters) section.
### Configuration method
Kong can be configured via two methods:
- **Ingress and CRDs**\
The configuration for Kong is done via `kubectl` and Kubernetes-native APIs.
This is also known as Kong Ingress Controller or Kong for Kubernetes and is
the default deployment pattern for this Helm Chart. The configuration
for Kong is managed via Ingress and a few
[Custom Resources](https://github.com/Kong/kubernetes-ingress-controller/blob/main/docs/concepts/custom-resources.md).
For more details, please read the
[documentation](https://github.com/Kong/kubernetes-ingress-controller/tree/main/docs)
on Kong Ingress Controller.
To configure and fine-tune the controller, please read the
[Ingress Controller Parameters](#ingress-controller-parameters) section.
- **Admin API**\
This is the traditional method of running and configuring Kong.
By default, the Admin API of Kong is not exposed as a Service. This
can be controlled via `admin.enabled` and `env.admin_listen` parameters.
### Separate admin and proxy nodes
*Note: although this section is titled "Separate admin and proxy nodes", this
split release technique is generally applicable to any deployment with
different types of Kong nodes. Separating Admin API and proxy nodes is one of
the more common use cases for splitting across multiple releases, but you can
also split releases for hybrid mode CP/DP nodes, split proxy and Developer
Portal nodes, etc.*
Users may wish to split their Kong deployment into multiple instances that only
run some of Kong's services (i.e. you run `helm install` once for every
instance type you wish to create).
To disable Kong services on an instance, you should set `SVC.enabled`,
`SVC.http.enabled`, `SVC.tls.enabled`, and `SVC.ingress.enabled` all to
`false`, where `SVC` is `proxy`, `admin`, `manager`, `portal`, or `portalapi`.
The standard chart upgrade automation process assumes that there is only a
single Kong release in the Kong cluster, and runs both `migrations up` and
`migrations finish` jobs. To handle clusters split across multiple releases,
you should:
1. Upgrade one of the releases with `helm upgrade RELEASENAME -f values.yaml
--set migrations.preUpgrade=true --set migrations.postUpgrade=false`.
2. Upgrade all but one of the remaining releases with `helm upgrade RELEASENAME
-f values.yaml --set migrations.preUpgrade=false --set
migrations.postUpgrade=false`.
3. Upgrade the final release with `helm upgrade RELEASENAME -f values.yaml
--set migrations.preUpgrade=false --set migrations.postUpgrade=true`.
This ensures that all instances are using the new Kong package before running
`kong migrations finish`.
Users should note that Helm supports supplying multiple values.yaml files,
allowing you to separate shared configuration from instance-specific
configuration. For example, you may have a shared values.yaml that contains
environment variables and other common settings, and then several
instance-specific values.yamls that contain service configuration only. You can
then create releases with:
```bash
helm install proxy-only -f shared-values.yaml -f only-proxy.yaml kong/kong
helm install admin-only -f shared-values.yaml -f only-admin.yaml kong/kong
```
### Standalone controller nodes
The chart can deploy releases that contain the controller only, with no Kong
container, by setting `deployment.kong.enabled: false` in values.yaml. There
are several controller settings that must be populated manually in this
scenario and several settings that are useful when using multiple controllers:
* `ingressController.env.kong_admin_url` must be set to the Kong Admin API URL.
If the Admin API is exposed by a service in the cluster, this should look
something like `https://my-release-kong-admin.kong-namespace.svc:8444`
* `ingressController.env.publish_service` must be set to the Kong proxy
service, e.g. `namespace/my-release-kong-proxy`.
* `ingressController.ingressClass` should be set to a different value for each
instance of the controller.
* `ingressController.env.admin_filter_tag` should be set to a different value
for each instance of the controller.
* If using Kong Enterprise, `ingressController.env.kong_workspace` can
optionally create configuration in a workspace other than `default`.
Standalone controllers require a database-backed Kong instance, as DB-less mode
requires that a single controller generate a complete Kong configuration.
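A hypothetical controller-only values.yaml might look like the sketch below; the release names, namespace, ingress class, and tag value are illustrative, not chart defaults:
```yaml
deployment:
  kong:
    enabled: false             # controller only, no Kong container
ingressController:
  enabled: true
  ingressClass: kong-internal  # unique per controller instance
  env:
    kong_admin_url: https://my-release-kong-admin.kong-namespace.svc:8444
    publish_service: kong-namespace/my-release-kong-proxy
    admin_filter_tag: managed-by-internal-controller
```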
### Hybrid mode
Kong supports [hybrid mode
deployments](https://docs.konghq.com/2.0.x/hybrid-mode/) as of Kong 2.0.0 and
[Kong Enterprise 2.1.0](https://docs.konghq.com/enterprise/2.1.x/deployment/hybrid-mode/).
These deployments split Kong nodes into control plane (CP) nodes, which provide
the admin API and interact with the database, and data plane (DP) nodes, which
provide the proxy and receive configuration from control plane nodes.
You can deploy hybrid mode Kong clusters by [creating separate releases for each node
type](#separate-admin-and-proxy-nodes), i.e. use separate control and data
plane values.yamls that are then installed separately. The [control
plane](#control-plane-node-configuration) and [data
plane](#data-plane-node-configuration) configuration sections below cover the
values.yaml specifics for each.
Cluster certificates are not generated automatically. You must [create a
certificate and key pair](#certificates) for intra-cluster communication.
#### Certificates
Hybrid mode uses TLS to secure the CP/DP node communication channel, and
requires certificates for it. You can generate these either using `kong hybrid
gen_cert` on a local Kong installation or using OpenSSL:
```bash
openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout /tmp/cluster.key -out /tmp/cluster.crt \
-days 1095 -subj "/CN=kong_clustering"
```
You must then place these certificates in a Secret:
```bash
kubectl create secret tls kong-cluster-cert --cert=/tmp/cluster.crt --key=/tmp/cluster.key
```
#### Control plane node configuration
You must configure the control plane nodes to mount the certificate secret on
the container filesystem and serve it from the cluster listen. In values.yaml:
```yaml
secretVolumes:
- kong-cluster-cert
```
```yaml
env:
  role: control_plane
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
```
Furthermore, you must enable the cluster listen and Kubernetes Service, and
should typically disable the proxy:
```yaml
cluster:
  enabled: true
  tls:
    enabled: true
    servicePort: 8005
    containerPort: 8005

proxy:
  enabled: false
```
Enterprise users with Vitals enabled must also enable the cluster telemetry
service:
```yaml
clustertelemetry:
  enabled: true
  tls:
    enabled: true
    servicePort: 8006
    containerPort: 8006
```
If using the ingress controller, you must also specify the DP proxy service as
its publish target to keep Ingress status information up to date:
```yaml
ingressController:
  env:
    publish_service: hybrid/example-release-data-kong-proxy
```
Replace `hybrid` with your DP nodes' namespace and `example-release-data` with
the name of the DP release.
#### Data plane node configuration
Data plane configuration also requires the certificate and `role`
configuration, and the database should always be set to `off`. You must also
trust the cluster certificate and indicate what hostname/port Kong should use
to find control plane nodes.
Though not strictly required, you should disable the admin service (it will not
work on DP nodes anyway, but should be disabled to avoid creating an invalid
Service resource).
```yaml
secretVolumes:
- kong-cluster-cert
```
```yaml
admin:
  enabled: false
```
```yaml
env:
  role: data_plane
  database: "off"
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_control_plane: control-plane-release-name-kong-cluster.hybrid.svc.cluster.local:8005
  cluster_telemetry_endpoint: control-plane-release-name-kong-clustertelemetry.hybrid.svc.cluster.local:8006 # Enterprise-only
```
Note that the `cluster_control_plane` value will differ depending on your
environment. `control-plane-release-name` will change to your CP release name,
`hybrid` will change to whatever namespace it resides in. See [Kubernetes'
documentation on Service
DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
for more detail.
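Assuming separate control plane and data plane values files as described above, the two releases could then be installed along these lines (release names, namespace, and file names are illustrative and match the examples above):
```bash
helm install control-plane-release-name -n hybrid -f cp-values.yaml kong/kong
helm install example-release-data -n hybrid -f dp-values.yaml kong/kong
```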
### CRD management
Earlier versions of this chart (<2.0) created CRDs associated with the ingress
controller as part of the release. This raised two challenges:
- Multiple releases of the chart would conflict with one another, as each would
attempt to create its own set of CRDs.
- Because deleting a CRD also deletes any custom resources associated with it,
deleting a release of the chart could destroy user configuration without
providing any means to restore it.
Helm 3 introduced a simplified CRD management method that was safer, but
requires some manual work when a chart adds or modifies CRDs: CRDs are created
on install if they are not already present, but are not modified during
release upgrades or deletes. Our chart release upgrade instructions call out
when manual action is necessary to update CRDs. This CRD handling strategy is
recommended for most users.
Some users may wish to manage their CRDs automatically. If you manage your CRDs
this way, we _strongly_ recommend that you back up all associated custom
resources in the event you need to recover from unintended CRD deletion.
While Helm 3's CRD management system is recommended, there is no simple means
of migrating away from release-managed CRDs if you previously installed your
release with the old system (you would need to back up your existing custom
resources, delete your release, reinstall, and restore your custom resources
after). As such, the chart detects if you currently use release-managed CRDs
and continues to use the old CRD templates when using chart version 2.0+. If
you do (your resources will have a `meta.helm.sh/release-name` annotation), we
_strongly_ recommend that you back up all associated custom resources in the
event you need to recover from unintended CRD deletion.
### InitContainers
The chart can deploy initContainers along with Kong. This can be very useful when you need to set up additional custom initialization. The `deployment.initContainers` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.initContainers` array in the Kong deployment resource.
### HostAliases
The chart can inject host aliases into containers. This can be very useful when you need to resolve an additional domain name that can't
be looked up directly from the DNS server. The `deployment.hostAliases` field in values.yaml takes an array of objects that is set as the `spec.template.spec.hostAliases` field in the Kong deployment resource.
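For instance, a hedged sketch (the IP and hostname below are placeholders):
```yaml
deployment:
  hostAliases:
  - ip: "10.0.0.10"
    hostnames:
    - "api.internal.example"
```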
### Sidecar Containers
The chart can deploy additional containers along with the Kong and Ingress
Controller containers, sometimes referred to as "sidecar containers". This can
be useful to include network proxies or logging services along with Kong. The
`deployment.sidecarContainers` field in values.yaml takes an array of objects
that get appended as-is to the existing `spec.template.spec.containers` array
in the Kong deployment resource.
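A sketch of a hypothetical logging sidecar; the container name and image are illustrative, and any valid container spec works:
```yaml
deployment:
  sidecarContainers:
  - name: log-forwarder            # appended verbatim to spec.template.spec.containers
    image: fluent/fluent-bit:1.8
    resources:
      limits:
        memory: 128Mi
```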
### User Defined Volumes
The chart can deploy additional volumes along with Kong. This can be useful to include additional volumes required during the initialization phase (InitContainer). The `deployment.userDefinedVolumes` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.volumes` array in the Kong deployment resource.
### User Defined Volume Mounts
The chart can mount volumes defined via `deployment.userDefinedVolumes` or elsewhere. The `deployment.userDefinedVolumeMounts` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.containers[].volumeMounts` and `spec.template.spec.initContainers[].volumeMounts` arrays in the Kong deployment resource.
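A combined sketch of both settings, using a hypothetical scratch volume:
```yaml
deployment:
  userDefinedVolumes:
  - name: scratch        # appended to spec.template.spec.volumes
    emptyDir: {}
  userDefinedVolumeMounts:
  - name: scratch        # mounted into the Kong and init containers
    mountPath: /scratch
```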
### Using a DaemonSet
Setting `deployment.daemonset: true` deploys Kong using a [DaemonSet
controller](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
instead of a Deployment controller. This runs a Kong Pod on every kubelet in
the Kubernetes cluster.
### Using dnsPolicy and dnsConfig
The chart can inject custom DNS configuration into containers. This can be useful when you have an EKS cluster with [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) configured and attach AWS security groups directly to pods using the [security groups for pods feature](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html).
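A sketch of the NodeLocal DNSCache scenario (the nameserver address and search domains are illustrative):
```yaml
dnsPolicy: "None"
dnsConfig:
  nameservers:
  - 169.254.20.10        # NodeLocal DNSCache link-local address
  searches:
  - default.svc.cluster.local
  - svc.cluster.local
  - cluster.local
  options:
  - name: ndots
    value: "5"
```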
### Example configurations
Several example values.yaml are available in the
[example-values](https://github.com/Kong/charts/blob/main/charts/kong/example-values/)
directory.
## Configuration
### Kong parameters
| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ------------------- |
| image.repository | Kong image | `kong` |
| image.tag | Kong image version | `2.5` |
| image.pullPolicy | Image pull policy | `IfNotPresent` |
| image.pullSecrets | Image pull secrets | `null` |
| replicaCount | Kong instance count. It has no effect when `autoscaling.enabled` is set to true | `1` |
| plugins | Install custom plugins into Kong via ConfigMaps or Secrets | `{}` |
| env | Additional [Kong configurations](https://getkong.org/docs/latest/configuration/) | |
| migrations.preUpgrade | Run "kong migrations up" jobs | `true` |
| migrations.postUpgrade | Run "kong migrations finish" jobs | `true` |
| migrations.annotations             | Annotations for migration job pods                                                     | `{"sidecar.istio.io/inject": "false"}` |
| migrations.jobAnnotations | Additional annotations for migration jobs | `{}` |
| waitImage.enabled | Spawn init containers that wait for the database before starting Kong | `true` |
| waitImage.repository | Image used to wait for database to become ready. Uses the Kong image if none set | |
| waitImage.tag | Tag for image used to wait for database to become ready | |
| waitImage.pullPolicy | Wait image pull policy | `IfNotPresent` |
| postgresql.enabled | Spin up a new postgres instance for Kong | `false` |
| dblessConfig.configMap | Name of an existing ConfigMap containing the `kong.yml` file. This must have the key `kong.yml`.| `` |
| dblessConfig.config | Yaml configuration file for the dbless (declarative) configuration of Kong | see in `values.yaml` |
#### Kong Service Parameters
The various `SVC.*` parameters below are common to the various Kong services
(the admin API, proxy, Kong Manager, the Developer Portal, and the Developer
Portal API) and define their listener configuration, K8S Service properties,
and K8S Ingress properties. Defaults are listed only if consistent across the
individual services: see values.yaml for their individual default values.
`SVC` below can be substituted with each of:
* `proxy`
* `udpProxy`
* `admin`
* `manager`
* `portal`
* `portalapi`
* `cluster`
* `clustertelemetry`
* `status`
`status` is intended for internal use within the cluster. Unlike other
services it cannot be exposed externally, and cannot create a Kubernetes
service or ingress. It supports the settings under `SVC.http` and `SVC.tls`
only.
`cluster` is used on hybrid mode control plane nodes. It does not support the
`SVC.http.*` settings (cluster communications must be TLS-only) or the
`SVC.ingress.*` settings (cluster communication requires TLS client
authentication, which cannot pass through an ingress proxy). `clustertelemetry`
is similar, and used when Vitals is enabled on Kong Enterprise control plane
nodes.
`udpProxy` is used for UDP stream listens (Kubernetes does not yet support
mixed TCP/UDP LoadBalancer Services). It _does not_ support the `http`, `tls`,
or `ingress` sections, as it is used only for stream listens.
| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ------------------- |
| SVC.enabled | Create Service resource for SVC (admin, proxy, manager, etc.) | |
| SVC.http.enabled | Enables http on the service | |
| SVC.http.servicePort | Service port to use for http | |
| SVC.http.containerPort | Container port to use for http | |
| SVC.http.nodePort | Node port to use for http | |
| SVC.http.hostPort | Host port to use for http | |
| SVC.http.parameters | Array of additional listen parameters | `[]` |
| SVC.tls.enabled | Enables TLS on the service | |
| SVC.tls.containerPort | Container port to use for TLS | |
| SVC.tls.servicePort | Service port to use for TLS | |
| SVC.tls.nodePort | Node port to use for TLS | |
| SVC.tls.hostPort | Host port to use for TLS | |
| SVC.tls.overrideServiceTargetPort | Override service port to use for TLS without touching Kong containerPort | |
| SVC.tls.parameters | Array of additional listen parameters | `["http2"]` |
| SVC.type | k8s service type. Options: NodePort, ClusterIP, LoadBalancer | |
| SVC.clusterIP | k8s service clusterIP | |
| SVC.loadBalancerSourceRanges | Limit service access to CIDRs if set and service type is `LoadBalancer` | `[]` |
| SVC.loadBalancerIP | Reuse an existing ingress static IP for the service | |
| SVC.externalIPs                    | IPs for which nodes in the cluster will also accept traffic for the service            | `[]`                |
| SVC.externalTrafficPolicy | k8s service's externalTrafficPolicy. Options: Cluster, Local | |
| SVC.ingress.enabled | Enable ingress resource creation (works with SVC.type=ClusterIP) | `false` |
| SVC.ingress.tls | Name of secret resource, containing TLS secret | |
| SVC.ingress.hostname | Ingress hostname | `""` |
| SVC.ingress.path | Ingress path. | `/` |
| SVC.ingress.annotations | Ingress annotations. See documentation for your ingress controller for details | `{}` |
| SVC.annotations | Service annotations | `{}` |
| SVC.labels | Service labels | `{}` |
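For example, a hedged sketch of exposing the proxy service; the ports shown are the chart's usual defaults, so verify them against values.yaml:
```yaml
proxy:
  enabled: true
  type: LoadBalancer
  http:
    enabled: true
    servicePort: 80
    containerPort: 8000
  tls:
    enabled: true
    servicePort: 443
    containerPort: 8443
    parameters:
    - http2
  ingress:
    enabled: false
```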
#### Stream listens
The proxy configuration additionally supports creating stream listens. These
are configured using an array of objects under `proxy.stream` and `udpProxy.stream`:
| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ------------------- |
| protocol | The listen protocol, either "TCP" or "UDP" | |
| containerPort | Container port to use for a stream listen | |
| servicePort | Service port to use for a stream listen | |
| nodePort | Node port to use for a stream listen | |
| hostPort | Host port to use for a stream listen | |
| parameters | Array of additional listen parameters | `[]` |
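A sketch of one TCP and one UDP stream listen (the port numbers are illustrative):
```yaml
proxy:
  stream:
  - protocol: TCP
    containerPort: 9000
    servicePort: 9000
    parameters: []
udpProxy:
  enabled: true
  stream:
  - protocol: UDP
    containerPort: 9999
    servicePort: 9999
```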
### Ingress Controller Parameters
All of the following properties are nested under the `ingressController`
section of `values.yaml` file:
| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| enabled                            | Deploy the ingress controller, RBAC and CRDs                                            | true                                                                           |
| image.repository | Docker image with the ingress controller | kong/kubernetes-ingress-controller |
| image.tag | Version of the ingress controller | 1.2.0 |
| image.effectiveSemver | Version of the ingress controller used for version-specific features when image.tag is not a valid semantic version | |
| readinessProbe | Kong ingress controllers readiness probe | |
| livenessProbe | Kong ingress controllers liveness probe | |
| installCRDs                        | Creates managed CRDs.                                                                   | false                                                                          |
| serviceAccount.create              | Create Service Account for ingress controller                                          | true                                                                           |
| serviceAccount.name                | Use existing Service Account, specify its name                                         | ""                                                                             |
| serviceAccount.annotations         | Annotations for Service Account                                                        | {}                                                                             |
| env | Specify Kong Ingress Controller configuration via environment variables | |
| ingressClass | The ingress-class value for controller | kong |
| args | List of ingress-controller cli arguments | [] |
| watchNamespaces | List of namespaces to watch. Watches all namespaces if empty | [] |
| admissionWebhook.enabled | Whether to enable the validating admission webhook | false |
| admissionWebhook.failurePolicy | How unrecognized errors from the admission endpoint are handled (Ignore or Fail) | Fail |
| admissionWebhook.port | The port the ingress controller will listen on for admission webhooks | 8080 |
| admissionWebhook.certificate.provided | Whether to generate the admission webhook certificate if not provided | false |
| admissionWebhook.certificate.secretName | Name of the TLS secret for the provided webhook certificate | |
| admissionWebhook.certificate.caBundle | PEM encoded CA bundle which will be used to validate the provided webhook certificate | |
For a complete list of all configuration values you can set in the
`env` section, please read the Kong Ingress Controller's
[configuration document](https://github.com/Kong/kubernetes-ingress-controller/blob/main/docs/references/cli-arguments.md).
### General Parameters
| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------- | ------------------- |
| namespace | Namespace to deploy chart resources | |
| deployment.kong.enabled | Enable or disable deploying Kong | `true` |
| deployment.initContainers | Create initContainers. Please go to Kubernetes doc for the spec of the initContainers | |
| deployment.daemonset | Use a DaemonSet instead of a Deployment | `false` |
| deployment.userDefinedVolumes | Create volumes. Please go to Kubernetes doc for the spec of the volumes | |
| deployment.userDefinedVolumeMounts | Create volumeMounts. Please go to Kubernetes doc for the spec of the volumeMounts | |
| autoscaling.enabled | Set this to `true` to enable autoscaling | `false` |
| autoscaling.minReplicas | Set minimum number of replicas | `2` |
| autoscaling.maxReplicas | Set maximum number of replicas | `5` |
| autoscaling.targetCPUUtilizationPercentage | Target percentage for when autoscaling takes effect. Only used if the cluster doesn't support `autoscaling/v2beta2` | `80` |
| autoscaling.metrics                | Metrics used for autoscaling for clusters that support `autoscaling/v2beta2`           | See [values.yaml](values.yaml) |
| updateStrategy | update strategy for deployment | `{}` |
| readinessProbe | Kong readiness probe | |
| livenessProbe | Kong liveness probe | |
| lifecycle | Proxy container lifecycle hooks | see `values.yaml` |
| terminationGracePeriodSeconds | Sets the [termination grace period](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution) for Deployment pods | 30 |
| affinity | Node/pod affinities | |
| topologySpreadConstraints | Control how Pods are spread across cluster among failure-domains | |
| nodeSelector | Node labels for pod assignment | `{}` |
| deploymentAnnotations | Annotations to add to deployment | see `values.yaml` |
| podAnnotations | Annotations to add to each pod | `{}` |
| podLabels | Labels to add to each pod | `{}` |
| resources | Pod resource requests & limits | `{}` |
| tolerations | List of node taints to tolerate | `[]` |
| dnsPolicy | Pod dnsPolicy | |
| dnsConfig | Pod dnsConfig | |
| podDisruptionBudget.enabled | Enable PodDisruptionBudget for Kong | `false` |
| podDisruptionBudget.maxUnavailable | Represents the maximum number of Pods that can be unavailable (integer or percentage) | `50%` |
| podDisruptionBudget.minAvailable | Represents the number of Pods that must be available (integer or percentage) | |
| podSecurityPolicy.enabled | Enable podSecurityPolicy for Kong | `false` |
| podSecurityPolicy.spec | Collection of [PodSecurityPolicy settings](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#what-is-a-pod-security-policy) | |
| priorityClassName | Set pod scheduling priority class for Kong pods | `""` |
| secretVolumes | Mount given secrets as a volume in Kong container to override default certs and keys. | `[]` |
| securityContext | Set the securityContext for Kong Pods | `{}` |
| containerSecurityContext | Set the securityContext for Containers | `{}` |
| serviceMonitor.enabled | Create ServiceMonitor for Prometheus Operator | `false` |
| serviceMonitor.interval | Scraping interval | `30s` |
| serviceMonitor.namespace | Where to create ServiceMonitor | |
| serviceMonitor.labels | ServiceMonitor labels | `{}` |
| serviceMonitor.targetLabels | ServiceMonitor targetLabels | `{}` |
| serviceMonitor.honorLabels | ServiceMonitor honorLabels | `{}` |
| serviceMonitor.metricRelabelings | ServiceMonitor metricRelabelings | `{}` |
| extraConfigMaps | ConfigMaps to add to mounted volumes | `[]` |
| extraSecrets | Secrets to add to mounted volumes | `[]` |
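As an example, a hedged sketch combining a few of these parameters (the resource figures are illustrative):
```yaml
replicaCount: 1          # ignored once autoscaling is enabled
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
resources:
  requests:
    cpu: 500m
    memory: 512Mi
tolerations: []
```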
#### The `env` section
The `env` section can be used to configure all properties of Kong.
Any key-value pair put under this section translates to environment variables
used to control Kong's configuration. Every key is prefixed with `KONG_`
and upper-cased before setting the environment variable.
Furthermore, all `kong.env` parameters can also accept a mapping instead of a
value to ensure the parameters can be set through configmaps and secrets.
An example:
```yaml
kong:
  env: # load PG password from a secret dynamically
    pg_user: kong
    pg_password:
      valueFrom:
        secretKeyRef:
          key: kong
          name: postgres
    nginx_worker_processes: "2"
```
For a complete list of Kong configurations, please check the
[Kong configuration docs](https://docs.konghq.com/latest/configuration).
> **Tip**: You can use the default [values.yaml](values.yaml)
#### The `extraLabels` section
The `extraLabels` section can be used to configure some extra labels that will be added to each Kubernetes object generated.
For example, you can add the `acme.com/some-key: some-value` label to each Kubernetes object by putting the following in your Helm values:
```yaml
extraLabels:
  acme.com/some-key: some-value
```
## Kong Enterprise Parameters
### Overview
Kong Enterprise requires some additional configuration not needed when using
Kong Open-Source. To use Kong Enterprise, at the minimum,
you need to do the following:
- Set `enterprise.enabled` to `true` in the `values.yaml` file.
- Update values.yaml to use a Kong Enterprise image.
- Satisfy the two prerequisites below for Enterprise License and
Enterprise Docker Registry.
- (Optional) [set a `password` environment variable](#rbac) to create the
initial super-admin. Though not required, this is recommended for users that
wish to use RBAC, as it cannot be done after initial setup.
Once you have these set, it is possible to install Kong Enterprise,
but please make sure to review the below sections for other settings that
you should consider configuring before installing Kong.
Some of the more important configuration is grouped in sections
under the `.enterprise` key in values.yaml, though most enterprise-specific
configuration can be placed under the `.env` key.
### Prerequisites
#### Kong Enterprise License
Kong Enterprise 2.3+ can run with or without a license. If you wish to run 2.3+
without a license, you can skip this step and leave `enterprise.license_secret`
unset. In this case only a limited subset of features will be available. Earlier versions require a license.
If you have paid for a license, but you do not have a copy of yours, please
contact Kong Support. Once you have it, you will need to store it in a Secret:
```bash
$ kubectl create secret generic kong-enterprise-license --from-file=license=./license.json
```
Set the secret name in `values.yaml`, in the `.enterprise.license_secret` key.
Please ensure the above secret is created in the same namespace in which
Kong is going to be deployed.
#### Kong Enterprise Docker registry access
Kong Enterprise versions 2.2 and earlier use a private Docker registry and
require a pull secret. **If you use 2.3 or newer, you can skip this step.**
You should have received credentials to log into Docker Hub after
purchasing Kong Enterprise. After logging in, you can retrieve your API key
from \<your username\> \> Edit Profile \> API Key. Use this key to create registry
secrets:
```bash
$ kubectl create secret docker-registry kong-enterprise-edition-docker \
--docker-server=hub.docker.io \
--docker-username=<username-provided-to-you> \
--docker-password=<password-provided-to-you>
secret/kong-enterprise-edition-docker created
```
Set the secret names in `values.yaml` in the `image.pullSecrets` section.
Again, please ensure the above secret is created in the same namespace in which
Kong is going to be deployed.
### Service location hints
Kong Enterprise adds two GUIs, Kong Manager and the Kong Developer Portal, that
must know where other Kong services (namely the admin and files APIs) can be
accessed in order to function properly. Absent configuration, Kong's default
behavior for attempting to locate these services is unlikely to work in common Kubernetes
environments. Because of this, you should set each of `admin_gui_url`,
`admin_api_uri`, `proxy_url`, `portal_api_url`, `portal_gui_host`, and
`portal_gui_protocol` under the `.env` key in values.yaml to locations where
each of their respective services can be accessed to ensure that Kong services
can locate one another and properly set CORS headers. See the
[Property Reference documentation](https://docs.konghq.com/enterprise/latest/property-reference/)
for more details on these settings.
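A sketch of these settings with illustrative hostnames; substitute the addresses where you actually expose each service:
```yaml
env:
  admin_gui_url: https://manager.kong.example
  admin_api_uri: https://admin.kong.example
  proxy_url: https://proxy.kong.example
  portal_api_url: https://portal-api.kong.example
  portal_gui_host: portal.kong.example
  portal_gui_protocol: https
```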
### RBAC
You can create a default RBAC superuser when initially running `helm install`
by setting a `password` environment variable under `env` in values.yaml. It
should be a reference to a secret key containing your desired password. This
will create a `kong_admin` admin whose token and basic-auth password match the
value in the secret. For example:
```yaml
env:
  password:
    valueFrom:
      secretKeyRef:
        name: kong-enterprise-superuser-password
        key: password
```
If using the ingress controller, it needs access to the token as well, by
specifying `kong_admin_token` in its environment variables:
```yaml
ingressController:
  env:
    kong_admin_token:
      valueFrom:
        secretKeyRef:
          name: kong-enterprise-superuser-password
          key: password
```
Although the above examples both use the initial super-admin, we recommend
[creating a less-privileged RBAC user](https://docs.konghq.com/enterprise/latest/kong-manager/administration/rbac/add-user/)
for the controller after installing. It needs at least workspace admin
privileges in its workspace (`default` by default, settable by adding a
`workspace` variable under `ingressController.env`). Once you create the
controller user, add its token to a secret and update your `kong_admin_token`
variable to use it. Remove the `password` variable from Kong's environment
variables and the secret containing the super-admin token after.
### Sessions
Login sessions for Kong Manager and the Developer Portal make use of
[the Kong Sessions plugin](https://docs.konghq.com/enterprise/latest/kong-manager/authentication/sessions).
When configured via values.yaml, their configuration must be stored in Secrets,
as it contains an HMAC key.
Kong Manager's session configuration must be configured via values.yaml,
whereas this is optional for the Developer Portal on versions 0.36+. Providing
Portal session configuration in values.yaml provides the default session
configuration, which can be overridden on a per-workspace basis.
```
$ cat admin_gui_session_conf
{"cookie_name":"admin_session","cookie_samesite":"off","secret":"admin-secret-CHANGEME","cookie_secure":true,"storage":"kong"}
$ cat portal_session_conf
{"cookie_name":"portal_session","cookie_samesite":"off","secret":"portal-secret-CHANGEME","cookie_secure":true,"storage":"kong"}
$ kubectl create secret generic kong-session-config --from-file=admin_gui_session_conf --from-file=portal_session_conf
secret/kong-session-config created
```
The exact plugin settings may vary in your environment. The `secret` should
always be changed for both configurations.
After creating your secret, set its name in values.yaml in
`.enterprise.rbac.session_conf_secret`. If you create a Portal configuration,
add it at `env.portal_session_conf` using a secretKeyRef.
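Continuing the example above, the values.yaml wiring might look like this sketch; the Secret and key names match the `kubectl create secret` command shown earlier:
```yaml
enterprise:
  rbac:
    session_conf_secret: kong-session-config
env:
  portal_session_conf:
    valueFrom:
      secretKeyRef:
        name: kong-session-config
        key: portal_session_conf
```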
### Email/SMTP
Email is used to send invitations for
[Kong Admins](https://docs.konghq.com/enterprise/latest/kong-manager/networking/email)
and [Developers](https://docs.konghq.com/enterprise/latest/developer-portal/configuration/smtp).
Email invitations rely on setting a number of SMTP settings at once. For
convenience, these are grouped under the `.enterprise.smtp` key in values.yaml.
Setting `.enterprise.smtp.disabled: true` will set `KONG_SMTP_MOCK=on` and
allow Admin/Developer invites to proceed without sending email. Note, however,
that these have limited functionality without sending email.
If your SMTP server requires authentication, you must provide the `username` and `smtp_password_secret` keys under `.enterprise.smtp.auth`. `smtp_password_secret` must be a Secret containing an `smtp_password` key whose value is your SMTP password.
By default, SMTP uses `AUTH` `PLAIN` when you provide credentials. If your provider requires `AUTH LOGIN`, set `smtp_auth_type: login`.
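A minimal sketch using only the keys named above; the username and Secret name are placeholders, and the chart's values.yaml remains the authoritative reference for the exact key names:
```yaml
enterprise:
  smtp:
    disabled: false
    auth:
      username: smtp-user                        # illustrative; verify key name in values.yaml
      smtp_password_secret: kong-smtp-password   # Secret containing an `smtp_password` key
```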
## Prometheus Operator integration
The chart can configure a ServiceMonitor resource to instruct the [Prometheus
Operator](https://github.com/prometheus-operator/prometheus-operator) to
collect metrics from Kong Pods. To enable this, set
`serviceMonitor.enabled=true` in `values.yaml`.
Kong exposes memory usage and connection counts by default. You can enable
traffic metrics for routes and services by configuring the [Prometheus
plugin](https://docs.konghq.com/hub/kong-inc/prometheus/).
The ServiceMonitor requires an `enable-metrics: "true"` label on one of the
chart's Services to collect data. By default, this label is set on the proxy
Service. It should only be set on a single chart Service to avoid duplicate
data. If you disable the proxy Service (e.g. on a hybrid control plane instance
or Portal-only instance) and still wish to collect memory usage metrics, add
this label to another Service, e.g. on the admin API Service:
```yaml
admin:
  labels:
    enable-metrics: "true"
```
## Seeking help
If you run into an issue, bug or have a question, please reach out to the Kong
community via [Kong Nation](https://discuss.konghq.com).
Please do not open issues in [this](https://github.com/helm/charts) repository
as the maintainers will not be notified and won't respond.


@@ -0,0 +1,579 @@
# Upgrade considerations
New versions of the Kong chart may add significant new functionality or
deprecate/entirely remove old functionality. This document covers how and why
users should update their chart configuration to take advantage of new features
or migrate away from deprecated features.
In general, breaking changes deprecate their old features before removing them
entirely. While support for the old functionality remains, the chart will show
a warning about the outdated configuration when running `helm
install/status/upgrade`.
Note that not all versions contain breaking changes. If a version is not
present in the table of contents, it requires no version-specific changes when
upgrading from a previous version.
## Table of contents
- [Upgrade considerations for all versions](#upgrade-considerations-for-all-versions)
- [2.3.0](#230)
- [2.2.0](#220)
- [2.1.0](#210)
- [2.0.0](#200)
- [1.14.0](#1140)
- [1.11.0](#1110)
- [1.10.0](#1100)
- [1.9.0](#190)
- [1.6.0](#160)
- [1.5.0](#150)
- [1.4.0](#140)
- [1.3.0](#130)
## Upgrade considerations for all versions
The chart automates the
[upgrade migration process](https://github.com/Kong/kong/blob/master/UPGRADE.md).
When running `helm upgrade`, the chart spawns an initial job to run `kong
migrations up` and then spawns new Kong pods with the updated version. Once
these pods become ready, they begin processing traffic and old pods are
terminated. Once this is complete, the chart spawns another job to run `kong
migrations finish`.
If you split your Kong deployment across multiple Helm releases (to create
proxy-only and admin-only nodes, for example), you must
[set which migration jobs run based on your upgrade order](https://github.com/Kong/charts/blob/main/charts/kong/README.md#separate-admin-and-proxy-nodes).
While the migrations themselves are automated, the chart does not automatically
ensure that you follow the recommended upgrade path. If you are upgrading from
more than one minor Kong version back, check the [upgrade path
recommendations for Kong open source](https://github.com/Kong/kong/blob/master/UPGRADE.md#3-suggested-upgrade-path)
or [Kong Enterprise](https://docs.konghq.com/enterprise/latest/deployment/migrations/).
Although not required, users should upgrade their chart version and Kong
version independently. In the event of any issues, this will help clarify whether
the issue stems from changes in Kubernetes resources or changes in Kong.
Users may encounter an error when upgrading which displays a large block of
text ending with `field is immutable`. This is typically due to a bug with the
`init-migrations` job, which was not removed automatically prior to 1.5.0.
If you encounter this error, deleting any existing `init-migrations` jobs will
clear it.
## 2.3.0
### Updated CRDs and CRD API version
2.3.0 adds new and updated CRDs for KIC 2.x. These CRDs are compatible with
KIC 1.x also. The CRD API version is now v1, replacing the deprecated v1beta1,
to support Kubernetes 1.22 and onward. API version v1 requires Kubernetes 1.16
and newer.
Helm 2-style CRD management will upgrade CRDs automatically. You can check to
see if you are using Helm 2-style management by running:
```
kubectl get crd kongconsumers.configuration.konghq.com -o yaml | grep "meta.helm.sh/release-name"
```
If you see output, you are using Helm 2-style CRD management.
Helm 3-style CRD management (the default) does not upgrade CRDs automatically.
You must apply the changes manually by running:
```
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/kong-2.2.0/charts/kong/crds/custom-resource-definitions.yaml
```
Although not recommended, you can remain on an older Kubernetes version and not
upgrade your CRDs if you are using Helm 3-style CRD management. However, you
will not be able to run KIC 2.x, and these configurations are considered
unsupported.
### Ingress controller feature detection
2.3.0 includes some features that are enabled by default, but require KIC 2.x.
KIC 2.x is not yet the default ingress controller version because there are
currently only preview releases for it. To maintain compatibility with KIC 1.x,
the chart automatically detects the KIC image version and disables incompatible
features. This feature detection requires a semver image tag, and the chart
cannot render successfully if the image tag is not semver-compliant.
Standard KIC images do use semver-compliant tags, and you do not need to make
any configuration changes if you use one. If you use a non-semver tag, such as
`next`, you must set the new `ingressController.image.effectiveSemver` field to
your approximate semver version. For example, if your `next` tag is for an
unreleased `2.1.0` KIC version, you should set `effectiveSemver: 2.1.0`.
## 2.2.0
### Changes to pod disruption budget defaults
Prior to 2.2.0, the default values.yaml included
`podDisruptionBudget.maxUnavailable: 50%`. This prevented setting
`podDisruptionBudget.minAvailable` at all. To allow use of
`podDisruptionBudget.minAvailable`, we have removed the
`podDisruptionBudget.maxUnavailable` default. If you previously relied on this
default (you set `podDisruptionBudget.enabled: true` but did not set
`podDisruptionBudget.maxUnavailable`), you now must explicitly set
`podDisruptionBudget.maxUnavailable: 50%` in your values.yaml.
## 2.1.0
### Migration off Bintray
Bintray, the Docker registry previously used for several images used by this
chart, is [sunsetting May 1,
2021](https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/).
The chart default `values.yaml` now uses the new Docker Hub repositories for all
affected images. You should check your release `values.yaml` files to confirm that
they do not still reference Bintray repositories. If they do, update them to
use the Docker Hub repositories now in the default `values.yaml`.
## 2.0.0
### Support for Helm 2 dropped
2.0.0 takes advantage of template functionality that is only available in Helm
3 and reworks values defaults to target Helm 3 CRD handling, and requires Helm
3 as such. If you are not already using Helm 3, you must migrate to it before
updating to 2.0.0 or later:
https://helm.sh/docs/topics/v2_v3_migration/
If desired, you can migrate your Kong chart releases without migrating other
charts' releases.
### Support for deprecated 1.x features removed
Several previous 1.x chart releases reworked sections of values.yaml while
maintaining support for the older version of those settings. 2.x drops support
for the older versions of these settings entirely:
* [Portal auth settings](#removal-of-dedicated-portal-authentication-configuration-parameters)
* [The `runMigrations` setting](#changes-to-migration-job-configuration)
* [Single-stack admin API Service configuration](#changes-to-kong-service-configuration)
* [Multi-host proxy configuration](#removal-of-multi-host-proxy-ingress)
Each deprecated setting is accompanied by a warning that appears at the end of
`helm upgrade` output on a 1.x release:
```
WARNING: You are currently using legacy ...
```
If you do not see any such warnings when upgrading a release using chart
1.15.0, you are not using deprecated configuration and are ready to upgrade to
2.0.0. If you do see these warnings, follow the linked instructions to migrate
to the current settings format.
## 1.14.0
### Removal of multi-host proxy Ingress
Most of the chart's Ingress templates support a single hostname and TLS Secret.
The proxy Ingress template originally differed, and allowed multiple hostnames
and TLS configurations. As of chart 1.14.0, we have deprecated the unique proxy
Ingress configuration; it is now identical to all other Kong services. If you
do not need to configure multiple Ingress rules for your proxy, you will
change:
```yaml
ingress:
  hosts: ["proxy.kong.example"]
  tls:
  - hosts:
    - proxy.kong.example
    secretName: example-tls-secret
  path: /
```
to:
```yaml
ingress:
  tls: example-tls-secret
  hostname: proxy.kong.example
  path: /
```
We plan to remove support for the multi-host configuration entirely in version
2.0 of the chart. If you currently use multiple hosts, we recommend that you
either:
- Define Ingresses for each application, e.g. if you proxy applicationA at
`foo.kong.example` and applicationB at `bar.kong.example`, you deploy those
applications with their own Ingress resources that target the proxy.
- Define a multi-host Ingress manually. Before upgrading, save your current
proxy Ingress, delete labels from the saved copy, and set
`proxy.ingress.enabled=false`. After upgrading, create your Ingress from the
saved copy and edit it directly to add new rules.
We expect that most users do not need a built-in multi-host proxy Ingress or
even a proxy Ingress at all: the old configuration predates the Kong Ingress
Controller and is most useful if you place Kong behind some other controller.
If you are interested in preserving this functionality, please [discuss your
use case with us](https://github.com/Kong/charts/issues/73). If there is
sufficient interest, we will explore options for continuing to support the
original proxy Ingress configuration format.
### Default custom server block replaced with status listen
Earlier versions of the chart included [a custom server block](https://github.com/Kong/charts/blob/kong-1.13.0/charts/kong/templates/config-custom-server-blocks.yaml)
to provide `/status` and `/metrics` endpoints. This server block simplified
RBAC-enabled Enterprise deployments by providing access to these endpoints
outside the (protected) admin API.
Current versions (Kong 1.4.0+ and Kong Enterprise 1.5.0+) have a built-in
status listen that provides the same functionality, and chart 1.14.0 uses it
for readiness/liveness probes and the Prometheus service monitor.
If you are using a version that supports the new status endpoint, you do not
need to make any changes to your values unless you include `readinessProbe` and
`livenessProbe` in them. If you do, you must change the port from `metrics` to
`status`.
If you are using an older version that does not support the status listen, you
will need to do the following (a values.yaml sketch follows this list):
- Create the server block ConfigMap independent of the chart. You will need to
set the ConfigMap name and namespace manually and remove the labels block.
- Add an `extraConfigMaps` values entry for your ConfigMap.
- Set `env.nginx_http_include` to `/path/to/your/mount/servers.conf`.
- Add the [old readiness/liveness probe blocks](https://github.com/Kong/charts/blob/kong-1.13.0/charts/kong/values.yaml#L437-L458)
to your values.yaml.
- If you use the Prometheus service monitor, edit it after installing the chart
and set `targetPort` to `9542`. This cannot be set from values.yaml, but Helm
3 will preserve the change on subsequent upgrades.
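A rough `values.yaml` sketch of those steps follows. The ConfigMap name and mount path are placeholders you choose yourself, and the probe blocks should ultimately be copied from the linked 1.13.0 values.yaml rather than from this sketch:
```yaml
# Mount a ConfigMap (created outside the chart) containing the old custom
# server block, and tell Kong to include it.
extraConfigMaps:
  - name: kong-server-blocks           # hypothetical ConfigMap name
    mountPath: /kong_server_blocks
env:
  nginx_http_include: /kong_server_blocks/servers.conf
# Old-style probes against the custom server block's metrics port; copy the
# complete blocks from the 1.13.0 values.yaml linked above.
readinessProbe:
  httpGet:
    path: /status
    port: metrics
livenessProbe:
  httpGet:
    path: /status
    port: metrics
```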
## 1.11.0
### `KongCredential` custom resources no longer supported
1.11.0 updates the default Kong Ingress Controller version to 1.0. Controller
1.0 removes support for the deprecated KongCredential resource. Before
upgrading to chart 1.11.0, you must convert existing KongCredential resources
to [credential Secrets](https://github.com/Kong/kubernetes-ingress-controller/blob/next/docs/guides/using-consumer-credential-resource.md#provision-a-consumer).
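As a rough sketch of the Secret-based format (the consumer name, credential type, and key below are placeholders; follow the linked guide for the authoritative steps):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-consumer-key-auth      # hypothetical Secret name
type: Opaque
stringData:
  kongCredType: key-auth               # credential type formerly set on the KongCredential
  key: my-api-key                      # remaining fields depend on the credential type
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: example-consumer
  annotations:
    kubernetes.io/ingress.class: kong
username: example-consumer
credentials:
  - example-consumer-key-auth          # reference the Secret from the consumer
```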
Custom resource management varies depending on your exact chart configuration.
By default, Helm 3 only creates CRDs in the `crds` directory if they are not
already present, and does not modify or remove them after. If you use this
management method, you should create a manifest file that contains [only the
KongCredential CRD](https://github.com/Kong/charts/blob/kong-1.10.0/charts/kong/crds/custom-resource-definitions.yaml#L35-L68)
and then [delete it](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#delete-a-customresourcedefinition).
Helm 2 and Helm 3 both allow managing CRDs via the chart. In Helm 2, this is
required; in Helm 3, it is optional. When using this method, only a single
release will actually manage the CRD. If you have multiple releases, check
which one has `ingressController.installCRDs: true` to determine which manages
it. When using this management method, upgrading a release to
chart 1.11.0 will delete the KongCredential CRD during the upgrade, which will
_delete any existing KongCredential resources_. To avoid losing configuration,
check to see if your CRD is managed:
```
kubectl get crd kongcredentials.configuration.konghq.com -o yaml | grep "app.kubernetes.io/managed-by: Helm"
```
If that command returns output, your CRD is managed and you must convert to
credential Secrets before upgrading. If it does not, you should still convert,
but you are not at risk of losing data and can downgrade to an older chart
version if you run into issues.
### Changes to CRDs
Controller 1.0 [introduces a status field](https://github.com/Kong/kubernetes-ingress-controller/blob/main/CHANGELOG.md#added)
for its custom resources. By default, Helm 3 does not apply updates to custom
resource definitions if those definitions are already present on the Kubernetes
API server (and they will be if you are upgrading a release from a previous
chart version). To update your custom resources:
```
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/main/charts/kong/crds/custom-resource-definitions.yaml
```
### Deprecated controller flags/environment variables and annotations removed
Kong Ingress Controller 0.x versions had a number of deprecated
flags/environment variables and annotations. Version 1.0 removes support for
these, and you must update your configuration to use their modern equivalents
before upgrading to chart 1.11.0.
The [controller changelog](https://github.com/Kong/kubernetes-ingress-controller/blob/master/CHANGELOG.md#breaking-changes)
provides links to lists of deprecated configuration and their replacements.
## 1.10.0
### `KongClusterPlugin` replaces global `KongPlugin`s
Kong Ingress Controller 0.10.0 no longer supports `KongPlugin`s with a `global: true` label. See the [KIC changelog for 0.10.0](https://github.com/Kong/kubernetes-ingress-controller/blob/main/CHANGELOG.md#0100---20200915) for migration hints.
### Dropping support for resources not specifying an ingress class
Kong Ingress Controller 0.10.0 drops support for certain kinds of resources without a `kubernetes.io/ingress.class` annotation. See the [KIC changelog for 0.10.0](https://github.com/Kong/kubernetes-ingress-controller/blob/main/CHANGELOG.md#0100---20200915) for the exact list of those kinds, and for possible migration paths.
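As a hedged sketch that touches both changes above, a formerly global `KongPlugin` might be recreated as a `KongClusterPlugin` carrying both the `global` label and an explicit ingress class annotation (plugin name and configuration are illustrative; consult the linked changelog for the authoritative migration steps):
```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: global-rate-limit              # hypothetical name
  annotations:
    kubernetes.io/ingress.class: kong  # controller 0.10.0+ requires an explicit class here
  labels:
    global: "true"                     # applies the plugin to all routes
plugin: rate-limiting
config:
  minute: 5
  policy: local
```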
## 1.9.0
### New image for Enterprise controller-managed DB-less deployments
As of Kong Enterprise 2.1.3.0, there is no longer a separate image
(`kong-enterprise-k8s`) for controller-managed DB-less deployments. All Kong
Enterprise deployments now use the `kong-enterprise-edition` image.
Existing users of the `kong-enterprise-k8s` image can use the latest
`kong-enterprise-edition` image as a drop-in replacement for the
`kong-enterprise-k8s` image. You will also need to [create a Docker registry
secret](https://github.com/Kong/charts/blob/main/charts/kong/README.md#kong-enterprise-docker-registry-access)
for the `kong-enterprise-edition` registry and add it to `image.pullSecrets` in
values.yaml if you do not have one already.
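A hedged `values.yaml` sketch of the switch (the registry path, tag, and Secret name are placeholders; use the repository you actually pull from and the Secret you created per the linked instructions):
```yaml
image:
  repository: <your-registry>/kong-enterprise-edition  # replaces the old kong-enterprise-k8s image
  tag: "2.1.4.0-alpine"                                 # illustrative tag
  pullSecrets:
    - kong-enterprise-edition-docker                    # hypothetical registry Secret name
```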
### Changes to wait-for-postgres image
Prior to 1.9.0, the chart launched a busybox initContainer for migration Pods
to check Postgres' reachability [using
netcat](https://github.com/Kong/charts/blob/kong-1.8.0/charts/kong/templates/_helpers.tpl#L626).
As of 1.9.0, the chart uses a [bash
script](https://github.com/Kong/charts/blob/kong-1.9.0/charts/kong/templates/wait-for-postgres-script.yaml)
to perform the same connectivity check. The default `waitImage.repository`
value is now `bash` rather than `busybox`. Double-check your values.yaml to
confirm that you do not set `waitImage.repository` and `waitImage.tag` to the
old defaults: if you do, remove that configuration before upgrading.
The Helm upgrade cycle requires that this script be available for upgrade
jobs. On existing installations, you must first perform an initial `helm
upgrade --set migrations.preUpgrade=false --set migrations.postUpgrade=false`
to chart 1.9.0.
Perform this initial upgrade without making changes to your Kong image version:
if you are upgrading Kong along with the chart, perform a separate upgrade
after with the migration jobs re-enabled.
If you do not override `waitImage.repository` in your releases, you do not need
to make any other configuration changes when upgrading to 1.9.0.
If you do override `waitImage.repository` to use a custom image, you must
switch to a custom image that provides a `bash` executable. Note that busybox
images, or images derived from it, do _not_ include a `bash` executable. We
recommend switching to an image derived from the public bash Docker image or a
base operating system image that provides a `bash` executable.
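If you do override the wait image, the change amounts to pointing it at an image that ships `bash`; for example (the tag is illustrative):
```yaml
waitImage:
  repository: bash   # the new chart default; any image providing a bash executable works
  tag: "5"           # illustrative tag
```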
## 1.6.0
### Changes to Custom Resource Definitions
The KongPlugin and KongClusterPlugin resources have changed. Helm 3's CRD
management system does not modify CRDs during `helm upgrade`, and these must be
updated manually:
```
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/kong-1.6.0/charts/kong/crds/custom-resource-definitions.yaml
```
Existing plugin resources do not require changes; the CRD update only adds new
fields.
### Removal of default security context UID setting
Versions of Kong prior to 2.0 and Kong Enterprise prior to 1.3 use Docker
images that required setting a UID via Kubernetes in some environments
(primarily OpenShift). This is no longer necessary with modern Docker images
and can cause issues depending on other environment settings, so it was
removed.
Most users should not need to take any action, but if you encounter permissions
errors when upgrading (`kubectl describe pod PODNAME` will show any such
errors), you can restore the old behavior by adding the following to your
values.yaml:
```
securityContext:
runAsUser: 1000
```
## 1.5.0
### PodSecurityPolicy defaults to read-only root filesystem
1.5.0 defaults to using a read-only root container filesystem if
`podSecurityPolicy.enabled: true` is set in values.yaml. This improves
security, but is incompatible with Kong Enterprise versions prior to 1.5. If
you use an older version and enable PodSecurityPolicy, you must set
`podSecurityPolicy.spec.readOnlyRootFilesystem: false`.
Kong open-source and Kong for Kubernetes Enterprise are compatible with a
read-only root filesystem on all versions.
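For older Kong Enterprise versions, the override looks roughly like this in values.yaml (a minimal sketch; if you maintain a full custom PodSecurityPolicy `spec`, set the field inside it instead):
```yaml
podSecurityPolicy:
  enabled: true
  spec:
    readOnlyRootFilesystem: false  # only needed for Kong Enterprise versions prior to 1.5
```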
### Changes to migration job configuration
Previously, all migration jobs were enabled/disabled through a single
`runMigrations` setting. 1.5.0 splits these into toggles for each of the
individual upgrade migrations:
```
migrations:
preUpgrade: true
postUpgrade: true
```
Initial migration jobs are now only run during `helm install` and are deleted
automatically when users first run `helm upgrade`.
Users should replace `runMigrations` with the above block from the latest
values.yaml.
The new format addresses several needs:
* The initial migrations Job is only created during the initial install,
  preventing [conflicts on upgrades](https://github.com/Kong/charts/blob/main/charts/kong/FAQs.md#running-helm-upgrade-fails-because-of-old-init-migrations-job).
* The upgrade migration Jobs can be disabled as needed when managing
[multi-release clusters](https://github.com/Kong/charts/blob/main/charts/kong/README.md#separate-admin-and-proxy-nodes).
This enables management of clusters that have nodes with different roles,
e.g. nodes that only run the proxy and nodes that only run the admin API.
* Migration jobs now allow specifying annotations, and provide a default set
of annotations that disable some service mesh sidecars. Because sidecar
containers do not terminate, they [prevent the jobs from completing](https://github.com/kubernetes/kubernetes/issues/25908).
## 1.4.0
### Changes to default Postgres permissions
The [Postgres sub-chart](https://github.com/bitnami/charts/tree/master/bitnami/postgresql)
used by this chart has modified the way their chart handles file permissions.
This is not an issue for new installations, but prevents Postgres from starting
if its PVC was created with an older version. If affected, your Postgres pod
logs will show:
```
postgresql 19:16:04.03 INFO ==> ** Starting PostgreSQL **
2020-03-27 19:16:04.053 GMT [1] FATAL: data directory "/bitnami/postgresql/data" has group or world access
2020-03-27 19:16:04.053 GMT [1] DETAIL: Permissions should be u=rwx (0700).
```
You can restore the old permission handling behavior by adding two settings to
the `postgresql` block in values.yaml:
```yaml
postgresql:
enabled: true
postgresqlDataDir: /bitnami/postgresql/data
volumePermissions:
enabled: true
```
For background, see https://github.com/helm/charts/issues/13651
### `strip_path` now defaults to `false` for controller-managed routes
1.4.0 defaults to version 0.8 of the ingress controller, which changes the
default value of the `strip_path` route setting from `true` to `false`. To
understand how this works in practice, compare the upstream path for these
requests when `strip_path` is toggled:
| Ingress path | `strip_path` | Request path | Upstream path |
|--------------|--------------|--------------|---------------|
| /foo/bar | true | /foo/bar/baz | /baz |
| /foo/bar | false | /foo/bar/baz | /foo/bar/baz |
This change brings the controller in line with the Kubernetes Ingress
specification, which expects that controllers will not modify the request
before passing it upstream unless explicitly configured to do so.
To preserve your existing route handling, you should add this annotation to
your ingress resources:
```
konghq.com/strip-path: "true"
```
This is a new annotation that is equivalent to the `route.strip_path` setting
in KongIngress resources. Note that if you have already set this to `false`,
you should leave it as-is and not add an annotation to the ingress.
### Changes to Kong service configuration
1.4.0 reworks the templates and configuration used to generate Kong
configuration and Kubernetes resources for Kong's services (the admin API,
proxy, Developer Portal, etc.). For the admin API, this requires breaking
changes to the configuration format in values.yaml. Prior to 1.4.0, the admin
API allowed a single listen only, which could be toggled between HTTPS and
HTTP:
```yaml
admin:
enabled: false # create Service
useTLS: true
servicePort: 8444
containerPort: 8444
```
In 1.4.0+, the admin API allows enabling or disabling the HTTP and TLS listens
independently. The equivalent of the above configuration is:
```yaml
admin:
enabled: false # create Service
http:
enabled: false # create HTTP listen
servicePort: 8001
containerPort: 8001
parameters: []
tls:
enabled: true # create HTTPS listen
servicePort: 8444
containerPort: 8444
parameters:
- http2
```
All Kong services now support `SERVICE.enabled` parameters: these allow
disabling the creation of a Kubernetes Service resource for that Kong service,
which is useful in configurations where nodes have different roles, e.g. where
some nodes only handle proxy traffic and some only handle admin API traffic. To
disable a Kong service completely, you should also set `SERVICE.http.enabled:
false` and `SERVICE.tls.enabled: false`. Disabling creation of the Service
resource alone leaves the Kong service enabled, but accessible only within its
pod. The admin API is configured with only Service creation disabled to allow
the ingress controller to access it without allowing access from other pods.
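For example, a node role that should not expose the admin API at all would combine the three toggles (a sketch; the same pattern applies to the other Kong services):
```yaml
admin:
  enabled: false    # do not create the Kubernetes Service resource
  http:
    enabled: false  # do not open an HTTP listen
  tls:
    enabled: false  # do not open an HTTPS listen
```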
Services now also include a new `parameters` section that allows setting
additional listen options, e.g. the `reuseport` and `backlog=16384` parameters
from the [default 2.0.0 proxy
listen](https://github.com/Kong/kong/blob/2.0.0/kong.conf.default#L186). For
compatibility with older Kong versions, the chart defaults do not enable most
of the newer parameters, only HTTP/2 support. Users of versions 1.3.0 and newer
can safely add the new parameters.
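On Kong 1.3.0 or newer, opting into the additional parameters is a values-only change; for example, a proxy TLS listen mirroring the 2.0.0 defaults might look like the following sketch (verify the parameters against the kong.conf defaults for your Kong version):
```yaml
proxy:
  tls:
    enabled: true
    servicePort: 443
    containerPort: 8443
    parameters:
      - http2
      - reuseport        # safe on Kong 1.3.0 and newer
      - backlog=16384    # matches the Kong 2.0.0 default proxy listen
```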
## 1.3.0
### Removal of dedicated Portal authentication configuration parameters
1.3.0 deprecates the `enterprise.portal.portal_auth` and
`enterprise.portal.session_conf_secret` settings in values.yaml in favor of
placing equivalent configuration under `env`. These settings are less important
in Kong Enterprise 0.36+, as they can both be set per workspace in Kong
Manager.
These settings provide the default settings for Portal instances: when the
"Authentication plugin" and "Session Config" dropdowns at
https://manager.kong.example/WORKSPACE/portal/settings/ are set to "Default",
the settings from `KONG_PORTAL_AUTH` and `KONG_PORTAL_SESSION_CONF` are used.
If these environment variables are not set, the defaults are to use
`basic-auth` and `{}` (which applies the [session plugin default
configuration](https://docs.konghq.com/hub/kong-inc/session/)).
If you set nonstandard defaults and wish to keep using these settings, or use
Kong Enterprise 0.35 (which did not provide a means to set per-workspace
session configuration) you should convert them to environment variables. For
example, if you currently have:
```yaml
portal:
enabled: true
portal_auth: basic-auth
session_conf_secret: portal-session
```
You should remove the `portal_auth` and `session_conf_secret` entries and
replace them with their equivalents under the `env` block:
```yaml
env:
portal_auth: basic-auth
portal_session_conf:
valueFrom:
secretKeyRef:
name: portal-session
key: portal_session_conf
```

View File

@ -0,0 +1,7 @@
# Kong for Kubernetes
[Kong](https://konghq.com) makes connecting APIs and microservices across hybrid or multi-cloud environments easier and faster than ever. We power trillions of API transactions for leading organizations globally through our end-to-end API platform.
Kong Gateway is the world's most popular open source API gateway, built for multi-cloud and hybrid, and optimized for microservices and distributed architectures. It is built on top of a lightweight proxy to deliver unparalleled latency, performance and scalability for all your microservice applications regardless of where they run. It allows you to exercise granular control over your traffic with Kong's plugin architecture.
The Kong Enterprise Service Control Platform brokers an organization's information across all services. Built on top of Kong's battle-tested open source core, Kong Enterprise enables customers to simplify management of APIs and microservices across hybrid-cloud and multi-cloud deployments. With Kong Enterprise, customers can proactively identify anomalies and threats, automate tasks, and improve visibility across their entire organization.

View File

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,22 @@
apiVersion: v1
appVersion: 11.7.0
description: Chart for PostgreSQL, an object-relational database management system
(ORDBMS) with an emphasis on extensibility and on standards-compliance.
home: https://www.postgresql.org/
icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-110x117.png
keywords:
- postgresql
- postgres
- database
- sql
- replication
- cluster
maintainers:
- email: containers@bitnami.com
name: Bitnami
- email: cedric@desaintmartin.fr
name: desaintmartin
name: postgresql
sources:
- https://github.com/bitnami/bitnami-docker-postgresql
version: 8.6.8

View File

@ -0,0 +1,566 @@
# PostgreSQL
[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance.
For HA, please see [this repo](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha)
## TL;DR;
```console
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/postgresql
```
## Introduction
This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
- Kubernetes 1.12+
- Helm 2.11+ or Helm 3.0-beta3+
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install my-release bitnami/postgresql
```
The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Parameters
The following table lists the configurable parameters of the PostgreSQL chart and their default values.
| Parameter | Description | Default |
|-----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------|
| `global.imageRegistry` | Global Docker Image registry | `nil` |
| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` |
| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` |
| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` |
| `global.postgresql.postgresqlPassword` | PostgreSQL admin password (overrides `postgresqlPassword`) | `nil` |
| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` |
| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `global.storageClass` | Global storage class for dynamic provisioning | `nil` |
| `image.registry` | PostgreSQL Image registry | `docker.io` |
| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` |
| `image.tag` | PostgreSQL Image tag | `{TAG_NAME}` |
| `image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `image.debug` | Specify if debug values should be set | `false` |
| `nameOverride` | String to partially override postgresql.fullname template with a string (will prepend the release name) | `nil` |
| `fullnameOverride` | String to fully override postgresql.fullname template with a string | `nil` |
| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/minideb` |
| `volumePermissions.image.tag` | Init container volume-permissions image tag | `buster` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` |
| `volumePermissions.securityContext.runAsUser` | User ID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | `0` |
| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` |
| `ldap.enabled` | Enable LDAP support | `false` |
| `ldap.existingSecret` | Name of existing secret to use for LDAP passwords | `nil` |
| `ldap.url` | LDAP URL beginning in the form `ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]]` | `nil` |
| `ldap.server` | IP address or name of the LDAP server. | `nil` |
| `ldap.port` | Port number on the LDAP server to connect to | `nil` |
| `ldap.scheme` | Set to `ldaps` to use LDAPS. | `nil` |
| `ldap.tls` | Set to `1` to use TLS encryption | `nil` |
| `ldap.prefix` | String to prepend to the user name when forming the DN to bind | `nil` |
| `ldap.suffix` | String to append to the user name when forming the DN to bind | `nil` |
| `ldap.search_attr` | Attribute to match against the user name in the search | `nil` |
| `ldap.search_filter` | The search filter to use when doing search+bind authentication | `nil` |
| `ldap.baseDN` | Root DN to begin the search for the user in | `nil` |
| `ldap.bindDN` | DN of user to bind to LDAP | `nil` |
| `ldap.bind_password` | Password for the user to bind to LDAP | `nil` |
| `replication.enabled` | Enable replication | `false` |
| `replication.user` | Replication user | `repl_user` |
| `replication.password` | Replication user password | `repl_password` |
| `replication.slaveReplicas` | Number of slave replicas | `1` |
| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` |
| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.slaveReplicas`. | `0` |
| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` |
| `existingSecret` | Name of existing secret to use for PostgreSQL passwords. The secret has to contain the keys `postgresql-postgres-password` which is the password for `postgresqlUsername` when it is different from `postgres`, `postgresql-password` which will override `postgresqlPassword`, `postgresql-replication-password` which will override `replication.password` and `postgresql-ldap-password` which will be used to authenticate on LDAP. | `nil` |
| `postgresqlPostgresPassword` | PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`) | _random 10 character alphanumeric string_ |
| `postgresqlUsername` | PostgreSQL admin user | `postgres` |
| `postgresqlPassword` | PostgreSQL admin password | _random 10 character alphanumeric string_ |
| `postgresqlDatabase` | PostgreSQL database | `nil` |
| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) |
| `extraEnv` | Any extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `[]` |
| `extraEnvVarsCM` | Name of a Config Map containing extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `nil` |
| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` |
| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` |
| `postgresqlConfiguration` | Runtime Config Parameters | `nil` |
| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` |
| `pgHbaConfiguration` | Content of pg_hba.conf | `nil (do not create pg_hba.conf)` |
| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` |
| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` |
| `initdbScripts` | Dictionary of initdb scripts | `nil` |
| `initdbUser` | PostgreSQL user to execute the .sql and sql.gz scripts | `nil` |
| `initdbPassword` | Password for the user specified in `initdbUser` | `nil` |
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` |
| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). The value is evaluated as a template. | `nil` |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.port` | PostgreSQL port | `5432` |
| `service.nodePort` | Kubernetes Service nodePort | `nil` |
| `service.annotations` | Annotations for PostgreSQL service, the value is evaluated as a template. | {} |
| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` |
| `service.loadBalancerSourceRanges` | Address that are allowed when svc is LoadBalancer | [] |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `shmVolume.enabled` | Enable emptyDir volume for /dev/shm for master and slave(s) Pod(s) | `true` |
| `shmVolume.chmod.enabled` | Run at init chmod 777 of the /dev/shm (ignored if `volumePermissions.enabled` is `false`) | `true` |
| `persistence.enabled` | Enable persistence using PVC | `true` |
| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` |
| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` |
| `persistence.subPath` | Subdirectory of the volume to mount at | `""` |
| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` |
| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` |
| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` |
| `persistence.annotations` | Annotations for the PVC | `{}` |
| `master.nodeSelector` | Node labels for pod assignment (postgresql master) | `{}` |
| `master.affinity` | Affinity labels for pod assignment (postgresql master) | `{}` |
| `master.tolerations` | Toleration labels for pod assignment (postgresql master) | `[]` |
| `master.anotations` | Map of annotations to add to the statefulset (postgresql master) | `{}` |
| `master.labels` | Map of labels to add to the statefulset (postgresql master) | `{}` |
| `master.podAnnotations` | Map of annotations to add to the pods (postgresql master) | `{}` |
| `master.podLabels` | Map of labels to add to the pods (postgresql master) | `{}` |
| `master.priorityClassName` | Priority Class to use for each pod (postgresql master) | `nil` |
| `master.extraInitContainers` | Additional init containers to add to the pods (postgresql master) | `[]` |
| `master.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql master) | `[]` |
| `master.extraVolumes` | Additional volumes to add to the pods (postgresql master) | `[]` |
| `master.sidecars` | Add additional containers to the pod | `[]` |
| `slave.nodeSelector` | Node labels for pod assignment (postgresql slave) | `{}` |
| `slave.affinity` | Affinity labels for pod assignment (postgresql slave) | `{}` |
| `slave.tolerations` | Toleration labels for pod assignment (postgresql slave) | `[]` |
| `slave.anotations` | Map of annotations to add to the statefulsets (postgresql slave) | `{}` |
| `slave.labels` | Map of labels to add to the statefulsets (postgresql slave) | `{}` |
| `slave.podAnnotations` | Map of annotations to add to the pods (postgresql slave) | `{}` |
| `slave.podLabels` | Map of labels to add to the pods (postgresql slave) | `{}` |
| `slave.priorityClassName` | Priority Class to use for each pod (postgresql slave) | `nil` |
| `slave.extraInitContainers` | Additional init containers to add to the pods (postgresql slave) | `[]` |
| `slave.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql slave) | `[]` |
| `slave.extraVolumes` | Additional volumes to add to the pods (postgresql slave) | `[]` |
| `slave.sidecars` | Add additional containers to the pod | `[]` |
| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `serviceAccount.enabled` | Enable service account (Note: Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` |
| `serviceAccount.name` | Name of existing service account | `nil` |
| `livenessProbe.enabled` | Would you like a livenessProbe to be enabled | `true` |
| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which ingress traffic could be allowed | `{}` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
| `livenessProbe.periodSeconds` | How often to perform the probe | 10 |
| `livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `readinessProbe.enabled` | would you like a readinessProbe to be enabled | `true` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 |
| `readinessProbe.periodSeconds` | How often to perform the probe | 10 |
| `readinessProbe.timeoutSeconds` | When the probe times out | 5 |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `metrics.enabled` | Start a prometheus exporter | `false` |
| `metrics.service.type` | Kubernetes Service type | `ClusterIP` |
| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` |
| `metrics.service.loadBalancerIP` | loadBalancerIP if redis metrics service type is `LoadBalancer` | `nil` |
| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.namespace` | Optional namespace in which to create ServiceMonitor | `nil` |
| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` |
| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as postgresql |
| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. | `[]` |
| `metrics.image.registry` | PostgreSQL Image registry | `docker.io` |
| `metrics.image.repository` | PostgreSQL Image name | `bitnami/postgres-exporter` |
| `metrics.image.tag` | PostgreSQL Image tag | `{TAG_NAME}` |
| `metrics.image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `metrics.customMetrics` | Additional custom metrics | `nil` |
| `metrics.securityContext.enabled` | Enable security context for metrics | `false` |
| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` |
| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 |
| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `metrics.readinessProbe.enabled` | would you like a readinessProbe to be enabled | `true` |
| `metrics.readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 |
| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 |
| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 |
| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install my-release \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
```
The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install my-release -f values.yaml bitnami/postgresql
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Configuration and installation details
### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
### Production configuration and horizontal scaling
This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one.
- Enable replication:
```diff
- replication.enabled: false
+ replication.enabled: true
```
- Number of slaves replicas:
```diff
- replication.slaveReplicas: 1
+ replication.slaveReplicas: 2
```
- Set synchronous commit mode:
```diff
- replication.synchronousCommit: "off"
+ replication.synchronousCommit: "on"
```
- Number of replicas that will have synchronous replication:
```diff
- replication.numSynchronousReplicas: 0
+ replication.numSynchronousReplicas: 1
```
- Start a prometheus exporter:
```diff
- metrics.enabled: false
+ metrics.enabled: true
```
To horizontally scale this chart, you can use the `--replicas` flag to modify the number of nodes in your PostgreSQL deployment. Also you can use the `values-production.yaml` file or modify the parameters shown above.
### Change PostgreSQL version
To modify the PostgreSQL version used in this chart you can specify a [valid image tag](https://hub.docker.com/r/bitnami/postgresql/tags/) using the `image.tag` parameter. For example, `image.tag=12.0.0`
### postgresql.conf / pg_hba.conf files as configMap
This Helm chart also supports customizing the whole configuration file.
Add your custom file to "files/postgresql.conf" in your working directory. This file will be mounted as configMap to the containers and it will be used for configuring the PostgreSQL server.
Alternatively, you can specify PostgreSQL configuration parameters using the `postgresqlConfiguration` parameter as a dict, using camelCase, e.g. {"sharedBuffers": "500MB"}.
In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the `configurationConfigMap` parameter. Note that this will override the two previous options.
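For instance, a minimal values snippet using the dict form might look like this (parameter names and values are illustrative):
```yaml
postgresqlConfiguration:
  sharedBuffers: "500MB"
  maxConnections: "200"
```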
### Allow settings to be loaded from files other than the default `postgresql.conf`
If you don't want to provide the whole PostgreSQL configuration file and only specify certain parameters, you can add your extended `.conf` files to "files/conf.d/" in your working directory.
Those files will be mounted as configMap to the containers adding/overwriting the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`.
Alternatively, you can also set an external ConfigMap with all the extra configuration files. This is done by setting the `extendedConfConfigMap` parameter. Note that this will override the previous option.
### Initialize a fresh instance
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
Alternatively, you can specify custom scripts using the `initdbScripts` parameter as dict.
In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter.
The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
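As a sketch, a custom initialization script supplied inline through values (the script and table names are placeholders):
```yaml
initdbScripts:
  create_example_table.sql: |
    CREATE TABLE IF NOT EXISTS example (
      id   SERIAL PRIMARY KEY,
      note TEXT
    );
```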
### Sidecars
If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
```yaml
# For the PostgreSQL master
master:
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
# For the PostgreSQL replicas
slave:
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
### Metrics
The chart can optionally start a metrics exporter for [prometheus](https://prometheus.io). The metrics endpoint (port 9187) is not exposed; it is expected that the metrics are collected from inside the k8s cluster using something similar to what is described in the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml).
The exporter allows creating custom metrics from additional SQL queries. See the Chart's `values.yaml` for an example and consult the [exporter's documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details.
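A hedged sketch of a custom metric in the exporter's query format (the query and metric names are illustrative; the commented example in `values.yaml` and the exporter documentation are the authoritative references):
```yaml
metrics:
  enabled: true
  customMetrics:
    pg_database:
      query: "SELECT datname AS name, pg_database_size(datname) AS size_bytes FROM pg_database"
      metrics:
        - name:
            usage: "LABEL"
            description: "Name of the database"
        - size_bytes:
            usage: "GAUGE"
            description: "Size of the database in bytes"
```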
### Use of global variables
In more complex scenarios, we may have the following tree of dependencies
```
+--------------+
| |
+------------+ Chart 1 +-----------+
| | | |
| --------+------+ |
| | |
| | |
| | |
| | |
v v v
+-------+------+ +--------+------+ +--------+------+
| | | | | |
| PostgreSQL | | Sub-chart 1 | | Sub-chart 2 |
| | | | | |
+--------------+ +---------------+ +---------------+
```
The three charts below depend on the parent chart Chart 1. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, subcharts 1 and 2 need to know the PostgreSQL credentials, so one option for deploying could be to deploy Chart 1 with the following parameters:
```
postgresql.postgresqlPassword=testtest
subchart1.postgresql.postgresqlPassword=testtest
subchart2.postgresql.postgresqlPassword=testtest
postgresql.postgresqlDatabase=db1
subchart1.postgresql.postgresqlDatabase=db1
subchart2.postgresql.postgresqlDatabase=db1
```
If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows:
```
global.postgresql.postgresqlPassword=testtest
global.postgresql.postgresqlDatabase=db1
```
This way, the credentials will be available in all of the subcharts.
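Expressed as a values file rather than `--set` flags, the same globals look like:
```yaml
global:
  postgresql:
    postgresqlPassword: testtest
    postgresqlDatabase: db1
```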
## Persistence
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container.
Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
See the [Parameters](#parameters) section to configure the PVC or to disable persistence.
If the volume already contains data, synchronization to standby nodes will fail for all commits; see [the relevant code](https://github.com/bitnami/bitnami-docker-postgresql/blob/8725fe1d7d30ebe8d9a16e9175d05f7ad9260c93/9.6/debian-9/rootfs/libpostgresql.sh#L518-L556) for details. If you need to keep that data, convert it to a SQL dump and import it after `helm install` finishes.
## NetworkPolicy
To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`.
For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace:
```console
$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
```
With NetworkPolicy enabled, traffic will be limited to just port 5432.
For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL.
This label will be displayed in the output of a successful install.
## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image
- The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, it will be ignored. The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA. All the remaining environment variables are specific to the Bitnami PostgreSQL image.
- The Bitnami PostgreSQL image is non-root by default. This requires that you run the pod with `securityContext` and updates the permissions of the volume with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift.
- For OpenShift, one may either define the runAsUser and fsGroup accordingly, or try this more dynamic option: volumePermissions.securityContext.runAsUser="auto",securityContext.enabled=false,shmVolume.chmod.enabled=false
### Deploy chart using Docker Official PostgreSQL Image
From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image.
Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. Basically, the PostgreSQL data dir cannot be the mount point directly, it has to be a subdirectory.
```
image.repository=postgres
image.tag=10.6
postgresqlDataDir=/data/pgdata
persistence.mountPath=/data/
```
## Upgrade
It's necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid randomly generated passwords. Remember to specify the existing values of the `postgresqlPassword` and `replication.password` parameters when upgrading the chart:
```bash
$ helm upgrade my-release stable/postgresql \
--set postgresqlPassword=[POSTGRESQL_PASSWORD] \
--set replication.password=[REPLICATION_PASSWORD]
```
> Note: you need to substitute the placeholders _[POSTGRESQL_PASSWORD]_, and _[REPLICATION_PASSWORD]_ with the values obtained from instructions in the installation notes.
## 8.0.0
Prefixes the port names with their protocols to comply with Istio conventions.
If you depend on the port names in your setup, make sure to update them to reflect this change.
## 7.1.0
Adds support for LDAP configuration.
## 7.0.0
Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment). Also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly which has since been fixed to match the spec.
In https://github.com/helm/charts/pull/17281 the `apiVersion` of the statefulset resources was updated to `apps/v1` in line with the API deprecation, resulting in compatibility breakage.
This major version bump signifies this change.
## 6.5.7
In this version, the chart will use PostgreSQL with the PostGIS extension included. The version used with PostgreSQL versions 10, 11 and 12 is PostGIS 2.5. It has been compiled with the following dependencies:
- protobuf
- protobuf-c
- json-c
- geos
- proj
## 5.0.0
In this version, the **chart is using PostgreSQL 11 instead of PostgreSQL 10**. You can find the main difference and notable changes in the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/).
For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. You may see errors like the following in the logs:
```console
Welcome to the Bitnami postgresql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
Send us your feedback at containers@bitnami.com
INFO ==> ** Starting PostgreSQL setup **
INFO ==> Validating settings in POSTGRESQL_* env vars...
INFO ==> Initializing PostgreSQL database...
INFO ==> postgresql.conf file not detected. Generating it...
INFO ==> pg_hba.conf file not detected. Generating it...
INFO ==> Deploying PostgreSQL with persisted data...
INFO ==> Configuring replication parameters
INFO ==> Loading custom scripts...
INFO ==> Enabling remote connections
INFO ==> Stopping PostgreSQL...
INFO ==> ** PostgreSQL setup finished! **
INFO ==> ** Starting PostgreSQL **
[1] FATAL: database files are incompatible with server
[1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3.
```
In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) from the official documentation. Basically, create a database dump in the old chart, move and restore it in the new one.
### 4.0.0
This chart will use by default the Bitnami PostgreSQL container starting from version `10.7.0-r68`. This version moves the initialization logic from node.js to bash. This new version of the chart requires setting the `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users from previous versions of the chart are advised to upgrade immediately.
IMPORTANT: If you do not want to upgrade the chart version then make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error
```
The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development
```
### 3.0.0
This release makes it possible to specify different nodeSelector, affinity and tolerations for master and slave pods.
It also fixes an issue with the `postgresql.master.fullname` helper template not obeying `fullnameOverride`.
#### Breaking changes
- `affinity` has been renamed to `master.affinity` and `slave.affinity`.
- `tolerations` has been renamed to `master.tolerations` and `slave.tolerations`.
- `nodeSelector` has been renamed to `master.nodeSelector` and `slave.nodeSelector`.
### 2.0.0
In order to upgrade from the `0.X.X` branch to `1.X.X`, you should follow the below steps:
- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing postgresql chart. You can find the instructions to obtain the password in the NOTES.txt; the service name can be obtained by running
```console
$ kubectl get svc
```
- Install (not upgrade) the new version
```console
$ helm repo update
$ helm install my-release bitnami/postgresql
```
- Connect to the new pod (you can obtain the name by running `kubectl get pods`):
```console
$ kubectl exec -it NAME bash
```
- Once logged in, create a dump file from the previous database using `pg_dump`; for that, connect to the previous PostgreSQL chart:
```console
$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql
```
After running the above command, you will be prompted for a password; this is the previous chart's password (`OLD_PASSWORD`).
This operation could take some time depending on the database size.
- Once you have the backup file, you can restore it with a command like the one below:
```console
$ psql -U postgres DATABASE_NAME < /tmp/backup.sql
```
In this case, you are accessing the local PostgreSQL, so the password should be the new one (you can find it in the NOTES.txt).
If you want to restore the database and the database schema does not exist, it is necessary to first follow the steps described below.
```console
$ psql -U postgres
postgres=# drop database DATABASE_NAME;
postgres=# create database DATABASE_NAME;
postgres=# create user USER_NAME;
postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD';
postgres=# grant all privileges on database DATABASE_NAME to USER_NAME;
postgres=# alter database DATABASE_NAME owner to USER_NAME;
```

View File

@ -0,0 +1 @@
# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml.

View File

@ -0,0 +1,2 @@
shmVolume:
enabled: false

View File

@ -0,0 +1 @@
Copy your postgresql.conf and/or pg_hba.conf files here to use them as a config map.

View File

@ -0,0 +1,4 @@
If you don't want to provide the whole configuration file and only specify certain parameters, you can copy here your extended `.conf` files.
These files will be injected as config maps and will add to/overwrite the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`.
More info in the [bitnami-docker-postgresql README](https://github.com/bitnami/bitnami-docker-postgresql#configuration-file).

View File

@ -0,0 +1,3 @@
You can copy your custom `.sh`, `.sql` or `.sql.gz` files here so they are executed during the first boot of the image.
More info in the [bitnami-docker-postgresql](https://github.com/bitnami/bitnami-docker-postgresql#initializing-a-new-instance) repository.

View File

@ -0,0 +1,60 @@
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port {{ template "postgresql.port" . }} on the following DNS name from within your cluster:
{{ template "postgresql.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - Read/Write connection
{{- if .Values.replication.enabled }}
{{ template "postgresql.fullname" . }}-read.{{ .Release.Namespace }}.svc.cluster.local - Read only connection
{{- end }}
{{- if and .Values.postgresqlPostgresPassword (not (eq .Values.postgresqlUsername "postgres")) }}
To get the password for "postgres" run:
export POSTGRES_ADMIN_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-postgres-password}" | base64 --decode)
{{- end }}
To get the password for "{{ template "postgresql.username" . }}" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
kubectl run {{ template "postgresql.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image {{ template "postgresql.image" . }} --env="PGPASSWORD=$POSTGRES_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
--labels="{{ template "postgresql.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "postgresql.fullname" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }}
{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
Note: Since NetworkPolicy is enabled, only pods with the label "{{ template "postgresql.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster.
{{- end }}
To connect to your database from outside the cluster execute the following commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "postgresql.fullname" . }})
{{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }}
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "postgresql.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "postgresql.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
{{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $SERVICE_IP --port {{ template "postgresql.port" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }}
{{- else if contains "ClusterIP" .Values.service.type }}
kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "postgresql.fullname" . }} {{ template "postgresql.port" . }}:{{ template "postgresql.port" . }} &
{{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }}
{{- end }}
{{- include "postgresql.validateValues" . -}}
{{- if and (contains "bitnami/" .Values.image.repository) (not (.Values.image.tag | toString | regexFind "-r\\d+$|sha256:")) }}
WARNING: Rolling tag detected ({{ .Values.image.repository }}:{{ .Values.image.tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
{{- end }}

View File

@ -0,0 +1,420 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "postgresql.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "postgresql.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "postgresql.master.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- $fullname := default (printf "%s-%s" .Release.Name $name) .Values.fullnameOverride -}}
{{- if .Values.replication.enabled -}}
{{- printf "%s-%s" $fullname "master" | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s" $fullname | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for networkpolicy.
*/}}
{{- define "postgresql.networkPolicy.apiVersion" -}}
{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}}
"extensions/v1beta1"
{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion -}}
"networking.k8s.io/v1"
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "postgresql.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Return the proper PostgreSQL image name
*/}}
{{- define "postgresql.image" -}}
{{- $registryName := .Values.image.registry -}}
{{- $repositoryName := .Values.image.repository -}}
{{- $tag := .Values.image.tag | toString -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
Also, we can't use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL postgres user password
*/}}
{{- define "postgresql.postgres.password" -}}
{{- if .Values.global.postgresql.postgresqlPostgresPassword }}
{{- .Values.global.postgresql.postgresqlPostgresPassword -}}
{{- else if .Values.postgresqlPostgresPassword -}}
{{- .Values.postgresqlPostgresPassword -}}
{{- else -}}
{{- randAlphaNum 10 -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL password
*/}}
{{- define "postgresql.password" -}}
{{- if .Values.global.postgresql.postgresqlPassword }}
{{- .Values.global.postgresql.postgresqlPassword -}}
{{- else if .Values.postgresqlPassword -}}
{{- .Values.postgresqlPassword -}}
{{- else -}}
{{- randAlphaNum 10 -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL replication password
*/}}
{{- define "postgresql.replication.password" -}}
{{- if .Values.global.postgresql.replicationPassword }}
{{- .Values.global.postgresql.replicationPassword -}}
{{- else if .Values.replication.password -}}
{{- .Values.replication.password -}}
{{- else -}}
{{- randAlphaNum 10 -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL username
*/}}
{{- define "postgresql.username" -}}
{{- if .Values.global.postgresql.postgresqlUsername }}
{{- .Values.global.postgresql.postgresqlUsername -}}
{{- else -}}
{{- .Values.postgresqlUsername -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL replication username
*/}}
{{- define "postgresql.replication.username" -}}
{{- if .Values.global.postgresql.replicationUser }}
{{- .Values.global.postgresql.replicationUser -}}
{{- else -}}
{{- .Values.replication.user -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL port
*/}}
{{- define "postgresql.port" -}}
{{- if .Values.global.postgresql.servicePort }}
{{- .Values.global.postgresql.servicePort -}}
{{- else -}}
{{- .Values.service.port -}}
{{- end -}}
{{- end -}}
{{/*
Return PostgreSQL created database
*/}}
{{- define "postgresql.database" -}}
{{- if .Values.global.postgresql.postgresqlDatabase }}
{{- .Values.global.postgresql.postgresqlDatabase -}}
{{- else if .Values.postgresqlDatabase -}}
{{- .Values.postgresqlDatabase -}}
{{- end -}}
{{- end -}}
{{/*
Return the proper image name to change the volume permissions
*/}}
{{- define "postgresql.volumePermissions.image" -}}
{{- $registryName := .Values.volumePermissions.image.registry -}}
{{- $repositoryName := .Values.volumePermissions.image.repository -}}
{{- $tag := .Values.volumePermissions.image.tag | toString -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
Also, we can't use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
{{/*
Return the proper PostgreSQL metrics image name
*/}}
{{- define "postgresql.metrics.image" -}}
{{- $registryName := default "docker.io" .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := default "latest" .Values.metrics.image.tag | toString -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
Also, we can't use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
{{/*
Get the password secret.
*/}}
{{- define "postgresql.secretName" -}}
{{- if .Values.global.postgresql.existingSecret }}
{{- printf "%s" .Values.global.postgresql.existingSecret -}}
{{- else if .Values.existingSecret -}}
{{- printf "%s" .Values.existingSecret -}}
{{- else -}}
{{- printf "%s" (include "postgresql.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Return true if a secret object should be created
*/}}
{{- define "postgresql.createSecret" -}}
{{- if .Values.global.postgresql.existingSecret }}
{{- else if .Values.existingSecret -}}
{{- else -}}
{{- true -}}
{{- end -}}
{{- end -}}
{{/*
Get the configuration ConfigMap name.
*/}}
{{- define "postgresql.configurationCM" -}}
{{- if .Values.configurationConfigMap -}}
{{- printf "%s" (tpl .Values.configurationConfigMap $) -}}
{{- else -}}
{{- printf "%s-configuration" (include "postgresql.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Get the extended configuration ConfigMap name.
*/}}
{{- define "postgresql.extendedConfigurationCM" -}}
{{- if .Values.extendedConfConfigMap -}}
{{- printf "%s" (tpl .Values.extendedConfConfigMap $) -}}
{{- else -}}
{{- printf "%s-extended-configuration" (include "postgresql.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Get the initialization scripts ConfigMap name.
*/}}
{{- define "postgresql.initdbScriptsCM" -}}
{{- if .Values.initdbScriptsConfigMap -}}
{{- printf "%s" (tpl .Values.initdbScriptsConfigMap $) -}}
{{- else -}}
{{- printf "%s-init-scripts" (include "postgresql.fullname" .) -}}
{{- end -}}
{{- end -}}
{{/*
Get the initialization scripts Secret name.
*/}}
{{- define "postgresql.initdbScriptsSecret" -}}
{{- printf "%s" (tpl .Values.initdbScriptsSecret $) -}}
{{- end -}}
{{/*
Get the metrics ConfigMap name.
*/}}
{{- define "postgresql.metricsCM" -}}
{{- printf "%s-metrics" (include "postgresql.fullname" .) -}}
{{- end -}}
{{/*
Return the proper Docker Image Registry Secret Names
*/}}
{{- define "postgresql.imagePullSecrets" -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
Also, we cannot use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.metrics.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.volumePermissions.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end -}}
{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.metrics.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.volumePermissions.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end -}}
{{- end -}}
{{/*
Get the readiness probe command
*/}}
{{- define "postgresql.readinessProbeCommand" -}}
- |
{{- if (include "postgresql.database" .) }}
exec pg_isready -U {{ include "postgresql.username" . | quote }} -d {{ (include "postgresql.database" .) | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
exec pg_isready -U {{ include "postgresql.username" . | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
{{- if contains "bitnami/" .Values.image.repository }}
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
{{- end -}}
{{- end -}}
{{/*
Return the proper Storage Class
*/}}
{{- define "postgresql.storageClass" -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
*/}}
{{- if .Values.global -}}
{{- if .Values.global.storageClass -}}
{{- if (eq "-" .Values.global.storageClass) -}}
{{- printf "storageClassName: \"\"" -}}
{{- else }}
{{- printf "storageClassName: %s" .Values.global.storageClass -}}
{{- end -}}
{{- else -}}
{{- if .Values.persistence.storageClass -}}
{{- if (eq "-" .Values.persistence.storageClass) -}}
{{- printf "storageClassName: \"\"" -}}
{{- else }}
{{- printf "storageClassName: %s" .Values.persistence.storageClass -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- else -}}
{{- if .Values.persistence.storageClass -}}
{{- if (eq "-" .Values.persistence.storageClass) -}}
{{- printf "storageClassName: \"\"" -}}
{{- else }}
{{- printf "storageClassName: %s" .Values.persistence.storageClass -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Renders a value that contains template.
Usage:
{{ include "postgresql.tplValue" ( dict "value" .Values.path.to.the.Value "context" $) }}
*/}}
{{- define "postgresql.tplValue" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (.value | toYaml) .context }}
{{- end }}
{{- end -}}
{{/*
Return the appropriate apiVersion for statefulset.
*/}}
{{- define "postgresql.statefulset.apiVersion" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "apps/v1beta2" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Compile all warnings into a single message, and call fail.
*/}}
{{- define "postgresql.validateValues" -}}
{{- $messages := list -}}
{{- $messages := append $messages (include "postgresql.validateValues.ldapConfigurationMethod" .) -}}
{{- $messages := without $messages "" -}}
{{- $message := join "\n" $messages -}}
{{- if $message -}}
{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}}
{{- end -}}
{{- end -}}
{{/*
Validate values of Postgresql - If ldap.url is used then you don't need the other settings for ldap
*/}}
{{- define "postgresql.validateValues.ldapConfigurationMethod" -}}
{{- if and .Values.ldap.enabled (and (not (empty .Values.ldap.url)) (not (empty .Values.ldap.server))) }}
postgresql: ldap.url, ldap.server
You cannot set both `ldap.url` and `ldap.server` at the same time.
Please provide a unique way to configure LDAP.
More info at https://www.postgresql.org/docs/current/auth-ldap.html
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,26 @@
{{ if and (or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration) (not .Values.configurationConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "postgresql.fullname" . }}-configuration
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
{{- if (.Files.Glob "files/postgresql.conf") }}
{{ (.Files.Glob "files/postgresql.conf").AsConfig | indent 2 }}
{{- else if .Values.postgresqlConfiguration }}
postgresql.conf: |
{{- range $key, $value := default dict .Values.postgresqlConfiguration }}
{{ $key | snakecase }}={{ $value }}
{{- end }}
{{- end }}
{{- if (.Files.Glob "files/pg_hba.conf") }}
{{ (.Files.Glob "files/pg_hba.conf").AsConfig | indent 2 }}
{{- else if .Values.pgHbaConfiguration }}
pg_hba.conf: |
{{ .Values.pgHbaConfiguration | indent 4 }}
{{- end }}
{{ end }}

View File

@ -0,0 +1,21 @@
{{- if and (or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf) (not .Values.extendedConfConfigMap)}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "postgresql.fullname" . }}-extended-configuration
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
{{- with .Files.Glob "files/conf.d/*.conf" }}
{{ .AsConfig | indent 2 }}
{{- end }}
{{ with .Values.postgresqlExtendedConf }}
override.conf: |
{{- range $key, $value := . }}
{{ $key | snakecase }}={{ $value }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,24 @@
{{- if and (or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScripts) (not .Values.initdbScriptsConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "postgresql.fullname" . }}-init-scripts
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz" }}
binaryData:
{{- range $path, $bytes := . }}
{{ base $path }}: {{ $.Files.Get $path | b64enc | quote }}
{{- end }}
{{- end }}
data:
{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}" }}
{{ .AsConfig | indent 2 }}
{{- end }}
{{- with .Values.initdbScripts }}
{{ toYaml . | indent 2 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,13 @@
{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "postgresql.metricsCM" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
custom-metrics.yaml: {{ toYaml .Values.metrics.customMetrics | quote }}
{{- end }}

View File

@ -0,0 +1,26 @@
{{- if .Values.metrics.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "postgresql.fullname" . }}-metrics
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
annotations:
{{ toYaml .Values.metrics.service.annotations | indent 4 }}
spec:
type: {{ .Values.metrics.service.type }}
{{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }}
{{- end }}
ports:
- name: http-metrics
port: 9187
targetPort: http-metrics
selector:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name }}
role: master
{{- end }}

View File

@ -0,0 +1,38 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: {{ template "postgresql.networkPolicy.apiVersion" . }}
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
podSelector:
matchLabels:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
ingress:
# Allow inbound connections
- ports:
- port: {{ template "postgresql.port" . }}
{{- if not .Values.networkPolicy.allowExternal }}
from:
- podSelector:
matchLabels:
{{ template "postgresql.fullname" . }}-client: "true"
{{- if .Values.networkPolicy.explicitNamespacesSelector }}
namespaceSelector:
{{ toYaml .Values.networkPolicy.explicitNamespacesSelector | indent 12 }}
{{- end }}
- podSelector:
matchLabels:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
role: slave
{{- end }}
# Allow prometheus scrapes
- ports:
- port: 9187
{{- end }}

View File

@ -0,0 +1,23 @@
{{- if and .Values.metrics.enabled .Values.metrics.prometheusRule.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "postgresql.fullname" . }}
{{- with .Values.metrics.prometheusRule.namespace }}
namespace: {{ . }}
{{- end }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- with .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- with .Values.metrics.prometheusRule.rules }}
groups:
- name: {{ template "postgresql.name" $ }}
rules: {{ tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,23 @@
{{- if (include "postgresql.createSecret" .) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
type: Opaque
data:
{{- if and .Values.postgresqlPostgresPassword (not (eq .Values.postgresqlUsername "postgres")) }}
postgresql-postgres-password: {{ include "postgresql.postgres.password" . | b64enc | quote }}
{{- end }}
postgresql-password: {{ include "postgresql.password" . | b64enc | quote }}
{{- if .Values.replication.enabled }}
postgresql-replication-password: {{ include "postgresql.replication.password" . | b64enc | quote }}
{{- end }}
{{- if (and .Values.ldap.enabled .Values.ldap.bind_password)}}
postgresql-ldap-password: {{ .Values.ldap.bind_password | b64enc | quote }}
{{- end }}
{{- end -}}

View File

@ -0,0 +1,11 @@
{{- if and (.Values.serviceAccount.enabled) (not .Values.serviceAccount.name) }}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
name: {{ template "postgresql.fullname" . }}
{{- end }}

View File

@ -0,0 +1,33 @@
{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "postgresql.fullname" . }}
{{- if .Values.metrics.serviceMonitor.namespace }}
namespace: {{ .Values.metrics.serviceMonitor.namespace }}
{{- end }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.metrics.serviceMonitor.additionalLabels }}
{{ toYaml .Values.metrics.serviceMonitor.additionalLabels | indent 4 }}
{{- end }}
spec:
endpoints:
- port: http-metrics
{{- if .Values.metrics.serviceMonitor.interval }}
interval: {{ .Values.metrics.serviceMonitor.interval }}
{{- end }}
{{- if .Values.metrics.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout }}
{{- end }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name }}
{{- end }}

View File

@ -0,0 +1,299 @@
{{- if .Values.replication.enabled }}
apiVersion: {{ template "postgresql.statefulset.apiVersion" . }}
kind: StatefulSet
metadata:
name: "{{ template "postgresql.fullname" . }}-slave"
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- with .Values.slave.labels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- with .Values.slave.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
serviceName: {{ template "postgresql.fullname" . }}-headless
replicas: {{ .Values.replication.slaveReplicas }}
selector:
matchLabels:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
role: slave
template:
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
role: slave
{{- with .Values.slave.podLabels }}
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.slave.podAnnotations }}
annotations:
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- include "postgresql.imagePullSecrets" . | indent 6 }}
{{- if .Values.slave.nodeSelector }}
nodeSelector:
{{ toYaml .Values.slave.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.slave.affinity }}
affinity:
{{ toYaml .Values.slave.affinity | indent 8 }}
{{- end }}
{{- if .Values.slave.tolerations }}
tolerations:
{{ toYaml .Values.slave.tolerations | indent 8 }}
{{- end }}
{{- if .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
{{- end }}
{{- if .Values.serviceAccount.enabled }}
serviceAccountName: {{ default (include "postgresql.fullname" . ) .Values.serviceAccount.name}}
{{- end }}
{{- if or .Values.slave.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }}
initContainers:
{{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled)) }}
- name: init-chmod-data
image: {{ template "postgresql.volumePermissions.image" . }}
imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 12 }}
{{- end }}
command:
- /bin/sh
- -cx
- |
{{ if .Values.persistence.enabled }}
mkdir -p {{ .Values.persistence.mountPath }}/data
chmod 700 {{ .Values.persistence.mountPath }}/data
find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | \
{{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }}
xargs chown -R `id -u`:`id -G | cut -d " " -f2`
{{- else }}
xargs chown -R {{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }}
{{- end }}
{{- end }}
{{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }}
chmod -R 777 /dev/shm
{{- end }}
{{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }}
securityContext:
{{- else }}
securityContext:
runAsUser: {{ .Values.volumePermissions.securityContext.runAsUser }}
{{- end }}
volumeMounts:
{{ if .Values.persistence.enabled }}
- name: data
mountPath: {{ .Values.persistence.mountPath }}
subPath: {{ .Values.persistence.subPath }}
{{- end }}
{{- if .Values.shmVolume.enabled }}
- name: dshm
mountPath: /dev/shm
{{- end }}
{{- end }}
{{- if .Values.slave.extraInitContainers }}
{{ tpl .Values.slave.extraInitContainers . | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.slave.priorityClassName }}
priorityClassName: {{ .Values.slave.priorityClassName }}
{{- end }}
containers:
- name: {{ template "postgresql.fullname" . }}
image: {{ template "postgresql.image" . }}
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 12 }}
{{- end }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
env:
- name: BITNAMI_DEBUG
value: {{ ternary "true" "false" .Values.image.debug | quote }}
- name: POSTGRESQL_VOLUME_DIR
value: "{{ .Values.persistence.mountPath }}"
- name: POSTGRESQL_PORT_NUMBER
value: "{{ template "postgresql.port" . }}"
{{- if .Values.persistence.mountPath }}
- name: PGDATA
value: {{ .Values.postgresqlDataDir | quote }}
{{- end }}
- name: POSTGRES_REPLICATION_MODE
value: "slave"
- name: POSTGRES_REPLICATION_USER
value: {{ include "postgresql.replication.username" . | quote }}
{{- if .Values.usePasswordFile }}
- name: POSTGRES_REPLICATION_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password"
{{- else }}
- name: POSTGRES_REPLICATION_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-replication-password
{{- end }}
- name: POSTGRES_CLUSTER_APP_NAME
value: {{ .Values.replication.applicationName }}
- name: POSTGRES_MASTER_HOST
value: {{ template "postgresql.fullname" . }}
- name: POSTGRES_MASTER_PORT_NUMBER
value: {{ include "postgresql.port" . | quote }}
{{- if and .Values.postgresqlPostgresPassword (not (eq .Values.postgresqlUsername "postgres")) }}
{{- if .Values.usePasswordFile }}
- name: POSTGRES_POSTGRES_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password"
{{- else }}
- name: POSTGRES_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-postgres-password
{{- end }}
{{- end }}
{{- if .Values.usePasswordFile }}
- name: POSTGRES_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-password"
{{- else }}
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-password
{{- end }}
ports:
- name: tcp-postgresql
containerPort: {{ template "postgresql.port" . }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
exec:
command:
- /bin/sh
- -c
{{- if (include "postgresql.database" .) }}
- exec pg_isready -U {{ include "postgresql.username" . | quote }} -d {{ (include "postgresql.database" .) | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
- exec pg_isready -U {{ include "postgresql.username" . | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
exec:
command:
- /bin/sh
- -c
- -e
{{- include "postgresql.readinessProbeCommand" . | nindent 16 }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
{{- end }}
volumeMounts:
{{- if .Values.usePasswordFile }}
- name: postgresql-password
mountPath: /opt/bitnami/postgresql/secrets/
{{- end }}
{{- if .Values.shmVolume.enabled }}
- name: dshm
mountPath: /dev/shm
{{- end }}
{{- if .Values.persistence.enabled }}
- name: data
mountPath: {{ .Values.persistence.mountPath }}
subPath: {{ .Values.persistence.subPath }}
{{ end }}
{{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }}
- name: postgresql-extended-config
mountPath: /bitnami/postgresql/conf/conf.d/
{{- end }}
{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }}
- name: postgresql-config
mountPath: /bitnami/postgresql/conf
{{- end }}
{{- if .Values.slave.extraVolumeMounts }}
{{- toYaml .Values.slave.extraVolumeMounts | nindent 12 }}
{{- end }}
{{- if .Values.slave.sidecars }}
{{- include "postgresql.tplValue" ( dict "value" .Values.slave.sidecars "context" $ ) | nindent 8 }}
{{- end }}
volumes:
{{- if .Values.usePasswordFile }}
- name: postgresql-password
secret:
secretName: {{ template "postgresql.secretName" . }}
{{- end }}
{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}}
- name: postgresql-config
configMap:
name: {{ template "postgresql.configurationCM" . }}
{{- end }}
{{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }}
- name: postgresql-extended-config
configMap:
name: {{ template "postgresql.extendedConfigurationCM" . }}
{{- end }}
{{- if .Values.shmVolume.enabled }}
- name: dshm
emptyDir:
medium: Memory
sizeLimit: 1Gi
{{- end }}
{{- if not .Values.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
{{- if .Values.slave.extraVolumes }}
{{- toYaml .Values.slave.extraVolumes | nindent 8 }}
{{- end }}
updateStrategy:
type: {{ .Values.updateStrategy.type }}
{{- if (eq "Recreate" .Values.updateStrategy.type) }}
rollingUpdate: null
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: data
{{- with .Values.persistence.annotations }}
annotations:
{{- range $key, $value := . }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
spec:
accessModes:
{{- range .Values.persistence.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{ include "postgresql.storageClass" . }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,453 @@
apiVersion: {{ template "postgresql.statefulset.apiVersion" . }}
kind: StatefulSet
metadata:
name: {{ template "postgresql.master.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- with .Values.master.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.master.annotations }}
annotations: {{ toYaml . | nindent 4 }}
{{- end }}
spec:
serviceName: {{ template "postgresql.fullname" . }}-headless
replicas: 1
updateStrategy:
type: {{ .Values.updateStrategy.type }}
{{- if (eq "Recreate" .Values.updateStrategy.type) }}
rollingUpdate: null
{{- end }}
selector:
matchLabels:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
role: master
template:
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
role: master
{{- with .Values.master.podLabels }}
{{- toYaml . | indent 8 }}
{{- end }}
{{- with .Values.master.podAnnotations }}
annotations: {{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- include "postgresql.imagePullSecrets" . | indent 6 }}
{{- if .Values.master.nodeSelector }}
nodeSelector: {{- toYaml .Values.master.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.master.affinity }}
affinity: {{- toYaml .Values.master.affinity | nindent 8 }}
{{- end }}
{{- if .Values.master.tolerations }}
tolerations: {{- toYaml .Values.master.tolerations | nindent 8 }}
{{- end }}
{{- if .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
{{- end }}
{{- if .Values.serviceAccount.enabled }}
serviceAccountName: {{ default (include "postgresql.fullname" . ) .Values.serviceAccount.name }}
{{- end }}
{{- if or .Values.master.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }}
initContainers:
{{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled)) }}
- name: init-chmod-data
image: {{ template "postgresql.volumePermissions.image" . }}
imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 12 }}
{{- end }}
command:
- /bin/sh
- -cx
- |
{{- if .Values.persistence.enabled }}
mkdir -p {{ .Values.persistence.mountPath }}/data
chmod 700 {{ .Values.persistence.mountPath }}/data
find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | \
{{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }}
xargs chown -R `id -u`:`id -G | cut -d " " -f2`
{{- else }}
xargs chown -R {{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }}
{{- end }}
{{- end }}
{{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }}
chmod -R 777 /dev/shm
{{- end }}
{{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }}
securityContext:
{{- else }}
securityContext:
runAsUser: {{ .Values.volumePermissions.securityContext.runAsUser }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: data
mountPath: {{ .Values.persistence.mountPath }}
subPath: {{ .Values.persistence.subPath }}
{{- end }}
{{- if .Values.shmVolume.enabled }}
- name: dshm
mountPath: /dev/shm
{{- end }}
{{- end }}
{{- if .Values.master.extraInitContainers }}
{{- tpl .Values.master.extraInitContainers . | nindent 8 }}
{{- end }}
{{- end }}
{{- if .Values.master.priorityClassName }}
priorityClassName: {{ .Values.master.priorityClassName }}
{{- end }}
containers:
- name: {{ template "postgresql.fullname" . }}
image: {{ template "postgresql.image" . }}
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 12 }}
{{- end }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
env:
- name: BITNAMI_DEBUG
value: {{ ternary "true" "false" .Values.image.debug | quote }}
- name: POSTGRESQL_PORT_NUMBER
value: "{{ template "postgresql.port" . }}"
- name: POSTGRESQL_VOLUME_DIR
value: "{{ .Values.persistence.mountPath }}"
{{- if .Values.postgresqlInitdbArgs }}
- name: POSTGRES_INITDB_ARGS
value: {{ .Values.postgresqlInitdbArgs | quote }}
{{- end }}
{{- if .Values.postgresqlInitdbWalDir }}
- name: POSTGRES_INITDB_WALDIR
value: {{ .Values.postgresqlInitdbWalDir | quote }}
{{- end }}
{{- if .Values.initdbUser }}
- name: POSTGRESQL_INITSCRIPTS_USERNAME
value: {{ .Values.initdbUser }}
{{- end }}
{{- if .Values.initdbPassword }}
- name: POSTGRESQL_INITSCRIPTS_PASSWORD
value: {{ .Values.initdbPassword }}
{{- end }}
{{- if .Values.persistence.mountPath }}
- name: PGDATA
value: {{ .Values.postgresqlDataDir | quote }}
{{- end }}
{{- if .Values.replication.enabled }}
- name: POSTGRES_REPLICATION_MODE
value: "master"
- name: POSTGRES_REPLICATION_USER
value: {{ include "postgresql.replication.username" . | quote }}
{{- if .Values.usePasswordFile }}
- name: POSTGRES_REPLICATION_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password"
{{- else }}
- name: POSTGRES_REPLICATION_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-replication-password
{{- end }}
{{- if not (eq .Values.replication.synchronousCommit "off")}}
- name: POSTGRES_SYNCHRONOUS_COMMIT_MODE
value: {{ .Values.replication.synchronousCommit | quote }}
- name: POSTGRES_NUM_SYNCHRONOUS_REPLICAS
value: {{ .Values.replication.numSynchronousReplicas | quote }}
{{- end }}
- name: POSTGRES_CLUSTER_APP_NAME
value: {{ .Values.replication.applicationName }}
{{- end }}
{{- if and .Values.postgresqlPostgresPassword (not (eq .Values.postgresqlUsername "postgres")) }}
{{- if .Values.usePasswordFile }}
- name: POSTGRES_POSTGRES_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password"
{{- else }}
- name: POSTGRES_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-postgres-password
{{- end }}
{{- end }}
- name: POSTGRES_USER
value: {{ include "postgresql.username" . | quote }}
{{- if .Values.usePasswordFile }}
- name: POSTGRES_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-password"
{{- else }}
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-password
{{- end }}
{{- if (include "postgresql.database" .) }}
- name: POSTGRES_DB
value: {{ (include "postgresql.database" .) | quote }}
{{- end }}
{{- if .Values.extraEnv }}
{{- include "postgresql.tplValue" (dict "value" .Values.extraEnv "context" $) | nindent 12 }}
{{- end }}
- name: POSTGRESQL_ENABLE_LDAP
value: {{ ternary "yes" "no" .Values.ldap.enabled | quote }}
{{- if .Values.ldap.enabled }}
- name: POSTGRESQL_LDAP_SERVER
value: {{ .Values.ldap.server }}
- name: POSTGRESQL_LDAP_PORT
value: {{ .Values.ldap.port | quote }}
- name: POSTGRESQL_LDAP_SCHEME
value: {{ .Values.ldap.scheme }}
{{- if .Values.ldap.tls }}
- name: POSTGRESQL_LDAP_TLS
value: "1"
{{- end}}
- name: POSTGRESQL_LDAP_PREFIX
value: {{ .Values.ldap.prefix | quote }}
- name: POSTGRESQL_LDAP_SUFFIX
value: {{ .Values.ldap.suffix | quote}}
- name: POSTGRESQL_LDAP_BASE_DN
value: {{ .Values.ldap.baseDN }}
- name: POSTGRESQL_LDAP_BIND_DN
value: {{ .Values.ldap.bindDN }}
{{- if (not (empty .Values.ldap.bind_password)) }}
- name: POSTGRESQL_LDAP_BIND_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-ldap-password
{{- end}}
- name: POSTGRESQL_LDAP_SEARCH_ATTR
value: {{ .Values.ldap.search_attr }}
- name: POSTGRESQL_LDAP_SEARCH_FILTER
value: {{ .Values.ldap.search_filter }}
- name: POSTGRESQL_LDAP_URL
value: {{ .Values.ldap.url }}
{{- end}}
{{- if .Values.extraEnvVarsCM }}
envFrom:
- configMapRef:
name: {{ tpl .Values.extraEnvVarsCM . }}
{{- end }}
ports:
- name: tcp-postgresql
containerPort: {{ template "postgresql.port" . }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
exec:
command:
- /bin/sh
- -c
{{- if (include "postgresql.database" .) }}
- exec pg_isready -U {{ include "postgresql.username" . | quote }} -d {{ (include "postgresql.database" .) | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
- exec pg_isready -U {{ include "postgresql.username" . | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
exec:
command:
- /bin/sh
- -c
- -e
{{- include "postgresql.readinessProbeCommand" . | nindent 16 }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
{{- end }}
volumeMounts:
{{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }}
- name: custom-init-scripts
mountPath: /docker-entrypoint-initdb.d/
{{- end }}
{{- if .Values.initdbScriptsSecret }}
- name: custom-init-scripts-secret
mountPath: /docker-entrypoint-initdb.d/secret
{{- end }}
{{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }}
- name: postgresql-extended-config
mountPath: /bitnami/postgresql/conf/conf.d/
{{- end }}
{{- if .Values.usePasswordFile }}
- name: postgresql-password
mountPath: /opt/bitnami/postgresql/secrets/
{{- end }}
{{- if .Values.shmVolume.enabled }}
- name: dshm
mountPath: /dev/shm
{{- end }}
{{- if .Values.persistence.enabled }}
- name: data
mountPath: {{ .Values.persistence.mountPath }}
subPath: {{ .Values.persistence.subPath }}
{{- end }}
{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }}
- name: postgresql-config
mountPath: /bitnami/postgresql/conf
{{- end }}
{{- if .Values.master.extraVolumeMounts }}
{{- toYaml .Values.master.extraVolumeMounts | nindent 12 }}
{{- end }}
{{- if .Values.master.sidecars }}
{{- include "postgresql.tplValue" ( dict "value" .Values.master.sidecars "context" $ ) | nindent 8 }}
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
image: {{ template "postgresql.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
{{- if .Values.metrics.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.metrics.securityContext.runAsUser }}
{{- end }}
env:
{{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase)" (include "postgresql.database" .) }}
- name: DATA_SOURCE_URI
value: {{ printf "127.0.0.1:%d/%s?sslmode=disable" (int (include "postgresql.port" .)) $database | quote }}
{{- if .Values.usePasswordFile }}
- name: DATA_SOURCE_PASS_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-password"
{{- else }}
- name: DATA_SOURCE_PASS
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-password
{{- end }}
- name: DATA_SOURCE_USER
value: {{ template "postgresql.username" . }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /
port: http-metrics
initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /
port: http-metrics
initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }}
{{- end }}
volumeMounts:
{{- if .Values.usePasswordFile }}
- name: postgresql-password
mountPath: /opt/bitnami/postgresql/secrets/
{{- end }}
{{- if .Values.metrics.customMetrics }}
- name: custom-metrics
mountPath: /conf
readOnly: true
args: ["--extend.query-path", "/conf/custom-metrics.yaml"]
{{- end }}
ports:
- name: http-metrics
containerPort: 9187
{{- if .Values.metrics.resources }}
resources: {{- toYaml .Values.metrics.resources | nindent 12 }}
{{- end }}
{{- end }}
volumes:
{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}}
- name: postgresql-config
configMap:
name: {{ template "postgresql.configurationCM" . }}
{{- end }}
{{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }}
- name: postgresql-extended-config
configMap:
name: {{ template "postgresql.extendedConfigurationCM" . }}
{{- end }}
{{- if .Values.usePasswordFile }}
- name: postgresql-password
secret:
secretName: {{ template "postgresql.secretName" . }}
{{- end }}
{{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }}
- name: custom-init-scripts
configMap:
name: {{ template "postgresql.initdbScriptsCM" . }}
{{- end }}
{{- if .Values.initdbScriptsSecret }}
- name: custom-init-scripts-secret
secret:
secretName: {{ template "postgresql.initdbScriptsSecret" . }}
{{- end }}
{{- if .Values.master.extraVolumes }}
{{- toYaml .Values.master.extraVolumes | nindent 8 }}
{{- end }}
{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }}
- name: custom-metrics
configMap:
name: {{ template "postgresql.metricsCM" . }}
{{- end }}
{{- if .Values.shmVolume.enabled }}
- name: dshm
emptyDir:
medium: Memory
sizeLimit: 1Gi
{{- end }}
{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
- name: data
persistentVolumeClaim:
{{- with .Values.persistence.existingClaim }}
claimName: {{ tpl . $ }}
{{- end }}
{{- else if not .Values.persistence.enabled }}
- name: data
emptyDir: {}
{{- else if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
volumeClaimTemplates:
- metadata:
name: data
{{- with .Values.persistence.annotations }}
annotations:
{{- range $key, $value := . }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
spec:
accessModes:
{{- range .Values.persistence.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{ include "postgresql.storageClass" . }}
{{- end }}

View File

@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "postgresql.fullname" . }}-headless
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
type: ClusterIP
clusterIP: None
ports:
- name: tcp-postgresql
port: {{ template "postgresql.port" . }}
targetPort: tcp-postgresql
selector:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}

View File

@ -0,0 +1,31 @@
{{- if .Values.replication.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "postgresql.fullname" . }}-read
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
ports:
- name: tcp-postgresql
port: {{ template "postgresql.port" . }}
targetPort: tcp-postgresql
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
selector:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
role: slave
{{- end }}

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- with .Values.service.annotations }}
annotations:
{{ tpl (toYaml .) $ | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ with .Values.service.loadBalancerSourceRanges }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
ports:
- name: tcp-postgresql
port: {{ template "postgresql.port" . }}
targetPort: tcp-postgresql
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
selector:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
role: master

View File

@ -0,0 +1,528 @@
## Global Docker image parameters
## Please note that this will override the image parameters, including those of dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
postgresql: {}
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
# storageClass: myStorageClass
## Bitnami PostgreSQL image version
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
##
image:
registry: docker.io
repository: bitnami/postgresql
tag: 11.7.0-debian-10-r37
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information in the logs
## It turns on BASH and NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
debug: false
## String to partially override postgresql.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override postgresql.fullname template
##
# fullnameOverride:
##
## Init containers parameters:
## volumePermissions: Change the owner of the persistent volume mountpoint to runAsUser:fsGroup
##
volumePermissions:
enabled: false
image:
registry: docker.io
repository: bitnami/minideb
tag: buster
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Init container Security Context
## Note: the data folder is chowned to securityContext.runAsUser,
## not to the volumePermissions.securityContext.runAsUser below.
## When runAsUser is set to the special value "auto", the init container will try to chown the
## data folder to an auto-determined user and group, using the commands: `id -u`:`id -G | cut -d" " -f2`
## "auto" is especially useful for OpenShift, which has SCCs with dynamic user IDs (and 0 is not allowed).
## You may want to use this volumePermissions.securityContext.runAsUser="auto" in combination with
## pod securityContext.enabled=false and shmVolume.chmod.enabled=false
##
securityContext:
runAsUser: 0
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## Pod Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
enabled: false
## Name of an already existing service account. Setting this value disables the automatic service account creation.
# name:
replication:
enabled: true
user: repl_user
password: repl_password
slaveReplicas: 2
## Set synchronous commit mode: on, off, remote_apply, remote_write and local
## ref: https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-WAL-LEVEL
synchronousCommit: "on"
## From the number of `slaveReplicas` defined above, set the number of those that will have synchronous replication
## NOTE: It cannot be > slaveReplicas
numSynchronousReplicas: 1
## Replication Cluster application name. Useful for defining multiple replication policies
applicationName: my_application
## PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`)
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-user-on-first-run (see note!)
# postgresqlPostgresPassword:
## PostgreSQL user (has superuser privileges if username is `postgres`)
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
postgresqlUsername: postgres
## PostgreSQL password
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
##
# postgresqlPassword:
## PostgreSQL password using existing secret
## existingSecret: secret
## Mount PostgreSQL secret as a file instead of passing environment variable
# usePasswordFile: false
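## Note (sketch, based on the Secret template in this chart): an existing secret referenced
## here is expected to contain the keys consumed by the pods, for example:
##   postgresql-password             (always required)
##   postgresql-postgres-password    (when postgresqlPostgresPassword is set and postgresqlUsername is not "postgres")
##   postgresql-replication-password (when replication.enabled)
##   postgresql-ldap-password        (when ldap.enabled and ldap.bind_password are set)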
## Create a database
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run
##
# postgresqlDatabase:
## PostgreSQL data dir
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
postgresqlDataDir: /bitnami/postgresql/data
## An array to add extra environment variables
## For example:
## extraEnv:
## - name: FOO
## value: "bar"
##
# extraEnv:
extraEnv: []
## Name of a ConfigMap containing extra env vars
##
# extraEnvVarsCM:
## Specify extra initdb args
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
# postgresqlInitdbArgs:
## Specify a custom location for the PostgreSQL transaction log
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
# postgresqlInitdbWalDir:
## PostgreSQL configuration
## Specify runtime configuration parameters as a dict, using camelCase, e.g.
## {"sharedBuffers": "500MB"}
## Alternatively, you can put your postgresql.conf under the files/ directory
## ref: https://www.postgresql.org/docs/current/static/runtime-config.html
##
# postgresqlConfiguration:
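## For example (an illustrative sketch only; keys are camelCase as noted above and are rendered
## into the corresponding postgresql.conf parameters):
# postgresqlConfiguration:
#   sharedBuffers: "500MB"
#   maxConnections: "200"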
## PostgreSQL extended configuration
## As above, but _appended_ to the main configuration
## Alternatively, you can put your *.conf under the files/conf.d/ directory
## https://github.com/bitnami/bitnami-docker-postgresql#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf
##
# postgresqlExtendedConf:
## PostgreSQL client authentication configuration
## Specify content for pg_hba.conf
## Default: do not create pg_hba.conf
## Alternatively, you can put your pg_hba.conf under the files/ directory
# pgHbaConfiguration: |-
# local all all trust
# host all all localhost trust
# host mydatabase mysuser 192.168.0.0/24 md5
## ConfigMap with PostgreSQL configuration
## NOTE: This will override postgresqlConfiguration and pgHbaConfiguration
# configurationConfigMap:
## ConfigMap with PostgreSQL extended configuration
# extendedConfConfigMap:
## initdb scripts
## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
# my_init_script.sh: |
# #!/bin/sh
# echo "Do something."
## Specify the PostgreSQL username and password to execute the initdb scripts
# initdbUser:
# initdbPassword:
## ConfigMap with scripts to be run at first boot
## NOTE: This will override initdbScripts
# initdbScriptsConfigMap:
## Secret with scripts to be run at first boot (in case it contains sensitive information)
## NOTE: This can work along initdbScripts or initdbScriptsConfigMap
# initdbScriptsSecret:
## Optional duration in seconds the pod needs to terminate gracefully.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
# terminationGracePeriodSeconds: 30
## LDAP configuration
##
ldap:
enabled: false
url: ""
server: ""
port: ""
prefix: ""
suffix: ""
baseDN: ""
bindDN: ""
bind_password:
search_attr: ""
search_filter: ""
scheme: ""
tls: false
## PostgreSQL service configuration
service:
## PostgreSQL service type
type: ClusterIP
# clusterIP: None
port: 5432
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required.
## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
annotations: {}
## Set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
# loadBalancerIP:
## Load Balancer sources
## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
##
# loadBalancerSourceRanges:
# - 10.10.10.0/24
## Start the master and slave pod(s) without limitations on shm memory.
## By default, Docker and containerd (and possibly other container runtimes)
## limit `/dev/shm` to `64M` (see e.g. the
## [docker issue](https://github.com/docker-library/postgres/issues/416) and the
## [containerd issue](https://github.com/containerd/containerd/issues/3654)),
## which may not be enough if PostgreSQL uses parallel workers heavily.
##
shmVolume:
## Set `shmVolume.enabled` to `true` to mount a new tmpfs volume to remove
## this limitation.
##
enabled: true
## Set to `true` to `chmod 777 /dev/shm` in an initContainer.
## This option is ignored if `volumePermissions.enabled` is `false`.
##
chmod:
enabled: true
## PostgreSQL data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart
##
# existingClaim:
## The path the volume will be mounted at, useful when using different
## PostgreSQL images.
##
mountPath: /bitnami/postgresql
## The subdirectory of the volume to mount to, useful in dev environments
## and when one PV is shared by multiple services.
##
subPath: ""
# storageClass: "-"
accessModes:
- ReadWriteOnce
size: 8Gi
annotations: {}
## updateStrategy for the PostgreSQL StatefulSet and its slave StatefulSets
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
##
## PostgreSQL Master parameters
##
master:
## Node, affinity, tolerations, and priorityclass settings for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
annotations: {}
podLabels: {}
podAnnotations: {}
priorityClassName: ""
## Additional PostgreSQL Master Volume mounts
##
extraVolumeMounts: []
## Additional PostgreSQL Master Volumes
##
extraVolumes: []
## Add sidecars to the pod
##
## For example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
sidecars: []
##
## PostgreSQL Slave parameters
##
slave:
## Node, affinity, tolerations, and priorityclass settings for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
annotations: {}
podLabels: {}
podAnnotations: {}
priorityClassName: ""
## Extra init containers
## Example
##
## extraInitContainers:
## - name: do-something
## image: busybox
## command: ['do', 'something']
extraInitContainers: []
## Additional PostgreSQL Slave Volume mounts
##
extraVolumeMounts: []
## Additional PostgreSQL Slave Volumes
##
extraVolumes: []
## Add sidecars to the pod
##
## For example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
sidecars: []
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
requests:
memory: 256Mi
cpu: 250m
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
##
enabled: false
## The Policy model to apply. When set to false, only pods with the correct
## client label will have network access to the port PostgreSQL is listening
## on. When true, PostgreSQL will accept connections from any source
## (with the correct destination port).
##
allowExternal: true
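## As an illustration only (the exact label key is derived from the chart's computed fullname,
## typically "<release-name>-postgresql-client", so treat this as a hint rather than a guaranteed
## value), a client pod would need a label roughly like the following when allowExternal is false:
# metadata:
#   labels:
#     my-release-postgresql-client: "true"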
## If explicitNamespacesSelector is missing or set to {}, only client Pods in the networkPolicy's namespace
## that match the other criteria (i.e. that carry the correct client label) can reach the DB.
## To make the DB reachable from clients in other namespaces as well, use this LabelSelector to select those
## namespaces; note that the networkPolicy's namespace should also be explicitly added to it.
##
## Example:
## explicitNamespacesSelector:
## matchLabels:
## role: frontend
## matchExpressions:
## - {key: role, operator: In, values: [frontend]}
explicitNamespacesSelector: {}
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Configure metrics exporter
##
metrics:
enabled: true
# resources: {}
service:
type: ClusterIP
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9187"
loadBalancerIP:
serviceMonitor:
enabled: false
additionalLabels: {}
# namespace: monitoring
# interval: 30s
# scrapeTimeout: 10s
## Custom PrometheusRule to be defined
## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ""
## These are just example rules; please adapt them to your needs.
## Make sure to constrain the rules to the current postgresql service.
## rules:
## - alert: HugeReplicationLag
## expr: pg_replication_lag{service="{{ template "postgresql.fullname" . }}-metrics"} / 3600 > 1
## for: 1m
## labels:
## severity: critical
## annotations:
## description: replication for {{ template "postgresql.fullname" . }} PostgreSQL is lagging by {{ "{{ $value }}" }} hour(s).
## summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s).
rules: []
image:
registry: docker.io
repository: bitnami/postgres-exporter
tag: 0.8.0-debian-10-r52
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Define additional custom metrics
## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file
# customMetrics:
# pg_database:
# query: "SELECT d.datname AS name, CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') THEN pg_catalog.pg_database_size(d.datname) ELSE 0 END AS size FROM pg_catalog.pg_database d where datname not in ('template0', 'template1', 'postgres')"
# metrics:
# - name:
# usage: "LABEL"
# description: "Name of the database"
# - size_bytes:
# usage: "GAUGE"
# description: "Size of the database in bytes"
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: false
runAsUser: 1001
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## Configure extra options for liveness and readiness probes
livenessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1

View File

@ -0,0 +1,103 @@
{
"$schema": "http://json-schema.org/schema#",
"type": "object",
"properties": {
"postgresqlUsername": {
"type": "string",
"title": "Admin user",
"form": true
},
"postgresqlPassword": {
"type": "string",
"title": "Password",
"form": true
},
"persistence": {
"type": "object",
"properties": {
"size": {
"type": "string",
"title": "Persistent Volume Size",
"form": true,
"render": "slider",
"sliderMin": 1,
"sliderMax": 100,
"sliderUnit": "Gi"
}
}
},
"resources": {
"type": "object",
"title": "Required Resources",
"description": "Configure resource requests",
"form": true,
"properties": {
"requests": {
"type": "object",
"properties": {
"memory": {
"type": "string",
"form": true,
"render": "slider",
"title": "Memory Request",
"sliderMin": 10,
"sliderMax": 2048,
"sliderUnit": "Mi"
},
"cpu": {
"type": "string",
"form": true,
"render": "slider",
"title": "CPU Request",
"sliderMin": 10,
"sliderMax": 2000,
"sliderUnit": "m"
}
}
}
}
},
"replication": {
"type": "object",
"form": true,
"title": "Replication Details",
"properties": {
"enabled": {
"type": "boolean",
"title": "Enable Replication",
"form": true
},
"slaveReplicas": {
"type": "integer",
"title": "Slave Replicas",
"form": true,
"hidden": {
"condition": false,
"value": "replication.enabled"
}
}
}
},
"volumePermissions": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"form": true,
"title": "Enable Init Containers",
"description": "Change the owner of the persist volume mountpoint to RunAsUser:fsGroup"
}
}
},
"metrics": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"title": "Configure metrics exporter",
"form": true
}
}
}
}
}

View File

@ -0,0 +1,534 @@
## Global Docker image parameters
## Please note that this will override the image parameters, including those of dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
postgresql: {}
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
# storageClass: myStorageClass
## Bitnami PostgreSQL image version
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
##
image:
registry: docker.io
repository: bitnami/postgresql
tag: 11.7.0-debian-10-r37
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information in the logs
## It turns on BASH and NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
debug: false
## String to partially override postgresql.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override postgresql.fullname template
##
# fullnameOverride:
##
## Init container parameters:
## volumePermissions: Change the owner of the persistent volume mountpoint to RunAsUser:fsGroup
##
volumePermissions:
enabled: false
image:
registry: docker.io
repository: bitnami/minideb
tag: buster
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Init container Security Context
## Note: the data folder is chown'ed to the pod-level securityContext.runAsUser,
## not to the volumePermissions.securityContext.runAsUser set just below.
## When runAsUser is set to the special value "auto", the init container will try to chown the
## data folder to an auto-determined user and group, using the commands: `id -u`:`id -G | cut -d" " -f2`
## "auto" is especially useful for OpenShift, which uses SCCs with dynamic user IDs (and 0 is not allowed).
## You may want to use volumePermissions.securityContext.runAsUser="auto" in combination with
## pod securityContext.enabled=false and shmVolume.chmod.enabled=false
##
securityContext:
runAsUser: 0
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## Pod Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
enabled: false
## Name of an already existing service account. Setting this value disables the automatic service account creation.
# name:
replication:
enabled: false
user: repl_user
password: repl_password
slaveReplicas: 1
## Set synchronous commit mode: on, off, remote_apply, remote_write and local
## ref: https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-WAL-LEVEL
synchronousCommit: "off"
## Of the `slaveReplicas` defined above, set how many will use synchronous replication
## NOTE: It cannot be > slaveReplicas
numSynchronousReplicas: 0
## Replication Cluster application name. Useful for defining multiple replication policies
applicationName: my_application
## PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`)
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-user-on-first-run (see note!)
# postgresqlPostgresPassword:
## PostgreSQL user (has superuser privileges if username is `postgres`)
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
postgresqlUsername: postgres
## PostgreSQL password
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run
##
# postgresqlPassword:
## PostgreSQL password using existing secret
## existingSecret: secret
## Mount PostgreSQL secret as a file instead of passing environment variable
# usePasswordFile: false
## Create a database
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run
##
# postgresqlDatabase:
## PostgreSQL data dir
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
postgresqlDataDir: /bitnami/postgresql/data
## An array to add extra environment variables
## For example:
## extraEnv:
## - name: FOO
## value: "bar"
##
# extraEnv:
extraEnv: []
## Name of a ConfigMap containing extra env vars
##
# extraEnvVarsCM:
## Specify extra initdb args
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
# postgresqlInitdbArgs:
## Specify a custom location for the PostgreSQL transaction log
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
##
# postgresqlInitdbWalDir:
## PostgreSQL configuration
## Specify runtime configuration parameters as a dict, using camelCase, e.g.
## {"sharedBuffers": "500MB"}
## Alternatively, you can put your postgresql.conf under the files/ directory
## ref: https://www.postgresql.org/docs/current/static/runtime-config.html
##
# postgresqlConfiguration:
## PostgreSQL extended configuration
## As above, but _appended_ to the main configuration
## Alternatively, you can put your *.conf under the files/conf.d/ directory
## https://github.com/bitnami/bitnami-docker-postgresql#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf
##
# postgresqlExtendedConf:
## PostgreSQL client authentication configuration
## Specify content for pg_hba.conf
## Default: do not create pg_hba.conf
## Alternatively, you can put your pg_hba.conf under the files/ directory
# pgHbaConfiguration: |-
# local all all trust
# host all all localhost trust
# host mydatabase mysuser 192.168.0.0/24 md5
## ConfigMap with PostgreSQL configuration
## NOTE: This will override postgresqlConfiguration and pgHbaConfiguration
# configurationConfigMap:
## ConfigMap with PostgreSQL extended configuration
# extendedConfConfigMap:
## initdb scripts
## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
# my_init_script.sh: |
# #!/bin/sh
# echo "Do something."
## ConfigMap with scripts to be run at first boot
## NOTE: This will override initdbScripts
# initdbScriptsConfigMap:
## Secret with scripts to be run at first boot (in case it contains sensitive information)
## NOTE: This can work along initdbScripts or initdbScriptsConfigMap
# initdbScriptsSecret:
## Specify the PostgreSQL username and password to execute the initdb scripts
# initdbUser:
# initdbPassword:
## Optional duration in seconds the pod needs to terminate gracefully.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
# terminationGracePeriodSeconds: 30
## LDAP configuration
##
ldap:
enabled: false
url: ""
server: ""
port: ""
prefix: ""
suffix: ""
baseDN: ""
bindDN: ""
bind_password:
search_attr: ""
search_filter: ""
scheme: ""
tls: false
## PostgreSQL service configuration
service:
## PostgreSQL service type
type: ClusterIP
# clusterIP: None
port: 5432
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required.
## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
annotations: {}
## Set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
# loadBalancerIP:
## Load Balancer sources
## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
##
# loadBalancerSourceRanges:
# - 10.10.10.0/24
## Start the master and slave pod(s) without limitations on shm memory.
## By default, Docker and containerd (and possibly other container runtimes)
## limit `/dev/shm` to `64M` (see e.g. the
## [docker issue](https://github.com/docker-library/postgres/issues/416) and the
## [containerd issue](https://github.com/containerd/containerd/issues/3654)),
## which may not be enough if PostgreSQL uses parallel workers heavily.
##
shmVolume:
## Set `shmVolume.enabled` to `true` to mount a new tmpfs volume to remove
## this limitation.
##
enabled: true
## Set to `true` to `chmod 777 /dev/shm` in an initContainer.
## This option is ignored if `volumePermissions.enabled` is `false`.
##
chmod:
enabled: true
## PostgreSQL data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart
##
# existingClaim:
## The path the volume will be mounted at, useful when using different
## PostgreSQL images.
##
mountPath: /bitnami/postgresql
## The subdirectory of the volume to mount to, useful in dev environments
## and when one PV is shared by multiple services.
##
subPath: ""
# storageClass: "-"
accessModes:
- ReadWriteOnce
size: 8Gi
annotations: {}
## updateStrategy for the PostgreSQL StatefulSet and its slave StatefulSets
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
##
## PostgreSQL Master parameters
##
master:
## Node, affinity, tolerations, and priorityclass settings for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
annotations: {}
podLabels: {}
podAnnotations: {}
priorityClassName: ""
## Extra init containers
## Example
##
## extraInitContainers:
## - name: do-something
## image: busybox
## command: ['do', 'something']
extraInitContainers: []
## Additional PostgreSQL Master Volume mounts
##
extraVolumeMounts: []
## Additional PostgreSQL Master Volumes
##
extraVolumes: []
## Add sidecars to the pod
##
## For example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
sidecars: []
##
## PostgreSQL Slave parameters
##
slave:
## Node, affinity, tolerations, and priorityclass settings for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
annotations: {}
podLabels: {}
podAnnotations: {}
priorityClassName: ""
extraInitContainers: |
# - name: do-something
# image: busybox
# command: ['do', 'something']
## Additional PostgreSQL Slave Volume mounts
##
extraVolumeMounts: []
## Additional PostgreSQL Slave Volumes
##
extraVolumes: []
## Add sidecars to the pod
##
## For example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
sidecars: []
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
requests:
memory: 256Mi
cpu: 250m
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
##
enabled: false
## The Policy model to apply. When set to false, only pods with the correct
## client label will have network access to the port PostgreSQL is listening
## on. When true, PostgreSQL will accept connections from any source
## (with the correct destination port).
##
allowExternal: true
## If explicitNamespacesSelector is missing or set to {}, only client Pods in the networkPolicy's namespace
## that match the other criteria (i.e. that carry the correct client label) can reach the DB.
## To make the DB reachable from clients in other namespaces as well, use this LabelSelector to select those
## namespaces; note that the networkPolicy's namespace should also be explicitly added to it.
##
## Example:
## explicitNamespacesSelector:
## matchLabels:
## role: frontend
## matchExpressions:
## - {key: role, operator: In, values: [frontend]}
explicitNamespacesSelector: {}
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Configure metrics exporter
##
metrics:
enabled: false
# resources: {}
service:
type: ClusterIP
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9187"
loadBalancerIP:
serviceMonitor:
enabled: false
additionalLabels: {}
# namespace: monitoring
# interval: 30s
# scrapeTimeout: 10s
## Custom PrometheusRule to be defined
## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ""
## These are just example rules; please adapt them to your needs.
## Make sure to constrain the rules to the current postgresql service.
## rules:
## - alert: HugeReplicationLag
## expr: pg_replication_lag{service="{{ template "postgresql.fullname" . }}-metrics"} / 3600 > 1
## for: 1m
## labels:
## severity: critical
## annotations:
## description: replication for {{ template "postgresql.fullname" . }} PostgreSQL is lagging by {{ "{{ $value }}" }} hour(s).
## summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s).
rules: []
image:
registry: docker.io
repository: bitnami/postgres-exporter
tag: 0.8.0-debian-10-r52
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Define additional custom metrics
## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file
# customMetrics:
# pg_database:
# query: "SELECT d.datname AS name, CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') THEN pg_catalog.pg_database_size(d.datname) ELSE 0 END AS size FROM pg_catalog.pg_database d where datname not in ('template0', 'template1', 'postgres')"
# metrics:
# - name:
# usage: "LABEL"
# description: "Name of the database"
# - size_bytes:
# usage: "GAUGE"
# description: "Size of the database in bytes"
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: false
runAsUser: 1001
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## Configure extra options for liveness and readiness probes
livenessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1

View File

@ -0,0 +1,6 @@
# install chart with some extra labels
extraLabels:
acme.com/some-key: some-value

View File

@ -0,0 +1,10 @@
# install chart with default values
proxy:
type: NodePort
env:
anonymous_reports: "off"
ingressController:
env:
anonymous_reports: "false"
installCRDs: false

View File

@ -0,0 +1,16 @@
# install chart with default values
# use single image strings instead of repository/tag
image:
unifiedRepoTag: kong:2.5
proxy:
type: NodePort
env:
anonymous_reports: "off"
ingressController:
env:
anonymous_reports: "false"
image:
unifiedRepoTag: kong/kubernetes-ingress-controller:1.3.1
installCRDs: false

View File

@ -0,0 +1,45 @@
# This tests the following unrelated aspects of Ingress Controller
# - HPA enabled
autoscaling:
enabled: true
args:
- --alsologtostderr
# - ingressController deploys without a database (default)
ingressController:
enabled: true
installCRDs: false
# - webhook is enabled and deploys
admissionWebhook:
enabled: true
# - environment variables can be injected into ingress controller container
env:
anonymous_reports: "false"
kong_admin_header: "foo:bar"
# - annotations can be injected for service account
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
# - pod labels can be added to the deployment template
podLabels:
app: kong
environment: test
# - podSecurityPolicies are enabled
podSecurityPolicy:
enabled: true
# - ingress resources are created with hosts
admin:
type: NodePort
ingress:
enabled: true
hostname: admin.kong.example
annotations: {}
path: /
proxy:
type: NodePort
ingress:
enabled: true
hostname: proxy.kong.example
annotations: {}
path: /
env:
anonymous_reports: "off"

View File

@ -0,0 +1,50 @@
# This tests the following unrelated aspects of Ingress Controller
# - ingressController deploys with a database
# - stream listens work
ingressController:
enabled: true
installCRDs: false
env:
anonymous_reports: "false"
postgresql:
enabled: true
postgresqlUsername: kong
postgresqlDatabase: kong
service:
port: 5432
env:
anonymous_reports: "off"
database: "postgres"
# - ingress resources are created without hosts
admin:
type: NodePort
ingress:
enabled: true
hosts: []
path: /
proxy:
type: NodePort
ingress:
enabled: true
hostname: proxy.kong.example
annotations: {}
path: /
# - add stream listens
stream:
- containerPort: 9000
servicePort: 9000
parameters: []
- containerPort: 9001
servicePort: 9001
parameters:
- ssl
# - PDB is enabled
podDisruptionBudget:
enabled: true
# update strategy
updateStrategy:
type: "RollingUpdate"
rollingUpdate:
maxSurge: 1
maxUnavailable: 0

View File

@ -0,0 +1,51 @@
# CI test for testing dbless deployment without ingress controllers
# - disable ingress controller
ingressController:
enabled: false
installCRDs: false
# - disable DB for kong
env:
anonymous_reports: "off"
database: "off"
postgresql:
enabled: false
# - supply DBless config for kong
dblessConfig:
# The configuration is passed in full below
config:
_format_version: "1.1"
services:
- name: test-svc
url: http://example.com
routes:
- name: test
paths:
- /test
plugins:
- name: request-termination
config:
status_code: 200
message: "dbless-config"
proxy:
type: NodePort
deployment:
initContainers:
- name: "bash"
image: "bash:latest"
command: ["/bin/sh", "-c", "true"]
resources:
limits:
cpu: "100m"
memory: "64Mi"
requests:
cpu: "100m"
memory: "64Mi"
volumeMounts:
- name: "tmpdir"
mountPath: "/opt/tmp"
userDefinedVolumes:
- name: "tmpdir"
emptyDir: {}
userDefinedVolumeMounts:
- name: "tmpdir"
mountPath: "/opt/tmp"

View File

@ -0,0 +1,44 @@
# CI test for testing dbless deployment without ingress controllers using legacy admin listen and stream listens
# - disable ingress controller
ingressController:
enabled: false
installCRDs: false
env:
anonymous_reports: "false"
# - disable DB for kong
env:
anonymous_reports: "off"
database: "off"
postgresql:
enabled: false
# - supply DBless config for kong
dblessConfig:
# The configuration is passed in full below
config:
_format_version: "1.1"
services:
- name: test-svc
url: http://example.com
routes:
- name: test
paths:
- /test
plugins:
- name: request-termination
config:
status_code: 200
message: "dbless-config"
proxy:
type: NodePort
# - add stream listens
stream:
- containerPort: 9000
servicePort: 9000
parameters: []
- containerPort: 9001
servicePort: 9001
parameters:
- ssl
ingress:
enabled: true

View File

@ -0,0 +1,46 @@
# This tests the following unrelated aspects of Ingress Controller
# - ingressController deploys with a database
# - TODO remove this test when https://github.com/Kong/charts/issues/295 is solved
# and its associated wait-for-db workaround is removed.
# This test is similar to test2-values.yaml, but lacks a stream listen.
# wait-for-db will _not_ create a socket file. This test ensures the workaround
# does not interfere with startup when there is no file to remove.
ingressController:
enabled: true
installCRDs: false
env:
anonymous_reports: "false"
postgresql:
enabled: true
postgresqlUsername: kong
postgresqlDatabase: kong
service:
port: 5432
env:
anonymous_reports: "off"
database: "postgres"
# - ingress resources are created without hosts
admin:
type: NodePort
ingress:
enabled: true
hosts: []
path: /
proxy:
type: NodePort
ingress:
enabled: true
hostname: proxy.kong.example
annotations: {}
path: /
# - PDB is enabled
podDisruptionBudget:
enabled: true
# update strategy
updateStrategy:
type: "RollingUpdate"
rollingUpdate:
maxSurge: 1
maxUnavailable: 0

View File

@ -0,0 +1,994 @@
# generated using: kubectl kustomize github.com/kong/kubernetes-ingress-controller/railgun/config/crd?ref=main
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
name: kongclusterplugins.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongClusterPlugin
listKind: KongClusterPluginList
plural: kongclusterplugins
singular: kongclusterplugin
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: KongClusterPlugin is the Schema for the kongclusterplugins API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
config:
description: Config contains the plugin configuration.
x-kubernetes-preserve-unknown-fields: true
configFrom:
description: ConfigFrom references a secret containing the plugin configuration.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource
this object represents. Servers may infer this from the endpoint
the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
secretKeyRef:
description: NamespacedSecretValueFromSource represents the source
of a secret value specifying the secret namespace
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this
representation of an object. Servers should convert recognized
schemas to the latest internal value, and may reject unrecognized
values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
key:
description: the key containing the value
type: string
kind:
description: 'Kind is a string value representing the REST resource
this object represents. Servers may infer this from the endpoint
the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: the secret containing the key
type: string
namespace:
description: The namespace containing the secret
type: string
type: object
type: object
consumerRef:
description: ConsumerRef is a reference to a particular consumer
type: string
disabled:
description: Disabled set if the plugin is disabled or not
type: boolean
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
plugin:
description: PluginName is the name of the plugin to which to apply the
config
type: string
protocols:
description: Protocols configures plugin to run on requests received on
specific protocols.
items:
type: string
type: array
run_on:
description: RunOn configures the plugin to run on the first or the second
or both nodes in case of a service mesh deployment.
type: string
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
name: kongconsumers.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongConsumer
listKind: KongConsumerList
plural: kongconsumers
singular: kongconsumer
preserveUnknownFields: false
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: KongConsumer is the Schema for the kongconsumers API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
credentials:
description: Credentials are references to secrets containing a credential
to be provisioned in Kong.
items:
type: string
type: array
custom_id:
description: CustomID existing unique ID for the consumer - useful for
mapping Kong with users in your existing database
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
username:
description: Username unique username of the consumer.
type: string
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
listKind: KongIngressList
plural: kongingresses
singular: kongingress
preserveUnknownFields: false
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: KongIngress is the Schema for the kongingresses API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
proxy:
description: Service represents a Service in Kong. Read https://getkong.org/docs/0.13.x/admin-api/#Service-object
properties:
ca_certificates:
items:
type: string
type: array
client_certificate:
description: Certificate represents a Certificate in Kong. Read https://getkong.org/docs/0.14.x/admin-api/#certificate-object
properties:
cert:
type: string
created_at:
format: int64
type: integer
id:
type: string
key:
type: string
snis:
items:
type: string
type: array
tags:
items:
type: string
type: array
type: object
connect_timeout:
type: integer
created_at:
type: integer
host:
type: string
id:
type: string
name:
type: string
path:
type: string
port:
type: integer
protocol:
type: string
read_timeout:
type: integer
retries:
type: integer
tags:
items:
type: string
type: array
tls_verify:
type: boolean
tls_verify_depth:
type: integer
updated_at:
type: integer
url:
type: string
write_timeout:
type: integer
type: object
route:
description: Route represents a Route in Kong. Read https://getkong.org/docs/0.13.x/admin-api/#Route-object
properties:
created_at:
type: integer
destinations:
items:
description: CIDRPort represents a set of CIDR and a port.
properties:
ip:
type: string
port:
type: integer
type: object
type: array
headers:
additionalProperties:
items:
type: string
type: array
type: object
hosts:
items:
type: string
type: array
https_redirect_status_code:
type: integer
id:
type: string
methods:
items:
type: string
type: array
name:
type: string
path_handling:
type: string
paths:
items:
type: string
type: array
preserve_host:
type: boolean
protocols:
items:
type: string
type: array
regex_priority:
type: integer
request_buffering:
description: "Kong buffers requests and responses by default. Buffering
is not always desired, for instance if large payloads are being
proxied using HTTP 1.1 chunked encoding. \n The request and response
route buffering options are enabled by default and allow the user
to disable buffering if desired for their use case. \n SEE ALSO:
- https://github.com/Kong/kong/pull/6057 - https://docs.konghq.com/2.2.x/admin-api/#route-object"
type: boolean
response_buffering:
type: boolean
service:
description: Service represents a Service in Kong. Read https://getkong.org/docs/0.13.x/admin-api/#Service-object
properties:
ca_certificates:
items:
type: string
type: array
client_certificate:
description: Certificate represents a Certificate in Kong. Read
https://getkong.org/docs/0.14.x/admin-api/#certificate-object
properties:
cert:
type: string
created_at:
format: int64
type: integer
id:
type: string
key:
type: string
snis:
items:
type: string
type: array
tags:
items:
type: string
type: array
type: object
connect_timeout:
type: integer
created_at:
type: integer
host:
type: string
id:
type: string
name:
type: string
path:
type: string
port:
type: integer
protocol:
type: string
read_timeout:
type: integer
retries:
type: integer
tags:
items:
type: string
type: array
tls_verify:
type: boolean
tls_verify_depth:
type: integer
updated_at:
type: integer
url:
type: string
write_timeout:
type: integer
type: object
snis:
items:
type: string
type: array
sources:
items:
description: CIDRPort represents a set of CIDR and a port.
properties:
ip:
type: string
port:
type: integer
type: object
type: array
strip_path:
type: boolean
tags:
items:
type: string
type: array
updated_at:
type: integer
type: object
upstream:
description: Upstream represents an Upstream in Kong.
properties:
algorithm:
type: string
client_certificate:
description: Certificate represents a Certificate in Kong. Read https://getkong.org/docs/0.14.x/admin-api/#certificate-object
properties:
cert:
type: string
created_at:
format: int64
type: integer
id:
type: string
key:
type: string
snis:
items:
type: string
type: array
tags:
items:
type: string
type: array
type: object
created_at:
format: int64
type: integer
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
description: Healthcheck represents a health-check config of an upstream
in Kong.
properties:
active:
description: ActiveHealthcheck configures active health check
probing.
properties:
concurrency:
type: integer
healthy:
description: Healthy configures thresholds and HTTP status
codes to mark targets healthy for an upstream.
properties:
http_statuses:
items:
type: integer
type: array
interval:
type: integer
successes:
type: integer
type: object
http_path:
type: string
https_sni:
type: string
https_verify_certificate:
type: boolean
timeout:
type: integer
type:
type: string
unhealthy:
description: Unhealthy configures thresholds and HTTP status
codes to mark targets unhealthy.
properties:
http_failures:
type: integer
http_statuses:
items:
type: integer
type: array
interval:
type: integer
tcp_failures:
type: integer
timeouts:
type: integer
type: object
type: object
passive:
description: PassiveHealthcheck configures passive health check probing.
properties:
healthy:
description: Healthy configures thresholds and HTTP status
codes to mark targets healthy for an upstream.
properties:
http_statuses:
items:
type: integer
type: array
interval:
type: integer
successes:
type: integer
type: object
type:
type: string
unhealthy:
description: Unhealthy configures thresholds and HTTP status
codes to mark targets unhealthy.
properties:
http_failures:
type: integer
http_statuses:
items:
type: integer
type: array
interval:
type: integer
tcp_failures:
type: integer
timeouts:
type: integer
type: object
type: object
threshold:
type: number
type: object
host_header:
type: string
id:
type: string
name:
type: string
slots:
type: integer
tags:
items:
type: string
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
name: kongplugins.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongPlugin
listKind: KongPluginList
plural: kongplugins
singular: kongplugin
preserveUnknownFields: false
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: KongPlugin is the Schema for the kongplugins API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
config:
description: Config contains the plugin configuration.
x-kubernetes-preserve-unknown-fields: true
configFrom:
description: ConfigFrom references a secret containing the plugin configuration.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource
this object represents. Servers may infer this from the endpoint
the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
secretKeyRef:
description: SecretValueFromSource represents the source of a secret
value
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this
representation of an object. Servers should convert recognized
schemas to the latest internal value, and may reject unrecognized
values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
key:
description: the key containing the value
type: string
kind:
description: 'Kind is a string value representing the REST resource
this object represents. Servers may infer this from the endpoint
the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: the secret containing the key
type: string
type: object
type: object
consumerRef:
description: ConsumerRef is a reference to a particular consumer
type: string
disabled:
description: Disabled set if the plugin is disabled or not
type: boolean
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
plugin:
description: PluginName is the name of the plugin to which to apply the
config
type: string
protocols:
description: Protocols configures plugin to run on requests received on
specific protocols.
items:
type: string
type: array
run_on:
description: RunOn configures the plugin to run on the first or the second
or both nodes in case of a service mesh deployment.
type: string
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
name: tcpingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: TCPIngress
listKind: TCPIngressList
plural: tcpingresses
singular: tcpingress
preserveUnknownFields: false
scope: Namespaced
versions:
- name: v1beta1
schema:
openAPIV3Schema:
description: TCPIngress is the Schema for the tcpingresses API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: TCPIngressSpec defines the desired state of TCPIngress
properties:
rules:
description: A list of rules used to configure the Ingress.
items:
description: IngressRule represents a rule to apply against incoming
requests. Matching is performed based on an (optional) SNI and
port.
properties:
backend:
description: Backend defines the referenced service endpoint
to which the traffic will be forwarded to.
properties:
serviceName:
description: Specifies the name of the referenced service.
type: string
servicePort:
description: Specifies the port of the referenced service.
type: integer
required:
- serviceName
- servicePort
type: object
host:
description: Host is the fully qualified domain name of a network
host, as defined by RFC 3986. If a Host is specified, the
protocol must be TLS over TCP. A plain-text TCP request cannot
be routed based on Host. It can only be routed based on Port.
type: string
port:
description: Port is the port on which to accept TCP or TLS
over TCP sessions and route. It is a required field. If a
Host is not specified, requests are routed based only
on Port.
maximum: 65535
minimum: 1
type: integer
required:
- backend
type: object
type: array
tls:
description: TLS configuration. This is similar to the `tls` section
in the Ingress resource in networking.v1beta1 group. The mapping
of SNIs to TLS cert-key pair defined here will be used for HTTP
Ingress rules as well. One can define the mapping in this resource
or the original Ingress resource; both have the same effect.
items:
description: IngressTLS describes the transport layer security.
properties:
hosts:
description: Hosts are a list of hosts included in the TLS certificate.
The values in this list must match the name/s used in the
tlsSecret. Defaults to the wildcard host setting for the loadbalancer
controller fulfilling this Ingress, if left unspecified.
items:
type: string
type: array
secretName:
description: SecretName is the name of the secret used to terminate
SSL traffic.
type: string
type: object
type: array
type: object
status:
description: TCPIngressStatus defines the observed state of TCPIngress
properties:
loadBalancer:
description: LoadBalancer contains the current status of the load-balancer.
properties:
ingress:
description: Ingress is a list containing ingress points for the
load-balancer. Traffic intended for the service should be sent
to these ingress points.
items:
description: 'LoadBalancerIngress represents the status of a
load-balancer ingress point: traffic intended for the service
should be sent to an ingress point.'
properties:
hostname:
description: Hostname is set for load-balancer ingress points
that are DNS based (typically AWS load-balancers)
type: string
ip:
description: IP is set for load-balancer ingress points
that are IP based (typically GCE or OpenStack load-balancers)
type: string
ports:
description: Ports is a list of records of service ports
If used, every port defined in the service should have
an entry in it
items:
properties:
error:
description: 'Error is to record the problem with
the service port The format of the error shall comply
with the following rules: - built-in error values
shall be specified in this file and those shall
use CamelCase names - cloud provider specific
error values must have names that comply with the format
foo.example.com/CamelCase. --- The regex it matches
is (dns1123SubdomainFmt/)?(qualifiedNameFmt)'
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
port:
description: Port is the port number of the service
port of which status is recorded here
format: int32
type: integer
protocol:
default: TCP
description: 'Protocol is the protocol of the service
port of which status is recorded here The supported
values are: "TCP", "UDP", "SCTP"'
type: string
required:
- port
- protocol
type: object
type: array
x-kubernetes-list-type: atomic
type: object
type: array
type: object
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
name: udpingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: UDPIngress
listKind: UDPIngressList
plural: udpingresses
singular: udpingress
preserveUnknownFields: false
scope: Namespaced
versions:
- name: v1beta1
schema:
openAPIV3Schema:
description: UDPIngress is the Schema for the udpingresses API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: UDPIngressSpec defines the desired state of UDPIngress
properties:
rules:
description: A list of rules used to configure the Ingress.
items:
description: UDPIngressRule represents a rule to apply against incoming
requests wherein no Host matching is available for request routing,
only the port is used to match requests.
properties:
backend:
description: Backend defines the Kubernetes service which accepts
traffic from the listening Port defined above.
properties:
serviceName:
description: Specifies the name of the referenced service.
type: string
servicePort:
description: Specifies the port of the referenced service.
type: integer
required:
- serviceName
- servicePort
type: object
port:
description: Port indicates the port for the Kong proxy to accept
incoming traffic on, which will then be routed to the service
Backend.
type: integer
required:
- backend
- port
type: object
type: array
type: object
status:
description: UDPIngressStatus defines the observed state of UDPIngress
properties:
loadBalancer:
description: LoadBalancer contains the current status of the load-balancer.
properties:
ingress:
description: Ingress is a list containing ingress points for the
load-balancer. Traffic intended for the service should be sent
to these ingress points.
items:
description: 'LoadBalancerIngress represents the status of a
load-balancer ingress point: traffic intended for the service
should be sent to an ingress point.'
properties:
hostname:
description: Hostname is set for load-balancer ingress points
that are DNS based (typically AWS load-balancers)
type: string
ip:
description: IP is set for load-balancer ingress points
that are IP based (typically GCE or OpenStack load-balancers)
type: string
ports:
description: Ports is a list of records of service ports
If used, every port defined in the service should have
an entry in it
items:
properties:
error:
description: 'Error is to record the problem with
the service port The format of the error shall comply
with the following rules: - built-in error values
shall be specified in this file and those shall
use CamelCase names - cloud provider specific
error values must have names that comply with the format
foo.example.com/CamelCase. --- The regex it matches
is (dns1123SubdomainFmt/)?(qualifiedNameFmt)'
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
port:
description: Port is the port number of the service
port of which status is recorded here
format: int32
type: integer
protocol:
default: TCP
description: 'Protocol is the protocol of the service
port of which status is recorded here The supported
values are: "TCP", "UDP", "SCTP"'
type: string
required:
- port
- protocol
type: object
type: array
x-kubernetes-list-type: atomic
type: object
type: array
type: object
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -0,0 +1,51 @@
# Example values.yaml configurations
The YAML files in this directory provide basic example configurations for
common Kong deployment scenarios on Kubernetes. All examples assume Helm 3 and
disable legacy CRD templates (`ingressController.installCRDs: false`; you must
change this value to `true` if you use Helm 2).
* [minimal-kong-controller.yaml](minimal-kong-controller.yaml) installs Kong
open source with the ingress controller in DB-less mode.
* [minimal-kong-standalone.yaml](minimal-kong-standalone.yaml) installs Kong
open source and Postgres with no controller.
* [minimal-kong-enterprise-dbless.yaml](minimal-kong-enterprise-dbless.yaml)
installs Kong for Kubernetes with Kong Enterprise with the ingress controller
in DB-less mode.
* [minimal-k4k8s-with-kong-enterprise.yaml](minimal-k4k8s-with-kong-enterprise.yaml)
installs Kong for Kubernetes with Kong Enterprise with the ingress controller
and PostgreSQL. It does not enable Enterprise features other than Kong
Manager, and does not expose it or the Admin API via a TLS-secured ingress.
* [full-k4k8s-with-kong-enterprise.yaml](full-k4k8s-with-kong-enterprise.yaml)
installs Kong for Kubernetes with Kong Enterprise with the ingress controller
and PostgreSQL. It enables all Enterprise services.
* [minimal-kong-hybrid-control.yaml](minimal-kong-hybrid-control.yaml) and
[minimal-kong-hybrid-data.yaml](minimal-kong-hybrid-data.yaml) install
separate releases for hybrid mode control and data plane nodes, using the
built-in PostgreSQL chart on the control plane release. They require some
pre-work to [create certificates](https://github.com/Kong/charts/blob/main/charts/kong/README.md#certificates)
and configure the control plane location. See comments in the file headers
for additional details.
Note that you should install the control plane release first if possible:
data plane nodes must be able to reach a control plane node before they can
come online. Starting the control plane first is not strictly required (data
plane nodes retry their connection for a while before Kubernetes restarts
them, so starting the control plane second, around the same time, will
usually work), but it is the smoothest option. A sketch of this workflow
follows.
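For illustration only, a minimal hybrid workflow might look like the sketch
below. The certificate command and the `control-kong-cluster` Service name are
assumptions for this example; follow the certificate documentation linked
above and check the Services created by your control plane release for the
exact values.
```
# Shared cluster certificate, stored in the kong-cluster-cert Secret that both
# hybrid example values files mount.
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=kong_clustering" \
  -keyout tls.key -out tls.crt -days 1095
kubectl create secret tls kong-cluster-cert --cert=tls.crt --key=tls.key

# Control plane first, then the data plane pointed at its cluster Service.
helm install control kong/kong -f minimal-kong-hybrid-control.yaml
helm install data kong/kong -f minimal-kong-hybrid-data.yaml \
  --set env.cluster_control_plane=control-kong-cluster.default.svc.cluster.local:8005
```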
All Enterprise examples require some level of additional user configuration to
install properly. Read the comments at the top of each file for instructions.
Examples are designed for use with Helm 3 and disable the legacy (Helm 2)
CRD installation. If you use Helm 2, you will need to enable it:
```
helm install kong/kong -f /path/to/values.yaml \
--set ingressController.installCRDs=true
```

View File

@ -0,0 +1,201 @@
# Kong for Kubernetes with Kong Enterprise with Enterprise features enabled and
# exposed via TLS-enabled Ingresses. Before installing:
# * Several settings (search for the string "CHANGEME") require user-provided
# Secrets. These Secrets must be created before installation.
# * Ingresses reference example "<service>.kong.CHANGEME.example" hostnames. These must
#   be changed to actual hostnames that resolve to your proxy.
# * Ensure that your session configurations create cookies that are usable
# across your services. The admin session configuration must create cookies
# that are sent to both the admin API and Kong Manager, and any Dev Portal
# instances with authentication must create cookies that are sent to both
# the Portal and Portal API.
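# For illustration, the user-provided Secrets referenced below could be created
# beforehand with commands like the following. The key names used for the
# license and session configuration Secrets are assumptions based on the chart
# README; adjust them (and the Secret names) to match your environment.
#   kubectl create secret generic kong-enterprise-superuser-password \
#     --from-literal=password='<your-password>'
#   kubectl create secret generic kong-enterprise-license \
#     --from-file=license=./license.json
#   kubectl create secret generic kong-session-config \
#     --from-file=admin_gui_session_conf=./admin_gui_session_conf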
image:
repository: kong/kong-gateway
tag: "2.5.0.0-alpine"
env:
prefix: /kong_prefix/
database: postgres
password:
valueFrom:
secretKeyRef:
name: kong-enterprise-superuser-password #CHANGEME
key: password #CHANGEME
admin:
enabled: true
annotations:
konghq.com/protocol: "https"
tls:
enabled: true
servicePort: 8444
containerPort: 8444
parameters:
- http2
ingress:
enabled: true
tls: CHANGEME-admin-tls-secret
hostname: admin.kong.CHANGEME.example
annotations:
kubernetes.io/ingress.class: "kong"
path: /
proxy:
enabled: true
type: LoadBalancer
annotations: {}
http:
enabled: true
servicePort: 80
containerPort: 8000
parameters: []
tls:
enabled: true
servicePort: 443
containerPort: 8443
parameters:
- http2
stream: {}
ingress:
enabled: false
annotations: {}
path: /
externalIPs: []
enterprise:
enabled: true
# CHANGEME: https://github.com/Kong/charts/blob/main/charts/kong/README.md#kong-enterprise-license
license_secret: kong-enterprise-license
vitals:
enabled: true
portal:
enabled: true
rbac:
enabled: true
admin_gui_auth: basic-auth
session_conf_secret: kong-session-config
admin_gui_auth_conf_secret: CHANGEME-admin-gui-auth-conf-secret
smtp:
enabled: false
portal_emails_from: none@example.com
portal_emails_reply_to: none@example.com
admin_emails_from: none@example.com
admin_emails_reply_to: none@example.com
smtp_admin_emails: none@example.com
smtp_host: smtp.example.com
smtp_port: 587
smtp_auth_type: ''
smtp_ssl: nil
smtp_starttls: true
auth:
smtp_username: '' # e.g. postmaster@example.com
smtp_password_secret: CHANGEME-smtp-password
manager:
enabled: true
type: NodePort
annotations:
konghq.com/protocol: "https"
http:
enabled: false
tls:
enabled: true
servicePort: 8445
containerPort: 8445
parameters:
- http2
ingress:
enabled: true
tls: CHANGEME-manager-tls-secret
hostname: manager.kong.CHANGEME.example
annotations: {}
path: /
externalIPs: []
portal:
enabled: true
type: NodePort
annotations:
konghq.com/protocol: "https"
http:
enabled: true
servicePort: 8003
containerPort: 8003
parameters: []
tls:
enabled: true
servicePort: 8446
containerPort: 8446
parameters:
- http2
ingress:
enabled: true
tls: CHANGEME-portal-tls-secret
hostname: portal.kong.CHANGEME.example
annotations:
kubernetes.io/ingress.class: "kong"
path: /
externalIPs: []
portalapi:
enabled: true
type: NodePort
annotations:
konghq.com/protocol: "https"
http:
enabled: true
servicePort: 8004
containerPort: 8004
parameters: []
tls:
enabled: true
servicePort: 8447
containerPort: 8447
parameters:
- http2
ingress:
enabled: true
tls: CHANGEME-portalapi-tls-secret
hostname: portalapi.kong.CHANGEME.example
annotations:
kubernetes.io/ingress.class: "kong"
path: /
externalIPs: []
postgresql:
enabled: true
postgresqlUsername: kong
postgresqlDatabase: kong
service:
port: 5432
ingressController:
enabled: true
installCRDs: false
env:
kong_admin_token:
valueFrom:
secretKeyRef:
name: kong-enterprise-superuser-password #CHANGEME
key: password #CHANGEME

View File

@ -0,0 +1,58 @@
# Basic values.yaml for Kong for Kubernetes with Kong Enterprise
# Several settings (search for the string "CHANGEME") require user-provided
# Secrets. These Secrets must be created before installation.
#
# This installation does not create an Ingress or LoadBalancer Service for
# the Admin API or Kong Manager. Without further configuration to add them,
# access requires port-forwards, e.g.:
# kubectl port-forward deploy/your-deployment-kong 8001:8001 8002:8002
image:
repository: kong/kong-gateway
tag: "2.5.0.0-alpine"
admin:
enabled: true
http:
enabled: true
servicePort: 8001
containerPort: 8001
enterprise:
enabled: true
# CHANGEME: https://github.com/Kong/charts/blob/main/charts/kong/README.md#kong-enterprise-license
license_secret: kong-enterprise-license
vitals:
enabled: false
portal:
enabled: false
rbac:
enabled: false
smtp:
enabled: false
portal:
enabled: false
portalapi:
enabled: false
env:
prefix: /kong_prefix/
database: postgres
password:
valueFrom:
secretKeyRef:
name: kong-enterprise-superuser-password #CHANGEME
key: password #CHANGEME
postgresql:
enabled: true
postgresqlUsername: kong
postgresqlDatabase: kong
service:
port: 5432
ingressController:
enabled: true
installCRDs: false

View File

@ -0,0 +1,13 @@
# Basic values.yaml configuration for Kong for Kubernetes (with the ingress controller)
image:
repository: kong
tag: "2.3"
env:
prefix: /kong_prefix/
database: "off"
ingressController:
enabled: true
installCRDs: false

View File

@ -0,0 +1,39 @@
# Basic values.yaml for Kong for Kubernetes with Kong Enterprise (DB-less)
# Several settings (search for the string "CHANGEME") require user-provided
# Secrets. These Secrets must be created before installation.
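# For illustration, the license Secret referenced below could be created
# beforehand with a command such as (the "license" key name is an assumption
# based on the linked README):
#   kubectl create secret generic kong-enterprise-license --from-file=license=./license.json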
image:
repository: kong/kong-gateway
tag: "2.5.0.0-alpine"
enterprise:
enabled: true
# See instructions regarding enterprise licenses at https://github.com/Kong/charts/blob/master/charts/kong/README.md#kong-enterprise-license
license_secret: kong-enterprise-license # CHANGEME
vitals:
enabled: false
portal:
enabled: false
rbac:
enabled: false
manager:
enabled: false
portal:
enabled: false
portalapi:
enabled: false
env:
database: "off"
ingressController:
enabled: true
installCRDs: false
proxy:
# Enable creating a Kubernetes service for the proxy
enabled: true
type: NodePort

View File

@ -0,0 +1,47 @@
# Basic configuration for Kong as a hybrid mode control plane node, without
# the ingress controller, using the Postgres subchart
# This installation does not create an Ingress or LoadBalancer Service for
# the Admin API. Without further configuration to add them, access requires
# port-forwards, e.g.:
# kubectl port-forward deploy/your-deployment-kong 8001:8001
image:
repository: kong
tag: "2.3"
env:
prefix: /kong_prefix/
database: postgres
role: control_plane
cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
admin:
enabled: true
http:
enabled: true
servicePort: 8001
containerPort: 8001
cluster:
enabled: true
tls:
enabled: true
servicePort: 8005
containerPort: 8005
proxy:
enabled: false
secretVolumes:
- kong-cluster-cert
postgresql:
enabled: true
postgresqlUsername: kong
postgresqlDatabase: kong
service:
port: 5432
ingressController:
enabled: false
installCRDs: false

View File

@ -0,0 +1,33 @@
# Basic configuration for Kong as a hybrid mode data plane node.
# It depends on the presence of a control plane release, as shown in
# https://github.com/Kong/charts/blob/main/charts/kong/example-values/minimal-kong-hybrid-control.yaml
#
# The "env.cluster_control_plane" value must be changed to your control plane
# instance's cluster Service hostname. Search "CHANGEME" to find it in this
# example.
#
# Hybrid mode requires a certificate. See https://github.com/Kong/charts/blob/main/charts/kong/README.md#certificates
# to create one.
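# For illustration, if you already have a certificate and key, the
# kong-cluster-cert Secret mounted below could be created with, e.g.:
#   kubectl create secret tls kong-cluster-cert --cert=./tls.crt --key=./tls.key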
image:
repository: kong
tag: "2.3"
env:
prefix: /kong_prefix/
database: "off"
role: data_plane
cluster_control_plane: CHANGEME-control-service.CHANGEME-namespace.svc.cluster.local:8005
lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
admin:
enabled: false
secretVolumes:
- kong-cluster-cert
ingressController:
enabled: false
installCRDs: false

View File

@ -0,0 +1,31 @@
# Basic configuration for Kong without the ingress controller, using the Postgres subchart
# This installation does not create an Ingress or LoadBalancer Service for
# the Admin API. Without further configuration to add them, access requires
# port-forwards, e.g.:
# kubectl port-forward deploy/your-deployment-kong 8001:8001
image:
repository: kong
tag: "2.3"
env:
prefix: /kong_prefix/
database: postgres
admin:
enabled: true
http:
enabled: true
servicePort: 8001
containerPort: 8001
postgresql:
enabled: true
postgresqlUsername: kong
postgresqlDatabase: kong
service:
port: 5432
ingressController:
enabled: false
installCRDs: false

View File

@ -0,0 +1,33 @@
labels:
io.rancher.certified: partner
io.cattle.role: project # options are cluster/project
categories:
- API Gateway
questions:
- variable: admin.enabled
default: "false"
description: "Enable REST Admin API"
label: REST Admin API
type: boolean
show_subquestion_if: true
group: "Admin API"
subquestions:
- variable: admin.type
default: "LoadBalancer"
description: "Kubernetes Service Type"
label: Service Type
type: enum
options:
- ClusterIP
- NodePort
- LoadBalancer
- variable: admin.http.enabled
default: "false"
description: "Enable HTTP for REST Admin API"
label: REST Admin API - HTTP
type: boolean
- variable: proxy.http.enabled
default: "true"
description: "Enable HTTP for Proxy"
label: Proxy - HTTP
type: boolean

View File

@ -0,0 +1,6 @@
dependencies:
- name: postgresql
repository: https://charts.bitnami.com/bitnami
version: 8.6.8
digest: sha256:8a3bbab2075144edbd954b95b2c779a61dcc55ec6a858f24b01b1e0031b72c19
generated: "2020-03-20T13:08:19.76868615-07:00"

View File

@ -0,0 +1,4 @@
dependencies:
- condition: postgresql.enabled
name: postgresql
repository: file://./charts/postgresql

View File

@ -0,0 +1,16 @@
To connect to Kong, please execute the following commands:
{{ if contains "LoadBalancer" .Values.proxy.type }}
HOST=$(kubectl get svc --namespace {{ template "kong.namespace" . }} {{ template "kong.fullname" . }}-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
PORT=$(kubectl get svc --namespace {{ template "kong.namespace" . }} {{ template "kong.fullname" . }}-proxy -o jsonpath='{.spec.ports[0].port}')
{{ else if contains "NodePort" .Values.proxy.type }}HOST=$(kubectl get nodes --namespace {{ template "kong.namespace" . }} -o jsonpath='{.items[0].status.addresses[0].address}')
PORT=$(kubectl get svc --namespace {{ template "kong.namespace" . }} {{ template "kong.fullname" . }}-proxy -o jsonpath='{.spec.ports[0].nodePort}')
{{ end -}}
export PROXY_IP=${HOST}:${PORT}
curl $PROXY_IP
Once installed, please follow along the getting started guide to start using
Kong: https://bit.ly/k4k8s-get-started
{{ $warnings := list -}}
{{- include "kong.deprecation-warnings" $warnings -}}

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,105 @@
{{- if .Values.ingressController.admissionWebhook.enabled }}
{{- $certCert := "" -}}
{{- $certKey := "" -}}
{{- $caCert := "" -}}
{{- $caKey := "" -}}
{{- if not .Values.ingressController.admissionWebhook.certificate.provided }}
{{- $cn := printf "%s.%s.svc" ( include "kong.service.validationWebhook" . ) ( include "kong.namespace" . ) -}}
{{- $ca := genCA "kong-admission-ca" 3650 -}}
{{- $cert := genSignedCert $cn nil (list $cn) 3650 $ca -}}
{{- $certCert = $cert.Cert -}}
{{- $certKey = $cert.Key -}}
{{- $caCert = $ca.Cert -}}
{{- $caKey = $ca.Key -}}
{{- $caSecret := (lookup "v1" "Secret" (include "kong.namespace" .) (printf "%s-validation-webhook-ca-keypair" (include "kong.fullname" .))) -}}
{{- $certSecret := (lookup "v1" "Secret" (include "kong.namespace" .) (printf "%s-validation-webhook-keypair" (include "kong.fullname" .))) -}}
{{- if $certSecret }}
{{- $certCert = (b64dec (get $certSecret.data "tls.crt")) -}}
{{- $certKey = (b64dec (get $certSecret.data "tls.key")) -}}
{{- $caCert = (b64dec (get $caSecret.data "tls.crt")) -}}
{{- $caKey = (b64dec (get $caSecret.data "tls.key")) -}}
{{- end }}
{{- end }}
kind: ValidatingWebhookConfiguration
{{- if .Capabilities.APIVersions.Has "admissionregistration.k8s.io/v1" }}
apiVersion: admissionregistration.k8s.io/v1
{{- else }}
apiVersion: admissionregistration.k8s.io/v1beta1
{{- end }}
metadata:
name: {{ template "kong.fullname" . }}-validations
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
webhooks:
- name: validations.kong.konghq.com
failurePolicy: {{ .Values.ingressController.admissionWebhook.failurePolicy }}
sideEffects: None
admissionReviewVersions: ["v1beta1"]
rules:
- apiGroups:
- configuration.konghq.com
apiVersions:
- '*'
operations:
- CREATE
- UPDATE
resources:
- kongconsumers
- kongplugins
clientConfig:
{{- if not .Values.ingressController.admissionWebhook.certificate.provided }}
caBundle: {{ b64enc $caCert }}
{{- else }}
{{- if .Values.ingressController.admissionWebhook.certificate.caBundle }}
caBundle: {{ b64enc .Values.ingressController.admissionWebhook.certificate.caBundle }}
{{- end }}
{{- end }}
service:
name: {{ template "kong.service.validationWebhook" . }}
namespace: {{ template "kong.namespace" . }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ template "kong.service.validationWebhook" . }}
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: webhook
selector:
{{- include "kong.metaLabels" . | nindent 4 }}
app.kubernetes.io/component: app
{{- if not .Values.ingressController.admissionWebhook.certificate.provided }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ template "kong.fullname" . }}-validation-webhook-ca-keypair
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
type: kubernetes.io/tls
data:
tls.crt: {{ b64enc $caCert }}
tls.key: {{ b64enc $caKey }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ template "kong.fullname" . }}-validation-webhook-keypair
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
type: kubernetes.io/tls
data:
tls.crt: {{ b64enc $certCert }}
tls.key: {{ b64enc $certKey }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,16 @@
{{- if .Values.deployment.kong.enabled }}
{{- if (and (not .Values.ingressController.enabled) (eq .Values.env.database "off")) }}
{{- if not .Values.dblessConfig.configMap }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "kong.dblessConfig.fullname" . }}
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
data:
kong.yml: |
{{ .Values.dblessConfig.config | toYaml | indent 4 }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,144 @@
{{- if and .Values.ingressController.rbac.create .Values.ingressController.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "kong.fullname" . }}
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<kong-ingress-controller-leader-nginx>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "kong-ingress-controller-leader-{{ .Values.ingressController.ingressClass }}-{{ .Values.ingressController.ingressClass }}"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
# Begin KIC 2.x leader permissions
- apiGroups:
- ""
- coordination.k8s.io
resources:
- configmaps
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- services
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "kong.fullname" . }}
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "kong.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kong.serviceAccountName" . }}
namespace: {{ template "kong.namespace" . }}
{{- if eq (len .Values.ingressController.watchNamespaces) 0 }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
name: {{ template "kong.fullname" . }}
rules:
{{ include "kong.kubernetesRBACRules" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "kong.fullname" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "kong.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kong.serviceAccountName" . }}
namespace: {{ template "kong.namespace" . }}
{{- else }}
{{- range .Values.ingressController.watchNamespaces }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
{{- include "kong.metaLabels" $ | nindent 4 }}
name: {{ template "kong.fullname" $ }}-{{ . }}
namespace: {{ . }}
rules:
{{ include "kong.kubernetesRBACRules" $ }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "kong.fullname" $ }}-{{ . }}
labels:
{{- include "kong.metaLabels" $ | nindent 4 }}
namespace: {{ . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "kong.fullname" $ }}-{{ . }}
subjects:
- kind: ServiceAccount
name: {{ template "kong.serviceAccountName" $ }}
namespace: {{ template "kong.namespace" $ }}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,15 @@
{{- if or .Values.podSecurityPolicy.enabled (and .Values.ingressController.enabled .Values.ingressController.serviceAccount.create) -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "kong.serviceAccountName" . }}
namespace: {{ template "kong.namespace" . }}
{{- if .Values.ingressController.serviceAccount.annotations }}
annotations:
{{- range $key, $value := .Values.ingressController.serviceAccount.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
{{- end -}}

View File

@ -0,0 +1,42 @@
{{- $installCRDs := false -}}
{{- if .Values.ingressController.installCRDs -}}
{{- if .Values.ingressController.enabled -}}
{{/* Managed CRD installation is enabled, and the controller is enabled.
*/}}
{{- $installCRDs = true -}}
{{- else if (not .Values.deployment.kong.enabled) -}}
{{/* Managed CRD installation is enabled, and neither the controller nor Kong is enabled.
This is a CRD-only release.
*/}}
{{- $installCRDs = true -}}
{{- end -}}
{{- else -}}
{{/* Legacy default handling. CRD installation is _not_ enabled, but CRDs are already present
and are managed by this release. This release previously relied on the <2.0 default
.Values.ingressController.installCRDs=true. The default change would delete CRDs on upgrade,
which would cascade delete all associated CRs. This unexpected loss of configuration is bad,
so this clause pretends the default didn't change if you have an existing release that relied
on it.
*/}}
{{- $kongPluginCRD := false -}}
{{- if .Capabilities.APIVersions.Has "apiextensions.k8s.io/v1/CustomResourceDefinition" -}}
{{- $kongPluginCRD = (lookup "apiextensions.k8s.io/v1" "CustomResourceDefinition" "" "kongplugins.configuration.konghq.com") -}}
{{- else -}}
{{/* TODO: remove the v1beta1 path when we no longer support k8s <1.16 */}}
{{- $kongPluginCRD = (lookup "apiextensions.k8s.io/v1beta1" "CustomResourceDefinition" "" "kongplugins.configuration.konghq.com") -}}
{{- end -}}
{{- if $kongPluginCRD -}}
{{- if (hasKey $kongPluginCRD.metadata "annotations") -}}
{{- if (eq .Release.Name (get $kongPluginCRD.metadata.annotations "meta.helm.sh/release-name")) -}}
{{- $installCRDs = true -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if $installCRDs -}}
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" }}
{{ $.Files.Get $path }}
---
{{- end }}
{{- end }}

View File

@ -0,0 +1,269 @@
{{- if or .Values.deployment.kong.enabled .Values.ingressController.enabled }}
apiVersion: apps/v1
{{- if .Values.deployment.daemonset }}
kind: DaemonSet
{{- else }}
kind: Deployment
{{- end }}
metadata:
name: {{ template "kong.fullname" . }}
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
app.kubernetes.io/component: app
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
{{- if not .Values.autoscaling.enabled }}
{{- if not .Values.deployment.daemonset }}
replicas: {{ .Values.replicaCount }}
{{- end }}
{{- end }}
selector:
matchLabels:
{{- include "kong.selectorLabels" . | nindent 6 }}
{{- if .Values.updateStrategy }}
{{- if .Values.deployment.daemonset }}
updateStrategy:
{{- else }}
strategy:
{{- end }}
{{ toYaml .Values.updateStrategy | indent 4 }}
{{- end }}
template:
metadata:
annotations:
{{- if (and (not .Values.ingressController.enabled) (eq .Values.env.database "off" )) }}
{{- if .Values.dblessConfig.config }}
checksum/dbless.config: {{ toYaml .Values.dblessConfig.config | sha256sum }}
{{- end }}
{{- end }}
{{- if .Values.podAnnotations }}
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
{{- include "kong.metaLabels" . | nindent 8 }}
app.kubernetes.io/component: app
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | nindent 8 }}
{{- end }}
spec:
automountServiceAccountToken: {{ .Values.ingressController.enabled }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
{{- if or .Values.ingressController.enabled .Values.podSecurityPolicy.enabled }}
serviceAccountName: {{ template "kong.serviceAccountName" . }}
{{ end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- if (or (and (not (eq .Values.env.database "off")) .Values.waitImage.enabled) .Values.deployment.initContainers) }}
initContainers:
{{- if .Values.deployment.initContainers }}
{{- toYaml .Values.deployment.initContainers | nindent 6 }}
{{- end }}
{{- if (and (not (eq .Values.env.database "off")) .Values.waitImage.enabled) }}
{{- include "kong.wait-for-db" . | nindent 6 }}
{{- end }}
{{- end }}
{{- if .Values.deployment.hostAliases }}
hostAliases:
{{- toYaml .Values.deployment.hostAliases | nindent 6 }}
{{- end}}
{{- if .Values.dnsPolicy }}
dnsPolicy: {{ .Values.dnsPolicy | quote }}
{{- end }}
{{- if .Values.dnsConfig }}
dnsConfig:
{{ toYaml .Values.dnsConfig | indent 8 }}
{{- end }}
containers:
{{- if .Values.ingressController.enabled }}
{{- include "kong.controller-container" . | nindent 6 }}
{{ end }}
{{- if .Values.deployment.sidecarContainers }}
{{- toYaml .Values.deployment.sidecarContainers | nindent 6 }}
{{- end }}
{{- if .Values.deployment.kong.enabled }}
- name: "proxy"
image: {{ include "kong.getRepoTag" .Values.image }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
{{ toYaml .Values.containerSecurityContext | nindent 10 }}
env:
{{- include "kong.no_daemon_env" . | nindent 8 }}
lifecycle:
{{- toYaml .Values.lifecycle | nindent 10 }}
ports:
{{- if (and .Values.admin.http.enabled .Values.admin.enabled) }}
- name: admin
containerPort: {{ .Values.admin.http.containerPort }}
{{- if .Values.admin.http.hostPort }}
hostPort: {{ .Values.admin.http.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.admin.tls.enabled .Values.admin.enabled) }}
- name: admin-tls
containerPort: {{ .Values.admin.tls.containerPort }}
{{- if .Values.admin.tls.hostPort }}
hostPort: {{ .Values.admin.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.proxy.http.enabled .Values.proxy.enabled) }}
- name: proxy
containerPort: {{ .Values.proxy.http.containerPort }}
{{- if .Values.proxy.http.hostPort }}
hostPort: {{ .Values.proxy.http.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.proxy.tls.enabled .Values.proxy.enabled)}}
- name: proxy-tls
containerPort: {{ .Values.proxy.tls.containerPort }}
{{- if .Values.proxy.tls.hostPort }}
hostPort: {{ .Values.proxy.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- range .Values.proxy.stream }}
- name: stream-{{ .containerPort }}
containerPort: {{ .containerPort }}
{{- if .hostPort }}
hostPort: {{ .hostPort }}
{{- end}}
protocol: {{ .protocol }}
{{- end }}
{{- range .Values.udpProxy.stream }}
- name: stream-udp-{{ .containerPort }}
containerPort: {{ .containerPort }}
{{- if .hostPort }}
hostPort: {{ .hostPort }}
{{- end}}
protocol: {{ .protocol }}
{{- end }}
{{- if (and .Values.status.http.enabled .Values.status.enabled)}}
- name: status
containerPort: {{ .Values.status.http.containerPort }}
{{- if .Values.status.http.hostPort }}
hostPort: {{ .Values.status.http.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.status.tls.enabled .Values.status.enabled) }}
- name: status-tls
containerPort: {{ .Values.status.tls.containerPort }}
{{- if .Values.status.tls.hostPort }}
hostPort: {{ .Values.status.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.cluster.tls.enabled .Values.cluster.enabled) }}
- name: cluster-tls
containerPort: {{ .Values.cluster.tls.containerPort }}
{{- if .Values.cluster.tls.hostPort }}
hostPort: {{ .Values.cluster.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if .Values.enterprise.enabled }}
{{- if (and .Values.manager.http.enabled .Values.manager.enabled) }}
- name: manager
containerPort: {{ .Values.manager.http.containerPort }}
{{- if .Values.manager.http.hostPort }}
hostPort: {{ .Values.manager.http.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.manager.tls.enabled .Values.manager.enabled) }}
- name: manager-tls
containerPort: {{ .Values.manager.tls.containerPort }}
{{- if .Values.manager.tls.hostPort }}
hostPort: {{ .Values.manager.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.portal.http.enabled .Values.portal.enabled) }}
- name: portal
containerPort: {{ .Values.portal.http.containerPort }}
{{- if .Values.portal.http.hostPort }}
hostPort: {{ .Values.portal.http.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.portal.tls.enabled .Values.portal.enabled) }}
- name: portal-tls
containerPort: {{ .Values.portal.tls.containerPort }}
{{- if .Values.portal.tls.hostPort }}
hostPort: {{ .Values.portal.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.portalapi.http.enabled .Values.portalapi.enabled) }}
- name: portalapi
containerPort: {{ .Values.portalapi.http.containerPort }}
{{- if .Values.portalapi.http.hostPort }}
hostPort: {{ .Values.portalapi.http.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.portalapi.tls.enabled .Values.portalapi.enabled) }}
- name: portalapi-tls
containerPort: {{ .Values.portalapi.tls.containerPort }}
{{- if .Values.portalapi.tls.hostPort }}
hostPort: {{ .Values.portalapi.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- if (and .Values.clustertelemetry.tls.enabled .Values.clustertelemetry.enabled) }}
- name: clustert-tls
containerPort: {{ .Values.clustertelemetry.tls.containerPort }}
{{- if .Values.clustertelemetry.tls.hostPort }}
hostPort: {{ .Values.clustertelemetry.tls.hostPort }}
{{- end}}
protocol: TCP
{{- end }}
{{- end }}
volumeMounts:
{{- include "kong.volumeMounts" . | nindent 10 }}
{{- include "kong.userDefinedVolumeMounts" . | nindent 10 }}
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
livenessProbe:
{{ toYaml .Values.livenessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- end }} {{/* End of Kong container spec */}}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{ toYaml .Values.topologySpreadConstraints | indent 8 }}
{{- end }}
securityContext:
{{- include "kong.podsecuritycontext" . | nindent 8 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
volumes:
{{- include "kong.volumes" . | nindent 8 -}}
{{- include "kong.userDefinedVolumes" . | nindent 8 -}}
{{- end }}

View File

@ -0,0 +1,22 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: {{ .Capabilities.APIVersions.Has "autoscaling/v2beta2" | ternary "autoscaling/v2beta2" "autoscaling/v1" }}
kind: HorizontalPodAutoscaler
metadata:
name: "{{ template "kong.fullname" . }}"
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: "{{ template "kong.fullname" . }}"
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
{{- if not (.Capabilities.APIVersions.Has "autoscaling/v2beta2") }}
targetCPUUtilizationPercentage: {{ .Values.autoscaling.targetCPUUtilizationPercentage | default 80 }}
{{- else }}
metrics:
{{- toYaml .Values.autoscaling.metrics | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,81 @@
{{- if .Values.deployment.kong.enabled }}
{{- if (and .Values.migrations.postUpgrade (not (eq .Values.env.database "off"))) }}
# Why is this Job duplicated and not using only helm hooks?
# See: https://github.com/helm/charts/pull/7362
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "kong.fullname" . }}-post-upgrade-migrations
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
app.kubernetes.io/component: post-upgrade-migrations
annotations:
helm.sh/hook: "post-upgrade"
helm.sh/hook-delete-policy: "before-hook-creation"
{{- range $key, $value := .Values.migrations.jobAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
template:
metadata:
name: {{ template "kong.name" . }}-post-upgrade-migrations
labels:
{{- include "kong.metaLabels" . | nindent 8 }}
app.kubernetes.io/component: post-upgrade-migrations
{{- if .Values.migrations.annotations }}
annotations:
{{- range $key, $value := .Values.migrations.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
automountServiceAccountToken: false
{{- if .Values.podSecurityPolicy.enabled }}
serviceAccountName: {{ template "kong.serviceAccountName" . }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- if (or (and (.Values.postgresql.enabled) .Values.waitImage.enabled) .Values.deployment.initContainers) }}
initContainers:
{{- if .Values.deployment.initContainers }}
{{- toYaml .Values.deployment.initContainers | nindent 6 }}
{{- end }}
{{- if (and (.Values.postgresql.enabled) .Values.waitImage.enabled) }}
{{- include "kong.wait-for-postgres" . | nindent 6 }}
{{- end }}
{{- end }}
containers:
- name: {{ template "kong.name" . }}-post-upgrade-migrations
image: {{ include "kong.getRepoTag" .Values.image }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
{{ toYaml .Values.containerSecurityContext | nindent 10 }}
env:
{{- include "kong.no_daemon_env" . | nindent 8 }}
args: [ "kong", "migrations", "finish" ]
volumeMounts:
{{- include "kong.volumeMounts" . | nindent 8 }}
{{- include "kong.userDefinedVolumeMounts" . | nindent 8 }}
resources:
{{- toYaml .Values.migrations.resources | nindent 10 }}
securityContext:
{{- include "kong.podsecuritycontext" . | nindent 8 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{- toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{- toYaml .Values.tolerations | nindent 8 }}
{{- end }}
restartPolicy: OnFailure
volumes:
{{- include "kong.volumes" . | nindent 6 -}}
{{- include "kong.userDefinedVolumes" . | nindent 6 -}}
{{- end }}
{{- end }}

View File

@ -0,0 +1,81 @@
{{- if .Values.deployment.kong.enabled }}
{{- if (and .Values.migrations.preUpgrade (not (eq .Values.env.database "off"))) }}
# Why is this Job duplicated and not using only helm hooks?
# See: https://github.com/helm/charts/pull/7362
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "kong.fullname" . }}-pre-upgrade-migrations
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
app.kubernetes.io/component: pre-upgrade-migrations
annotations:
helm.sh/hook: "pre-upgrade"
helm.sh/hook-delete-policy: "before-hook-creation"
{{- range $key, $value := .Values.migrations.jobAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
template:
metadata:
name: {{ template "kong.name" . }}-pre-upgrade-migrations
labels:
{{- include "kong.metaLabels" . | nindent 8 }}
app.kubernetes.io/component: pre-upgrade-migrations
{{- if .Values.migrations.annotations }}
annotations:
{{- range $key, $value := .Values.migrations.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
automountServiceAccountToken: false
{{- if .Values.podSecurityPolicy.enabled }}
serviceAccountName: {{ template "kong.serviceAccountName" . }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- if (or (and (.Values.postgresql.enabled) .Values.waitImage.enabled) .Values.deployment.initContainers) }}
initContainers:
{{- if .Values.deployment.initContainers }}
{{- toYaml .Values.deployment.initContainers | nindent 6 }}
{{- end }}
{{- if (and (.Values.postgresql.enabled) .Values.waitImage.enabled) }}
{{- include "kong.wait-for-postgres" . | nindent 6 }}
{{- end }}
{{- end }}
containers:
- name: {{ template "kong.name" . }}-upgrade-migrations
image: {{ include "kong.getRepoTag" .Values.image }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
{{ toYaml .Values.containerSecurityContext | nindent 10 }}
env:
{{- include "kong.no_daemon_env" . | nindent 8 }}
args: [ "kong", "migrations", "up" ]
volumeMounts:
{{- include "kong.volumeMounts" . | nindent 8 }}
{{- include "kong.userDefinedVolumeMounts" . | nindent 8 }}
resources:
{{- toYaml .Values.migrations.resources| nindent 10 }}
securityContext:
{{- include "kong.podsecuritycontext" . | nindent 8 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{- toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{- toYaml .Values.tolerations | nindent 8 }}
{{- end }}
restartPolicy: OnFailure
volumes:
{{- include "kong.volumes" . | nindent 6 -}}
{{- include "kong.userDefinedVolumes" . | nindent 6 -}}
{{- end }}
{{- end }}

View File

@ -0,0 +1,90 @@
{{- if .Values.deployment.kong.enabled }}
{{- if .Release.IsInstall -}}
{{/* .migrations.init isn't normally exposed in values.yaml, since it should
generally always run on install--there should never be any reason to
disable it, and at worst it's a no-op. However, https://github.com/helm/helm/issues/3308
means we cannot use the default function to create a hidden value, hence
the workaround with this $runInit variable.
*/}}
{{- $runInit := true -}}
{{- if (hasKey .Values.migrations "init") -}}
{{- $runInit = .Values.migrations.init -}}
{{- end -}}
{{- if (and ($runInit) (not (eq .Values.env.database "off"))) }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "kong.fullname" . }}-init-migrations
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
app.kubernetes.io/component: init-migrations
annotations:
{{- range $key, $value := .Values.migrations.jobAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
template:
metadata:
name: {{ template "kong.name" . }}-init-migrations
labels:
{{- include "kong.metaLabels" . | nindent 8 }}
app.kubernetes.io/component: init-migrations
{{- if .Values.migrations.annotations }}
annotations:
{{- range $key, $value := .Values.migrations.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
automountServiceAccountToken: false
{{- if .Values.podSecurityPolicy.enabled }}
serviceAccountName: {{ template "kong.serviceAccountName" . }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- if (or (and (.Values.postgresql.enabled) .Values.waitImage.enabled) .Values.deployment.initContainers) }}
initContainers:
{{- if .Values.deployment.initContainers }}
{{- toYaml .Values.deployment.initContainers | nindent 6 }}
{{- end }}
{{- if (and (.Values.postgresql.enabled) .Values.waitImage.enabled) }}
{{- include "kong.wait-for-postgres" . | nindent 6 }}
{{- end }}
{{- end }}
containers:
- name: {{ template "kong.name" . }}-migrations
image: {{ include "kong.getRepoTag" .Values.image }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
{{ toYaml .Values.containerSecurityContext | nindent 10 }}
env:
{{- include "kong.no_daemon_env" . | nindent 8 }}
args: [ "kong", "migrations", "bootstrap" ]
volumeMounts:
{{- include "kong.volumeMounts" . | nindent 8 }}
{{- include "kong.userDefinedVolumeMounts" . | nindent 8 }}
resources:
{{- toYaml .Values.migrations.resources | nindent 10 }}
securityContext:
{{- include "kong.podsecuritycontext" . | nindent 8 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{- toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{- toYaml .Values.tolerations | nindent 8 }}
{{- end }}
restartPolicy: OnFailure
volumes:
{{- include "kong.volumes" . | nindent 6 -}}
{{- include "kong.userDefinedVolumes" . | nindent 6 -}}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,20 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "kong.fullname" . }}
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
spec:
{{- if .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
{{- include "kong.metaLabels" . | nindent 6 }}
app.kubernetes.io/component: app
{{- end }}

View File

@ -0,0 +1,42 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "kong.serviceAccountName" . }}-psp
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
spec:
{{ .Values.podSecurityPolicy.spec | toYaml | indent 2 }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "kong.serviceAccountName" . }}-psp
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
rules:
- apiGroups:
- policy
resources:
- podsecuritypolicies
verbs:
- use
resourceNames:
- {{ template "kong.serviceAccountName" . }}-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "kong.serviceAccountName" . }}-psp
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
name: {{ template "kong.serviceAccountName" . }}
namespace: {{ template "kong.namespace" . }}
roleRef:
kind: ClusterRole
name: {{ template "kong.serviceAccountName" . }}-psp
apiGroup: rbac.authorization.k8s.io
{{- end }}

View File

@ -0,0 +1,16 @@
{{- if .Values.deployment.kong.enabled }}
{{- if and .Values.admin.enabled (or .Values.admin.http.enabled .Values.admin.tls.enabled) -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.admin -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "admin" -}}
{{- include "kong.service" $serviceConfig }}
{{ if .Values.admin.ingress.enabled }}
---
{{ include "kong.ingress" $serviceConfig }}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,12 @@
{{- if .Values.deployment.kong.enabled }}
{{- if and .Values.clustertelemetry.enabled .Values.clustertelemetry.tls.enabled -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.clustertelemetry -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "clustertelemetry" -}}
{{- include "kong.service" $serviceConfig }}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,12 @@
{{- if .Values.deployment.kong.enabled }}
{{- if and .Values.cluster.enabled .Values.cluster.tls.enabled -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.cluster -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "cluster" -}}
{{- include "kong.service" $serviceConfig }}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,18 @@
{{- if .Values.deployment.kong.enabled }}
{{- if .Values.enterprise.enabled }}
{{- if and .Values.manager.enabled (or .Values.manager.http.enabled .Values.manager.tls.enabled) -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.manager -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "manager" -}}
{{- include "kong.service" $serviceConfig }}
{{ if .Values.manager.ingress.enabled }}
---
{{ include "kong.ingress" $serviceConfig }}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,18 @@
{{- if .Values.deployment.kong.enabled }}
{{- if .Values.enterprise.enabled }}
{{- if and .Values.portalapi.enabled (or .Values.portalapi.http.enabled .Values.portalapi.tls.enabled) -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.portalapi -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "portalapi" -}}
{{- include "kong.service" $serviceConfig }}
{{ if .Values.portalapi.ingress.enabled }}
---
{{ include "kong.ingress" $serviceConfig }}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,18 @@
{{- if .Values.deployment.kong.enabled }}
{{- if .Values.enterprise.enabled }}
{{- if and .Values.portal.enabled (or .Values.portal.http.enabled .Values.portal.tls.enabled) -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.portal -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "portal" -}}
{{- include "kong.service" $serviceConfig }}
{{ if .Values.portal.ingress.enabled }}
---
{{ include "kong.ingress" $serviceConfig }}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,16 @@
{{- if .Values.deployment.kong.enabled }}
{{- if and .Values.proxy.enabled (or .Values.proxy.http.enabled .Values.proxy.tls.enabled) -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.proxy -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "proxy" -}}
{{- include "kong.service" $serviceConfig }}
{{ if .Values.proxy.ingress.enabled }}
---
{{ include "kong.ingress" $serviceConfig }}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,14 @@
{{- if .Values.deployment.kong.enabled }}
{{- if and .Values.udpProxy.enabled -}}
{{- $serviceConfig := dict -}}
{{- $serviceConfig := merge $serviceConfig .Values.udpProxy -}}
{{- $_ := set $serviceConfig "fullName" (include "kong.fullname" .) -}}
{{- $_ := set $serviceConfig "namespace" (include "kong.namespace" .) -}}
{{- $_ := set $serviceConfig "metaLabels" (include "kong.metaLabels" .) -}}
{{- $_ := set $serviceConfig "selectorLabels" (include "kong.selectorLabels" .) -}}
{{- $_ := set $serviceConfig "serviceName" "udp-proxy" -}}
{{- $_ := set $serviceConfig "tls" (dict "enabled" false) -}}
{{- $_ := set $serviceConfig "http" (dict "enabled" false) -}}
{{- include "kong.service" $serviceConfig }}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,52 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) .Values.serviceMonitor.enabled }}
{{- $controllerIs2xPlus := include "kong.controller2xplus" . -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "kong.fullname" . }}
{{- if .Values.serviceMonitor.namespace }}
namespace: {{ .Values.serviceMonitor.namespace }}
{{- end }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
{{- if .Values.serviceMonitor.labels }}
{{ toYaml .Values.serviceMonitor.labels | nindent 4 }}
{{- end }}
spec:
endpoints:
- targetPort: status
scheme: http
{{- if .Values.serviceMonitor.interval }}
interval: {{ .Values.serviceMonitor.interval }}
{{- end }}
{{- if .Values.serviceMonitor.honorLabels }}
honorLabels: true
{{- end }}
{{- if .Values.serviceMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml .Values.serviceMonitor.metricRelabelings | nindent 6 }}
{{- end }}
{{ if (eq $controllerIs2xPlus "true") -}}
- targetPort: cmetrics
scheme: http
{{- if .Values.serviceMonitor.interval }}
interval: {{ .Values.serviceMonitor.interval }}
{{- end }}
{{- if .Values.serviceMonitor.honorLabels }}
honorLabels: true
{{- end }}
{{- if .Values.serviceMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml .Values.serviceMonitor.metricRelabelings | nindent 6 }}
{{- end }}
{{- end }}
jobLabel: {{ .Release.Name }}
namespaceSelector:
matchNames:
- {{ template "kong.namespace" . }}
selector:
matchLabels:
enable-metrics: "true"
{{- include "kong.metaLabels" . | nindent 6 }}
{{- if .Values.serviceMonitor.targetLabels }}
targetLabels: {{ toYaml .Values.serviceMonitor.targetLabels | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,31 @@
---
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-test-ingress"
annotations:
"helm.sh/hook": test
spec:
restartPolicy: OnFailure
containers:
- name: "{{ .Release.Name }}-curl"
image: curlimages/curl
command:
- curl
- "http://{{ .Release.Name }}-kong-proxy.{{ .Release.Namespace }}.svc.cluster.local/httpbin"
---
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-test-ingress-v1beta1"
annotations:
"helm.sh/hook": test
spec:
restartPolicy: OnFailure
containers:
- name: "{{ .Release.Name }}-curl"
image: curlimages/curl
command:
- curl
- "http://{{ .Release.Name }}-kong-proxy.{{ .Release.Namespace }}.svc.cluster.local/httpbin-v1beta1"

View File

@ -0,0 +1,65 @@
{{- if .Values.deployment.test.enabled }}
---
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-httpbin"
labels:
app: httpbin
spec:
containers:
- name: httpbin
image: kennethreitz/httpbin
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: "{{ .Release.Name }}-httpbin"
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: httpbin
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "{{ .Release.Name }}-httpbin"
annotations:
httpbin.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: "kong"
konghq.com/strip-path: "true"
spec:
rules:
- http:
paths:
- path: /httpbin
pathType: Prefix
backend:
service:
name: "{{ .Release.Name }}-httpbin"
port:
number: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: "{{ .Release.Name }}-httpbin-v1beta1"
annotations:
httpbin.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: "kong"
konghq.com/strip-path: "true"
spec:
rules:
- http:
paths:
- path: /httpbin-v1beta1
backend:
serviceName: "{{ .Release.Name }}-httpbin"
servicePort: 80
{{- end }}

View File

@ -0,0 +1,15 @@
{{ if (and (.Values.postgresql.enabled) .Values.waitImage.enabled) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "kong.fullname" . }}-bash-wait-for-postgres
namespace: {{ template "kong.namespace" . }}
labels:
{{- include "kong.metaLabels" . | nindent 4 }}
data:
wait.sh: |
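    # Open a TCP connection to the Postgres host/port via bash's /dev/tcp (fd 9);
    # keep retrying every 2 seconds until the database accepts connections.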
until timeout 2 bash -c "9<>/dev/tcp/${KONG_PG_HOST}/${KONG_PG_PORT}"
do echo "waiting for db - trying ${KONG_PG_HOST}:${KONG_PG_PORT}"
sleep 2
done
{{ end }}

View File

@ -0,0 +1,902 @@
# Default values for Kong's Helm Chart.
# Declare variables to be passed into your templates.
#
# Sections:
# - Deployment parameters
# - Kong parameters
# - Ingress Controller parameters
# - Postgres sub-chart parameters
# - Miscellaneous parameters
# - Kong Enterprise parameters
# -----------------------------------------------------------------------------
# Deployment parameters
# -----------------------------------------------------------------------------
deployment:
kong:
# Enable or disable Kong itself
# Setting this to false with ingressController.enabled=true will create a
# controller-only release.
enabled: true
## Optionally specify any extra sidecar containers to be included in the deployment
## See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core
# sidecarContainers:
# - name: sidecar
# image: sidecar:latest
# initContainers:
# - name: initcon
# image: initcon:latest
# hostAliases:
# - ip: "127.0.0.1"
# hostnames:
# - "foo.local"
# - "bar.local"
# userDefinedVolumes:
# - name: "volumeName"
# emptyDir: {}
# userDefinedVolumeMounts:
# - name: "volumeName"
# mountPath: "/opt/user/dir/mount"
test:
# Enable creation of test resources for use with "helm test"
enabled: false
# Use a DaemonSet controller instead of a Deployment controller
daemonset: false
# Override namespace for Kong chart resources. By default, the chart creates resources in the release namespace.
# This may not be desirable when using this chart as a dependency.
# namespace: "example"
# -----------------------------------------------------------------------------
# Kong parameters
# -----------------------------------------------------------------------------
# Specify Kong configuration
# This chart takes all entries defined under `.env` and transforms them into `KONG_*`
# environment variables for Kong containers.
# Their names here should match the names used in https://github.com/Kong/kong/blob/master/kong.conf.default
# See https://docs.konghq.com/latest/configuration for additional details
# Values here take precedence over values from other sections of values.yaml,
# e.g. setting pg_user here will override the value normally set when postgresql.enabled
# is set below. In general, you should not set values here if they are set elsewhere.
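# For illustration (hypothetical value): an entry such as
#   pg_host: my-postgres.example.internal
# under `env` would be rendered as KONG_PG_HOST=my-postgres.example.internal in the
# Kong container.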
env:
database: "off"
nginx_worker_processes: "2"
proxy_access_log: /dev/stdout
admin_access_log: /dev/stdout
admin_gui_access_log: /dev/stdout
portal_api_access_log: /dev/stdout
proxy_error_log: /dev/stderr
admin_error_log: /dev/stderr
admin_gui_error_log: /dev/stderr
portal_api_error_log: /dev/stderr
prefix: /kong_prefix/
# This section can be used to configure some extra labels that will be added to each Kubernetes object generated.
extraLabels: {}
# Specify Kong's Docker image and repository details here
image:
repository: kong
tag: "2.5"
# Kong Enterprise
# repository: kong/kong-gateway
# tag: "2.5.0.0-alpine"
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## If using the official Kong Enterprise registry above, you MUST provide a secret.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
  # - myRegistryKeySecretName
# Specify Kong admin API service and listener configuration
admin:
# Enable creating a Kubernetes service for the admin API
# Disabling this is recommended for most ingress controller configurations
# Enterprise users that wish to use Kong Manager with the controller should enable this
enabled: false
type: NodePort
# To specify annotations or labels for the admin service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
labels: {}
http:
# Enable plaintext HTTP listen for the admin API
    # Disabling this and using a TLS listen only is recommended for most configurations
enabled: false
servicePort: 8001
containerPort: 8001
# Set a nodePort which is available if service type is NodePort
# nodePort: 32080
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters: []
tls:
# Enable HTTPS listen for the admin API
enabled: true
servicePort: 8444
containerPort: 8444
# Set a target port for the TLS port in the admin API service, useful when using TLS
# termination on an ELB.
# overrideServiceTargetPort: 8000
# Set a nodePort which is available if service type is NodePort
# nodePort: 32443
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters:
- http2
# Kong admin ingress settings. Useful if you want to expose the Admin
# API of Kong outside the k8s cluster.
ingress:
# Enable/disable exposure using ingress.
enabled: false
# TLS secret name.
# tls: kong-admin.example.com-tls
# Ingress hostname
hostname:
# Map of ingress annotations.
annotations: {}
# Ingress path.
path: /
# Specify Kong status listener configuration
# This listen is internal-only. It cannot be exposed through a service or ingress.
status:
enabled: true
http:
# Enable plaintext HTTP listen for the status listen
enabled: true
containerPort: 8100
parameters: []
tls:
# Enable HTTPS listen for the status listen
# Kong versions prior to 2.1 do not support TLS status listens.
# This setting must remain false on those versions
enabled: false
containerPort: 8543
parameters: []
# Specify Kong cluster service and listener configuration
#
# The cluster service *must* use TLS. It does not support the "http" block
# available on other services.
#
# The cluster service cannot be exposed through an Ingress, as it must perform
# TLS client validation directly and is not compatible with TLS-terminating
# proxies. If you need to expose it externally, you must use "type:
# LoadBalancer" and use a TCP-only load balancer (check your Kubernetes
# provider's documentation, as the configuration required for this varies).
cluster:
enabled: false
# To specify annotations or labels for the cluster service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
labels: {}
tls:
enabled: false
servicePort: 8005
containerPort: 8005
parameters: []
type: ClusterIP
# Specify Kong proxy service configuration
proxy:
# Enable creating a Kubernetes service for the proxy
enabled: true
type: LoadBalancer
# To specify annotations or labels for the proxy service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# If terminating TLS at the ELB, the following annotations can be used
# "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "*",
# "service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true",
# "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:REGION:ACCOUNT:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX",
# "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "kong-tls-proxy",
# "service.beta.kubernetes.io/aws-load-balancer-type": "elb"
labels:
enable-metrics: "true"
http:
# Enable plaintext HTTP listen for the proxy
enabled: true
servicePort: 80
containerPort: 8000
# Set a nodePort which is available if service type is NodePort
# nodePort: 32080
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters: []
tls:
# Enable HTTPS listen for the proxy
enabled: true
servicePort: 443
containerPort: 8443
# Set a target port for the TLS port in proxy service
# overrideServiceTargetPort: 8000
# Set a nodePort which is available if service type is NodePort
# nodePort: 32443
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters:
- http2
# Define stream (TCP) listen
# To enable, remove "{}", uncomment the section below, and select your desired
# ports and parameters. Listens are dynamically named after their servicePort,
# e.g. "stream-9000" for the below.
# Note: although you can select the protocol here, you cannot set UDP if you
# use a LoadBalancer Service due to limitations in current Kubernetes versions.
# To proxy both TCP and UDP with LoadBalancers, you must enable the udpProxy Service
# in the next section and place all UDP stream listen configuration under it.
stream: {}
# # Set the container (internal) and service (external) ports for this listen.
# # These values should normally be the same. If your environment requires they
# # differ, note that Kong will match routes based on the containerPort only.
# - containerPort: 9000
# servicePort: 9000
# protocol: TCP
# # Optionally set a static nodePort if the service type is NodePort
# # nodePort: 32080
# # Additional listen parameters, e.g. "ssl", "reuseport", "backlog=16384"
# # "ssl" is required for SNI-based routes. It is not supported on versions <2.0
# parameters: []
# Kong proxy ingress settings.
# Note: You need this only if you are using another Ingress Controller
# to expose Kong outside the k8s cluster.
ingress:
# Enable/disable exposure using ingress.
enabled: false
# Ingress hostname
# TLS secret name.
# tls: kong-admin.example.com-tls
hostname:
# Map of ingress annotations.
annotations: {}
# Ingress path.
path: /
# Optionally specify a static load balancer IP.
# loadBalancerIP:
# Specify Kong UDP proxy service configuration
# Currently, LoadBalancer type Services are generally limited to a single transport protocol
# Multi-protocol Services are an alpha feature as of Kubernetes 1.20:
# https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types
# You should enable this Service if you proxy UDP traffic, and configure UDP stream listens under it
udpProxy:
# Enable creating a Kubernetes service for UDP proxying
enabled: false
type: LoadBalancer
# To specify annotations or labels for the proxy service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
labels: {}
# Optionally specify a static load balancer IP.
# loadBalancerIP:
# Define stream (UDP) listen
# To enable, remove "{}", uncomment the section below, and select your desired
# ports and parameters. Listens are dynamically named after their servicePort,
# e.g. "stream-9000" for the below.
stream: {}
# # Set the container (internal) and service (external) ports for this listen.
# # These values should normally be the same. If your environment requires they
# # differ, note that Kong will match routes based on the containerPort only.
# - containerPort: 9000
# servicePort: 9000
# protocol: UDP
# # Optionally set a static nodePort if the service type is NodePort
# # nodePort: 32080
# # Additional listen parameters, e.g. "ssl", "reuseport", "backlog=16384"
# # "ssl" is required for SNI-based routes. It is not supported on versions <2.0
# parameters: []
# Custom Kong plugins can be loaded into Kong by mounting the plugin code
# into the file-system of Kong container.
# The plugin code should be present in ConfigMap or Secret inside the same
# namespace as Kong is being installed.
# The `name` property refers to the name of the ConfigMap or Secret
# itself, while the pluginName refers to the name of the plugin as it appears
# in Kong.
# Subdirectories (which are optional) require separate ConfigMaps/Secrets.
# "path" indicates their directory under the main plugin directory: the example
# below will mount the contents of kong-plugin-rewriter-migrations at "/opt/kong/rewriter/migrations".
plugins: {}
# configMaps:
# - pluginName: rewriter
# name: kong-plugin-rewriter
# subdirectories:
# - name: kong-plugin-rewriter-migrations
# path: migrations
# secrets:
# - pluginName: rewriter
# name: kong-plugin-rewriter
# Inject specified secrets as a volume in Kong Container at path /etc/secrets/{secret-name}/
# This can be used to override default SSL certificates.
# Be aware that the secret name will be used verbatim, and that certain types
# of punctuation (e.g. `.`) can cause issues.
# Example configuration
# secretVolumes:
# - kong-proxy-tls
# - kong-admin-tls
secretVolumes: []
# Enable/disable migration jobs, and set annotations for them
migrations:
# Enable pre-upgrade migrations (run "kong migrations up")
preUpgrade: true
# Enable post-upgrade migrations (run "kong migrations finish")
postUpgrade: true
# Annotations to apply to migrations job pods
# By default, these disable service mesh sidecar injection for Istio and Kuma,
# as the sidecar containers do not terminate and prevent the jobs from completing
annotations:
sidecar.istio.io/inject: false
# Additional annotations to apply to migration jobs
# This is helpful in certain non-Helm installation situations such as GitOps
# where additional control is required around this job creation.
jobAnnotations: {}
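  # A hypothetical GitOps example (illustrative only, not part of the chart defaults):
  # jobAnnotations:
  #   argocd.argoproj.io/hook: Sync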
resources: {}
# Example reasonable setting for "resources":
# resources:
# limits:
# cpu: 100m
# memory: 256Mi
# requests:
# cpu: 50m
# memory: 128Mi
# Kong's configuration for DB-less mode
# Note: Use this section only if you are deploying Kong in DB-less mode
# and not as an Ingress Controller.
dblessConfig:
# Either Kong's configuration is managed from an existing ConfigMap (with Key: kong.yml)
configMap: ""
# Or the configuration is passed in full-text below
config:
_format_version: "1.1"
services:
# Example configuration
# - name: example.com
# url: http://example.com
# routes:
# - name: example
# paths:
# - "/example"
# -----------------------------------------------------------------------------
# Ingress Controller parameters
# -----------------------------------------------------------------------------
# Kong Ingress Controller's primary purpose is to satisfy Ingress resources
# created in k8s. It uses CRDs for more fine grained control over routing and
# for Kong specific configuration.
ingressController:
enabled: true
image:
repository: kong/kubernetes-ingress-controller
tag: "1.3"
# Optionally set a semantic version for version-gated features. This can normally
# be left unset. You only need to set this if your tag is not a semver string,
# such as when you are using a "next" tag. Set this to the effective semantic
# version of your tag: for example if using a "next" image for an unreleased 3.1.0
# version, set this to "3.1.0".
effectiveSemver:
args: []
# Specify individual namespaces to watch for ingress configuration. By default,
# when no namespaces are set, the controller watches all namespaces and uses a
# ClusterRole to grant access to Kubernetes resources. When you list specific
# namespaces, the controller will watch those namespaces only and will create
# namespaced-scoped Roles for each of them. Note that watching specific namespaces
# disables KongClusterPlugin usage, as KongClusterPlugins only exist as cluster resources.
# Requires controller 2.0.0 or newer.
watchNamespaces: []
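  # For example (hypothetical namespaces):
  # watchNamespaces:
  # - default
  # - team-a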
# Specify Kong Ingress Controller configuration via environment variables
env:
# The controller disables TLS verification by default because Kong
# generates self-signed certificates by default. Set this to false once you
# have installed CA-signed certificates.
kong_admin_tls_skip_verify: true
# If using Kong Enterprise with RBAC enabled, uncomment the section below
# and specify the secret/key containing your admin token.
# kong_admin_token:
# valueFrom:
# secretKeyRef:
# name: CHANGEME-admin-token-secret
# key: CHANGEME-admin-token-key
admissionWebhook:
enabled: false
failurePolicy: Fail
port: 8080
certificate:
provided: false
      # Specify the secretName when the certificate is provided via a TLS secret
# secretName: ""
      # Specify the CA bundle of the provided certificate.
# This is a PEM encoded CA bundle which will be used to validate the webhook certificate. If unspecified, system trust roots on the apiserver are used.
# caBundle:
# | Add the CA bundle content here.
ingressClass: kong
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccount:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
# The annotations for service account
annotations: {}
# general properties
livenessProbe:
httpGet:
path: "/healthz"
port: 10254
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: "/healthz"
port: 10254
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
resources: {}
# Example reasonable setting for "resources":
# resources:
# limits:
# cpu: 100m
# memory: 256Mi
# requests:
# cpu: 50m
# memory: 128Mi
# -----------------------------------------------------------------------------
# Postgres sub-chart parameters
# -----------------------------------------------------------------------------
# Kong can run without a database or use either Postgres or Cassandra
# as a backend datastore for its configuration.
# By default, this chart installs Kong without a database.
# If you would like to use a database, there are two options:
# - (recommended) Deploy and maintain a database and pass the connection
# details to Kong via the `env` section.
# - You can use the below `postgresql` sub-chart to deploy a database
#   along with Kong as part of a single Helm release.
# PostgreSQL chart documentation:
# https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md
postgresql:
enabled: false
# postgresqlUsername: kong
# postgresqlDatabase: kong
# service:
# port: 5432
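# For the recommended external-database approach, a rough sketch of equivalent `env`
# entries (hypothetical host and credentials; adjust to your environment) would be:
# env:
#   database: postgres
#   pg_host: my-postgres.example.internal
#   pg_port: 5432
#   pg_user: kong
#   pg_password:
#     valueFrom:
#       secretKeyRef:
#         name: my-postgres-credentials
#         key: password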
# -----------------------------------------------------------------------------
# Miscellaneous parameters
# -----------------------------------------------------------------------------
waitImage:
# Wait for the database to come online before starting Kong or running migrations
# If Kong is to access the database through a service mesh that injects a sidecar to
# Kong's container, this must be disabled. Otherwise there'll be a deadlock:
# InitContainer waiting for DB access that requires the sidecar, and the sidecar
# waiting for InitContainers to finish.
enabled: true
# Optionally specify an image that provides bash for pre-migration database
# checks. If none is specified, the chart uses the Kong image. The official
# Kong images provide bash
# repository: bash
# tag: 5
pullPolicy: IfNotPresent
# update strategy
updateStrategy: {}
# type: RollingUpdate
# rollingUpdate:
# maxSurge: "100%"
# maxUnavailable: "0%"
# If you want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
resources: {}
# limits:
# cpu: 100m
# memory: 256Mi
# requests:
# cpu: 100m
# memory: 256Mi
# readinessProbe for Kong pods
readinessProbe:
httpGet:
path: "/status"
port: status
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
# livenessProbe for Kong pods
livenessProbe:
httpGet:
path: "/status"
port: status
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
# Proxy container lifecycle hooks
# Ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
lifecycle:
preStop:
exec:
# Note kong quit has a default timeout of 10 seconds
command: ["/bin/sh", "-c", "/bin/sleep 15 && kong quit"]
# Sets the termination grace period for pods spawned by the Kubernetes Deployment.
# Ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution
terminationGracePeriodSeconds: 30
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
# Topology spread constraints for pod assignment (requires Kubernetes >= 1.19)
# Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
# topologySpreadConstraints: []
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# Annotation to be added to Kong pods
podAnnotations: {}
# Labels to be added to Kong pods
podLabels: {}
# Kong pod count.
# It has no effect when autoscaling.enabled is set to true
replicaCount: 1
# Annotations to be added to Kong deployment
deploymentAnnotations:
kuma.io/gateway: enabled
traffic.sidecar.istio.io/includeInboundPorts: ""
# Enable autoscaling using HorizontalPodAutoscaler
# When configuring an HPA, you must set resource requests on all containers via
# "resources" and, if using the controller, "ingressController.resources" in values.yaml
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 5
## targetCPUUtilizationPercentage only used if the cluster doesn't support autoscaling/v2beta
targetCPUUtilizationPercentage:
## Otherwise for clusters that do support autoscaling/v2beta, use metrics
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
# Kong Pod Disruption Budget
podDisruptionBudget:
enabled: false
# Uncomment only one of the following when enabled is set to true
# maxUnavailable: "50%"
  # minAvailable: "50%"
podSecurityPolicy:
enabled: false
spec:
privileged: false
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
runAsGroup:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'configMap'
- 'secret'
- 'emptyDir'
allowPrivilegeEscalation: false
hostNetwork: false
hostIPC: false
hostPID: false
# Make the root filesystem read-only. This is not compatible with Kong Enterprise <1.5.
# If you use Kong Enterprise <1.5, this must be set to false.
readOnlyRootFilesystem: true
priorityClassName: ""
# securityContext for Kong pods.
securityContext: {}
# securityContext for containers.
containerSecurityContext: {}
## Optional DNS configuration for Kong pods
# dnsPolicy: ClusterFirst
# dnsConfig:
# nameservers:
# - "10.100.0.10"
# options:
# - name: ndots
# value: "5"
# searches:
# - default.svc.cluster.local
# - svc.cluster.local
# - cluster.local
# - us-east-1.compute.internal
serviceMonitor:
# Specifies whether ServiceMonitor for Prometheus operator should be created
# If you wish to gather metrics from a Kong instance with the proxy disabled (such as a hybrid control plane), see:
# https://github.com/Kong/charts/blob/main/charts/kong/README.md#prometheus-operator-integration
enabled: false
# interval: 10s
# Specifies namespace, where ServiceMonitor should be installed
# namespace: monitoring
# labels:
# foo: bar
# targetLabels:
# - foo
# honorLabels: false
# metricRelabelings: []
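  # A minimal sketch of the options above combined (hypothetical values, used with
  # enabled: true above):
  # interval: 30s
  # namespace: monitoring
  # labels:
  #   release: prometheus
  # honorLabels: true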
# -----------------------------------------------------------------------------
# Kong Enterprise parameters
# -----------------------------------------------------------------------------
# Toggle Kong Enterprise features on or off
# RBAC and SMTP configuration have additional options that must all be set together
# Other settings should be added to the "env" settings below
enterprise:
enabled: false
# Kong Enterprise license secret name
# This secret must contain a single 'license' key, containing your base64-encoded license data
# The license secret is required to unlock all Enterprise features. If you omit it,
# Kong will run in free mode, with some Enterprise features disabled.
# license_secret: kong-enterprise-license
vitals:
enabled: true
portal:
enabled: false
rbac:
enabled: false
admin_gui_auth: basic-auth
# If RBAC is enabled, this Secret must contain an admin_gui_session_conf key
# The key value must be a secret configuration, following the example at
# https://docs.konghq.com/enterprise/latest/kong-manager/authentication/sessions
session_conf_secret: kong-session-config
# If admin_gui_auth is not set to basic-auth, provide a secret name which
# has an admin_gui_auth_conf key containing the plugin config JSON
admin_gui_auth_conf_secret: CHANGEME-admin-gui-auth-conf-secret
# For configuring emails and SMTP, please read through:
# https://docs.konghq.com/enterprise/latest/developer-portal/configuration/smtp
# https://docs.konghq.com/enterprise/latest/kong-manager/networking/email
smtp:
enabled: false
portal_emails_from: none@example.com
portal_emails_reply_to: none@example.com
admin_emails_from: none@example.com
admin_emails_reply_to: none@example.com
smtp_admin_emails: none@example.com
smtp_host: smtp.example.com
smtp_port: 587
smtp_auth_type: ''
smtp_ssl: nil
smtp_starttls: true
auth:
# If your SMTP server does not require authentication, this section can
# be left as-is. If smtp_username is set to anything other than an empty
# string, you must create a Secret with an smtp_password key containing
# your SMTP password and specify its name here.
smtp_username: '' # e.g. postmaster@example.com
smtp_password_secret: CHANGEME-smtp-password
manager:
# Enable creating a Kubernetes service for Kong Manager
enabled: true
type: NodePort
# To specify annotations or labels for the Manager service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
labels: {}
http:
# Enable plaintext HTTP listen for Kong Manager
enabled: true
servicePort: 8002
containerPort: 8002
# Set a nodePort which is available if service type is NodePort
# nodePort: 32080
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters: []
tls:
# Enable HTTPS listen for Kong Manager
enabled: true
servicePort: 8445
containerPort: 8445
# Set a nodePort which is available if service type is NodePort
# nodePort: 32443
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters:
- http2
ingress:
# Enable/disable exposure using ingress.
enabled: false
# TLS secret name.
# tls: kong-proxy.example.com-tls
# Ingress hostname
hostname:
# Map of ingress annotations.
annotations: {}
# Ingress path.
path: /
portal:
# Enable creating a Kubernetes service for the Developer Portal
enabled: true
type: NodePort
# To specify annotations or labels for the Portal service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
labels: {}
http:
# Enable plaintext HTTP listen for the Developer Portal
enabled: true
servicePort: 8003
containerPort: 8003
# Set a nodePort which is available if service type is NodePort
# nodePort: 32080
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters: []
tls:
# Enable HTTPS listen for the Developer Portal
enabled: true
servicePort: 8446
containerPort: 8446
# Set a nodePort which is available if service type is NodePort
# nodePort: 32443
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters:
- http2
ingress:
# Enable/disable exposure using ingress.
enabled: false
# TLS secret name.
# tls: kong-proxy.example.com-tls
# Ingress hostname
hostname:
# Map of ingress annotations.
annotations: {}
# Ingress path.
path: /
portalapi:
# Enable creating a Kubernetes service for the Developer Portal API
enabled: true
type: NodePort
# To specify annotations or labels for the Portal API service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
labels: {}
http:
# Enable plaintext HTTP listen for the Developer Portal API
enabled: true
servicePort: 8004
containerPort: 8004
# Set a nodePort which is available if service type is NodePort
# nodePort: 32080
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters: []
tls:
# Enable HTTPS listen for the Developer Portal API
enabled: true
servicePort: 8447
containerPort: 8447
# Set a nodePort which is available if service type is NodePort
# nodePort: 32443
# Additional listen parameters, e.g. "reuseport", "backlog=16384"
parameters:
- http2
ingress:
# Enable/disable exposure using ingress.
enabled: false
# TLS secret name.
# tls: kong-proxy.example.com-tls
# Ingress hostname
hostname:
# Map of ingress annotations.
annotations: {}
# Ingress path.
path: /
clustertelemetry:
enabled: false
# To specify annotations or labels for the cluster telemetry service, add them to the respective
# "annotations" or "labels" dictionaries below.
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
labels: {}
tls:
enabled: false
servicePort: 8006
containerPort: 8006
parameters: []
type: ClusterIP
extraConfigMaps: []
# extraConfigMaps:
# - name: my-config-map
# mountPath: /mount/to/my/location
# subPath: my-subpath # Optional, if you wish to mount a single key and not the entire ConfigMap
extraSecrets: []
# extraSecrets:
# - name: my-secret
# mountPath: /mount/to/my/location
# subPath: my-subpath # Optional, if you wish to mount a single key and not the entire ConfigMap

View File

@ -1176,6 +1176,32 @@ entries:
urls:
- assets/k8s-triliovault-operator/k8s-triliovault-operator-v2.0.200.tgz
version: v2.0.200
kong:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: The Cloud-Native Ingress and API-management
catalog.cattle.io/release-name: kong
apiVersion: v1
appVersion: "2.5"
created: "2021-08-30T20:35:35.658449-03:00"
dependencies:
- condition: postgresql.enabled
name: postgresql
repository: file://./charts/postgresql
description: The Cloud-Native Ingress and API-management
digest: 16565091fe1bec62f275358cbd9dcd0654feaebfb00291fca389db0b8731b5d4
home: https://konghq.com/
icon: https://s3.amazonaws.com/downloads.kong/universe/assets/icon-kong-inc-large.png
kubeVersion: 1.18 - 1.21
maintainers:
- email: harry@konghq.com
name: hbagdi
- email: traines@konghq.com
name: rainest
name: kong
urls:
- assets/kong/kong-2.3.1.tgz
version: 2.3.1
neuvector:
- annotations:
catalog.cattle.io/certified: partner

View File

@ -0,0 +1,2 @@
workingDir: ""
url: https://charts.bitnami.com/bitnami/postgresql-8.6.8.tgz

View File

@ -0,0 +1,7 @@
# Kong for Kubernetes
[Kong](https://konghq.com) makes connecting APIs and microservices across hybrid or multi-cloud environments easier and faster than ever. We power trillions of API transactions for leading organizations globally through our end-to-end API platform.
Kong Gateway is the world's most popular open source API gateway, built for multi-cloud and hybrid, and optimized for microservices and distributed architectures. It is built on top of a lightweight proxy to deliver unparalleled latency, performance and scalability for all your microservice applications regardless of where they run. It allows you to exercise granular control over your traffic with Kong's plugin architecture.
The Kong Enterprise Service Control Platform brokers an organization's information across all services. Built on top of Kong's battle-tested open source core, Kong Enterprise enables customers to simplify management of APIs and microservices across hybrid-cloud and multi-cloud deployments. With Kong Enterprise, customers can proactively identify anomalies and threats, automate tasks, and improve visibility across their entire organization.

View File

@ -0,0 +1,33 @@
labels:
io.rancher.certified: partner
io.cattle.role: project # options are cluster/project
categories:
- API Gateway
questions:
- variable: admin.enabled
default: "false"
description: "Enable REST Admin API"
label: REST Admin API
type: boolean
show_subquestion_if: true
group: "Admin API"
subquestions:
- variable: admin.type
default: "LoadBalancer"
description: "Kubernetes Service Type"
label: Service Type
type: enum
options:
- ClusterIP
- NodePort
- LoadBalancer
- variable: admin.http.enabled
default: "false"
description: "Enable HTTP for REST Admin API"
label: REST Admin API - HTTP
type: boolean
- variable: proxy.http.enabled
default: "true"
description: "Enable HTTP for Proxy"
label: Proxy - HTTP
type: boolean

View File

@ -0,0 +1,12 @@
--- charts-original/Chart.yaml
+++ charts/Chart.yaml
@@ -10,3 +10,9 @@
name: rainest
name: kong
version: 2.3.0
+kubeVersion: "1.18 - 1.21"
+annotations:
+ catalog.cattle.io/certified: partner
+ catalog.cattle.io/release-name: kong
+ catalog.cattle.io/display-name: The Cloud-Native Ingress and API-management
+

View File

@ -0,0 +1,3 @@
url: https://github.com/Kong/charts/releases/download/kong-2.3.0/kong-2.3.0.tgz
packageVersion: 01