[Percona XtraDB Cluster (PXC)](https://www.percona.com/doc/percona-xtradb-cluster/LATEST/index.html) is a database clustering solution for MySQL. This chart deploys Percona XtraDB Cluster on Kubernetes, managed by the Percona Operator for MySQL.
* [Percona Operator for MySQL](https://hub.helm.sh/charts/percona/pxc-operator) running in your Kubernetes cluster. See installation details [here](https://github.com/percona/percona-helm-charts/tree/main/charts/pxc-operator) or in the [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-pxc/helm.html).
This chart will deploy Percona XtraDB Cluster in Kubernetes. It will create a Custom Resource, and the Operator will trigger the creation of corresponding Kubernetes primitives: StatefulSets, Pods, Secrets, etc.
### Installing the Chart
To install the chart with the `pxc` release name using a dedicated namespace (recommended):
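As a sketch, assuming the chart is installed from Percona's public Helm repository (the repository alias `percona` and chart name `pxc-db` are assumptions; adjust them to your setup):

```bash
# Add the Percona Helm repository (alias "percona" is an assumption)
$ helm repo add percona https://percona.github.io/percona-helm-charts/
$ helm repo update

# Install the chart with release name "pxc" into a dedicated namespace
$ helm install pxc percona/pxc-db --namespace pxc --create-namespace
```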
| `pxc.imagePullPolicy` | The policy used to update images | `` |
| `pxc.autoRecovery` | Enable full cluster crash auto recovery | `true` |
| `pxc.expose.enabled` | Enable or disable exposing `Percona XtraDB Cluster` nodes with dedicated IP addresses | `true` |
| `pxc.expose.type` | The Kubernetes Service Type used for exposure | `LoadBalancer` |
| `pxc.expose.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) | `10.0.0.0/8` |
| `pxc.expose.annotations` | Kubernetes annotations to add to the exposure Service | `{}` |
| `pxc.replicationChannels.name` | Name of the replication channel for cross-site replication | `pxc1_to_pxc2` |
| `pxc.replicationChannels.isSource` | Should the cluster act as Source (true) or Replica (false) in cross-site replication | `false` |
| `pxc.replicationChannels.sourcesList.host` | For the cross-site replication Replica cluster, this key should contain the hostname or IP address of the Source cluster | `10.95.251.101` |
| `pxc.replicationChannels.sourcesList.port` | For the cross-site replication Replica cluster, this key should contain the Source port number | `3306` |
| `pxc.replicationChannels.sourcesList.weight`| For the cross-site replication Replica cluster, this key should contain the Source cluster weight | `100` |
| `pxc.affinity.antiAffinityTopologyKey` | PXC Pods simple scheduling restriction on/off for host, zone, region | `"kubernetes.io/hostname"` |
| `pxc.affinity.advanced` | PXC Pods advanced scheduling restriction with match expression engine | `{}` |
| `pxc.tolerations` | List of node taints to tolerate for PXC Pods | `[]` |
| `pxc.gracePeriod` | Allowed time for graceful shutdown | `600` |
| `pxc.podDisruptionBudget.maxUnavailable` | The maximum number of PXC Pods that can be unavailable at a time (Kubernetes PodDisruptionBudget) | `1` |
| `pxc.persistence.enabled` | Requests a persistent storage (`hostPath` or `storageClass`) from K8S for PXC Pods datadir | `true` |
| `pxc.persistence.hostPath` | Sets datadir path on K8S node for all PXC Pods. Available only when `pxc.persistence.enabled: true` | |
| `pxc.persistence.storageClass` | Sets K8S storageClass name for all PXC Pods PVC. Available only when `pxc.persistence.enabled: true` | `-` |
| `pxc.persistence.accessMode` | Sets K8S persistent storage access policy for all PXC Pods | `ReadWriteOnce` |
| `pxc.persistence.size` | Sets K8S persistent storage size for all PXC Pods | `8Gi` |
| `pxc.disableTLS` | Disable PXC Pod communication with TLS | `false` |
| `pxc.certManager` | Enable this option if you want the operator to request certificates from `cert-manager` | `false` |
| `pxc.readinessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `5` |
| `pxc.readinessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `15` |
| `pxc.readinessProbes.periodSeconds` | How often (in seconds) to perform the probe | `30` |
| `pxc.readinessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `pxc.readinessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `15` |
| `pxc.livenessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `3` |
| `pxc.livenessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `300` |
| `pxc.livenessProbes.periodSeconds` | How often (in seconds) to perform the probe | `10` |
| `pxc.livenessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `pxc.livenessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `pxc.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| `pxc.podSecurityContext` | A custom Kubernetes Security Context for a Pod to be used instead of the default one | `{}` |
| `haproxy.enabled` | Use HAProxy as TCP proxy for PXC cluster | `true` |
| `haproxy.size` | HAProxy target Pod quantity. Must be an odd number unless `allowUnsafeConfigurations` is `true` | `3` |
| `haproxy.affinity.antiAffinityTopologyKey` | HAProxy Pods simple scheduling restriction on/off for host, zone, region | `"kubernetes.io/hostname"` |
| `haproxy.affinity.advanced` | HAProxy Pods advanced scheduling restriction with match expression engine | `{}` |
| `haproxy.tolerations` | List of node taints to tolerate for HAProxy Pods | `[]` |
| `haproxy.gracePeriod` | Allowed time for graceful shutdown | `600` |
| `haproxy.podDisruptionBudget.maxUnavailable` | The maximum number of HAProxy Pods that can be unavailable at a time (Kubernetes PodDisruptionBudget) | `1` |
| `haproxy.readinessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `5` |
| `haproxy.readinessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `15` |
| `haproxy.readinessProbes.periodSeconds` | How often (in seconds) to perform the probe | `30` |
| `haproxy.readinessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `haproxy.readinessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `15` |
| `haproxy.livenessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `3` |
| `haproxy.livenessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `300` |
| `haproxy.livenessProbes.periodSeconds` | How often (in seconds) to perform the probe | `10` |
| `haproxy.livenessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `haproxy.livenessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `haproxy.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| `haproxy.podSecurityContext` | A custom Kubernetes Security Context for a Pod to be used instead of the default one | `{}` |
| `proxysql.enabled` | Use ProxySQL as TCP proxy for PXC cluster | `false` |
| `proxysql.size` | ProxySQL target Pod quantity. Must be an odd number unless `allowUnsafeConfigurations` is `true` | `3` |
| `proxysql.affinity.antiAffinityTopologyKey` | ProxySQL Pods simple scheduling restriction on/off for host, zone, region | `"kubernetes.io/hostname"` |
| `proxysql.affinity.advanced` | ProxySQL Pods advanced scheduling restriction with match expression engine | `{}` |
| `proxysql.tolerations` | List of node taints to tolerate for ProxySQL Pods | `[]` |
| `proxysql.gracePeriod` | Allowed time for graceful shutdown | `600` |
| `proxysql.podDisruptionBudget.maxUnavailable` | The maximum number of ProxySQL Pods that can be unavailable at a time (Kubernetes PodDisruptionBudget) | `1` |
| `proxysql.persistence.enabled` | Requests a persistent storage (`hostPath` or `storageClass`) from K8S for ProxySQL Pods | `true` |
| `proxysql.persistence.hostPath` | Sets datadir path on K8S node for all ProxySQL Pods. Available only when `proxysql.persistence.enabled: true` | |
| `proxysql.persistence.storageClass` | Sets K8S storageClass name for all ProxySQL Pods PVC. Available only when `proxysql.persistence.enabled: true` | `-` |
| `proxysql.persistence.accessMode` | Sets K8S persistent storage access policy for all ProxySQL Pods | `ReadWriteOnce` |
| `proxysql.persistence.size` | Sets K8S persistent storage size for all ProxySQL Pods | `8Gi` |
| `proxysql.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| `proxysql.podSecurityContext` | A custom Kubernetes Security Context for a Pod to be used instead of the default one | `{}` |
| `backup.storages.fs-pvc` | Backups storage configuration, where `storages:` is a high-level key for the underlying structure. `fs-pvc` is a user-defined storage name. | |
| `backup.storages.fs-pvc.type` | Backup storage type | `filesystem` |
| `backup.storages.fs-pvc.verifyTLS` | Enable or disable verification of the storage server TLS certificate | `true` |
| `secrets.passwords.root` | Default password for the `root` user | `insecure-root-password` |
| `secrets.passwords.xtrabackup` | Default password for the `xtrabackup` user | `insecure-xtrabackup-password` |
| `secrets.passwords.monitor` | Default password for the `monitor` user | `insecure-monitor-password` |
| `secrets.passwords.clustercheck` | Default password for the `clustercheck` user | `insecure-clustercheck-password` |
| `secrets.passwords.proxyadmin` | Default password for the `proxyadmin` user | `insecure-proxyadmin-password` |
| `secrets.passwords.pmmserver` | Default password for the `pmmserver` user | `insecure-pmmserver-password` |
| `secrets.passwords.pmmserverkey` | PMM server API key | `` |
| `secrets.passwords.operator` | Default password for the `operator` user | `insecure-operator-password` |
| `secrets.passwords.replication` | Default password for the `replication` user | `insecure-replication-password` |
| `secrets.tls.cluster` | Specify the secret name for TLS. Not needed if you're using cert-manager. The secret is expected to contain the keys `ca.crt`, `tls.crt`, and `tls.key` with base64-encoded file contents. | `` |
| `secrets.tls.internal` | Specify internal secret name for TLS. | `` |
| `secrets.logCollector` | Specify secret name used for Fluent Bit Log Collector | `` |
| `secrets.vault` | Specify the secret name used for HashiCorp Vault to enable Data-at-Rest Encryption | `` |
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
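For example, overriding a few of the parameters from the table above (the values shown are purely illustrative):

```bash
$ helm install pxc --namespace pxc . \
    --set pxc.persistence.size=20Gi \
    --set haproxy.size=3 \
    --set pxc.expose.enabled=false
```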
## Examples
### Deploy a Cluster without a MySQL Proxy, no backups, no persistent disks
This is great for a dev cluster as it doesn't require a persistent disk and doesn't bother with a proxy, backups, or TLS.
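A sketch of such an install, using parameters from the tables above (the key for disabling backups, `backup.enabled`, is an assumption and may differ by chart version):

```bash
$ helm install dev --namespace pxc . \
    --set pxc.persistence.enabled=false \
    --set haproxy.enabled=false \
    --set backup.enabled=false \
    --set pxc.disableTLS=true
```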
### Deploy a cluster with certificates provided by Cert Manager
First you need a working cert-manager installed with appropriate Issuers set up. Check out the [JetStack Helm Chart](https://hub.helm.sh/charts/jetstack/cert-manager) to do that.
By setting `pxc.certManager=true` we're signaling the Helm chart not to create secrets, which in turn lets the Operator know to request the appropriate `Certificate` resources to be filled by cert-manager.
```bash
$ helm install dev --namespace pxc . --set pxc.certManager=true
```
### Deploy a production grade cluster
The pxc-database chart contains an example production values file that should set you well on your path to running a production database. It is not fully production grade: to be truly production ready you still need to provide your own secrets for passwords and TLS, but the file includes comments on how to do those parts.
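A sketch of such an install, assuming the example values file is named `production-values.yaml` (the filename is an assumption; check the chart sources for the actual name):

```bash
$ helm install prod --namespace pxc . -f production-values.yaml
```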