Added chart versions:

  percona/pxc-db:
    - 1.15.0
  percona/pxc-operator:
    - 1.15.0
pull/1059/head
github-actions[bot] 2024-08-22 00:50:55 +00:00
parent 699949439f
commit ae45d53756
25 changed files with 13594 additions and 1 deletion

@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,21 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona XtraDB Cluster
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: pxc-db
apiVersion: v2
appVersion: 1.15.0
description: A Helm chart for installing Percona XtraDB Cluster Databases using the
PXC Operator.
home: https://www.percona.com/doc/kubernetes-operator-for-pxc/kubernetes.html
icon: file://assets/icons/pxc-db.png
kubeVersion: '>=1.21-0'
maintainers:
- email: tomislav.plavcic@percona.com
name: tplavcic
- email: sergey.pronin@percona.com
name: spron-in
- email: natalia.marukovich@percona.com
name: nmarukovich
name: pxc-db
version: 1.15.0


@@ -0,0 +1,330 @@
# Percona XtraDB Cluster
[Percona XtraDB Cluster (PXC)](https://www.percona.com/doc/percona-xtradb-cluster/LATEST/index.html) is a database clustering solution for MySQL. This chart deploys Percona XtraDB Cluster on Kubernetes controlled by Percona Operator for MySQL.
Useful links:
* [Operator Github repository](https://github.com/percona/percona-xtradb-cluster-operator)
* [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-pxc/index.html)
## Prerequisites
* [Percona Operator for MySQL](https://hub.helm.sh/charts/percona/pxc-operator) running in your Kubernetes cluster. See installation details [here](https://github.com/percona/percona-helm-charts/tree/main/charts/pxc-operator) or in the [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-pxc/helm.html).
* Kubernetes 1.28+
* Helm v3
## Chart Details
This chart will deploy Percona XtraDB Cluster in Kubernetes. It will create a Custom Resource, and the Operator will trigger the creation of corresponding Kubernetes primitives: StatefulSets, Pods, Secrets, etc.
### Installing the Chart
To install the chart with the `my-db` release name into a dedicated namespace (recommended):
```sh
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install my-db percona/pxc-db --version 1.15.0 --namespace my-namespace --create-namespace
```
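After installation, you can watch the database cluster initialize (a minimal check, assuming the `my-namespace` namespace from above; `pxc` is the short name the operator's CRD registers for `PerconaXtraDBCluster` resources):
```bash
kubectl get pxc -n my-namespace --watch
kubectl get pods -n my-namespace
```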
The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | ------------------------------------ |
| `crVersion` | Version of the Operator the Custom Resource belongs to | `1.15.0` |
| `ignoreAnnotations` | The Operator will not remove the following annotations | `[]` |
| `ignoreLabels` | The Operator will not remove the following labels | `[]` |
| `pause` | Stop PXC Database safely | `false` |
| `unsafeFlags.tls` | Allows users to configure a cluster without TLS/SSL certificates | `false` |
| `unsafeFlags.pxcSize` | Allows users to configure a cluster with less than 3 Percona XtraDB Cluster instances | `false` |
| `unsafeFlags.proxySize` | Allows users to configure a cluster with less than 2 ProxySQL or HAProxy Pods | `false` |
| `unsafeFlags.backupIfUnhealthy` | Allows running a backup even if the cluster status is not `ready` | `false` |
| `enableCRValidationWebhook` | Enables or disables schema validation before applying custom resource | `false` |
| `initContainer.image` | An alternative image for the initial Operator installation | `""` |
| `initContainer.resources.requests` | Init container resource requests | `{}` |
| `initContainer.resources.limits` | Init container resource limits | `{}` |
| `updateStrategy` | Regulates how PXC Cluster Pods will be updated after setting a new image | `SmartUpdate` |
| `upgradeOptions.versionServiceEndpoint` | Endpoint for actual PXC Versions provider | `https://check.percona.com/versions` |
| `upgradeOptions.apply` | PXC image to apply from version service - `recommended`, `latest`, actual version like `8.0.19-10.1` | `disabled` |
| `upgradeOptions.schedule` | Cron formatted time to execute the update | `"0 4 * * *"` |
| `finalizers:percona.com/delete-pxc-pods-in-order` | Set this if you want to delete PXC pods in order on cluster deletion | [] |
| `finalizers:percona.com/delete-proxysql-pvc` | Set this if you want to delete proxysql persistent volumes on cluster deletion | [] |
| `finalizers:percona.com/delete-pxc-pvc` | Set this if you want to delete database persistent volumes on cluster deletion | [] |
| `finalizers:percona.com/delete-ssl` | Deletes objects created for SSL (Secret, certificate, and issuer) after the cluster deletion | [] |
| `annotations` | PerconaXtraDBCluster custom resource annotations | {} |
| | | |
| `tls.enabled` | Enable PXC Pod communication with TLS | `true` |
| `tls.SANs` | Additional domains (SAN) to be added to the TLS certificate within the extended cert-manager configuration | `[]` |
| `tls.issuerConf.name` | A cert-manager issuer name | `""` |
| `tls.issuerConf.kind` | A cert-manager issuer type | `""` |
| `tls.issuerConf.group` | A cert-manager issuer group | `""` |
| | | |
| `pxc.size` | PXC Cluster target member (pod) quantity. Must be an odd number of at least 3 unless `unsafeFlags.pxcSize` is `true` | `3` |
| `pxc.clusterSecretName` | Specify if you want to use a custom users secret; the Operator generates one if the specified secret doesn't exist | `` |
| `pxc.image.repository` | PXC Container image repository | `percona/percona-xtradb-cluster` |
| `pxc.image.tag` | PXC Container image tag | `8.0.36-28.1` |
| `pxc.imagePullPolicy` | The policy used to update images | `` |
| `pxc.autoRecovery` | Enable full cluster crash auto recovery | `true` |
| `pxc.expose.enabled` | Enable or disable exposing `Percona XtraDB Cluster` nodes with dedicated IP addresses | `true` |
| `pxc.expose.type` | The Kubernetes Service Type used for exposure | `LoadBalancer` |
| `pxc.expose.externalTrafficPolicy` | Specifies whether Service for Percona XtraDB Cluster should [route external traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) (it can influence the load balancing effectiveness) | `""` |
| `pxc.expose.internalTrafficPolicy` | Specifies whether Service for Percona XtraDB Cluster should [route internal traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/) (it can influence the load balancing effectiveness) | `""` |
| `pxc.expose.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) | `[]` |
| `pxc.expose.loadBalancerIP` | The static IP-address for the load balancer | `""` |
| `pxc.expose.annotations` | The Kubernetes annotations for exposed service | `{}` |
| `pxc.expose.labels` | The Kubernetes labels for exposed service | `{}` |
| `pxc.replicationChannels.name` | Name of the replication channel for cross-site replication | `pxc1_to_pxc2` |
| `pxc.replicationChannels.isSource` | Should the cluster act as Source (true) or Replica (false) in cross-site replication | `false` |
| `pxc.replicationChannels.sourcesList.host` | For the cross-site replication Replica cluster, this key should contain the hostname or IP address of the Source cluster | `10.95.251.101` |
| `pxc.replicationChannels.sourcesList.port` | For the cross-site replication Replica cluster, this key should contain the Source port number | `3306` |
| `pxc.replicationChannels.sourcesList.weight` | For the cross-site replication Replica cluster, this key should contain the Source cluster weight | `100` |
| `pxc.imagePullSecrets` | PXC Container pull secret | `[]` |
| `pxc.annotations` | PXC Pod user-defined annotations | `{}` |
| `pxc.priorityClassName` | PXC Pod priority Class defined by user | |
| `pxc.runtimeClassName` | Name of the Kubernetes Runtime Class for PXC Pods | |
| `pxc.labels` | PXC Pod user-defined labels | `{}` |
| `pxc.schedulerName` | The Kubernetes Scheduler | |
| `pxc.readinessDelaySec` | PXC Pod delay for readiness probe in seconds | `15` |
| `pxc.livenessDelaySec` | PXC Pod delay for liveness probe in seconds | `300` |
| `pxc.configuration` | User defined MySQL options according to MySQL configuration file syntax | `` |
| `pxc.envVarsSecret` | A secret with environment variables | `` |
| `pxc.resources.requests` | PXC Pods resource requests | `{"memory": "1G", "cpu": "600m"}` |
| `pxc.resources.limits` | PXC Pods resource limits | `{}` |
| `pxc.sidecars` | PXC Pods sidecars | `[]` |
| `pxc.sidecarVolumes` | PXC Pods sidecarVolumes | `[]` |
| `pxc.sidecarPVCs` | PXC Pods sidecar PVCs | `[]` |
| `pxc.sidecarResources.requests` | PXC sidecar resource requests | `{}` |
| `pxc.sidecarResources.limits` | PXC sidecar resource limits | `{}` |
| `pxc.nodeSelector` | PXC Pods key-value pairs for K8S node assignment | `{}` |
| `pxc.topologySpreadConstraints` | The Label selector for the [Kubernetes Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) | `[]` |
| `pxc.affinity.antiAffinityTopologyKey` | PXC Pods simple scheduling restriction on/off for host, zone, region | `"kubernetes.io/hostname"` |
| `pxc.affinity.advanced` | PXC Pods advanced scheduling restriction with match expression engine | `{}` |
| `pxc.tolerations` | List of node taints to tolerate for PXC Pods | `[]` |
| `pxc.gracePeriod` | Allowed time for graceful shutdown | `600` |
| `pxc.lifecycle.preStop.exec.command` | Command for the [preStop lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) for Percona XtraDB Cluster Pods | `""` |
| `pxc.lifecycle.postStart.exec.command` | Command for the [postStart lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) for Percona XtraDB Cluster Pods | `""` |
| `pxc.podDisruptionBudget.maxUnavailable` | Instructs Kubernetes on the allowed number of unavailable Pods | `1` |
| `pxc.persistence.enabled` | Requests a persistent storage (`hostPath` or `storageClass`) from K8S for PXC Pods datadir | `true` |
| `pxc.persistence.hostPath` | Sets datadir path on K8S node for all PXC Pods. Available only when `pxc.persistence.enabled: true` | |
| `pxc.persistence.storageClass` | Sets K8S storageClass name for all PXC Pods PVC. Available only when `pxc.persistence.enabled: true` | `-` |
| `pxc.persistence.accessMode` | Sets K8S persistent storage access policy for all PXC Pods | `ReadWriteOnce` |
| `pxc.persistence.dataSource.name` | The name of PVC used as a data source to [create the Percona XtraDB Cluster Volumes by cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/). | `` |
| `pxc.persistence.dataSource.kind` | The [Kubernetes DataSource type](https://kubernetes-csi.github.io/docs/volume-datasources.html#supported-datasources). | `` |
| `pxc.persistence.dataSource.apiGroup` | The [Kubernetes API group](https://kubernetes.io/docs/reference/using-api/#api-groups) to use for [PVC Data Source](https://kubernetes-csi.github.io/docs/volume-datasources.html). | `` |
| `pxc.persistence.size` | Sets K8S persistent storage size for all PXC Pods | `8Gi` |
| `pxc.certManager` | Enable this option if you want the operator to request certificates from `cert-manager` | `false` |
| `pxc.readinessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `5` |
| `pxc.readinessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `15` |
| `pxc.readinessProbes.periodSeconds` | How often (in seconds) to perform the probe | `30` |
| `pxc.readinessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `pxc.readinessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `15` |
| `pxc.livenessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `3` |
| `pxc.livenessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `300` |
| `pxc.livenessProbes.periodSeconds` | How often (in seconds) to perform the probe | `10` |
| `pxc.livenessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `pxc.livenessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `pxc.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| `pxc.podSecurityContext` | A custom Kubernetes Security Context for a Pod to be used instead of the default one | `{}` |
| | | |
| `haproxy.enabled` | Use HAProxy as TCP proxy for PXC cluster | `true` |
| `haproxy.size` | HAProxy target pod quantity. Must be at least 2 unless `unsafeFlags.proxySize` is `true` | `3` |
| `haproxy.image` | HAProxy Container image repository | `percona/haproxy:2.8.5` |
| `haproxy.imagePullPolicy` | The policy used to update images | `` |
| `haproxy.imagePullSecrets` | HAProxy Container pull secret | `[]` |
| `haproxy.configuration` | User defined HAProxy options according to HAProxy configuration file syntax | `` |
| `haproxy.priorityClassName` | HAProxy Pod priority Class defined by user | |
| `haproxy.runtimeClassName` | Name of the Kubernetes Runtime Class for HAProxy Pods | |
| `haproxy.exposePrimary.enabled` | Enable or disable exposing `HAProxy` nodes with dedicated IP addresses | `true` |
| `haproxy.exposePrimary.type` | The Kubernetes Service Type used for exposure | `LoadBalancer` |
| `haproxy.exposePrimary.externalTrafficPolicy` | Specifies whether Service for HAProxy primary should [route external traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) (it can influence the load balancing effectiveness) | `""` |
| `haproxy.exposePrimary.internalTrafficPolicy` | Specifies whether Service for HAProxy primary should [route internal traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/) (it can influence the load balancing effectiveness) | `""` |
| `haproxy.exposePrimary.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) | `[]` |
| `haproxy.exposePrimary.loadBalancerIP` | The static IP-address for the load balancer | `""` |
| `haproxy.exposePrimary.annotations` | The Kubernetes annotations for exposed service | `{}` |
| `haproxy.exposePrimary.labels` | The Kubernetes labels for exposed service | `{}` |
| `haproxy.exposeReplicas.enabled` | Enables or disables the `haproxy-replicas` Service. By default this Service forwards requests to all Percona XtraDB Cluster instances, and it **should not be used for write requests**! | `true` |
| `haproxy.exposeReplicas.onlyReaders` | Setting it to `true` excludes the current MySQL primary instance (writer) from the list of Pods to which the `haproxy-replicas` Service directs connections, leaving only the reader instances. | `false` |
| `haproxy.exposeReplicas.type` | The Kubernetes Service Type used for exposure | `LoadBalancer` |
| `haproxy.exposeReplicas.externalTrafficPolicy` | Specifies whether Service for HAProxy replicas should [route external traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) (it can influence the load balancing effectiveness) | `""` |
| `haproxy.exposeReplicas.internalTrafficPolicy` | Specifies whether Service for HAProxy replicas should [route internal traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/) (it can influence the load balancing effectiveness) | `""` |
| `haproxy.exposeReplicas.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) | `[]` |
| `haproxy.exposeReplicas.loadBalancerIP` | The static IP-address for the load balancer | `""` |
| `haproxy.exposeReplicas.annotations` | The Kubernetes annotations for exposed service | `{}` |
| `haproxy.exposeReplicas.labels` | The Kubernetes labels for exposed service | `{}` |
| `haproxy.annotations` | HAProxy Pod user-defined annotations | `{}` |
| `haproxy.labels` | HAProxy Pod user-defined labels | `{}` |
| `haproxy.schedulerName` | The Kubernetes Scheduler | |
| `haproxy.readinessDelaySec` | HAProxy Pod delay for readiness probe in seconds | `15` |
| `haproxy.livenessDelaySec` | HAProxy Pod delay for liveness probe in seconds | `300` |
| `haproxy.envVarsSecret` | A secret with environment variables | `` |
| `haproxy.resources.requests` | HAProxy Pods resource requests | `{"memory": "1G", "cpu": "600m"}` |
| `haproxy.resources.limits` | HAProxy Pods resource limits | `{}` |
| `haproxy.sidecars` | HAProxy Pods sidecars | `[]` |
| `haproxy.sidecarVolumes` | HAProxy Pods sidecarVolumes | `[]` |
| `haproxy.sidecarPVCs` | HAProxy Pods sidecar PVCs | `[]` |
| `haproxy.sidecarResources.requests` | HAProxy sidecar resource requests | `{}` |
| `haproxy.sidecarResources.limits` | HAProxy sidecar resource limits | `{}` |
| `haproxy.nodeSelector` | HAProxy Pods key-value pairs for K8S node assignment | `{}` |
| `haproxy.topologySpreadConstraints` | The Label selector for the [Kubernetes Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) | `[]` |
| `haproxy.affinity.antiAffinityTopologyKey` | HAProxy Pods simple scheduling restriction on/off for host, zone, region | `"kubernetes.io/hostname"` |
| `haproxy.affinity.advanced` | HAProxy Pods advanced scheduling restriction with match expression engine | `{}` |
| `haproxy.tolerations` | List of node taints to tolerate for HAProxy Pods | `[]` |
| `haproxy.gracePeriod` | Allowed time for graceful shutdown | `600` |
| `haproxy.lifecycle.preStop.exec.command` | Command for the [preStop lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) for HAProxy Pods | `""` |
| `haproxy.lifecycle.postStart.exec.command` | Command for the [postStart lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) for HAProxy Pods | `""` |
| `haproxy.podDisruptionBudget.maxUnavailable` | Instructs Kubernetes on the allowed number of unavailable Pods | `1` |
| `haproxy.readinessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `5` |
| `haproxy.readinessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `15` |
| `haproxy.readinessProbes.periodSeconds` | How often (in seconds) to perform the probe | `30` |
| `haproxy.readinessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `haproxy.readinessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `15` |
| `haproxy.livenessProbes.failureThreshold` | When a probe fails, Kubernetes will try failureThreshold times before giving up | `3` |
| `haproxy.livenessProbes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `300` |
| `haproxy.livenessProbes.periodSeconds` | How often (in seconds) to perform the probe | `10` |
| `haproxy.livenessProbes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `haproxy.livenessProbes.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `haproxy.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| `haproxy.podSecurityContext` | A custom Kubernetes Security Context for a Pod to be used instead of the default one | `{}` |
| | | |
| `proxysql.enabled` | Use ProxySQL as TCP proxy for PXC cluster | `false` |
| `proxysql.size` | ProxySQL target pod quantity. Must be at least 2 unless `unsafeFlags.proxySize` is `true` | `3` |
| `proxysql.image` | ProxySQL Container image | `percona/proxysql2:2.5.5` |
| `proxysql.imagePullPolicy` | The policy used to update images | `` |
| `proxysql.imagePullSecrets` | ProxySQL Container pull secret | `[]` |
| `proxysql.configuration` | User defined ProxySQL options according to ProxySQL configuration file syntax | `` |
| `proxysql.priorityClassName` | ProxySQL Pod priority Class defined by user | |
| `proxysql.runtimeClassName` | Name of the Kubernetes Runtime Class for ProxySQL Pods | |
| `proxysql.expose.enabled` | Enable or disable exposing `ProxySQL` nodes with dedicated IP addresses | `true` |
| `proxysql.expose.type` | The Kubernetes Service Type used for exposure | `LoadBalancer` |
| `proxysql.expose.externalTrafficPolicy` | Specifies whether Service for ProxySQL nodes should [route external traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) (it can influence the load balancing effectiveness) | `""` |
| `proxysql.expose.internalTrafficPolicy` | Specifies whether Service for ProxySQL nodes should [route internal traffic to cluster-wide or to node-local endpoints](https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/) (it can influence the load balancing effectiveness) | `""` |
| `proxysql.expose.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) | `[]` |
| `proxysql.expose.loadBalancerIP` | The static IP-address for the load balancer | `""` |
| `proxysql.expose.annotations` | The Kubernetes annotations for exposed service | `{}` |
| `proxysql.expose.labels` | The Kubernetes labels for exposed service | `{}` |
| `proxysql.annotations` | ProxySQL Pod user-defined annotations | `{}` |
| `proxysql.labels` | ProxySQL Pod user-defined labels | `{}` |
| `proxysql.schedulerName` | The Kubernetes Scheduler | |
| `proxysql.readinessDelaySec` | ProxySQL Pod delay for readiness probe in seconds | `15` |
| `proxysql.livenessDelaySec` | ProxySQL Pod delay for liveness probe in seconds | `300` |
| `proxysql.envVarsSecret` | A secret with environment variables | `` |
| `proxysql.resources.requests` | ProxySQL Pods resource requests | `{"memory": "1G", "cpu": "600m"}` |
| `proxysql.resources.limits` | ProxySQL Pods resource limits | `{}` |
| `proxysql.sidecars` | ProxySQL Pods sidecars | `[]` |
| `proxysql.sidecarVolumes` | ProxySQL Pods sidecarVolumes | `[]` |
| `proxysql.sidecarPVCs` | ProxySQL Pods sidecar PVCs | `[]` |
| `proxysql.sidecarResources.requests` | ProxySQL sidecar resource requests | `{}` |
| `proxysql.sidecarResources.limits` | ProxySQL sidecar resource limits | `{}` |
| `proxysql.nodeSelector` | ProxySQL Pods key-value pairs for K8S node assignment | `{}` |
| `proxysql.topologySpreadConstraints` | The Label selector for the [Kubernetes Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) | `[]` |
| `proxysql.affinity.antiAffinityTopologyKey` | ProxySQL Pods simple scheduling restriction on/off for host, zone, region | `"kubernetes.io/hostname"` |
| `proxysql.affinity.advanced` | ProxySQL Pods advanced scheduling restriction with match expression engine | `{}` |
| `proxysql.tolerations` | List of node taints to tolerate for ProxySQL Pods | `[]` |
| `proxysql.gracePeriod` | Allowed time for graceful shutdown | `600` |
| `proxysql.lifecycle.preStop.exec.command` | Command for the [preStop lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) for ProxySQL Pods | `""` |
| `proxysql.lifecycle.postStart.exec.command` | Command for the [postStart lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) for ProxySQL Pods | `""` |
| `proxysql.podDisruptionBudget.maxUnavailable` | Instructs Kubernetes on the allowed number of unavailable Pods | `1` |
| `proxysql.persistence.enabled` | Requests a persistent storage (`hostPath` or `storageClass`) from K8S for ProxySQL Pods | `true` |
| `proxysql.persistence.hostPath` | Sets datadir path on K8S node for all ProxySQL Pods. Available only when `proxysql.persistence.enabled: true` | |
| `proxysql.persistence.storageClass` | Sets K8S storageClass name for all ProxySQL Pods PVC. Available only when `proxysql.persistence.enabled: true` | `-` |
| `proxysql.persistence.accessMode` | Sets K8S persistent storage access policy for all ProxySQL Pods | `ReadWriteOnce` |
| `proxysql.persistence.size` | Sets K8S persistent storage size for all ProxySQL Pods | `8Gi` |
| `proxysql.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| `proxysql.podSecurityContext` | A custom Kubernetes Security Context for a Pod to be used instead of the default one | `{}` |
| | | |
| `logcollector.enabled` | Enable log collector container | `true` |
| `logcollector.image` | Log collector image repository | `percona/percona-xtradb-cluster-operator:1.15.0-logcollector-fluentbit3.1.4` |
| `logcollector.imagePullSecrets` | Log collector pull secret | `[]` |
| `logcollector.imagePullPolicy` | The policy used to update images | `` |
| `logcollector.configuration` | User defined configuration for logcollector | `` |
| `logcollector.resources.requests` | Log collector resource requests | `{"memory": "100M", "cpu": "200m"}` |
| `logcollector.resources.limits` | Log collector resource limits | `{}` |
| `logcollector.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| | | |
| `pmm.enabled` | Enable integration with [Percona Monitoring and Management software](https://www.percona.com/doc/kubernetes-operator-for-pxc/monitoring.html) | `false` |
| `pmm.image.repository` | PMM Container image repository | `percona/pmm-client` |
| `pmm.image.tag` | PMM Container image tag | `2.42.0` |
| `pmm.imagePullSecrets` | PMM Container pull secret | `[]` |
| `pmm.imagePullPolicy` | The policy used to update images | `` |
| `pmm.serverHost` | PMM server related K8S service hostname | `monitoring-service` |
| `pmm.serverUser` | Username for accessing PXC database internals | `admin` |
| `pmm.resources.requests` | PMM Container resource requests | `{"memory": "150M", "cpu": "300m"}` |
| `pmm.resources.limits` | PMM Container resource limits | `{}` |
| `pmm.pxcParams` | Additional parameters which will be passed to the [pmm-admin add mysql](https://docs.percona.com/percona-monitoring-and-management/setting-up/client/mysql.html#add-service) command for `pxc` Pods | `""` |
| `pmm.proxysqlParams` | Additional parameters which will be passed to the [pmm-admin add proxysql](https://docs.percona.com/percona-monitoring-and-management/setting-up/client/proxysql.html) command for `proxysql` Pods | `""` |
| `pmm.containerSecurityContext` | A custom Kubernetes Security Context for a Container to be used instead of the default one | `{}` |
| | | |
| `backup.enabled` | Enables backups for PXC cluster | `true` |
| `backup.allowParallel` | Allow taking multiple backups in parallel | `true` |
| `backup.image.repository` | Backup Container image | `percona/percona-xtradb-cluster-operator` |
| `backup.image.tag` | Backup Container tag | `1.15.0-pxc8.0-backup-pxb8.0.35` |
| `backup.backoffLimit` | The number of retries to make a backup | `10` |
| `backup.imagePullSecrets` | Backup Container pull secret | `[]` |
| `backup.imagePullPolicy` | The policy used to update images | `` |
| `backup.pitr.enabled` | Enable point in time recovery | `false` |
| `backup.pitr.storageName` | Storage name for PITR | `s3-us-west-binlogs` |
| `backup.pitr.timeBetweenUploads` | Time between uploads for PITR | `60` |
| `backup.pitr.timeoutSeconds` | Timeout in seconds for the binlog to be uploaded; the binlog uploader container will be restarted after exceeding this timeout | `60` |
| `backup.pitr.resources.requests` | PITR Container resource requests | `{}` |
| `backup.pitr.resources.limits` | PITR Container resource limits | `{}` |
| `backup.storages.fs-pvc` | Backups storage configuration, where `storages:` is a high-level key for the underlying structure. `fs-pvc` is a user-defined storage name. | |
| `backup.storages.fs-pvc.type` | Backup storage type | `filesystem` |
| `backup.storages.fs-pvc.verifyTLS` | Enable or disable verification of the storage server TLS certificate | `true` |
| `backup.storages.fs-pvc.volume.persistentVolumeClaim.accessModes` | Backup PVC access policy | `["ReadWriteOnce"]` |
| `backup.storages.fs-pvc.volume.persistentVolumeClaim.resources` | Backup Pod resources specification | `{}` |
| `backup.storages.fs-pvc.volume.persistentVolumeClaim.resources.requests.storage` | Backup Pod datadir backups size | `6Gi` |
| `backup.storages.fs-pvc.topologySpreadConstraints` | The Label selector for the [Kubernetes Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) | `[]` |
| `backup.storages.fs-pvc.containerOptions.env` | Environment variables to add to the backup container | `[]` |
| `backup.storages.fs-pvc.containerOptions.args.xtrabackup` | Additional arguments for xtrabackup | `[]` |
| `backup.storages.fs-pvc.containerOptions.args.xbstream` | Additional arguments for xbstream | `[]` |
| `backup.storages.fs-pvc.containerOptions.args.xbcloud` | Additional arguments for xbcloud | `[]` |
| `backup.schedule` | Backup execution timetable | `[]` |
| `backup.schedule.0.name` | Backup execution timetable name | `daily-backup` |
| `backup.schedule.0.schedule` | Backup execution timetable cron timing | `0 0 * * *` |
| `backup.schedule.0.keep` | Backup items to keep | `5` |
| `backup.schedule.0.storageName` | Backup target storage | `fs-pvc` |
| | | |
| `secrets.passwords.root` | Default password for the `root` user | `insecure-root-password` |
| `secrets.passwords.xtrabackup` | Default password for the `xtrabackup` user | `insecure-xtrabackup-password` |
| `secrets.passwords.monitor` | Default password for the `monitor` user | `insecure-monitor-password` |
| `secrets.passwords.clustercheck` | Default password for the `clustercheck` user | `insecure-clustercheck-password` |
| `secrets.passwords.proxyadmin` | Default password for the `proxyadmin` user | `insecure-proxyadmin-password` |
| `secrets.passwords.pmmserver` | Default password for the `pmmserver` user | `insecure-pmmserver-password` |
| `secrets.passwords.pmmserverkey` | PMM server API key | `` |
| `secrets.passwords.operator` | Default password for the `operator` user | `insecure-operator-password` |
| `secrets.passwords.replication` | Default password for the `replication` user | `insecure-replication-password` |
| `secrets.tls.cluster` | Specify the secret name for TLS. Not needed if you're using cert-manager. The structure expects the keys `ca.crt`, `tls.crt`, and `tls.key`, with file contents base64-encoded. | `` |
| `secrets.tls.internal` | Specify the internal secret name for TLS. | `` |
| `secrets.logCollector` | Specify the secret name used for the Fluent Bit Log Collector | `` |
| `secrets.vault` | Specify the secret name used for HashiCorp Vault to enable data-at-rest encryption | `` |
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
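Alternatively, you can keep overrides in a values file and pass it with `-f`/`--values`. A minimal sketch (the file name and the chosen overrides are illustrative; all keys come from the table above):
```bash
cat > my-values.yaml <<'EOF'
pxc:
  size: 3
  persistence:
    size: 20Gi
backup:
  enabled: false
EOF
helm install my-db percona/pxc-db --namespace my-namespace -f my-values.yaml
```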
## Examples
### Deploy a cluster without a MySQL proxy, backups, or persistent disks
This is great for a dev cluster as it doesn't require a persistent disk and doesn't bother with a proxy, backups, or TLS.
```bash
$ helm install dev --namespace pxc . \
--set proxysql.enabled=false --set tls.enabled=false --set unsafeFlags.tls=true \
    --set pxc.persistence.enabled=false --set backup.enabled=false
```
### Deploy a cluster with certificates provided by Cert Manager
First you need a working cert-manager installed with appropriate Issuers set up. Check out the [JetStack Helm Chart](https://hub.helm.sh/charts/jetstack/cert-manager) to do that.
By setting `pxc.certManager=true` we're signaling the Helm chart not to create secrets, which in turn lets the operator know to request the appropriate `certificate` resources to be filled by cert-manager.
```bash
$ helm install dev --namespace pxc . --set pxc.certManager=true
```
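For reference, a minimal self-signed `ClusterIssuer` such as the `special-selfsigned-issuer` referenced in the chart's commented-out `tls.issuerConf` defaults might look like this (an illustrative manifest, assuming cert-manager v1 CRDs are installed):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: special-selfsigned-issuer
spec:
  selfSigned: {}
```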
### Deploy a production grade cluster
The pxc-database chart contains an example production values file that should set you well on your path to running a production database. It is not fully production grade, as you still need to provide your own secrets for passwords and TLS to be truly production ready, but it includes comments on how to do those parts.
```bash
$ helm install prod --values production-values.yaml --namespace pxc .
```
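If you are installing from the Helm repository rather than a local checkout, you can fetch and unpack the chart to inspect `production-values.yaml` first (a hedged sketch, assuming the file is shipped in the chart package):
```bash
helm pull percona/pxc-db --version 1.15.0 --untar
less pxc-db/production-values.yaml
```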


@@ -0,0 +1,56 @@
#
% _____
%%% | __ \
###%%%%%%%%%%%%* | |__) |__ _ __ ___ ___ _ __ __ _
### ##%% %%%% | ___/ _ \ '__/ __/ _ \| '_ \ / _` |
#### ##% %%%% | | | __/ | | (_| (_) | | | | (_| |
### #### %%% |_| \___|_| \___\___/|_| |_|\__,_|
,((### ### %%% _ _ _____ _
(((( (### #### %%%% | | / _ \ / ____| | |
((( ((# ###### | | _| (_) |___ | (___ __ _ _ _ __ _ __| |
(((( (((# #### | |/ /> _ </ __| \___ \ / _` | | | |/ _` |/ _` |
/(( ,((( *### | <| (_) \__ \ ____) | (_| | |_| | (_| | (_| |
//// ((( #### |_|\_\\___/|___/ |_____/ \__, |\__,_|\__,_|\__,_|
/// (((( #### | |
/////////////(((((((((((((((((######## |_| Join @ percona.com/k8s
Join Percona Squad! Get early access to new product features, invite-only "ask me anything" sessions with Percona Kubernetes experts, and monthly swag raffles.
>>> https://percona.com/k8s <<<
1. To get a MySQL prompt inside your new cluster you can run:
{{- if hasKey .Values.pxc "clusterSecretName" }}
ROOT_PASSWORD=`kubectl -n {{ .Release.Namespace }} get secrets {{ .Values.pxc.clusterSecretName }} -o jsonpath="{.data.root}" | base64 --decode`
kubectl -n {{ .Release.Namespace }} exec -ti \
{{ include "pxc-database.fullname" . }}-pxc-0 -c pxc -- mysql -uroot -p"$ROOT_PASSWORD"
{{- else }}
ROOT_PASSWORD=`kubectl -n {{ .Release.Namespace }} get secrets {{ include "pxc-database.fullname" . }}-secrets -o jsonpath="{.data.root}" | base64 --decode`
kubectl -n {{ .Release.Namespace }} exec -ti \
{{ include "pxc-database.fullname" . }}-pxc-0 -c pxc -- mysql -uroot -p"$ROOT_PASSWORD"
{{- end }}
2. To connect an application running in the same Kubernetes cluster, use:
{{- if hasKey .Values.pxc "clusterSecretName" }}
ROOT_PASSWORD=`kubectl -n {{ .Release.Namespace }} get secrets {{ .Values.pxc.clusterSecretName }} -o jsonpath="{.data.root}" | base64 --decode`
{{- else }}
ROOT_PASSWORD=`kubectl -n {{ .Release.Namespace }} get secrets {{ include "pxc-database.fullname" . }}-secrets -o jsonpath="{.data.root}" | base64 --decode`
{{- end }}
{{- if .Values.proxysql.enabled }}
kubectl run -i --tty --rm percona-client --image=percona --restart=Never \
-- mysql -h {{ template "pxc-database.fullname" . }}-proxysql.{{ .Release.Namespace }}.svc.cluster.local -uroot -p"$ROOT_PASSWORD"
{{- else }}
kubectl run -i --tty --rm percona-client --image=percona --restart=Never \
-- mysql -h {{ template "pxc-database.fullname" . }}-haproxy.{{ .Release.Namespace }}.svc.cluster.local -uroot -p"$ROOT_PASSWORD"
{{- end }}


@@ -0,0 +1,77 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "pxc-database.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 21 chars because the operator appends suffixes to resource names (e.g. `-pxc-0`, `-haproxy-replicas`) and some Kubernetes name fields are limited to 63 chars (by the DNS naming spec).
If the release name contains the chart name, it will be used as the full name.
*/}}
{{- define "pxc-database.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 21 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 21 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 21 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "pxc-database.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 21 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "pxc-database.labels" -}}
app.kubernetes.io/name: {{ include "pxc-database.name" . }}
helm.sh/chart: {{ include "pxc-database.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
This filters the backup.storages hash for S3 credentials. If we detect them, they go in a separate secret.
*/}}
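{{/*
Illustrative example (assumed values): given

  backup:
    storages:
      s3-us-west:
        type: s3
        s3:
          bucket: my-bucket
          credentialsAccessKey: ACCESS_KEY
          credentialsSecretKey: SECRET_KEY

the inline credentials are stripped and replaced with
"credentialsSecret: <fullname>-s3-s3-us-west", matching the Secret
rendered by the accompanying S3 secrets template.
*/}}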
{{- define "pxc-database.storages" -}}
{{- $storages := dict -}}
{{- range $key, $value := .Values.backup.storages -}}
{{- if and (hasKey $value "type") (eq $value.type "s3") (hasKey $value "s3") (hasKey (index $value "s3") "credentialsAccessKey") (hasKey (index $value "s3") "credentialsSecretKey") }}
{{- if hasKey (index $value "s3") "credentialsSecret" -}}
{{- fail "Setting both credentialsSecret and credentialsAccessKey/credentialsSecretKey is not supported!" -}}
{{- end -}}
{{- $secretName := printf "%s-s3-%s" (include "pxc-database.fullname" $) $key -}}
{{- $s3 := set (omit (index $value "s3") "credentialsAccessKey" "credentialsSecretKey") "credentialsSecret" $secretName -}}
{{- $_value := set (omit $value "s3") "s3" $s3 -}}
{{- $_ := set $storages $key $_value -}}
{{- else -}}
{{- $_ := set $storages $key $value -}}
{{- end -}}
{{- end -}}
{{- $storages | toYaml -}}
{{- end -}}
{{/*
Function returns the image URI according to the parameters set
*/}}
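{{/*
Illustrative: with .Values.image unset and the default
operatorImageRepository of "percona/percona-xtradb-cluster-operator",
this renders "percona/percona-xtradb-cluster-operator:1.15.0"
(the chart's appVersion supplies the tag).
*/}}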
{{- define "pxc-db.operator-image" -}}
{{- if .Values.image }}
{{- .Values.image }}
{{- else }}
{{- printf "%s:%s" .Values.operatorImageRepository .Chart.AppVersion }}
{{- end }}
{{- end -}}


@@ -0,0 +1,27 @@
{{- if hasKey .Values.secrets "passwords" }}
apiVersion: v1
kind: Secret
metadata:
{{- if hasKey .Values.pxc "clusterSecretName" }}
name: {{ .Values.pxc.clusterSecretName }}
{{- else }}
name: {{ include "pxc-database.fullname" . }}-secrets
{{- end }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "pxc-database.labels" . | indent 4 }}
type: Opaque
data:
root: {{ .Values.secrets.passwords.root | b64enc }}
xtrabackup: {{ .Values.secrets.passwords.xtrabackup | b64enc }}
monitor: {{ .Values.secrets.passwords.monitor | b64enc }}
clustercheck: {{ .Values.secrets.passwords.clustercheck | b64enc }}
proxyadmin: {{ .Values.secrets.passwords.proxyadmin | b64enc }}
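{{- /* pmmserverkey (a PMM Server API key) takes precedence over the
pmmserver password below, which is kept for older PMM setups
(an assumption based on PMM client authentication modes). */ -}}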
{{- if hasKey .Values.secrets.passwords "pmmserverkey" }}
pmmserverkey: {{ .Values.secrets.passwords.pmmserverkey | b64enc }}
{{- else if hasKey .Values.secrets.passwords "pmmserver" }}
pmmserver: {{ .Values.secrets.passwords.pmmserver | b64enc }}
{{- end}}
operator: {{ .Values.secrets.passwords.operator | b64enc }}
replication: {{ .Values.secrets.passwords.replication | b64enc }}
{{- end }}


@@ -0,0 +1,42 @@
{{- if .Values.tls.enabled }}
{{- if not .Values.pxc.certManager }}
{{- $nameDB := printf "%s" (include "pxc-database.fullname" .) }}
{{ $ca := genCA (printf "%s-ca" $nameDB ) 365 }}
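{{- /* Note: genCA/genSignedCert run on every template render, so these
self-signed certificates are regenerated on each helm upgrade unless
pre-created Secrets are supplied via secrets.tls.cluster and
secrets.tls.internal. */ -}}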
{{- if not (hasKey .Values.secrets.tls "cluster") }}
---
{{- $name := printf "%s-proxysql" $nameDB }}
{{- $altNames := list ( printf "%s-pxc" $nameDB ) ( printf "*.%s-pxc" $nameDB ) ( printf "*.%s-proxysql" $nameDB ) -}}
{{ $cert := genSignedCert $name nil $altNames 365 $ca }}
apiVersion: v1
kind: Secret
metadata:
name: {{ $nameDB }}-ssl
namespace: {{ .Release.Namespace }}
labels:
{{ include "pxc-database.labels" . | indent 4 }}
type: kubernetes.io/tls
data:
ca.crt: {{ $ca.Cert | b64enc }}
tls.crt: {{ $cert.Cert | b64enc }}
tls.key: {{ $cert.Key | b64enc }}
{{- end }}
{{- if not (hasKey .Values.secrets.tls "internal") }}
---
{{- $name := printf "%s-pxc" $nameDB }}
{{- $altNames := list ( printf "%s" $name ) ( printf "*.%s" $name ) ( printf "%s-haproxy-replicas.%s.svc.cluster.local" $nameDB .Release.Namespace ) ( printf "%s-haproxy-replicas.%s" $nameDB .Release.Namespace ) ( printf "%s-haproxy-replicas" $nameDB ) ( printf "%s-haproxy.%s.svc.cluster.local" $nameDB .Release.Namespace ) ( printf "%s-haproxy.%s" $nameDB .Release.Namespace ) ( printf "%s-haproxy" $nameDB ) -}}
{{ $cert := genSignedCert $name nil $altNames 365 $ca }}
apiVersion: v1
kind: Secret
metadata:
name: {{ $nameDB }}-ssl-internal
namespace: {{ .Release.Namespace }}
labels:
{{ include "pxc-database.labels" . | indent 4 }}
type: kubernetes.io/tls
data:
ca.crt: {{ $ca.Cert | b64enc }}
tls.crt: {{ $cert.Cert | b64enc }}
tls.key: {{ $cert.Key | b64enc }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,543 @@
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
name: {{ include "pxc-database.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "pxc-database.labels" . | indent 4 }}
finalizers:
{{ .Values.finalizers | toYaml | indent 4 }}
{{- with .Values.annotations }}
annotations:
{{- . | toYaml | nindent 4 }}
{{- end }}
spec:
crVersion: {{ .Chart.AppVersion }}
{{- if .Values.ignoreAnnotations }}
ignoreAnnotations:
{{ .Values.ignoreAnnotations | toYaml | indent 4 }}
{{- end }}
{{- if .Values.ignoreLabels }}
ignoreLabels:
{{ .Values.ignoreLabels | toYaml | indent 4 }}
{{- end }}
{{- if hasKey .Values.pxc "clusterSecretName" }}
secretsName: {{ .Values.pxc.clusterSecretName }}
{{- else }}
secretsName: {{ include "pxc-database.fullname" . }}-secrets
{{- end }}
{{- if .Values.tls.enabled }}
{{- if hasKey .Values.secrets.tls "cluster" }}
sslSecretName: {{ .Values.secrets.tls.cluster }}
{{- else }}
sslSecretName: {{ include "pxc-database.fullname" . }}-ssl
{{- end }}
{{- if hasKey .Values.secrets.tls "internal" }}
sslInternalSecretName: {{ .Values.secrets.tls.internal }}
{{- else }}
sslInternalSecretName: {{ include "pxc-database.fullname" . }}-ssl-internal
{{- end }}
{{- end }}
{{- if hasKey .Values.secrets "vault" }}
vaultSecretName: {{ .Values.secrets.vault }}
{{- else }}
vaultSecretName: {{ include "pxc-database.fullname" . }}-vault
{{- end }}
{{- if hasKey .Values.secrets "logCollector" }}
logCollectorSecretName: {{ .Values.secrets.logCollector }}
{{- else }}
logCollectorSecretName: {{ include "pxc-database.fullname" . }}-log-collector
{{- end }}
{{- if .Values.initContainer }}
initContainer:
{{- if hasKey .Values.initContainer "image" }}
image: {{ .Values.initContainer.image }}
{{- else }}
image: {{ include "pxc-db.operator-image" . }}
{{- end }}
{{- if .Values.initContainer.resources }}
resources:
{{- if hasKey .Values.initContainer.resources "requests" }}
requests:
{{ tpl (.Values.initContainer.resources.requests | toYaml) $ | indent 8 }}
{{- end }}
{{- if hasKey .Values.initContainer.resources "limits" }}
limits:
{{ tpl (.Values.initContainer.resources.limits | toYaml) $ | indent 8 }}
{{- end }}
{{- end }}
{{- end }}
enableCRValidationWebhook: {{ .Values.enableCRValidationWebhook }}
pause: {{ .Values.pause }}
{{- if .Values.unsafeFlags }}
unsafeFlags:
{{ .Values.unsafeFlags | toYaml | indent 4 }}
{{- end }}
updateStrategy: {{ .Values.updateStrategy }}
{{- if hasKey .Values.upgradeOptions "versionServiceEndpoint" }}
upgradeOptions:
versionServiceEndpoint: {{ .Values.upgradeOptions.versionServiceEndpoint }}
apply: {{ .Values.upgradeOptions.apply }}
schedule: {{ .Values.upgradeOptions.schedule }}
{{- end }}
{{- if .Values.tls }}
tls:
enabled: {{ .Values.tls.enabled }}
{{- if hasKey .Values.tls "SANs" }}
SANs:
{{ .Values.tls.SANs | toYaml | indent 6 }}
{{- end }}
{{- if hasKey .Values.tls "issuerConf" }}
issuerConf:
name: {{ .Values.tls.issuerConf.name }}
kind: {{ .Values.tls.issuerConf.kind }}
group: {{ .Values.tls.issuerConf.group }}
{{- end }}
{{- end }}
{{- $pxc := .Values.pxc }}
pxc:
size: {{ $pxc.size }}
image: {{ $pxc.image.repository }}:{{ $pxc.image.tag }}
autoRecovery: {{ $pxc.autoRecovery }}
{{- if $pxc.schedulerName }}
schedulerName: {{ $pxc.schedulerName }}
{{- end }}
readinessDelaySec: {{ $pxc.readinessDelaySec }}
livenessDelaySec: {{ $pxc.livenessDelaySec }}
{{- if $pxc.configuration }}
configuration: |
{{ tpl $pxc.configuration $ | nindent 6 }}
{{- end }}
{{- if $pxc.imagePullPolicy }}
imagePullPolicy: {{ $pxc.imagePullPolicy }}
{{- end }}
{{- if $pxc.imagePullSecrets }}
imagePullSecrets:
{{ $pxc.imagePullSecrets | toYaml | indent 6 }}
{{- end }}
{{- if $pxc.priorityClassName }}
priorityClassName: {{ $pxc.priorityClassName }}
{{- end }}
annotations:
{{ $pxc.annotations | toYaml | indent 6 }}
labels:
{{ $pxc.labels | toYaml | indent 6 }}
{{- if $pxc.expose }}
expose:
{{ tpl ($pxc.expose | toYaml) $ | indent 6 }}
{{- end }}
{{- if $pxc.replicationChannels }}
replicationChannels:
{{ tpl ($pxc.replicationChannels | toYaml) $ | indent 6 }}
{{- end }}
{{- if $pxc.runtimeClassName }}
runtimeClassName: {{ $pxc.runtimeClassName }}
{{- end }}
{{- if $pxc.envVarsSecret }}
envVarsSecret: {{ $pxc.envVarsSecret }}
{{- end }}
resources:
requests:
{{ tpl ($pxc.resources.requests | toYaml) $ | indent 8 }}
limits:
{{ tpl ($pxc.resources.limits | toYaml) $ | indent 8 }}
sidecars:
{{ $pxc.sidecars | toYaml | indent 6 }}
sidecarVolumes:
{{ $pxc.sidecarVolumes | toYaml | indent 6 }}
sidecarPVCs:
{{ $pxc.sidecarPVCs | toYaml | indent 6 }}
sidecarResources:
requests:
{{ tpl ($pxc.sidecarResources.requests | toYaml) $ | indent 8 }}
limits:
{{ tpl ($pxc.sidecarResources.limits | toYaml) $ | indent 8 }}
nodeSelector:
{{ $pxc.nodeSelector | toYaml | indent 6 }}
{{- if $pxc.topologySpreadConstraints }}
topologySpreadConstraints:
{{ $pxc.topologySpreadConstraints | toYaml | indent 6 }}
{{- end }}
affinity:
{{ $pxc.affinity | toYaml | indent 6 }}
tolerations:
{{ $pxc.tolerations | toYaml | indent 6 }}
podDisruptionBudget:
{{ $pxc.podDisruptionBudget | toYaml | indent 6 }}
volumeSpec:
{{- if not $pxc.persistence.enabled }}
emptyDir: {}
{{- else }}
{{- if hasKey $pxc.persistence "hostPath" }}
hostPath:
path: {{ $pxc.persistence.hostPath }}
type: Directory
{{- else }}
persistentVolumeClaim:
{{- if $pxc.persistence.storageClass }}
{{- if (eq "-" $pxc.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ $pxc.persistence.storageClass }}"
{{- end }}
{{- end }}
accessModes: [{{ $pxc.persistence.accessMode | quote }}]
{{- if $pxc.persistence.dataSource }}
dataSource:
{{ $pxc.persistence.dataSource | toYaml | indent 10 }}
{{- end }}
resources:
requests:
storage: {{ $pxc.persistence.size | quote }}
{{- end }}
{{- end }}
gracePeriod: {{ $pxc.gracePeriod }}
{{- if hasKey $pxc "lifecycle" }}
lifecycle:
{{- if hasKey $pxc.lifecycle "preStop" }}
preStop:
{{- $pxc.lifecycle.preStop | toYaml | nindent 8 }}
{{- end }}
{{- if hasKey $pxc.lifecycle "postStart" }}
postStart:
{{- $pxc.lifecycle.postStart | toYaml | nindent 8 }}
{{- end }}
{{- end }}
readinessProbes:
{{ tpl ($pxc.readinessProbes | toYaml) $ | indent 6 }}
livenessProbes:
{{ tpl ($pxc.livenessProbes | toYaml) $ | indent 6 }}
{{- if $pxc.containerSecurityContext }}
containerSecurityContext:
{{ tpl ($pxc.containerSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- if $pxc.podSecurityContext }}
podSecurityContext:
{{ tpl ($pxc.podSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- if $pxc.serviceAccountName }}
serviceAccountName: {{ $pxc.serviceAccountName }}
{{- end }}
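{{- /* HAProxy and ProxySQL are mutually exclusive: enabling ProxySQL
(or disabling HAProxy) renders the haproxy section as disabled. */ -}}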
{{- if or (not .Values.haproxy.enabled) .Values.proxysql.enabled }}
haproxy:
enabled: false
{{- else }}
{{- $haproxy := .Values.haproxy }}
haproxy:
enabled: true
size: {{ $haproxy.size }}
image: {{ .Values.haproxy.image }}
{{- if $haproxy.imagePullPolicy }}
imagePullPolicy: {{ $haproxy.imagePullPolicy }}
{{- end }}
{{- if $haproxy.imagePullSecrets }}
imagePullSecrets:
{{ $haproxy.imagePullSecrets | toYaml | indent 6 }}
{{- end }}
{{- if $haproxy.schedulerName }}
schedulerName: {{ $haproxy.schedulerName }}
{{- end }}
{{- if $haproxy.configuration }}
configuration: |
{{ tpl $haproxy.configuration $ | nindent 6 }}
{{- end }}
{{- if $haproxy.priorityClassName }}
priorityClassName: {{ $haproxy.priorityClassName }}
{{- end }}
{{- if $haproxy.exposePrimary }}
exposePrimary:
{{ tpl ($haproxy.exposePrimary | toYaml) $ | indent 6 }}
{{- end }}
{{- if $haproxy.exposeReplicas }}
exposeReplicas:
{{ tpl ($haproxy.exposeReplicas | toYaml) $ | indent 6 }}
{{- end }}
annotations:
{{ $haproxy.annotations | toYaml | indent 6 }}
labels:
{{ $haproxy.labels | toYaml | indent 6 }}
{{- if $haproxy.runtimeClassName }}
runtimeClassName: {{ $haproxy.runtimeClassName }}
{{- end }}
{{- if $haproxy.envVarsSecret }}
envVarsSecret: {{ $haproxy.envVarsSecret }}
{{- end }}
resources:
requests:
{{ $haproxy.resources.requests | toYaml | indent 8 }}
limits:
{{ $haproxy.resources.limits | toYaml | indent 8 }}
sidecars:
{{ $haproxy.sidecars | toYaml | indent 6 }}
sidecarVolumes:
{{ $haproxy.sidecarVolumes | toYaml | indent 6 }}
sidecarPVCs:
{{ $haproxy.sidecarPVCs | toYaml | indent 6 }}
sidecarResources:
requests:
{{ tpl ($haproxy.sidecarResources.requests | toYaml) $ | indent 8 }}
limits:
{{ tpl ($haproxy.sidecarResources.limits | toYaml) $ | indent 8 }}
{{- if $haproxy.serviceAccountName }}
serviceAccountName: {{ $haproxy.serviceAccountName }}
{{- end }}
nodeSelector:
{{ $haproxy.nodeSelector | toYaml | indent 6 }}
{{- if $haproxy.topologySpreadConstraints }}
topologySpreadConstraints:
{{ $haproxy.topologySpreadConstraints | toYaml | indent 6 }}
{{- end }}
affinity:
{{ $haproxy.affinity | toYaml | indent 6 }}
tolerations:
{{ $haproxy.tolerations | toYaml | indent 6 }}
podDisruptionBudget:
{{ $haproxy.podDisruptionBudget | toYaml | indent 6 }}
volumeSpec:
emptyDir: {}
gracePeriod: {{ $haproxy.gracePeriod }}
{{- if hasKey $haproxy "lifecycle" }}
lifecycle:
{{- if hasKey $haproxy.lifecycle "preStop" }}
preStop:
{{- $haproxy.lifecycle.preStop | toYaml | nindent 8 }}
{{- end }}
{{- if hasKey $haproxy.lifecycle "postStart" }}
postStart:
{{- $haproxy.lifecycle.postStart | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- if $haproxy.readinessDelaySec }}
readinessDelaySec: {{ $haproxy.readinessDelaySec }}
{{- end }}
{{- if $haproxy.livenessDelaySec }}
livenessDelaySec: {{ $haproxy.livenessDelaySec }}
{{- end }}
readinessProbes:
{{ tpl ($haproxy.readinessProbes | toYaml) $ | indent 6 }}
livenessProbes:
{{ tpl ($haproxy.livenessProbes | toYaml) $ | indent 6 }}
{{- if $haproxy.containerSecurityContext }}
containerSecurityContext:
{{ tpl ($haproxy.containerSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- if $haproxy.podSecurityContext }}
podSecurityContext:
{{ tpl ($haproxy.podSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- end }}
{{- if not .Values.proxysql.enabled }}
proxysql:
enabled: false
{{- else }}
{{- $proxysql := .Values.proxysql }}
proxysql:
enabled: true
size: {{ $proxysql.size }}
image: {{ .Values.proxysql.image }}
{{- if $proxysql.imagePullPolicy }}
imagePullPolicy: {{ $proxysql.imagePullPolicy }}
{{- end }}
{{- if $proxysql.imagePullSecrets }}
imagePullSecrets:
{{ $proxysql.imagePullSecrets | toYaml | indent 6 }}
{{- end }}
{{- if $proxysql.schedulerName }}
schedulerName: {{ $proxysql.schedulerName }}
{{- end }}
{{- if $proxysql.configuration }}
configuration: |
{{ tpl $proxysql.configuration $ | nindent 6 }}
{{- end }}
{{- if $proxysql.priorityClassName }}
priorityClassName: {{ $proxysql.priorityClassName }}
{{- end }}
{{- if $proxysql.expose }}
expose:
{{ tpl ($proxysql.expose | toYaml) $ | indent 6 }}
{{- end }}
annotations:
{{ $proxysql.annotations | toYaml | indent 6 }}
labels:
{{ $proxysql.labels | toYaml | indent 6 }}
{{- if $proxysql.runtimeClassName }}
runtimeClassName: {{ $proxysql.runtimeClassName }}
{{- end }}
{{- if $proxysql.envVarsSecret }}
envVarsSecret: {{ $proxysql.envVarsSecret }}
{{- end }}
resources:
requests:
{{ $proxysql.resources.requests | toYaml | indent 8 }}
limits:
{{ $proxysql.resources.limits | toYaml | indent 8 }}
sidecars:
{{ $proxysql.sidecars | toYaml | indent 6 }}
sidecarVolumes:
{{ $proxysql.sidecarVolumes | toYaml | indent 6 }}
sidecarPVCs:
{{ $proxysql.sidecarPVCs | toYaml | indent 6 }}
sidecarResources:
requests:
{{ tpl ($proxysql.sidecarResources.requests | toYaml) $ | indent 8 }}
limits:
{{ tpl ($proxysql.sidecarResources.limits | toYaml) $ | indent 8 }}
{{- if $proxysql.serviceAccountName }}
serviceAccountName: {{ $proxysql.serviceAccountName }}
{{- end }}
nodeSelector:
{{ $proxysql.nodeSelector | toYaml | indent 6 }}
{{- if $proxysql.topologySpreadConstraints }}
topologySpreadConstraints:
{{ $proxysql.topologySpreadConstraints | toYaml | indent 6 }}
{{- end }}
affinity:
{{ $proxysql.affinity | toYaml | indent 6 }}
tolerations:
{{ $proxysql.tolerations | toYaml | indent 6 }}
podDisruptionBudget:
{{ $proxysql.podDisruptionBudget | toYaml | indent 6 }}
volumeSpec:
{{- if not $proxysql.persistence.enabled }}
emptyDir: {}
{{- else }}
{{- if hasKey $proxysql.persistence "hostPath" }}
hostPath:
path: {{ $proxysql.persistence.hostPath }}
type: Directory
{{- else }}
persistentVolumeClaim:
{{- if $proxysql.persistence.storageClass }}
{{- if (eq "-" $proxysql.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ $proxysql.persistence.storageClass }}"
{{- end }}
{{- end }}
accessModes: [{{ $proxysql.persistence.accessMode | quote }}]
resources:
requests:
storage: {{ $proxysql.persistence.size | quote }}
{{- end }}
{{- end }}
gracePeriod: {{ $proxysql.gracePeriod }}
{{- if hasKey $proxysql "lifecycle" }}
lifecycle:
{{- if hasKey $proxysql.lifecycle "preStop" }}
preStop:
{{- $proxysql.lifecycle.preStop | toYaml | nindent 8 }}
{{- end }}
{{- if hasKey $proxysql.lifecycle "postStart" }}
postStart:
{{- $proxysql.lifecycle.postStart | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- if $proxysql.containerSecurityContext }}
containerSecurityContext:
{{ tpl ($proxysql.containerSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- if $proxysql.podSecurityContext }}
podSecurityContext:
{{ tpl ($proxysql.podSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- end }}
logcollector:
{{- if not .Values.logcollector.enabled }}
enabled: false
{{- else }}
{{- $logcollector := .Values.logcollector }}
enabled: true
image: {{ .Values.logcollector.image }}
{{- if $logcollector.imagePullPolicy }}
imagePullPolicy: {{ $logcollector.imagePullPolicy }}
{{- end }}
{{- if $logcollector.imagePullSecrets }}
imagePullSecrets:
{{- $logcollector.imagePullSecrets | toYaml | nindent 6 }}
{{- end }}
{{- if $logcollector.configuration }}
configuration: |
{{ tpl $logcollector.configuration $ | nindent 6 }}
{{- end }}
resources:
requests:
{{ tpl ($logcollector.resources.requests | toYaml) $ | indent 8 }}
limits:
{{ tpl ($logcollector.resources.limits | toYaml) $ | indent 8 }}
{{- if $logcollector.containerSecurityContext }}
containerSecurityContext:
{{ tpl ($logcollector.containerSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- end }}
pmm:
{{- if not .Values.pmm.enabled }}
enabled: false
{{- else }}
{{- $pmm := .Values.pmm }}
enabled: true
image: {{ $pmm.image.repository }}:{{ $pmm.image.tag }}
{{- if $pmm.imagePullPolicy }}
imagePullPolicy: {{ $pmm.imagePullPolicy }}
{{- end }}
{{- if $pmm.containerSecurityContext }}
containerSecurityContext:
{{ tpl ($pmm.containerSecurityContext | toYaml) $ | indent 6 }}
{{- end }}
{{- if $pmm.imagePullSecrets }}
imagePullSecrets:
{{- $pmm.imagePullSecrets | toYaml | nindent 6 }}
{{- end }}
serverHost: {{ $pmm.serverHost }}
serverUser: {{ $pmm.serverUser }}
{{- if $pmm.pxcParams }}
pxcParams: {{ $pmm.pxcParams }}
{{- end }}
{{- if $pmm.proxysqlParams }}
proxysqlParams: {{ $pmm.proxysqlParams }}
{{- end }}
resources:
requests:
{{ tpl ($pmm.resources.requests | toYaml) $ | indent 8 }}
limits:
{{ tpl ($pmm.resources.limits | toYaml) $ | indent 8 }}
{{- end }}
{{- $backup := .Values.backup }}
{{- if $backup.enabled }}
backup:
{{- if $backup.allowParallel }}
allowParallel: {{ $backup.allowParallel }}
{{- end }}
image: {{ $backup.image.repository }}:{{ $backup.image.tag }}
{{- if $backup.backoffLimit }}
backoffLimit: {{ $backup.backoffLimit }}
{{- end }}
{{- if $backup.serviceAccountName }}
serviceAccountName: {{ $backup.serviceAccountName }}
{{- end }}
{{- if $backup.imagePullPolicy }}
imagePullPolicy: {{ $backup.imagePullPolicy }}
{{- end }}
{{- if $backup.imagePullSecrets }}
imagePullSecrets:
{{ $backup.imagePullSecrets | toYaml | indent 6 }}
{{- end }}
pitr:
{{- if not $backup.pitr.enabled }}
enabled: false
{{- else }}
enabled: true
storageName: {{ $backup.pitr.storageName }}
timeBetweenUploads: {{ $backup.pitr.timeBetweenUploads }}
timeoutSeconds: {{ $backup.pitr.timeoutSeconds }}
resources:
requests:
{{ tpl ($backup.pitr.resources.requests | toYaml) $ | indent 10 }}
limits:
{{ tpl ($backup.pitr.resources.limits | toYaml) $ | indent 10 }}
{{- end }}
storages:
{{ include "pxc-database.storages" . | indent 6 }}
schedule:
{{ $backup.schedule | toYaml | indent 6 }}
{{- end }}

View File

@ -0,0 +1,16 @@
{{- range $key, $value := .Values.backup.storages }}
{{- if and (hasKey $value "type") (eq $value.type "s3") (hasKey $value "s3") (hasKey (index $value "s3") "credentialsAccessKey") (hasKey (index $value "s3") "credentialsSecretKey") }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "pxc-database.fullname" $ }}-s3-{{ $key }}
namespace: {{ $.Release.Namespace }}
labels:
{{ include "pxc-database.labels" $ | indent 4 }}
type: Opaque
data:
AWS_ACCESS_KEY_ID: {{ index $value "s3" "credentialsAccessKey" | b64enc }}
AWS_SECRET_ACCESS_KEY: {{ index $value "s3" "credentialsSecretKey" | b64enc }}
{{- end }}
{{- end }}
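For reference, a `values.yaml` snippet like the following (bucket name and keys are placeholders, not real credentials) would satisfy the conditions above and make this template render a Secret named `<fullname>-s3-s3-us-west`:

```yaml
backup:
  enabled: true
  storages:
    s3-us-west:
      type: s3
      s3:
        bucket: example-backup-bucket            # placeholder
        region: us-west-2
        credentialsAccessKey: EXAMPLEACCESSKEY   # placeholder; prefer credentialsSecret in production
        credentialsSecretKey: exampleSecretKey   # placeholder
```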

View File

@ -0,0 +1,710 @@
# Default values for pxc-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
finalizers:
- percona.com/delete-pxc-pods-in-order
## Set this if you want to delete proxysql persistent volumes on cluster deletion
# - percona.com/delete-proxysql-pvc
## Set this if you want to delete database persistent volumes on cluster deletion
# - percona.com/delete-pxc-pvc
## Set this if you want to delete cert manager certificates on cluster deletion
# - percona.com/delete-ssl
nameOverride: ""
fullnameOverride: ""
# PerconaXtraDBCluster annotations
annotations: {}
operatorImageRepository: percona/percona-xtradb-cluster-operator
crVersion: 1.15.0
ignoreAnnotations: []
# - iam.amazonaws.com/role
ignoreLabels: []
# - rack
pause: false
# initContainer:
# image: "percona/percona-xtradb-cluster-operator:1.15.0"
# resources:
# requests:
# memory: 100M
# cpu: 100m
# limits:
# memory: 200M
# cpu: 200m
# unsafeFlags:
# tls: false
# pxcSize: false
# proxySize: false
# backupIfUnhealthy: false
updateStrategy: SmartUpdate
upgradeOptions:
versionServiceEndpoint: https://check.percona.com
apply: disabled
schedule: "0 4 * * *"
enableCRValidationWebhook: false
tls:
enabled: true
# SANs:
# - pxc-1.example.com
# - pxc-2.example.com
# - pxc-3.example.com
# issuerConf:
# name: special-selfsigned-issuer
# kind: ClusterIssuer
# group: cert-manager.io
pxc:
size: 3
image:
repository: percona/percona-xtradb-cluster
tag: 8.0.36-28.1
# imagePullPolicy: Always
autoRecovery: true
# expose:
# enabled: true
# type: LoadBalancer
# externalTrafficPolicy: Local
# internalTrafficPolicy: Local
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# loadBalancerIP: 127.0.0.1
# annotations:
# networking.gke.io/load-balancer-type: "Internal"
# labels:
# rack: rack-22
# replicationChannels:
# - name: pxc1_to_pxc2
# isSource: true
# - name: pxc2_to_pxc1
# isSource: false
# configuration:
# sourceRetryCount: 3
# sourceConnectRetry: 60
# ssl: false
# sslSkipVerify: true
# ca: '/etc/mysql/ssl/ca.crt'
# sourcesList:
# - host: 10.95.251.101
# port: 3306
# weight: 100
# schedulerName: mycustom-scheduler
imagePullSecrets: []
# - name: private-registry-credentials
annotations: {}
# iam.amazonaws.com/role: role-arn
labels: {}
# rack: rack-22
# priorityClassName: high-priority
readinessDelaySec: 15
livenessDelaySec: 300
## Uncomment to pass in a mysql config file
# configuration: |
# [mysqld]
# wsrep_debug=ON
# wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
# envVarsSecret: my-env-var-secrets
resources:
requests:
memory: 1G
cpu: 600m
limits: {}
# memory: 1G
# cpu: 600m
# runtimeClassName: image-rc
sidecars: []
sidecarVolumes: []
sidecarPVCs: []
sidecarResources:
requests: {}
limits: {}
nodeSelector: {}
# disktype: ssd
# topologySpreadConstraints:
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: percona-xtradb-cluster-operator
# maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: DoNotSchedule
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/e2e-az-name
# operator: In
# values:
# - e2e-az1
# - e2e-az2
tolerations: []
# - key: "node.alpha.kubernetes.io/unreachable"
# operator: "Exists"
# effect: "NoExecute"
# tolerationSeconds: 6000
gracePeriod: 600
# lifecycle:
# preStop:
# exec:
# command: [ "/bin/true" ]
# postStart:
# exec:
# command: [ "/bin/true" ]
podDisruptionBudget:
# only one of `maxUnavailable` or `minAvailable` can be set
maxUnavailable: 1
# minAvailable: 0
persistence:
enabled: true
## Percona data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
# dataSource:
# name: new-snapshot-test
# kind: VolumeSnapshot
# apiGroup: snapshot.storage.k8s.io
size: 8Gi
# Set certManager to true to disable Helm creating TLS certificates and instead
# let the operator request certificates from cert-manager
certManager: false
# If clusterSecretName is set, the chart will not create secrets from values and
# will instead use a pre-existing secret of that name.
# clusterSecretName: cluster1-secrets
readinessProbes:
initialDelaySeconds: 15
timeoutSeconds: 15
periodSeconds: 30
successThreshold: 1
failureThreshold: 5
livenessProbes:
initialDelaySeconds: 300
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
# A custom Kubernetes Security Context for a Container to be used instead of the default one
# containerSecurityContext:
# privileged: false
# A custom Kubernetes Security Context for a Pod to be used instead of the default one
# podSecurityContext:
# fsGroup: 1001
# supplementalGroups:
# - 1001
# serviceAccountName: percona-xtradb-cluster-operator-workload
haproxy:
enabled: true
size: 3
image: percona/haproxy:2.8.5
# imagePullPolicy: Always
imagePullSecrets: []
# - name: private-registry-credentials
# configuration: |
#
# The actual default configuration file can be found at https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/main/build/haproxy-global.cfg
#
# global
# maxconn 2048
# external-check
# insecure-fork-wanted
# stats socket /etc/haproxy/pxc/haproxy.sock mode 600 expose-fd listeners level admin
#
# defaults
# default-server init-addr last,libc,none
# log global
# mode tcp
# retries 10
# timeout client 28800s
# timeout connect 100500
# timeout server 28800s
#
# resolvers kubernetes
# parse-resolv-conf
#
# frontend galera-in
# bind *:3309 accept-proxy
# bind *:3306
# mode tcp
# option clitcpka
# default_backend galera-nodes
#
# frontend galera-admin-in
# bind *:33062
# mode tcp
# option clitcpka
# default_backend galera-admin-nodes
#
# frontend galera-replica-in
# bind *:3307
# mode tcp
# option clitcpka
# default_backend galera-replica-nodes
#
# frontend galera-mysqlx-in
# bind *:33060
# mode tcp
# option clitcpka
# default_backend galera-mysqlx-nodes
#
# frontend stats
# bind *:8404
# mode http
# option http-use-htx
# http-request use-service prometheus-exporter if { path /metrics }
annotations: {}
# iam.amazonaws.com/role: role-arn
labels: {}
# rack: rack-22
# runtimeClassName: image-rc
# priorityClassName: high-priority
# schedulerName: mycustom-scheduler
readinessDelaySec: 15
livenessDelaySec: 300
# envVarsSecret: my-env-var-secrets
resources:
requests:
memory: 1G
cpu: 600m
limits: {}
# memory: 1G
# cpu: 600m
sidecars: []
sidecarVolumes: []
sidecarPVCs: []
sidecarResources:
requests: {}
limits: {}
nodeSelector: {}
# disktype: ssd
# serviceAccountName: percona-xtradb-cluster-operator-workload
# topologySpreadConstraints:
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: percona-xtradb-cluster-operator
# maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: DoNotSchedule
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/e2e-az-name
# operator: In
# values:
# - e2e-az1
# - e2e-az2
tolerations: []
# - key: "node.alpha.kubernetes.io/unreachable"
# operator: "Exists"
# effect: "NoExecute"
# tolerationSeconds: 6000
gracePeriod: 30
# lifecycle:
# preStop:
# exec:
# command: [ "/bin/true" ]
# postStart:
# exec:
# command: [ "/bin/true" ]
# only one of `maxUnavailable` or `minAvailable` can be set.
podDisruptionBudget:
maxUnavailable: 1
# minAvailable: 0
readinessProbes:
initialDelaySeconds: 15
timeoutSeconds: 1
periodSeconds: 5
successThreshold: 1
failureThreshold: 3
livenessProbes:
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 4
# exposePrimary:
# enabled: false
# type: ClusterIP
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
# externalTrafficPolicy: Cluster
# internalTrafficPolicy: Cluster
# labels:
# rack: rack-22
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# loadBalancerIP: 127.0.0.1
# exposeReplicas:
# enabled: true
# onlyReaders: false
# type: ClusterIP
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
# externalTrafficPolicy: Cluster
# internalTrafficPolicy: Cluster
# labels:
# rack: rack-22
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# loadBalancerIP: 127.0.0.1
# A custom Kubernetes Security Context for a Container to be used instead of the default one
# containerSecurityContext:
# privileged: false
# A custom Kubernetes Security Context for a Pod to be used instead of the default one
# podSecurityContext:
# fsGroup: 1001
# supplementalGroups:
# - 1001
proxysql:
enabled: false
size: 3
image: "percona/proxysql2:2.5.5"
# imagePullPolicy: Always
imagePullSecrets: []
# - name: private-registry-credentials
# configuration: |
# datadir="/var/lib/proxysql"
#
# admin_variables =
# {
# admin_credentials="proxyadmin:admin_password"
# mysql_ifaces="0.0.0.0:6032"
# refresh_interval=2000
#
# cluster_username="proxyadmin"
# cluster_password="admin_password"
# checksum_admin_variables=false
# checksum_ldap_variables=false
# checksum_mysql_variables=false
# cluster_check_interval_ms=200
# cluster_check_status_frequency=100
# cluster_mysql_query_rules_save_to_disk=true
# cluster_mysql_servers_save_to_disk=true
# cluster_mysql_users_save_to_disk=true
# cluster_proxysql_servers_save_to_disk=true
# cluster_mysql_query_rules_diffs_before_sync=1
# cluster_mysql_servers_diffs_before_sync=1
# cluster_mysql_users_diffs_before_sync=1
# cluster_proxysql_servers_diffs_before_sync=1
# }
#
# mysql_variables=
# {
# monitor_password="monitor"
# monitor_galera_healthcheck_interval=1000
# threads=2
# max_connections=2048
# default_query_delay=0
# default_query_timeout=10000
# poll_timeout=2000
# interfaces="0.0.0.0:3306"
# default_schema="information_schema"
# stacksize=1048576
# connect_timeout_server=10000
# monitor_history=60000
# monitor_connect_interval=20000
# monitor_ping_interval=10000
# ping_timeout_server=200
# commands_stats=true
# sessions_sort=true
# have_ssl=true
# ssl_p2s_ca="/etc/proxysql/ssl-internal/ca.crt"
# ssl_p2s_cert="/etc/proxysql/ssl-internal/tls.crt"
# ssl_p2s_key="/etc/proxysql/ssl-internal/tls.key"
# ssl_p2s_cipher="ECDHE-RSA-AES128-GCM-SHA256"
# }
annotations: {}
# iam.amazonaws.com/role: role-arn
labels: {}
# rack: rack-22
# runtimeClassName: image-rc
# expose:
# enabled: false
# type: ClusterIP
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
# externalTrafficPolicy: Cluster
# internalTrafficPolicy: Cluster
# labels:
# rack: rack-22
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# loadBalancerIP: 127.0.0.1
# priorityClassName: high-priority
# schedulerName: mycustom-scheduler
readinessDelaySec: 15
livenessDelaySec: 300
# envVarsSecret: my-env-var-secrets
resources:
requests:
memory: 1G
cpu: 600m
limits: {}
# memory: 1G
# cpu: 600m
sidecars: []
sidecarVolumes: []
sidecarPVCs: []
sidecarResources:
requests: {}
limits: {}
nodeSelector: {}
# disktype: ssd
# topologySpreadConstraints:
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: percona-xtradb-cluster-operator
# maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: DoNotSchedule
# serviceAccountName: percona-xtradb-cluster-operator-workload
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/e2e-az-name
# operator: In
# values:
# - e2e-az1
# - e2e-az2
tolerations: []
# - key: "node.alpha.kubernetes.io/unreachable"
# operator: "Exists"
# effect: "NoExecute"
# tolerationSeconds: 6000
gracePeriod: 30
# lifecycle:
# preStop:
# exec:
# command: [ "/bin/true" ]
# postStart:
# exec:
# command: [ "/bin/true" ]
# only one of `maxUnavailable` or `minAvailable` can be set.
podDisruptionBudget:
maxUnavailable: 1
# minAvailable: 0
persistence:
enabled: true
## Percona data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 8Gi
# A custom Kubernetes Security Context for a Container to be used instead of the default one
# containerSecurityContext:
# privileged: false
# A custom Kubernetes Security Context for a Pod to be used instead of the default one
# podSecurityContext:
# fsGroup: 1001
# supplementalGroups:
# - 1001
logcollector:
enabled: true
image: percona/percona-xtradb-cluster-operator:1.15.0-logcollector-fluentbit3.1.4
# imagePullPolicy: Always
imagePullSecrets: []
# configuration: |
# [OUTPUT]
# Name es
# Match *
# Host 192.168.2.3
# Port 9200
# Index my_index
# Type my_type
resources:
requests:
memory: 100M
cpu: 200m
limits: {}
# A custom Kubernetes Security Context for a Container to be used instead of the default one
# containerSecurityContext:
# privileged: false
pmm:
enabled: false
image:
repository: percona/pmm-client
tag: 2.42.0
# imagePullPolicy: Always
imagePullSecrets: []
serverHost: monitoring-service
serverUser: admin
# pxcParams: "--disable-tablestats-limit=2000"
# proxysqlParams: "--custom-labels=CUSTOM-LABELS"
# containerSecurityContext:
# privileged: false
resources:
requests:
memory: 150M
cpu: 300m
limits: {}
backup:
enabled: true
# allowParallel: true
image:
repository: percona/percona-xtradb-cluster-operator
tag: 1.15.0-pxc8.0-backup-pxb8.0.35
# backoffLimit: 6
# serviceAccountName: percona-xtradb-cluster-operator
# imagePullPolicy: Always
imagePullSecrets: []
# - name: private-registry-credentials
pitr:
enabled: false
storageName: s3-us-west-binlogs
timeBetweenUploads: 60
timeoutSeconds: 60
resources:
requests: {}
limits: {}
storages: {}
# fs-pvc:
# type: filesystem
# volume:
# persistentVolumeClaim:
# storageClassName: standard
# accessModes: ["ReadWriteOnce"]
# resources:
# requests:
# storage: 6Gi
# s3-us-west:
# type: s3
# verifyTLS: true
# nodeSelector:
# storage: tape
# backupWorker: 'True'
# resources:
# requests:
# memory: 1G
# cpu: 600m
# topologySpreadConstraints:
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: percona-xtradb-cluster-operator
# maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: DoNotSchedule
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: backupWorker
# operator: In
# values:
# - 'True'
# tolerations:
# - key: "backupWorker"
# operator: "Equal"
# value: "True"
# effect: "NoSchedule"
# annotations:
# testName: scheduled-backup
# labels:
# backupWorker: 'True'
# schedulerName: 'default-scheduler'
# priorityClassName: 'high-priority'
# containerSecurityContext:
# privileged: true
# podSecurityContext:
# fsGroup: 1001
# supplementalGroups: [1001, 1002, 1003]
# containerOptions:
# env:
# - name: VERIFY_TLS
# value: "false"
# args:
# xtrabackup:
# - "--someflag=abc"
# xbcloud:
# - "--someflag=abc"
# xbstream:
# - "--someflag=abc"
# s3:
# bucket: S3-BACKUP-BUCKET-NAME-HERE
# # Use credentialsSecret OR credentialsAccessKey/credentialsSecretKey
# credentialsSecret: my-cluster-name-backup-s3
# #credentialsAccessKey: REPLACE-WITH-AWS-ACCESS-KEY
# #credentialsSecretKey: REPLACE-WITH-AWS-SECRET-KEY
# region: us-west-2
# endpointUrl: https://sfo2.digitaloceanspaces.com
# s3-us-west-binlogs:
# type: s3
# s3:
# bucket: S3-BACKUP-BUCKET-NAME-HERE/DIRECTORY
# credentialsSecret: my-cluster-name-backup-s3
# region: us-west-2
# endpointUrl: https://sfo2.digitaloceanspaces.com
# azure-blob:
# type: azure
# azure:
# credentialsSecret: azure-secret
# container: test
# endpointUrl: https://accountName.blob.core.windows.net
# storageClass: Hot
schedule: []
# - name: "daily-backup"
# schedule: "0 0 * * *"
# keep: 5
# storageName: fs-pvc
# - name: "sat-night-backup"
# schedule: "0 0 * * 6"
# keep: 3
# storageName: s3-us-west
secrets:
## You should override these with your own values, or set clusterSecretName to use a pre-existing secret.
# passwords:
# root: insecure-root-password
# xtrabackup: insecure-xtrabackup-password
# monitor: insecure-monitor-password
# clustercheck: insecure-clustercheck-password
# proxyadmin: insecure-proxyadmin-password
# pmmserver: insecure-pmmserver-password
# # If pmmserverkey is set, the pmmserver password will not be included
# # pmmserverkey: set-pmmserver-api-key
# operator: insecure-operator-password
# replication: insecure-replication-password
## If you are using `cert-manager` you can skip this next section.
tls: {}
# This should be the name of a secret that contains certificates.
# It should have the following keys: `ca.crt`, `tls.crt`, `tls.key`.
# If not set, the Helm chart will attempt to create certificates
# for you (not recommended for production):
# cluster:
# This should be the name of a secret that contains certificates.
# It should have the following keys: `ca.crt`, `tls.crt`, `tls.key`.
# If not set, the Helm chart will attempt to create certificates
# for you (not recommended for production):
# internal:
# logCollector: cluster1-log-collector-secrets
# vault: keyring-secret-vault

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,22 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona Operator For MySQL based on Percona XtraDB
Cluster
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: pxc-operator
apiVersion: v2
appVersion: 1.15.0
description: A Helm chart for deploying the Percona Operator for MySQL (based on Percona
XtraDB Cluster)
home: https://docs.percona.com/percona-operator-for-mysql/pxc/
icon: file://assets/icons/pxc-operator.png
kubeVersion: '>=1.21-0'
maintainers:
- email: tomislav.plavcic@percona.com
name: tplavcic
- email: natalia.marukovich@percona.com
name: nmarukovich
- email: sergey.pronin@percona.com
name: spron-in
name: pxc-operator
version: 1.15.0

View File

@ -0,0 +1,13 @@
Copyright 2019 Paul Czarkowski <username.taken@gmail.com>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,64 @@
# Percona Operator For MySQL
[Percona XtraDB Cluster (PXC)](https://www.percona.com/doc/percona-xtradb-cluster/LATEST/index.html) is a database clustering solution for MySQL. Percona Operator For MySQL allows users to deploy and manage Percona XtraDB Clusters on Kubernetes.
Useful links
* [Operator Github repository](https://github.com/percona/percona-xtradb-cluster-operator)
* [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-pxc/index.html)
## Pre-requisites
* Kubernetes 1.28+
* Helm v3
# Installation
This chart deploys the Operator Pod, which you can then use to create Percona XtraDB Clusters in Kubernetes.
## Installing the Chart
To install the chart with the `my-operator` release name using a dedicated namespace (recommended):
```sh
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install my-operator percona/pxc-operator --version 1.15.0 --namespace my-namespace
```
The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ------------------------------- | -----------------------------------------------------------------------------------------------| -------------------------------------------------|
| `image` | PXC Operator Container image full path | `percona/percona-xtradb-cluster-operator:1.15.0` |
| `imagePullPolicy` | PXC Operator Container pull policy | `Always` |
| `containerSecurityContext` | PXC Operator Container securityContext | `{}` |
| `imagePullSecrets` | PXC Operator Pod pull secret | `[]` |
| `replicaCount` | PXC Operator Pod quantity | `1` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `podAnnotations` | Operator Pod user-defined annotations | `{}` |
| `resources` | Resource requests and limits | `{}` |
| `nodeSelector` | Labels for Pod assignment | `{}` |
| `logStructured` | Force PXC operator to print JSON-wrapped log messages | `false` |
| `logLevel` | PXC Operator logging level | `INFO` |
| `disableTelemetry` | Disable sending PXC Operator telemetry data to Percona | `false` |
| `watchAllNamespaces` | Watch all namespaces (Install cluster-wide) | `false` |
| `watchNamespace`                | Comma-separated list of namespace(s) to watch when different from the release namespace        | `""`                                              |
| `createNamespace` | Create the watched namespace(s) | `false` |
| `rbac.create`                   | If false, RBAC will not be created. RBAC resources will need to be created manually             | `true`                                            |
| `serviceAccount.create`         | If false, the ServiceAccounts will not be created. The ServiceAccounts must be created manually | `true`                                            |
| `extraEnvVars` | Custom pod environment variables | `[]` |
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided:
```sh
helm install pxc-operator -f values.yaml percona/pxc-operator
```
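
As an illustration, such a `values.yaml` could override a handful of the parameters documented above (all values here are examples, not recommendations):

```yaml
# values.yaml -- example overrides for the pxc-operator chart
watchNamespace: "db-namespace"   # watch one namespace instead of the release namespace
createNamespace: true            # let Helm create the watched namespace
logLevel: "DEBUG"
resources:
  requests:
    cpu: 100m
    memory: 64Mi
```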
## Deploy the database
To deploy Percona XtraDB Cluster, run the following command:
```sh
helm install my-db percona/pxc-db
```
See more about Percona XtraDB Cluster in its chart [here](https://github.com/percona/percona-helm-charts/blob/main/charts/pxc-db) or in the [Helm chart installation guide](https://www.percona.com/doc/kubernetes-operator-for-pxc/helm.html).

File diff suppressed because it is too large

View File

@ -0,0 +1,16 @@
1. Percona Operator for MySQL is deployed.
Check if the operator Pod is running:
kubectl get pods -l app.kubernetes.io/name={{ template "pxc-operator.name" . }} --namespace {{ .Release.Namespace }}
Troubleshoot by checking the logs:
export POD=$(kubectl get pods -l app.kubernetes.io/name={{ template "pxc-operator.name" . }} --namespace {{ .Release.Namespace }} --output name)
kubectl logs $POD --namespace={{ .Release.Namespace }}
2. Deploy the cluster with the following command:
helm install my-db percona/pxc-db --namespace={{ .Release.Namespace }}
Read more in our documentation: https://docs.percona.com/percona-operator-for-mysql/pxc/

View File

@ -0,0 +1,56 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "pxc-operator.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "pxc-operator.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "pxc-operator.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "pxc-operator.labels" -}}
app.kubernetes.io/name: {{ include "pxc-operator.name" . }}
helm.sh/chart: {{ include "pxc-operator.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Returns the image URI according to the parameters set
*/}}
{{- define "pxc-operator.image" -}}
{{- if .Values.image }}
{{- .Values.image }}
{{- else }}
{{- printf "%s:%s" .Values.operatorImageRepository .Chart.AppVersion }}
{{- end }}
{{- end -}}
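
As a quick illustration of the helper above (assuming the chart's `appVersion` of 1.15.0 and the default `operatorImageRepository`; the override value is hypothetical):

```yaml
# image: ""                    -> percona/percona-xtradb-cluster-operator:1.15.0
# image: "myrepo/operator:dev" -> myrepo/operator:dev  (full override)
```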

View File

@ -0,0 +1,119 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "pxc-operator.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "pxc-operator.labels" . | indent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/component: operator
app.kubernetes.io/name: {{ include "pxc-operator.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/part-of: {{ include "pxc-operator.name" . }}
strategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
app.kubernetes.io/component: operator
app.kubernetes.io/name: {{ include "pxc-operator.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/part-of: {{ include "pxc-operator.name" . }}
spec:
serviceAccountName: {{ include "pxc-operator.fullname" . }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
terminationGracePeriodSeconds: 600
containers:
- name: percona-xtradb-cluster-operator
image: {{ include "pxc-operator.image" . }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
ports:
- containerPort: 8080
name: metrics
protocol: TCP
command:
- percona-xtradb-cluster-operator
{{- if .Values.containerSecurityContext.readOnlyRootFilesystem }}
volumeMounts:
- name: tmpdir
mountPath: /tmp
{{- end }}
env:
- name: WATCH_NAMESPACE
{{- if .Values.watchAllNamespaces }}
value: ""
{{- else }}
value: "{{ default .Release.Namespace .Values.watchNamespace }}"
{{- end }}
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: {{ include "pxc-operator.fullname" . }}
- name: LOG_STRUCTURED
value: "{{ .Values.logStructured }}"
- name: LOG_LEVEL
value: "{{ .Values.logLevel }}"
- name: DISABLE_TELEMETRY
value: "{{ .Values.disableTelemetry }}"
{{- if .Values.extraEnvVars }}
{{- toYaml .Values.extraEnvVars | nindent 12 }}
{{- end }}
livenessProbe:
failureThreshold: 3
httpGet:
path: /metrics
port: metrics
scheme: HTTP
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.containerSecurityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.containerSecurityContext.readOnlyRootFilesystem }}
volumes:
- name: tmpdir
emptyDir: {}
{{- end }}
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
---
apiVersion: v1
kind: Service
metadata:
name: percona-xtradb-cluster-operator
namespace: {{ .Release.Namespace }}
labels:
name: percona-xtradb-cluster-operator
spec:
ports:
- port: 443
targetPort: 9443
selector:
app.kubernetes.io/name: {{ include "pxc-operator.name" . }}
{{- end }}
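
As a hedged example, the `readOnlyRootFilesystem` branches above (the `/tmp` emptyDir mount and volume) activate with values such as these illustrative settings:

```yaml
containerSecurityContext:
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  allowPrivilegeEscalation: false
```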

View File

@ -0,0 +1,11 @@
{{ if and .Values.watchNamespace .Values.createNamespace }}
{{ range ( split "," .Values.watchNamespace ) }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ trim . }}
annotations:
helm.sh/resource-policy: keep
---
{{ end }}
{{ end }}
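
For illustration, rendering this template with `watchNamespace="dev,qa"` and `createNamespace=true` (hypothetical namespace names) produces one Namespace per comma-separated entry:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  annotations:
    helm.sh/resource-policy: keep
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
  annotations:
    helm.sh/resource-policy: keep
```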

View File

@ -0,0 +1,37 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "pxc-operator.fullname" . }}
namespace: {{ .Release.Namespace }}
---
{{- end }}
{{- if .Values.rbac.create }}
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
kind: ClusterRoleBinding
{{- else }}
kind: RoleBinding
{{- end }}
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "pxc-operator.fullname" . }}
{{- if not (or .Values.watchNamespace .Values.watchAllNamespaces) }}
namespace: {{ .Release.Namespace }}
{{- end }}
labels:
{{ include "pxc-operator.labels" . | indent 4 }}
subjects:
- kind: ServiceAccount
name: {{ include "pxc-operator.fullname" . }}
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
namespace: {{ .Release.Namespace }}
{{- end }}
roleRef:
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
name: {{ include "pxc-operator.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}

View File

@ -0,0 +1,142 @@
{{- if .Values.rbac.create }}
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "pxc-operator.fullname" . }}
{{- if not (or .Values.watchNamespace .Values.watchAllNamespaces) }}
namespace: {{ .Release.Namespace }}
{{- end }}
labels:
{{ include "pxc-operator.labels" . | indent 4 }}
rules:
- apiGroups:
- pxc.percona.com
resources:
- perconaxtradbclusters
- perconaxtradbclusters/status
- perconaxtradbclusterbackups
- perconaxtradbclusterbackups/status
- perconaxtradbclusterrestores
- perconaxtradbclusterrestores/status
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
{{- end }}
- apiGroups:
- ""
resources:
- pods
- pods/exec
- pods/log
- configmaps
- services
- persistentvolumeclaims
- secrets
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
- deployments
- replicasets
- statefulsets
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- batch
resources:
- jobs
- cronjobs
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- events.k8s.io
- ""
resources:
- events
verbs:
- create
- patch
- get
- list
- watch
- apiGroups:
- certmanager.k8s.io
- cert-manager.io
resources:
- issuers
- certificates
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- deletecollection
{{- end }}

View File

@ -0,0 +1,69 @@
# Default values for pxc-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
operatorImageRepository: percona/percona-xtradb-cluster-operator
imagePullPolicy: IfNotPresent
image: ""
# set if you want to specify a namespace to watch
# defaults to `.Release.Namespace` if left blank
# multiple namespaces can be specified, separated by commas
# watchNamespace:
# set to true if you want Helm to create the watched namespace(s)
# createNamespace: false
# set to true if the operator should be deployed in cluster-wide mode (defaults to false)
watchAllNamespaces: false
# rbac: settings for deployer RBAC creation
rbac:
# rbac.create: if false, RBAC resources must already be in place (create them manually)
create: true
# serviceAccount: settings for Service Accounts used by the deployer
serviceAccount:
# serviceAccount.create: Whether to create the Service Accounts or not
create: true
# set if you want to use a different operator name
# defaults to `percona-xtradb-cluster-operator`
# operatorName:
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
resources:
# We usually recommend not specifying default resources and leaving this as a
# conscious choice for the user. This also increases the chance that charts run
# in environments with limited resources, such as Minikube. If you don't want to
# specify resources, comment out the following lines and add curly braces after 'resources:'.
limits:
cpu: 200m
memory: 500Mi
requests:
cpu: 100m
memory: 20Mi
containerSecurityContext: {}
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
logStructured: false
logLevel: "INFO"
disableTelemetry: false
extraEnvVars: []
# - name: http_proxy
# value: "example-proxy-http"
# - name: https_proxy
# value: "example-proxy-https"

View File

@ -30166,6 +30166,31 @@ entries:
- assets/percona/psmdb-operator-1.14.3.tgz
version: 1.14.3
pxc-db:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona XtraDB Cluster
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: pxc-db
apiVersion: v2
appVersion: 1.15.0
created: "2024-08-22T00:49:29.364375893Z"
description: A Helm chart for installing Percona XtraDB Cluster Databases using
the PXC Operator.
digest: 7f2aa92c5c2326c0b44142391ec2411e2dadfb8d42de7c039e48c6f7ec25e9c5
home: https://www.percona.com/doc/kubernetes-operator-for-pxc/kubernetes.html
icon: file://assets/icons/pxc-db.png
kubeVersion: '>=1.21-0'
maintainers:
- email: tomislav.plavcic@percona.com
name: tplavcic
- email: sergey.pronin@percona.com
name: spron-in
- email: natalia.marukovich@percona.com
name: nmarukovich
name: pxc-db
urls:
- assets/percona/pxc-db-1.15.0.tgz
version: 1.15.0
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona XtraDB Cluster
@ -30456,6 +30481,32 @@ entries:
- assets/percona/pxc-db-1.12.3.tgz
version: 1.12.3
pxc-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona Operator For MySQL based on Percona
XtraDB Cluster
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: pxc-operator
apiVersion: v2
appVersion: 1.15.0
created: "2024-08-22T00:49:29.377176385Z"
description: A Helm chart for deploying the Percona Operator for MySQL (based
on Percona XtraDB Cluster)
digest: 2d63941c128d3fd6be857cf0c00a6e4bd252fd3544f2d9999bef395c99f1192e
home: https://docs.percona.com/percona-operator-for-mysql/pxc/
icon: file://assets/icons/pxc-operator.png
kubeVersion: '>=1.21-0'
maintainers:
- email: tomislav.plavcic@percona.com
name: tplavcic
- email: natalia.marukovich@percona.com
name: nmarukovich
- email: sergey.pronin@percona.com
name: spron-in
name: pxc-operator
urls:
- assets/percona/pxc-operator-1.15.0.tgz
version: 1.15.0
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona Operator For MySQL based on Percona
@ -40108,4 +40159,4 @@ entries:
urls:
- assets/netfoundry/ziti-host-1.5.1.tgz
version: 1.5.1
generated: "2024-08-21T00:47:47.792456339Z"
generated: "2024-08-22T00:49:25.611738404Z"