Merge pull request #562 from samuelattwood/percona

Charts CI - Adding Percona
pull/564/head
Samuel Attwood 2022-11-03 13:54:36 -04:00 committed by GitHub
commit 6a45a7b378
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
27 changed files with 34382 additions and 0 deletions

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,18 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona Server for MongoDB
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: psmdb-db
apiVersion: v2
appVersion: 1.13.0
description: A Helm chart for installing Percona Server MongoDB Cluster Databases
using the PSMDB Operator.
home: https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html
icon: https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/operator.png
maintainers:
- email: ivan.pylypenko@percona.com
name: cap1984
- email: tomislav.plavcic@percona.com
name: tplavcic
name: psmdb-db
version: 1.13.0

View File

@ -0,0 +1,211 @@
# Percona Server for MongoDB
This chart deploys Percona Server for MongoDB Cluster on Kubernetes controlled by Percona Operator for MongoDB.
Useful links:
- [Operator Github repository](https://github.com/percona/percona-server-mongodb-operator)
- [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html)
## Prerequisites
* Percona Operator for MongoDB running in your Kubernetes cluster. See installation details [here](https://github.com/percona/percona-helm-charts/blob/main/charts/psmdb-operator) or in the [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/helm.html).
* Kubernetes 1.19+
* Helm v3
## Chart Details
This chart will deploy Percona Server for MongoDB Cluster in Kubernetes. It will create a Custom Resource, and the Operator will trigger the creation of corresponding Kubernetes primitives: StatefulSets, Pods, Secrets, etc.
## Installing the Chart
To install the chart with the `psmdb` release name using a dedicated namespace (recommended):
```sh
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install psmdb percona/psmdb-db --version 1.13.0 --namespace my-namespace
```
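For repeatable installs, the same overrides can live in a values file instead of `--set` flags. A minimal sketch (the file name is arbitrary; the keys come from the parameter table below):

```yaml
# values-custom.yaml (hypothetical name) -- overrides merged over the chart defaults
replsets:
  - name: rs0
    size: 3
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 20Gi
backup:
  enabled: false
```

Then pass it with `helm install psmdb percona/psmdb-db -f values-custom.yaml --namespace my-namespace`. Note that Helm replaces list values such as `replsets` wholesale rather than merging them element by element, so restate the whole list item you are changing.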
The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ------------------------------- | ------------------------------------------------------------------------------| ------------------------------------------|
| `crVersion` | CR Cluster Manifest version | `1.13.0` |
| `pause` | Stop PSMDB Database safely | `false` |
| `unmanaged` | Start cluster and don't manage it (cross cluster replication) | `false` |
| `allowUnsafeConfigurations` | Allows forbidden configurations, like an even number of PSMDB cluster Pods | `false` |
| `clusterServiceDNSSuffix` | The (non-standard) cluster domain to be used as a suffix of the Service name | `""` |
| `clusterServiceDNSMode` | Mode for the cluster service dns (Internal/ServiceMesh) | `""` |
| `multiCluster.enabled` | Enable Multi Cluster Services (MCS) cluster mode | `false` |
| `multiCluster.DNSSuffix` | The cluster domain to be used as a suffix for multi-cluster Services used by Kubernetes | `""` |
| `updateStrategy` | Regulates how PSMDB Cluster Pods are updated after a new image is set | `SmartUpdate` |
| `upgradeOptions.versionServiceEndpoint` | Endpoint for actual PSMDB Versions provider | `https://check.percona.com/versions/` |
| `upgradeOptions.apply` | PSMDB image to apply from the version service: `recommended`, `latest`, or an exact version like `4.4.2-4` | `disabled` |
| `upgradeOptions.schedule` | Cron formatted time to execute the update | `"0 2 * * *"` |
| `upgradeOptions.setFCV` | Set feature compatibility version on major upgrade | `false` |
| `finalizers:delete-psmdb-pvc` | Set this if you want to delete database persistent volumes on cluster deletion | `[]` |
| `finalizers:delete-psmdb-pods-in-order` | Set this if you want to delete PSMDB pods in order (primary last) | `[]` |
| `image.repository` | PSMDB Container image repository | `percona/percona-server-mongodb` |
| `image.tag` | PSMDB Container image tag | `5.0.11-10` |
| `imagePullPolicy` | The policy used to update images | `Always` |
| `imagePullSecrets` | PSMDB Container pull secret | `[]` |
| `tls.certValidityDuration` | The validity duration of the external certificate for cert manager | `""` |
| `secrets` | Operator secrets section | `{}` |
| `pmm.enabled` | Enable integration with [Percona Monitoring and Management software](https://www.percona.com/blog/2020/07/23/using-percona-kubernetes-operators-with-percona-monitoring-and-management/) | `false` |
| `pmm.image.repository` | PMM Container image repository | `percona/pmm-client` |
| `pmm.image.tag` | PMM Container image tag | `2.30.0` |
| `pmm.serverHost` | PMM server related K8S service hostname | `monitoring-service` |
| `replsets[0].name` | ReplicaSet name | `rs0` |
| `replsets[0].size` | ReplicaSet size (pod quantity) | `3` |
| `replsets[0].externalNodes` | ReplicaSet external nodes (cross cluster replication) | `[]` |
| `replsets[0].configuration` | Custom config for mongod in replica set | `""` |
| `replsets[0].antiAffinityTopologyKey` | ReplicaSet Pod affinity | `kubernetes.io/hostname` |
| `replsets[0].tolerations` | ReplicaSet Pod tolerations | `[]` |
| `replsets[0].priorityClass` | ReplicaSet Pod priorityClassName | `""` |
| `replsets[0].annotations` | ReplicaSet Pod annotations | `{}` |
| `replsets[0].labels` | ReplicaSet Pod labels | `{}` |
| `replsets[0].nodeSelector` | ReplicaSet Pod nodeSelector labels | `{}` |
| `replsets[0].livenessProbe` | ReplicaSet Pod livenessProbe structure | `{}` |
| `replsets[0].readinessProbe` | ReplicaSet Pod readinessProbe structure | `{}` |
| `replsets[0].storage` | Set cacheSizeRatio or other custom MongoDB storage options | `{}` |
| `replsets[0].podSecurityContext` | Set the security context for a Pod | `{}` |
| `replsets[0].containerSecurityContext` | Set the security context for a Container | `{}` |
| `replsets[0].runtimeClass` | ReplicaSet Pod runtimeClassName | `""` |
| `replsets[0].sidecars` | ReplicaSet Pod sidecars | `{}` |
| `replsets[0].sidecarVolumes` | ReplicaSet Pod sidecar volumes | `[]` |
| `replsets[0].sidecarPVCs` | ReplicaSet Pod sidecar PVCs | `[]` |
| `replsets[0].podDisruptionBudget.maxUnavailable` | ReplicaSet failed Pods maximum quantity | `1` |
| `replsets[0].expose.enabled` | Allow access to replicaSet from outside of Kubernetes | `false` |
| `replsets[0].expose.exposeType` | Network service access point type | `ClusterIP` |
| `replsets[0].expose.loadBalancerSourceRanges` | Limit client IP ranges that can access the Load Balancer | `{}` |
| `replsets[0].expose.serviceAnnotations` | ReplicaSet service annotations | `{}` |
| `replsets[0].expose.serviceLabels` | ReplicaSet service labels | `{}` |
| `replsets[0].nonvoting.enabled` | Add MongoDB nonvoting Pods | `false` |
| `replsets[0].nonvoting.podSecurityContext` | Set the security context for a Pod | `{}` |
| `replsets[0].nonvoting.containerSecurityContext` | Set the security context for a Container | `{}` |
| `replsets[0].nonvoting.size` | Number of nonvoting Pods | `1` |
| `replsets[0].nonvoting.configuration` | Custom config for mongod nonvoting member | `""` |
| `replsets[0].nonvoting.antiAffinityTopologyKey` | Nonvoting Pods affinity | `kubernetes.io/hostname` |
| `replsets[0].nonvoting.tolerations` | Nonvoting Pod tolerations | `[]` |
| `replsets[0].nonvoting.priorityClass` | Nonvoting Pod priorityClassName | `""` |
| `replsets[0].nonvoting.annotations` | Nonvoting Pod annotations | `{}` |
| `replsets[0].nonvoting.labels` | Nonvoting Pod labels | `{}` |
| `replsets[0].nonvoting.nodeSelector` | Nonvoting Pod nodeSelector labels | `{}` |
| `replsets[0].nonvoting.podDisruptionBudget.maxUnavailable` | Nonvoting failed Pods maximum quantity | `1` |
| `replsets[0].nonvoting.resources` | Nonvoting Pods resource requests and limits | `{}` |
| `replsets[0].nonvoting.volumeSpec` | Nonvoting Pods storage resources | `{}` |
| `replsets[0].nonvoting.volumeSpec.emptyDir` | Nonvoting Pods emptyDir K8S storage | `{}` |
| `replsets[0].nonvoting.volumeSpec.hostPath` | Nonvoting Pods hostPath K8S storage | |
| `replsets[0].nonvoting.volumeSpec.hostPath.path` | Nonvoting Pods hostPath K8S storage path | `""` |
| `replsets[0].nonvoting.volumeSpec.pvc` | Nonvoting Pods PVC request parameters | |
| `replsets[0].nonvoting.volumeSpec.pvc.storageClassName` | Nonvoting Pods PVC target storageClass | `""` |
| `replsets[0].nonvoting.volumeSpec.pvc.accessModes` | Nonvoting Pods PVC access policy | `[]` |
| `replsets[0].nonvoting.volumeSpec.pvc.resources.requests.storage` | Nonvoting Pods PVC storage size | `3Gi` |
| `replsets[0].arbiter.enabled` | Create MongoDB arbiter service | `false` |
| `replsets[0].arbiter.size` | MongoDB arbiter Pod quantity | `1` |
| `replsets[0].arbiter.antiAffinityTopologyKey` | MongoDB arbiter Pod affinity | `kubernetes.io/hostname` |
| `replsets[0].arbiter.tolerations` | MongoDB arbiter Pod tolerations | `[]` |
| `replsets[0].arbiter.priorityClass` | MongoDB arbiter priorityClassName | `""` |
| `replsets[0].arbiter.annotations` | MongoDB arbiter Pod annotations | `{}` |
| `replsets[0].arbiter.labels` | MongoDB arbiter Pod labels | `{}` |
| `replsets[0].arbiter.nodeSelector` | MongoDB arbiter Pod nodeSelector labels | `{}` |
| `replsets[0].schedulerName` | ReplicaSet Pod schedulerName | `""` |
| `replsets[0].resources` | ReplicaSet Pods resource requests and limits | `{}` |
| `replsets[0].volumeSpec` | ReplicaSet Pods storage resources | `{}` |
| `replsets[0].volumeSpec.emptyDir` | ReplicaSet Pods emptyDir K8S storage | `{}` |
| `replsets[0].volumeSpec.hostPath` | ReplicaSet Pods hostPath K8S storage | |
| `replsets[0].volumeSpec.hostPath.path` | ReplicaSet Pods hostPath K8S storage path | `""` |
| `replsets[0].volumeSpec.pvc` | ReplicaSet Pods PVC request parameters | |
| `replsets[0].volumeSpec.pvc.storageClassName` | ReplicaSet Pods PVC target storageClass | `""` |
| `replsets[0].volumeSpec.pvc.accessModes` | ReplicaSet Pods PVC access policy | `[]` |
| `replsets[0].volumeSpec.pvc.resources.requests.storage` | ReplicaSet Pods PVC storage size | `3Gi` |
| `sharding.enabled` | Enable sharding setup | `true` |
| `sharding.configrs.size` | Config ReplicaSet size (pod quantity) | `3` |
| `sharding.configrs.externalNodes` | Config ReplicaSet external nodes (cross cluster replication) | `[]` |
| `sharding.configrs.configuration` | Custom config for mongod in config replica set | `""` |
| `sharding.configrs.antiAffinityTopologyKey` | Config ReplicaSet Pod affinity | `kubernetes.io/hostname` |
| `sharding.configrs.tolerations` | Config ReplicaSet Pod tolerations | `[]` |
| `sharding.configrs.priorityClass` | Config ReplicaSet Pod priorityClassName | `""` |
| `sharding.configrs.annotations` | Config ReplicaSet Pod annotations | `{}` |
| `sharding.configrs.labels` | Config ReplicaSet Pod labels | `{}` |
| `sharding.configrs.nodeSelector` | Config ReplicaSet Pod nodeSelector labels | `{}` |
| `sharding.configrs.livenessProbe` | Config ReplicaSet Pod livenessProbe structure | `{}` |
| `sharding.configrs.readinessProbe` | Config ReplicaSet Pod readinessProbe structure | `{}` |
| `sharding.configrs.storage` | Set cacheSizeRatio or other custom MongoDB storage options | `{}` |
| `sharding.configrs.podSecurityContext` | Set the security context for a Pod | `{}` |
| `sharding.configrs.containerSecurityContext` | Set the security context for a Container | `{}` |
| `sharding.configrs.runtimeClass` | Config ReplicaSet Pod runtimeClassName | `""` |
| `sharding.configrs.sidecars` | Config ReplicaSet Pod sidecars | `{}` |
| `sharding.configrs.sidecarVolumes` | Config ReplicaSet Pod sidecar volumes | `[]` |
| `sharding.configrs.sidecarPVCs` | Config ReplicaSet Pod sidecar PVCs | `[]` |
| `sharding.configrs.podDisruptionBudget.maxUnavailable` | Config ReplicaSet failed Pods maximum quantity | `1` |
| `sharding.configrs.expose.enabled` | Allow access to cfg replica from outside of Kubernetes | `false` |
| `sharding.configrs.expose.exposeType` | Network service access point type | `ClusterIP` |
| `sharding.configrs.expose.loadBalancerSourceRanges` | Limit client IP ranges that can access the Load Balancer | `{}` |
| `sharding.configrs.expose.serviceAnnotations` | Config ReplicaSet service annotations | `{}` |
| `sharding.configrs.expose.serviceLabels` | Config ReplicaSet service labels | `{}` |
| `sharding.configrs.resources.limits.cpu` | Config ReplicaSet resource limits CPU | `300m` |
| `sharding.configrs.resources.limits.memory` | Config ReplicaSet resource limits memory | `0.5G` |
| `sharding.configrs.resources.requests.cpu` | Config ReplicaSet resource requests CPU | `300m` |
| `sharding.configrs.resources.requests.memory` | Config ReplicaSet resource requests memory | `0.5G` |
| `sharding.configrs.volumeSpec.hostPath` | Config ReplicaSet hostPath K8S storage | |
| `sharding.configrs.volumeSpec.hostPath.path` | Config ReplicaSet hostPath K8S storage path | `""` |
| `sharding.configrs.volumeSpec.emptyDir` | Config ReplicaSet Pods emptyDir K8S storage | |
| `sharding.configrs.volumeSpec.pvc` | Config ReplicaSet Pods PVC request parameters | |
| `sharding.configrs.volumeSpec.pvc.storageClassName` | Config ReplicaSet Pods PVC storageClass | `""` |
| `sharding.configrs.volumeSpec.pvc.accessModes` | Config ReplicaSet Pods PVC access policy | `[]` |
| `sharding.configrs.volumeSpec.pvc.resources.requests.storage` | Config ReplicaSet Pods PVC storage size | `3Gi` |
| `sharding.mongos.size` | Mongos size (pod quantity) | `3` |
| `sharding.mongos.configuration` | Custom config for mongos | `""` |
| `sharding.mongos.antiAffinityTopologyKey` | Mongos Pods affinity | `kubernetes.io/hostname` |
| `sharding.mongos.tolerations` | Mongos Pods tolerations | `[]` |
| `sharding.mongos.priorityClass` | Mongos Pods priorityClassName | `""` |
| `sharding.mongos.annotations` | Mongos Pods annotations | `{}` |
| `sharding.mongos.labels` | Mongos Pods labels | `{}` |
| `sharding.mongos.nodeSelector` | Mongos Pods nodeSelector labels | `{}` |
| `sharding.mongos.livenessProbe` | Mongos Pod livenessProbe structure | `{}` |
| `sharding.mongos.readinessProbe` | Mongos Pod readinessProbe structure | `{}` |
| `sharding.mongos.podSecurityContext` | Set the security context for a Pod | `{}` |
| `sharding.mongos.containerSecurityContext` | Set the security context for a Container | `{}` |
| `sharding.mongos.runtimeClass` | Mongos Pod runtimeClassName | `""` |
| `sharding.mongos.sidecars` | Mongos Pod sidecars | `{}` |
| `sharding.mongos.sidecarVolumes` | Mongos Pod sidecar volumes | `[]` |
| `sharding.mongos.sidecarPVCs` | Mongos Pod sidecar PVCs | `[]` |
| `sharding.mongos.podDisruptionBudget.maxUnavailable` | Mongos failed Pods maximum quantity | `1` |
| `sharding.mongos.resources.limits.cpu` | Mongos Pods resource limits CPU | `300m` |
| `sharding.mongos.resources.limits.memory` | Mongos Pods resource limits memory | `0.5G` |
| `sharding.mongos.resources.requests.cpu` | Mongos Pods resource requests CPU | `300m` |
| `sharding.mongos.resources.requests.memory` | Mongos Pods resource requests memory | `0.5G` |
| `sharding.mongos.expose.exposeType` | Mongos service exposeType | `ClusterIP` |
| `sharding.mongos.expose.servicePerPod` | Create a separate ClusterIP Service for each mongos instance | `false` |
| `sharding.mongos.expose.loadBalancerSourceRanges` | Limit client IP ranges that can access the Load Balancer | `{}` |
| `sharding.mongos.expose.serviceAnnotations` | Mongos service annotations | `{}` |
| `sharding.mongos.expose.serviceLabels` | Mongos service labels | `{}` |
| `backup.enabled` | Enable backup PBM agent | `true` |
| `backup.annotations` | Backup job annotations | `{}` |
| `backup.restartOnFailure` | Backup Pods restart policy | `true` |
| `backup.image.repository` | PBM Container image repository | `percona/percona-backup-mongodb` |
| `backup.image.tag` | PBM Container image tag | `1.8.1` |
| `backup.serviceAccountName` | Run PBM Container under specified K8S SA | `percona-server-mongodb-operator` |
| `backup.storages` | Local/remote backup storages settings | `{}` |
| `backup.pitr.enabled` | Enable point in time recovery for backup | `false` |
| `backup.pitr.oplogSpanMin` | Number of minutes between the uploads of oplogs | `10` |
| `backup.pitr.compressionType` | The point-in-time-recovery chunks compression format | `""` |
| `backup.pitr.compressionLevel` | The point-in-time-recovery chunks compression level | `""` |
| `backup.tasks` | Backup working schedule | `{}` |
| `users` | PSMDB essential users | `{}` |
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
Note that multiple replica sets can be used only with sharding enabled.
## Examples
### Deploy a replica set with disabled backups and no mongos pods
This suits a dev PSMDB/MongoDB cluster, since it skips the backup and sharding setup.
```bash
$ helm install dev --namespace psmdb . \
--set runUid=1001 --set "replsets[0].volumeSpec.pvc.resources.requests.storage=20Gi" \
--set backup.enabled=false --set sharding.enabled=false
```
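### Deploy a sharded cluster with two replica sets

Since multiple replica sets require sharding, a second replica set can be added by extending the `replsets` array (the release and namespace names here are illustrative):

```bash
$ helm install prod --namespace psmdb . \
    --set sharding.enabled=true \
    --set "replsets[0].name=rs0" --set "replsets[0].size=3" \
    --set "replsets[1].name=rs1" --set "replsets[1].size=3"
```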

File diff suppressed because it is too large

View File

@ -0,0 +1,388 @@
# Default values for psmdb-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Platform type: kubernetes, openshift
# platform: kubernetes
# Cluster DNS Suffix
# clusterServiceDNSSuffix: svc.cluster.local
# clusterServiceDNSMode: "Internal"
finalizers:
## Set this if you want that operator deletes the primary pod last
- delete-psmdb-pods-in-order
## Set this if you want to delete database persistent volumes on cluster deletion
# - delete-psmdb-pvc
nameOverride: ""
fullnameOverride: ""
crVersion: 1.13.0
pause: false
unmanaged: false
allowUnsafeConfigurations: false
multiCluster:
enabled: false
# DNSSuffix: svc.clusterset.local
updateStrategy: SmartUpdate
upgradeOptions:
versionServiceEndpoint: https://check.percona.com
apply: disabled
schedule: "0 2 * * *"
setFCV: false
image:
repository: percona/percona-server-mongodb
tag: 5.0.11-10
imagePullPolicy: Always
# imagePullSecrets: []
# tls:
# # 90 days in hours
# certValidityDuration: 2160h
secrets: {}
# If you set users secret here, it will not be constructed from the values at the
# bottom of this file, but the operator will use existing one or generate random values
# users: my-cluster-name-secrets
# encryptionKey: my-cluster-name-mongodb-encryption-key
pmm:
enabled: false
image:
repository: percona/pmm-client
tag: 2.30.0
serverHost: monitoring-service
replsets:
- name: rs0
size: 3
# externalNodes:
# - host: 34.124.76.90
# - host: 34.124.76.91
# port: 27017
# votes: 0
# priority: 0
# - host: 34.124.76.92
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# livenessProbe:
# failureThreshold: 4
# initialDelaySeconds: 60
# periodSeconds: 30
# timeoutSeconds: 10
# startupDelaySeconds: 7200
# readinessProbe:
# failureThreshold: 8
# initialDelaySeconds: 10
# periodSeconds: 3
# successThreshold: 1
# timeoutSeconds: 2
# runtimeClassName: image-rc
# storage:
# engine: wiredTiger
# wiredTiger:
# engineConfig:
# cacheSizeRatio: 0.5
# directoryForIndexes: false
# journalCompressor: snappy
# collectionConfig:
# blockCompressor: snappy
# indexConfig:
# prefixCompression: true
# inMemory:
# engineConfig:
# inMemorySizeRatio: 0.5
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo $(date -u) 'test' >> /dev/null; sleep 5; done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# - mountPath: /secret
# name: sidecar-secret
# - mountPath: /configmap
# name: sidecar-config
# sidecarVolumes:
# - name: sidecar-secret
# secret:
# secretName: mysecret
# - name: sidecar-config
# configMap:
# name: myconfigmap
# sidecarPVCs:
# - apiVersion: v1
# kind: PersistentVolumeClaim
# metadata:
# name: sidecar-volume-claim
# spec:
# resources:
# requests:
# storage: 1Gi
# volumeMode: Filesystem
# accessModes:
# - ReadWriteOnce
podDisruptionBudget:
maxUnavailable: 1
expose:
enabled: false
exposeType: ClusterIP
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# serviceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
nonvoting:
enabled: false
# podSecurityContext: {}
# containerSecurityContext: {}
size: 3
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
pvc:
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
arbiter:
enabled: false
size: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# schedulerName: ""
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
pvc:
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
sharding:
enabled: true
configrs:
size: 3
# externalNodes:
# - host: 34.124.76.90
# - host: 34.124.76.91
# port: 27017
# votes: 0
# priority: 0
# - host: 34.124.76.92
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# livenessProbe: {}
# readinessProbe: {}
# runtimeClassName: image-rc
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo $(date -u) 'test' >> /dev/null; sleep 5; done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# sidecarPVCs: []
# sidecarVolumes: []
podDisruptionBudget:
maxUnavailable: 1
expose:
enabled: false
exposeType: ClusterIP
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# serviceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
# type: Directory
pvc:
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
mongos:
size: 3
# configuration: |
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# livenessProbe: {}
# readinessProbe: {}
# runtimeClassName: image-rc
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo $(date -u) 'test' >> /dev/null; sleep 5; done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# sidecarPVCs: []
# sidecarVolumes: []
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
expose:
exposeType: ClusterIP
# servicePerPod: true
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# serviceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# auditLog:
# destination: file
# format: BSON
# filter: '{}'
backup:
enabled: true
image:
repository: percona/percona-backup-mongodb
tag: 1.8.1
serviceAccountName: percona-server-mongodb-operator
# annotations:
# iam.amazonaws.com/role: role-arn
# resources:
# limits:
# cpu: "300m"
# memory: "0.5G"
# requests:
# cpu: "300m"
# memory: "0.5G"
storages:
# s3-us-west:
# type: s3
# s3:
# bucket: S3-BACKUP-BUCKET-NAME-HERE
# credentialsSecret: my-cluster-name-backup-s3
# region: us-west-2
# prefix: ""
# uploadPartSize: 10485760
# maxUploadParts: 10000
# storageClass: STANDARD
# insecureSkipTLSVerify: false
# minio:
# type: s3
# s3:
# bucket: MINIO-BACKUP-BUCKET-NAME-HERE
# region: us-east-1
# credentialsSecret: my-cluster-name-backup-minio
# endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
# prefix: ""
# azure-blob:
# type: azure
# azure:
# container: CONTAINER-NAME
# prefix: PREFIX-NAME
# credentialsSecret: SECRET-NAME
pitr:
enabled: false
# oplogSpanMin: 10
# compressionType: gzip
# compressionLevel: 6
tasks:
# - name: daily-s3-us-west
# enabled: true
# schedule: "0 0 * * *"
# keep: 3
# storageName: s3-us-west
# compressionType: gzip
# - name: weekly-s3-us-west
# enabled: false
# schedule: "0 0 * * 0"
# keep: 5
# storageName: s3-us-west
# compressionType: gzip
users:
MONGODB_BACKUP_USER: backup
MONGODB_BACKUP_PASSWORD: backup123456
MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
MONGODB_CLUSTER_ADMIN_PASSWORD: clusterAdmin123456
MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
MONGODB_CLUSTER_MONITOR_PASSWORD: clusterMonitor123456
MONGODB_USER_ADMIN_USER: userAdmin
MONGODB_USER_ADMIN_PASSWORD: userAdmin123456
PMM_SERVER_API_KEY: apikey
# PMM_SERVER_USER: admin
# PMM_SERVER_PASSWORD: admin

View File

@ -0,0 +1,18 @@
Percona Server for MongoDB cluster has been deployed. Get the username and password:
ADMIN_USER=$(kubectl -n {{ .Release.Namespace }} get secrets {{ include "psmdb-database.fullname" . }}-secrets -o jsonpath="{.data.MONGODB_USER_ADMIN_USER}" | base64 --decode)
ADMIN_PASSWORD=$(kubectl -n {{ .Release.Namespace }} get secrets {{ include "psmdb-database.fullname" . }}-secrets -o jsonpath="{.data.MONGODB_USER_ADMIN_PASSWORD}" | base64 --decode)
Connect to the cluster:
{{- if .Values.sharding.enabled }}
kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0 --restart=Never \
-- mongo "mongodb://${ADMIN_USER}:${ADMIN_PASSWORD}@{{ include "psmdb-database.fullname" . }}-mongos.{{ .Release.Namespace }}.svc.cluster.local/admin?ssl=false"
{{- else }}
kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0 --restart=Never \
-- mongo "mongodb+srv://${ADMIN_USER}:${ADMIN_PASSWORD}@{{ include "psmdb-database.fullname" . }}-{{ (index .Values.replsets 0).name }}.{{ .Release.Namespace }}.svc.cluster.local/admin?replicaSet=rs0&ssl=false"
{{- end }}
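The template above pulls the credentials out of the generated Secret with `jsonpath` and `base64 --decode`. The decoding step in isolation, with a hypothetical encoded value standing in for the `kubectl` output:

```shell
# kubectl returns Secret data base64-encoded; decode it before use.
encoded="dXNlckFkbWlu"   # hypothetical value of .data.MONGODB_USER_ADMIN_USER
ADMIN_USER=$(printf '%s' "$encoded" | base64 --decode)
echo "$ADMIN_USER"   # userAdmin
```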

View File

@ -0,0 +1,45 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "psmdb-database.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "psmdb-database.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 21 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 21 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 21 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "psmdb-database.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 21 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "psmdb-database.labels" -}}
app.kubernetes.io/name: {{ include "psmdb-database.name" . }}
helm.sh/chart: {{ include "psmdb-database.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
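A sketch of what the `fullname` helper's `printf` / `trunc 21` / `trimSuffix "-"` pipeline produces, approximated in shell (only the `printf` branch is shown; the `fullnameOverride` and `contains` branches truncate the same way):

```shell
# Join release and chart name, truncate to 21 characters, strip a trailing hyphen.
fullname() {
  full=$(printf '%s-%s' "$1" "$2" | cut -c1-21)
  printf '%s\n' "${full%-}"
}

fullname my-release psmdb-db            # my-release-psmdb-db (19 chars, untouched)
fullname my-long-release-name psmdb-db  # my-long-release-name (cut at 21, trailing "-" stripped)
```

The 21-character limit is deliberately tighter than the 63-character DNS label limit, leaving room for the suffixes the operator appends to the cluster name when it creates StatefulSets and Services.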

View File

@ -0,0 +1,11 @@
{{- if not (hasKey .Values.secrets "users") }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "psmdb-database.fullname" . }}-secrets
labels:
{{ include "psmdb-database.labels" . | indent 4 }}
type: Opaque
stringData:
{{ .Values.users | toYaml | indent 2 }}
{{- end -}}
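This Secret is rendered only when `secrets.users` is unset. To point the cluster at a pre-created Secret instead, set it in your values (the Secret name here is illustrative):

```yaml
secrets:
  users: my-existing-users-secret
```

With this override, the `users:` block at the bottom of `values.yaml` is ignored and the operator uses, or populates, the named Secret.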

View File

@ -0,0 +1,497 @@
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB"}
name: {{ include "psmdb-database.fullname" . }}
labels:
{{ include "psmdb-database.labels" . | indent 4 }}
finalizers:
{{ .Values.finalizers | toYaml | indent 4 }}
spec:
crVersion: {{ .Values.crVersion }}
pause: {{ .Values.pause }}
unmanaged: {{ .Values.unmanaged }}
{{- if .Values.platform }}
platform: {{ .Values.platform }}
{{- end }}
{{- if .Values.clusterServiceDNSSuffix }}
clusterServiceDNSSuffix: {{ .Values.clusterServiceDNSSuffix }}
{{- end }}
{{- if .Values.clusterServiceDNSMode }}
clusterServiceDNSMode: {{ .Values.clusterServiceDNSMode }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
{{- if .Values.allowUnsafeConfigurations }}
allowUnsafeConfigurations: true
{{- end }}
multiCluster:
enabled: {{ .Values.multiCluster.enabled }}
{{- if .Values.multiCluster.DNSSuffix }}
DNSSuffix: {{ .Values.multiCluster.DNSSuffix }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ .Values.imagePullSecrets | toYaml | indent 4 }}
{{- end }}
{{- if .Values.tls }}
tls:
{{ .Values.tls | toYaml | indent 4 }}
{{- end }}
{{- if .Values.secrets }}
secrets:
{{ .Values.secrets | toYaml | indent 4 }}
{{- else }}
secrets:
users: {{ include "psmdb-database.fullname" . }}-secrets
encryptionKey: {{ include "psmdb-database.fullname" . }}-mongodb-encryption-key
{{- end }}
{{- if .Values.updateStrategy }}
updateStrategy: {{ .Values.updateStrategy }}
upgradeOptions:
versionServiceEndpoint: {{ .Values.upgradeOptions.versionServiceEndpoint }}
apply: {{ .Values.upgradeOptions.apply }}
schedule: {{ .Values.upgradeOptions.schedule }}
setFCV: {{ .Values.upgradeOptions.setFCV }}
{{- end }}
pmm:
enabled: {{ .Values.pmm.enabled }}
image: "{{ .Values.pmm.image.repository }}:{{ .Values.pmm.image.tag }}"
serverHost: {{ .Values.pmm.serverHost }}
replsets:
{{- range $replset := .Values.replsets }}
- name: {{ $replset.name }}
size: {{ $replset.size }}
{{- if $replset.externalNodes }}
externalNodes:
{{ $replset.externalNodes | toYaml | indent 6 }}
{{- end }}
{{- if $replset.configuration }}
configuration: |
{{ $replset.configuration | indent 6 }}
{{- end }}
affinity:
antiAffinityTopologyKey: {{ $replset.antiAffinityTopologyKey }}
{{- if $replset.priorityClass }}
priorityClassName: {{ $replset.priorityClass }}
{{- end }}
{{- if $replset.annotations }}
annotations:
{{ $replset.annotations | toYaml | indent 6 }}
{{- end }}
{{- if $replset.labels }}
labels:
{{ $replset.labels | toYaml | indent 6 }}
{{- end }}
{{- if $replset.nodeSelector }}
nodeSelector:
{{ $replset.nodeSelector | toYaml | indent 6 }}
{{- end }}
{{- if $replset.tolerations }}
tolerations:
{{ $replset.tolerations | toYaml | indent 6 }}
{{- end }}
{{- if $replset.livenessProbe }}
livenessProbe:
{{ $replset.livenessProbe | toYaml | indent 6 }}
{{- end }}
{{- if $replset.readinessProbe }}
readinessProbe:
{{ $replset.readinessProbe | toYaml | indent 6 }}
{{- end }}
{{- if $replset.storage }}
storage:
{{ $replset.storage | toYaml | indent 6 }}
{{- end }}
{{- if $replset.podSecurityContext }}
podSecurityContext:
{{ $replset.podSecurityContext | toYaml | indent 6 }}
{{- end }}
{{- if $replset.containerSecurityContext }}
containerSecurityContext:
{{ $replset.containerSecurityContext | toYaml | indent 6 }}
{{- end }}
{{- if $replset.runtimeClass }}
runtimeClassName: {{ $replset.runtimeClass }}
{{- end }}
{{- if $replset.sidecars }}
sidecars:
{{ $replset.sidecars | toYaml | indent 6 }}
{{- end }}
{{- if $replset.sidecarVolumes }}
sidecarVolumes:
{{ $replset.sidecarVolumes | toYaml | indent 6 }}
{{- end }}
{{- if $replset.sidecarPVCs }}
sidecarPVCs:
{{ $replset.sidecarPVCs | toYaml | indent 6 }}
{{- end }}
{{- if $replset.podDisruptionBudget }}
podDisruptionBudget:
{{- if $replset.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ $replset.podDisruptionBudget.maxUnavailable }}
{{- else }}
minAvailable: {{ $replset.podDisruptionBudget.minAvailable }}
{{- end }}
{{- end }}
{{- if $replset.expose }}
expose:
enabled: {{ $replset.expose.enabled }}
exposeType: {{ $replset.expose.exposeType }}
{{- if $replset.expose.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ $replset.expose.loadBalancerSourceRanges | toYaml | indent 8 }}
{{- end }}
{{- if $replset.expose.serviceAnnotations }}
serviceAnnotations:
{{ $replset.expose.serviceAnnotations | toYaml | indent 8 }}
{{- end }}
{{- if $replset.expose.serviceLabels }}
serviceLabels:
{{ $replset.expose.serviceLabels | toYaml | indent 8 }}
{{- end }}
{{- end }}
{{- if $replset.nonvoting }}
nonvoting:
enabled: {{ $replset.nonvoting.enabled }}
size: {{ $replset.nonvoting.size }}
{{- if $replset.nonvoting.configuration }}
configuration: |
{{ $replset.nonvoting.configuration | indent 8 }}
{{- end }}
affinity:
antiAffinityTopologyKey: {{ $replset.nonvoting.antiAffinityTopologyKey }}
{{- if $replset.nonvoting.priorityClass }}
priorityClassName: {{ $replset.nonvoting.priorityClass }}
{{- end }}
{{- if $replset.nonvoting.annotations }}
annotations:
{{ $replset.nonvoting.annotations | toYaml | indent 8 }}
{{- end }}
{{- if $replset.nonvoting.labels }}
labels:
{{ $replset.nonvoting.labels | toYaml | indent 8 }}
{{- end }}
{{- if $replset.nonvoting.podSecurityContext }}
podSecurityContext:
{{ $replset.nonvoting.podSecurityContext | toYaml | indent 8 }}
{{- end }}
{{- if $replset.nonvoting.containerSecurityContext }}
containerSecurityContext:
{{ $replset.nonvoting.containerSecurityContext | toYaml | indent 8 }}
{{- end }}
{{- if $replset.nonvoting.nodeSelector }}
nodeSelector:
{{ $replset.nonvoting.nodeSelector | toYaml | indent 8 }}
{{- end }}
{{- if $replset.nonvoting.tolerations }}
tolerations:
{{ $replset.nonvoting.tolerations | toYaml | indent 8 }}
{{- end }}
{{- if $replset.nonvoting.podDisruptionBudget }}
podDisruptionBudget:
{{- if $replset.nonvoting.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ $replset.nonvoting.podDisruptionBudget.maxUnavailable }}
{{- else }}
minAvailable: {{ $replset.nonvoting.podDisruptionBudget.minAvailable }}
{{- end }}
{{- end }}
resources:
{{ $replset.nonvoting.resources | toYaml | indent 8 }}
{{- if $replset.nonvoting.volumeSpec }}
volumeSpec:
{{- if $replset.nonvoting.volumeSpec.hostPath }}
hostPath:
path: {{ $replset.nonvoting.volumeSpec.hostPath }}
type: Directory
{{- else if $replset.nonvoting.volumeSpec.pvc }}
persistentVolumeClaim:
{{ $replset.nonvoting.volumeSpec.pvc | toYaml | indent 10 }}
{{- else }}
emptyDir: {}
{{- end }}
{{- end }}
{{- end }}
{{- if $replset.arbiter }}
arbiter:
enabled: {{ $replset.arbiter.enabled }}
size: {{ $replset.arbiter.size }}
affinity:
antiAffinityTopologyKey: {{ $replset.arbiter.antiAffinityTopologyKey }}
{{- if $replset.arbiter.priorityClass }}
priorityClassName: {{ $replset.arbiter.priorityClass }}
{{- end }}
{{- if $replset.arbiter.annotations }}
annotations:
{{ $replset.arbiter.annotations | toYaml | indent 8 }}
{{- end }}
{{- if $replset.arbiter.labels }}
labels:
{{ $replset.arbiter.labels | toYaml | indent 8 }}
{{- end }}
{{- if $replset.arbiter.nodeSelector }}
nodeSelector:
{{ $replset.arbiter.nodeSelector | toYaml | indent 8 }}
{{- end }}
{{- if $replset.arbiter.tolerations }}
tolerations:
{{ $replset.arbiter.tolerations | toYaml | indent 8 }}
{{- end }}
{{- end }}
{{- if $replset.schedulerName }}
schedulerName: {{ $replset.schedulerName }}
{{- end }}
resources:
{{ $replset.resources | toYaml | indent 6 }}
{{- if $replset.volumeSpec }}
volumeSpec:
{{- if $replset.volumeSpec.hostPath }}
hostPath:
path: {{ $replset.volumeSpec.hostPath }}
type: Directory
{{- else if $replset.volumeSpec.pvc }}
persistentVolumeClaim:
{{ $replset.volumeSpec.pvc | toYaml | indent 8 }}
{{- else }}
emptyDir: {}
{{- end }}
{{- end }}
{{- end }}
sharding:
enabled: {{ .Values.sharding.enabled }}
configsvrReplSet:
size: {{ .Values.sharding.configrs.size }}
{{- if .Values.sharding.configrs.externalNodes }}
externalNodes:
{{ .Values.sharding.configrs.externalNodes | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.configuration }}
configuration: |
{{ .Values.sharding.configrs.configuration | indent 8 }}
{{- end }}
affinity:
antiAffinityTopologyKey: {{ .Values.sharding.configrs.antiAffinityTopologyKey }}
{{- if .Values.sharding.configrs.priorityClass }}
priorityClassName: {{ .Values.sharding.configrs.priorityClass }}
{{- end }}
{{- if .Values.sharding.configrs.annotations }}
annotations:
{{ .Values.sharding.configrs.annotations | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.labels }}
labels:
{{ .Values.sharding.configrs.labels | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.nodeSelector }}
nodeSelector:
{{ .Values.sharding.configrs.nodeSelector | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.tolerations }}
tolerations:
{{ .Values.sharding.configrs.tolerations | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.livenessProbe }}
livenessProbe:
{{ .Values.sharding.configrs.livenessProbe | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.readinessProbe }}
readinessProbe:
{{ .Values.sharding.configrs.readinessProbe | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.storage }}
storage:
{{ .Values.sharding.configrs.storage | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.podSecurityContext }}
podSecurityContext:
{{ .Values.sharding.configrs.podSecurityContext | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.containerSecurityContext }}
containerSecurityContext:
{{ .Values.sharding.configrs.containerSecurityContext | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.runtimeClass }}
runtimeClassName: {{ .Values.sharding.configrs.runtimeClass }}
{{- end }}
{{- if .Values.sharding.configrs.sidecars }}
sidecars:
{{ .Values.sharding.configrs.sidecars | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.sidecarVolumes }}
sidecarVolumes:
{{ .Values.sharding.configrs.sidecarVolumes | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.configrs.sidecarPVCs }}
sidecarPVCs:
{{ .Values.sharding.configrs.sidecarPVCs | toYaml | indent 8 }}
{{- end }}
podDisruptionBudget:
{{- if .Values.sharding.configrs.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.sharding.configrs.podDisruptionBudget.maxUnavailable }}
{{- else }}
minAvailable: {{ .Values.sharding.configrs.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.sharding.configrs.expose }}
expose:
enabled: {{ .Values.sharding.configrs.expose.enabled }}
exposeType: {{ .Values.sharding.configrs.expose.exposeType }}
{{- if .Values.sharding.configrs.expose.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ .Values.sharding.configrs.expose.loadBalancerSourceRanges | toYaml | indent 10 }}
{{- end }}
{{- if .Values.sharding.configrs.expose.serviceAnnotations }}
serviceAnnotations:
{{ .Values.sharding.configrs.expose.serviceAnnotations | toYaml | indent 10 }}
{{- end }}
{{- if .Values.sharding.configrs.expose.serviceLabels }}
serviceLabels:
{{ .Values.sharding.configrs.expose.serviceLabels | toYaml | indent 10 }}
{{- end }}
{{- end }}
resources:
limits:
cpu: {{ .Values.sharding.configrs.resources.limits.cpu }}
memory: {{ .Values.sharding.configrs.resources.limits.memory }}
requests:
cpu: {{ .Values.sharding.configrs.resources.requests.cpu }}
memory: {{ .Values.sharding.configrs.resources.requests.memory }}
volumeSpec:
{{- if .Values.sharding.configrs.volumeSpec.hostPath }}
hostPath:
path: {{ .Values.sharding.configrs.volumeSpec.hostPath }}
type: Directory
{{- else if .Values.sharding.configrs.volumeSpec.pvc }}
persistentVolumeClaim:
{{ .Values.sharding.configrs.volumeSpec.pvc | toYaml | indent 10 }}
{{- else }}
emptyDir: {}
{{- end }}
mongos:
size: {{ .Values.sharding.mongos.size }}
{{- if .Values.sharding.mongos.configuration }}
configuration: |
{{ .Values.sharding.mongos.configuration | indent 8 }}
{{- end }}
affinity:
antiAffinityTopologyKey: {{ .Values.sharding.mongos.antiAffinityTopologyKey }}
{{- if .Values.sharding.mongos.priorityClass }}
priorityClassName: {{ .Values.sharding.mongos.priorityClass }}
{{- end }}
{{- if .Values.sharding.mongos.annotations }}
annotations:
{{ .Values.sharding.mongos.annotations | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.labels }}
labels:
{{ .Values.sharding.mongos.labels | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.nodeSelector }}
nodeSelector:
{{ .Values.sharding.mongos.nodeSelector | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.tolerations }}
tolerations:
{{ .Values.sharding.mongos.tolerations | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.livenessProbe }}
livenessProbe:
{{ .Values.sharding.mongos.livenessProbe | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.readinessProbe }}
readinessProbe:
{{ .Values.sharding.mongos.readinessProbe | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.podSecurityContext }}
podSecurityContext:
{{ .Values.sharding.mongos.podSecurityContext | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.containerSecurityContext }}
containerSecurityContext:
{{ .Values.sharding.mongos.containerSecurityContext | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.runtimeClass }}
runtimeClassName: {{ .Values.sharding.mongos.runtimeClass }}
{{- end }}
{{- if .Values.sharding.mongos.sidecars }}
sidecars:
{{ .Values.sharding.mongos.sidecars | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.sidecarVolumes }}
sidecarVolumes:
{{ .Values.sharding.mongos.sidecarVolumes | toYaml | indent 8 }}
{{- end }}
{{- if .Values.sharding.mongos.sidecarPVCs }}
sidecarPVCs:
{{ .Values.sharding.mongos.sidecarPVCs | toYaml | indent 8 }}
{{- end }}
podDisruptionBudget:
{{- if .Values.sharding.mongos.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.sharding.mongos.podDisruptionBudget.maxUnavailable }}
{{- else }}
minAvailable: {{ .Values.sharding.mongos.podDisruptionBudget.minAvailable }}
{{- end }}
resources:
limits:
cpu: {{ .Values.sharding.mongos.resources.limits.cpu }}
memory: {{ .Values.sharding.mongos.resources.limits.memory }}
requests:
cpu: {{ .Values.sharding.mongos.resources.requests.cpu }}
memory: {{ .Values.sharding.mongos.resources.requests.memory }}
expose:
exposeType: {{ .Values.sharding.mongos.expose.exposeType }}
{{- if .Values.sharding.mongos.expose.servicePerPod }}
servicePerPod: {{ .Values.sharding.mongos.expose.servicePerPod }}
{{- end }}
{{- if .Values.sharding.mongos.expose.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ .Values.sharding.mongos.expose.loadBalancerSourceRanges | toYaml | indent 10 }}
{{- end }}
{{- if .Values.sharding.mongos.expose.serviceAnnotations }}
serviceAnnotations:
{{ .Values.sharding.mongos.expose.serviceAnnotations | toYaml | indent 10 }}
{{- end }}
{{- if .Values.sharding.mongos.expose.serviceLabels }}
serviceLabels:
{{ .Values.sharding.mongos.expose.serviceLabels | toYaml | indent 10 }}
{{- end }}
{{- if .Values.sharding.mongos.auditLog }}
auditLog:
{{ .Values.sharding.mongos.auditLog | toYaml | indent 8 }}
{{- end }}
backup:
enabled: {{ .Values.backup.enabled }}
{{- if .Values.backup.annotations }}
annotations:
{{ .Values.backup.annotations | toYaml | indent 6 }}
{{- end }}
image: "{{ .Values.backup.image.repository }}:{{ .Values.backup.image.tag }}"
serviceAccountName: {{ .Values.backup.serviceAccountName }}
{{- if .Values.backup.resources }}
resources:
{{ .Values.backup.resources | toYaml | indent 6 }}
{{- end }}
storages:
{{ .Values.backup.storages | toYaml | indent 6 }}
pitr:
{{- if and .Values.backup.enabled .Values.backup.pitr.enabled }}
enabled: true
{{- if .Values.backup.pitr.oplogSpanMin }}
oplogSpanMin: {{ .Values.backup.pitr.oplogSpanMin }}
{{- end }}
{{- if .Values.backup.pitr.compressionType }}
compressionType: {{ .Values.backup.pitr.compressionType }}
{{- end }}
{{- if .Values.backup.pitr.compressionLevel }}
compressionLevel: {{ .Values.backup.pitr.compressionLevel }}
{{- end }}
{{- else }}
enabled: false
{{- end }}
tasks:
{{ .Values.backup.tasks | toYaml | indent 6 }}


@ -0,0 +1,393 @@
# Default values for psmdb-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Platform type: kubernetes, openshift
# platform: kubernetes
# Cluster DNS Suffix
# clusterServiceDNSSuffix: svc.cluster.local
# clusterServiceDNSMode: "Internal"
finalizers:
  ## Set this if you want the operator to delete the primary pod last
- delete-psmdb-pods-in-order
## Set this if you want to delete database persistent volumes on cluster deletion
- delete-psmdb-pvc
crVersion: 1.13.0
pause: false
unmanaged: false
allowUnsafeConfigurations: false
multiCluster:
enabled: false
# DNSSuffix: svc.clusterset.local
updateStrategy: SmartUpdate
upgradeOptions:
versionServiceEndpoint: https://check.percona.com
apply: disabled
schedule: "0 2 * * *"
setFCV: false
image:
repository: percona/percona-server-mongodb
tag: 5.0.11-10
imagePullPolicy: Always
# imagePullSecrets: []
# tls:
# # 90 days in hours
# certValidityDuration: 2160h
secrets: {}
# If you set the users secret here, it will not be constructed from the values at the
# bottom of this file; the operator will use the existing one or generate random values
# users: my-cluster-name-secrets
# encryptionKey: my-cluster-name-mongodb-encryption-key
pmm:
enabled: false
image:
repository: percona/pmm-client
tag: 2.30.0
serverHost: monitoring-service
replsets:
- name: rs0
size: 3
# externalNodes:
# - host: 34.124.76.90
# - host: 34.124.76.91
# port: 27017
# votes: 0
# priority: 0
# - host: 34.124.76.92
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# livenessProbe:
# failureThreshold: 4
# initialDelaySeconds: 60
# periodSeconds: 30
# timeoutSeconds: 10
# startupDelaySeconds: 7200
# readinessProbe:
# failureThreshold: 8
# initialDelaySeconds: 10
# periodSeconds: 3
# successThreshold: 1
# timeoutSeconds: 2
# runtimeClassName: image-rc
# storage:
# engine: wiredTiger
# wiredTiger:
# engineConfig:
# cacheSizeRatio: 0.5
# directoryForIndexes: false
# journalCompressor: snappy
# collectionConfig:
# blockCompressor: snappy
# indexConfig:
# prefixCompression: true
# inMemory:
# engineConfig:
# inMemorySizeRatio: 0.5
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# - mountPath: /secret
# name: sidecar-secret
# - mountPath: /configmap
# name: sidecar-config
# sidecarVolumes:
# - name: sidecar-secret
# secret:
# secretName: mysecret
# - name: sidecar-config
# configMap:
# name: myconfigmap
# sidecarPVCs:
# - apiVersion: v1
# kind: PersistentVolumeClaim
# metadata:
# name: sidecar-volume-claim
# spec:
# resources:
# requests:
# storage: 1Gi
# volumeMode: Filesystem
# accessModes:
# - ReadWriteOnce
podDisruptionBudget:
maxUnavailable: 1
expose:
enabled: false
exposeType: ClusterIP
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# serviceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# serviceLabels:
# some-label: some-key
nonvoting:
enabled: false
# podSecurityContext: {}
# containerSecurityContext: {}
size: 3
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
pvc:
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
arbiter:
enabled: false
size: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# schedulerName: ""
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
pvc:
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
sharding:
enabled: true
configrs:
size: 3
# externalNodes:
# - host: 34.124.76.90
# - host: 34.124.76.91
# port: 27017
# votes: 0
# priority: 0
# - host: 34.124.76.92
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# livenessProbe: {}
# readinessProbe: {}
# runtimeClassName: image-rc
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# sidecarPVCs: []
# sidecarVolumes: []
podDisruptionBudget:
maxUnavailable: 1
expose:
enabled: false
exposeType: ClusterIP
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# serviceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# serviceLabels:
# some-label: some-key
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
# type: Directory
pvc:
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
mongos:
size: 2
# configuration: |
# systemLog:
# verbosity: 1
antiAffinityTopologyKey: "kubernetes.io/hostname"
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
# livenessProbe: {}
# readinessProbe: {}
# runtimeClassName: image-rc
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# sidecarPVCs: []
# sidecarVolumes: []
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
expose:
exposeType: ClusterIP
# servicePerPod: true
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# serviceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# serviceLabels:
# some-label: some-key
# auditLog:
# destination: file
# format: BSON
# filter: '{}'
backup:
enabled: true
image:
repository: percona/percona-backup-mongodb
tag: 1.8.1
serviceAccountName: percona-server-mongodb-operator
# annotations:
# iam.amazonaws.com/role: role-arn
# resources:
# limits:
# cpu: "300m"
# memory: "0.5G"
# requests:
# cpu: "300m"
# memory: "0.5G"
storages:
# s3-us-west:
# type: s3
# s3:
# bucket: S3-BACKUP-BUCKET-NAME-HERE
# credentialsSecret: my-cluster-name-backup-s3
# region: us-west-2
# prefix: ""
# uploadPartSize: 10485760
# maxUploadParts: 10000
# storageClass: STANDARD
# insecureSkipTLSVerify: false
# minio:
# type: s3
# s3:
# bucket: MINIO-BACKUP-BUCKET-NAME-HERE
# region: us-east-1
# credentialsSecret: my-cluster-name-backup-minio
# endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
# prefix: ""
# azure-blob:
# type: azure
# azure:
# container: CONTAINER-NAME
# prefix: PREFIX-NAME
# credentialsSecret: SECRET-NAME
pitr:
enabled: false
# oplogSpanMin: 10
# compressionType: gzip
# compressionLevel: 6
tasks:
# - name: daily-s3-us-west
# enabled: true
# schedule: "0 0 * * *"
# keep: 3
# storageName: s3-us-west
# compressionType: gzip
# - name: weekly-s3-us-west
# enabled: false
# schedule: "0 0 * * 0"
# keep: 5
# storageName: s3-us-west
# compressionType: gzip
users:
MONGODB_BACKUP_USER: backup
MONGODB_BACKUP_PASSWORD: backup123456
MONGODB_DATABASE_ADMIN_USER: databaseAdmin
MONGODB_DATABASE_ADMIN_PASSWORD: databaseAdmin123456
MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
MONGODB_CLUSTER_ADMIN_PASSWORD: clusterAdmin123456
MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
MONGODB_CLUSTER_MONITOR_PASSWORD: clusterMonitor123456
MONGODB_USER_ADMIN_USER: userAdmin
MONGODB_USER_ADMIN_PASSWORD: userAdmin123456
PMM_SERVER_API_KEY: apikey
# PMM_SERVER_USER: admin
# PMM_SERVER_PASSWORD: admin


@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@ -0,0 +1,18 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona Operator for MongoDB
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: psmdb-operator
apiVersion: v2
appVersion: 1.13.0
description: A Helm chart for Deploying the Percona Kubernetes Operator for Percona
Server for MongoDB
home: https://www.percona.com/doc/kubernetes-operator-for-psmongodb/kubernetes.html
icon: https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/operator.png
maintainers:
- email: ivan.pylypenko@percona.com
name: cap1984
- email: tomislav.plavcic@percona.com
name: tplavcic
name: psmdb-operator
version: 1.13.1


@ -0,0 +1,13 @@
Copyright 2019 Paul Czarkowski <username.taken@gmail.com>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,55 @@
# Percona Operator for MongoDB
Percona Operator for MongoDB allows users to deploy and manage Percona Server for MongoDB Clusters on Kubernetes.
Useful links:
- [Operator Github repository](https://github.com/percona/percona-server-mongodb-operator)
- [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html)
## Pre-requisites
* Kubernetes 1.19+
* Helm v3
# Installation
This chart deploys the Operator Pod, which can then be used to create Percona Server for MongoDB clusters in Kubernetes.
## Installing the chart
To install the chart with the `my-operator` release name into a dedicated namespace (recommended):
```sh
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install my-operator percona/psmdb-operator --version 1.13.0 --namespace my-namespace
```
The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ------------------------------- | ------------------------------------------------------------------------------| ------------------------------------------|
| `image.repository` | PSMDB Operator Container image name | `percona/percona-server-mongodb-operator` |
| `image.tag` | PSMDB Operator Container image tag | `1.13.0` |
| `image.pullPolicy` | PSMDB Operator Container pull policy | `Always` |
| `image.pullSecrets` | PSMDB Operator Pod pull secret | `[]` |
| `replicaCount` | PSMDB Operator Pod quantity | `1` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `resources` | Resource requests and limits | `{}` |
| `nodeSelector` | Labels for Pod assignment | `{}` |
| `watchNamespace`                | Namespace for the operator to watch, if different from the release namespace   | `""`                                       |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided:
```sh
helm install psmdb-operator -f values.yaml percona/psmdb-operator
```
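For example, a minimal override file might look like the following. The keys are taken from the chart's default `values.yaml`; the concrete values shown here are illustrative, not recommendations:

```yaml
# values.yaml -- illustrative overrides for the psmdb-operator chart
image:
  repository: percona/percona-server-mongodb-operator
  tag: 1.13.0
  pullPolicy: IfNotPresent

# Watch a single namespace instead of the release namespace
watchNamespace: "databases"

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
```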
## Deploy the database
To deploy Percona Server for MongoDB, run the following command:
```sh
helm install my-db percona/psmdb-db
```
See more about Percona Server for MongoDB deployment in its chart [here](https://github.com/percona/percona-helm-charts/tree/main/charts/psmdb-db) or in the [Helm chart installation guide](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/helm.html).
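As with the operator chart, the database chart can be customized through a values file. A minimal sketch, with field names taken from the chart's default `values.yaml` and illustrative sizes:

```yaml
# values.yaml -- illustrative overrides for the psmdb-db chart
replsets:
  - name: rs0
    size: 3
    antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 3Gi

backup:
  enabled: false
```

Pass it at install time with `helm install my-db percona/psmdb-db -f values.yaml`.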

File diff suppressed because it is too large


@ -0,0 +1,5 @@
1. psmdb-operator deployed.
If you would like to deploy a psmdb-cluster, set cluster.enabled to true in values.yaml
Check the psmdb-operator logs:
export POD=$(kubectl get pods -l app.kubernetes.io/name=psmdb-operator --namespace {{ .Release.Namespace }} --output name)
kubectl logs $POD --namespace={{ .Release.Namespace }}


@ -0,0 +1,45 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "psmdb-operator.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "psmdb-operator.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "psmdb-operator.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "psmdb-operator.labels" -}}
app.kubernetes.io/name: {{ include "psmdb-operator.name" . }}
helm.sh/chart: {{ include "psmdb-operator.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}


@ -0,0 +1,72 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "psmdb-operator.fullname" . }}
labels:
{{ include "psmdb-operator.labels" . | indent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "psmdb-operator.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "psmdb-operator.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccountName: {{ include "psmdb-operator.fullname" . }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 60000
protocol: TCP
name: metrics
command:
- percona-server-mongodb-operator
env:
- name: WATCH_NAMESPACE
{{- if .Values.watchAllNamespaces }}
value: ""
{{- else }}
value: "{{ default .Release.Namespace .Values.watchNamespace }}"
{{- end }}
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: {{ default "percona-server-mongodb-operator" .Values.operatorName }}
- name: RESYNC_PERIOD
value: "{{ .Values.env.resyncPeriod }}"
- name: LOG_VERBOSE
value: "{{ .Values.env.logVerbose }}"
# livenessProbe:
# httpGet:
# path: /
# port: metrics
# readinessProbe:
# httpGet:
# path: /
# port: metrics
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@ -0,0 +1,8 @@
{{ if .Values.watchNamespace }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.watchNamespace }}
annotations:
helm.sh/resource-policy: keep
{{ end }}


@ -0,0 +1,32 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "psmdb-operator.fullname" . }}
---
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
kind: ClusterRoleBinding
{{- else }}
kind: RoleBinding
{{- end }}
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: service-account-{{ include "psmdb-operator.fullname" . }}
{{- if .Values.watchNamespace }}
namespace: {{ .Values.watchNamespace }}
{{- end }}
labels:
{{ include "psmdb-operator.labels" . | indent 4 }}
subjects:
- kind: ServiceAccount
name: {{ include "psmdb-operator.fullname" . }}
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
namespace: {{ .Release.Namespace }}
{{- end }}
roleRef:
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
name: {{ include "psmdb-operator.fullname" . }}
apiGroup: rbac.authorization.k8s.io


@ -0,0 +1,146 @@
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "psmdb-operator.fullname" . }}
labels:
{{ include "psmdb-operator.labels" . | indent 4 }}
rules:
- apiGroups:
- psmdb.percona.com
resources:
- perconaservermongodbs
- perconaservermongodbs/status
- perconaservermongodbbackups
- perconaservermongodbbackups/status
- perconaservermongodbrestores
- perconaservermongodbrestores/status
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
{{- if or .Values.watchNamespace .Values.watchAllNamespaces }}
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
{{- end }}
- apiGroups:
- ""
resources:
- pods
- pods/exec
- services
- persistentvolumeclaims
- secrets
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
- deployments
- replicasets
- statefulsets
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- batch
resources:
- cronjobs
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- certmanager.k8s.io
- cert-manager.io
resources:
- issuers
- certificates
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- deletecollection
- apiGroups:
- net.gke.io
- multicluster.x-k8s.io
resources:
- serviceexports
- serviceimports
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- deletecollection


@ -0,0 +1,47 @@
# Default values for psmdb-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: percona/percona-server-mongodb-operator
tag: 1.13.0
pullPolicy: IfNotPresent
# set if you want to specify a namespace to watch
# defaults to `.Release.Namespace` if left blank
# watchNamespace:
# set to true to deploy the operator in cluster-wide mode; defaults to false
watchAllNamespaces: false
# set if you want to use a different operator name
# defaults to `percona-server-mongodb-operator`
# operatorName:
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env:
resyncPeriod: 5s
logVerbose: false
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}


@ -7094,6 +7094,52 @@ entries:
urls:
- assets/bitnami/postgresql-11.9.12.tgz
version: 11.9.12
psmdb-db:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona Server for MongoDB
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: psmdb-db
apiVersion: v2
appVersion: 1.13.0
created: "2022-11-02T23:10:34.600673-04:00"
description: A Helm chart for installing Percona Server MongoDB Cluster Databases
using the PSMDB Operator.
digest: 8bfdc33231619e2e6f5b2e06a6eb498eedf4a786fb0a4a5028a162022159a75b
home: https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html
icon: https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/operator.png
maintainers:
- email: ivan.pylypenko@percona.com
name: cap1984
- email: tomislav.plavcic@percona.com
name: tplavcic
name: psmdb-db
urls:
- assets/percona/psmdb-db-1.13.0.tgz
version: 1.13.0
psmdb-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Percona Operator for MongoDB
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: psmdb-operator
apiVersion: v2
appVersion: 1.13.0
created: "2022-11-02T23:10:34.601755-04:00"
description: A Helm chart for Deploying the Percona Kubernetes Operator for Percona
Server for MongoDB
digest: 615c51dd2ad075f29dada7acb143121ae3134b4bd84c2abce4e92b28995e9956
home: https://www.percona.com/doc/kubernetes-operator-for-psmongodb/kubernetes.html
icon: https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/operator.png
maintainers:
- email: ivan.pylypenko@percona.com
name: cap1984
- email: tomislav.plavcic@percona.com
name: tplavcic
name: psmdb-operator
urls:
- assets/percona/psmdb-operator-1.13.1.tgz
version: 1.13.1
quobyte-cluster:
- annotations:
catalog.cattle.io/certified: partner


@ -0,0 +1,7 @@
HelmRepo: https://percona.github.io/percona-helm-charts
HelmChart: psmdb-db
Vendor: Percona
DisplayName: Percona Server for MongoDB
ChartMetadata:
icon: https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/operator.png
kubeVersion: '>=1.21-0'


@ -0,0 +1,7 @@
HelmRepo: https://percona.github.io/percona-helm-charts
HelmChart: psmdb-operator
Vendor: Percona
DisplayName: Percona Operator for MongoDB
ChartMetadata:
icon: https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/operator.png
kubeVersion: '>=1.21-0'
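
Once this PR lands, the two charts can also be pulled directly from the upstream Percona Helm repository declared in the `upstream.yaml` files above. A minimal install sketch follows; the release names (`my-op`, `my-db`) and the `psmdb` namespace are illustrative, and the `--version` flags match the chart versions recorded in the `index.yaml` entries in this diff:

```shell
# Add the Percona chart repository declared in upstream.yaml
helm repo add percona https://percona.github.io/percona-helm-charts
helm repo update

# Install the operator first; it watches for PerconaServerMongoDB resources
helm install my-op percona/psmdb-operator --version 1.13.1 \
  --namespace psmdb --create-namespace

# Then install the database chart, which creates the custom resource
# that the operator reconciles into a MongoDB cluster
helm install my-db percona/psmdb-db --version 1.13.0 --namespace psmdb
```

The operator must be running before the `psmdb-db` chart is installed; otherwise the custom resource it creates sits unreconciled until an operator appears in (or watches) that namespace.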