Merge branch 'rancher:main-source' into k10-4.5.9

pull/340/head
Akanksha kumari 2022-02-19 07:27:17 +05:30 committed by GitHub
commit f46351b877
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
38 changed files with 1826 additions and 9 deletions

View File

@ -11,15 +11,25 @@ jobs:
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Checkout into branch
run: git checkout -b staging-pr-workflow
- name: Fetch main-source
run: git fetch origin main-source
- name: Set git user for rebase
run: |
git config user.name "$(git log -n 1 --pretty=format:%an)"
git config user.email "$(git log -n 1 --pretty=format:%ae)"
- name: Rebase to main-source
run: git rebase origin/main-source
- name: Pull scripts
run: sudo make pull-scripts
- name: Pull in all relevant branches
run: git fetch origin main
- name: Validate
run: sudo make validate

Binary file not shown.

View File

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,37 @@
annotations:
artifacthub.io/changes: |
- Update Nutanix CSI Driver to 2.5.0
artifacthub.io/containsSecurityUpdates: "true"
artifacthub.io/displayName: Nutanix CSI Storage
artifacthub.io/links: |
- name: Nutanix CSI Driver documentation
url: https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_5_0:CSI-Volume-Driver-v2_5_0
artifacthub.io/maintainers: |
- name: Nutanix Cloud Native Team
email: cloudnative@nutanix.com
artifacthub.io/recommendations: |
- url: https://artifacthub.io/packages/helm/nutanix/nutanix-csi-snapshot
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Nutanix CSI Storage
catalog.cattle.io/release-name: nutanix-csi-storage
apiVersion: v1
appVersion: 2.5.1
description: Nutanix Container Storage Interface (CSI) Driver
home: https://github.com/nutanix/helm
icon: https://avatars2.githubusercontent.com/u/6165865?s=200&v=4
keywords:
- Nutanix
- Storage
- Volumes
- Files
- StorageClass
- RedHat
- CentOS
- Ubuntu
- CSI
kubeVersion: '>= 1.17.0-0'
maintainers:
- email: cloudnative@nutanix.com
name: nutanix-cloud-native-bot
name: nutanix-csi-storage
version: 2.5.100

View File

@ -0,0 +1,182 @@
# Nutanix CSI Storage Driver Helm chart
## Introduction
The Container Storage Interface (CSI) Volume Driver for Kubernetes leverages Nutanix Volumes and Nutanix Files to provide scalable and persistent storage for stateful applications.
When Files is used for persistent storage, applications on multiple pods can access the same storage, and also have the benefit of multi-pod read and write access.
## Important notice
Starting with version 2.5 of this chart, the snapshot components are split out into a second, independent chart.
If you plan to upgrade an existing Nutanix CSI chart installation older than v2.5.x to this chart, review the recommendations below.
- Once you upgrade to version 2.5+, the snapshot-controller will be removed, but previously installed snapshot CRDs stay in place. You will then need to install the [nutanix-csi-snapshot](https://github.com/nutanix/helm/tree/master/charts/nutanix-csi-snapshot) Helm chart following the [Important notice](https://github.com/nutanix/helm/tree/master/charts/nutanix-csi-snapshot#upgrading-from-nutanix-csi-storage-helm-chart-deployment) procedure.
- If you created StorageClasses automatically with a previous Nutanix CSI chart version < v2.5.x, be sure to remove those StorageClasses before running `helm upgrade`.
If you previously installed the Nutanix CSI Storage Driver with yaml files, please follow the [Upgrading from yaml based deployment](#upgrading-from-yaml-based-deployment) section below.
If this is your first deployment and your Kubernetes distribution does not bundle the snapshot components, first install the [Nutanix CSI Snapshot Controller Helm chart](https://github.com/nutanix/helm/tree/master/charts/nutanix-csi-snapshot).
Please note that starting with v2.2.0, the Nutanix CSI driver changed its name format from com.nutanix.csi to csi.nutanix.com. All deployment yamls use the new driver name format. However, if you initially installed the CSI driver with a version < v2.2.0, you must continue to use the old driver name com.nutanix.csi by setting the `legacy` parameter to `true`; otherwise, existing PVCs/PVs will not work with the new driver name.
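As a sketch, an upgrade that preserves the pre-2.2.0 driver name could look like this (the release name and namespace are assumptions; the snippet only prints the command so it can be reviewed before running):

```bash
# Assumed release name and namespace - adjust to your deployment.
RELEASE="nutanix-csi"
NAMESPACE="ntnx-system"
# Keep the pre-2.2.0 driver name (com.nutanix.csi) so existing PVCs/PVs keep working.
# The command is printed for review; remove "echo" to execute it.
echo helm upgrade "$RELEASE" nutanix/nutanix-csi-storage -n "$NAMESPACE" --set legacy=true
```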
## Nutanix CSI driver documentation
https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_5:CSI-Volume-Driver-v2_5
## Features list
- Nutanix CSI Driver v2.5.0
- Nutanix Volumes support
- Nutanix Files support
- Volume clone
- Volume snapshot and restore
- IP Address Whitelisting
- LVM volumes supporting multi-vdisk volume groups
- NFS Dynamic share provisioning
- PV resize support for Volumes and Dynamic Files mode
- iSCSI Auto CHAP Authentication
- OS independence
- Volume metrics and CSI operations metrics support
## Prerequisites
- Kubernetes 1.17 or later
- Kubernetes worker nodes must have the iSCSI package installed (Nutanix Volumes mode) and/or NFS tools (Nutanix Files mode)
- This chart has been validated on RHEL/CentOS 7/8 and Ubuntu 18.04/20.04/21.04/21.10, but the new architecture enables easy portability to other distributions.
- This chart is not intended for local k3s clusters, where the iSCSI prerequisite is missing by default.
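As a sketch of the worker-node prerequisite, the client packages can be installed with the distribution's package manager. The package names below are the usual ones for the validated distributions and should be verified for your OS; the snippet only prints the command for review:

```bash
# Pick the install command for the iSCSI and NFS client packages
# (assumed package names: iscsi-initiator-utils/nfs-utils on RHEL/CentOS,
#  open-iscsi/nfs-common on Ubuntu). Run the printed command on every worker node.
if command -v yum >/dev/null 2>&1; then
  PKG_CMD="yum install -y iscsi-initiator-utils nfs-utils"
else
  PKG_CMD="apt-get install -y open-iscsi nfs-common"
fi
echo "sudo $PKG_CMD"
```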
## Installing the Chart
To install the chart with the name `nutanix-csi`:
```console
helm repo add nutanix https://nutanix.github.io/helm/
helm install nutanix-csi nutanix/nutanix-csi-storage -n <namespace of your choice>
```
## Upgrade
Upgrades can be done using the normal Helm upgrade mechanism
```console
helm repo update
helm upgrade nutanix-csi nutanix/nutanix-csi-storage
```
### Upgrading from yaml based deployment
Starting with CSI driver v2.5.0, yaml-based deployment is discontinued. To upgrade from a yaml-based deployment, you need to patch your existing CSI objects with Helm annotations using the procedure below.
```bash
HELM_CHART_NAME="nutanix-csi"
HELM_CHART_NAMESPACE="ntnx-system"
DRIVER_NAME="csi.nutanix.com"
kubectl delete sts csi-provisioner-ntnx-plugin -n ${HELM_CHART_NAMESPACE}
kubectl patch ds csi-node-ntnx-plugin -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch csidriver ${DRIVER_NAME} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch sa csi-provisioner -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch sa csi-node-ntnx-plugin -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch clusterrole external-provisioner-runner -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch clusterrole csi-node-runner -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch clusterrolebinding csi-provisioner-role -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch clusterrolebinding csi-node-role -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch service csi-provisioner-ntnx-plugin -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch service csi-metrics-service -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
kubectl patch servicemonitor csi-driver -n ${HELM_CHART_NAMESPACE} -p '{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}' --type=merge
```
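The repeated patch payload above can also be built once and applied in a loop. This is only a sketch, assuming the same resource names as the yaml-based deployment; the StatefulSet deletion and the servicemonitor patch (which additionally needs `--type=merge`) are left as in the procedure above, and the commands are printed for review rather than executed:

```bash
HELM_CHART_NAME="nutanix-csi"
HELM_CHART_NAMESPACE="ntnx-system"
# Build the Helm-adoption annotations/labels patch once.
PATCH='{"metadata": {"annotations":{"meta.helm.sh/release-name":"'"${HELM_CHART_NAME}"'","meta.helm.sh/release-namespace":"'"${HELM_CHART_NAMESPACE}"'"}, "labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
# kind/name pairs from the yaml-based deployment.
for res in \
  "ds/csi-node-ntnx-plugin" \
  "sa/csi-provisioner" \
  "sa/csi-node-ntnx-plugin" \
  "clusterrole/external-provisioner-runner" \
  "clusterrole/csi-node-runner" \
  "clusterrolebinding/csi-provisioner-role" \
  "clusterrolebinding/csi-node-role" \
  "service/csi-provisioner-ntnx-plugin" \
  "service/csi-metrics-service"
do
  # Print each command for review; pipe the output to sh to execute.
  echo "kubectl patch $res -n $HELM_CHART_NAMESPACE -p '$PATCH'"
done
```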
Now follow the [Installing the Chart](#installing-the-chart) section to finish upgrading the CSI driver.
## Uninstalling the Chart
To uninstall/delete the `nutanix-csi` deployment:
```console
helm delete nutanix-csi -n <namespace of your choice>
```
## Configuration
The following table lists the configurable parameters of the Nutanix-CSI chart and their default values.
| Parameter | Description | Default |
|----------------------------------|----------------------------------------|--------------------------------|
| `legacy` | Use old reverse notation for CSI driver name | `false` |
| `volumeClass` | Activate Nutanix Volumes Storage Class | `false` |
| `volumeClassName` | Name of the Nutanix Volumes Storage Class | `nutanix-volume` |
| `fileClass` | Activate Nutanix Files Storage Class | `false` |
| `fileClassName` | Name of the Nutanix Files Storage Class | `nutanix-file` |
| `dynamicFileClass` | Activate Nutanix Dynamic Files Storage Class | `false` |
| `dynamicFileClassName` | Name of the Nutanix Dynamic Files Storage Class | `nutanix-dynamicfile` |
| `defaultStorageClass` | Choose your default Storage Class (none, volume, file, dynfile) | `none`|
| `prismEndPoint` | Cluster Virtual IP Address |`10.0.0.1`|
| `username` | Name used for the admin role (if created) |`admin`|
| `password` | Password for the admin role (if created) |`nutanix/4u`|
| `secretName` | Name of the secret to use for admin role| `ntnx-secret`|
| `createSecret` | Create secret for admin role (if false use existing)| `true`|
| `storageContainer` | Nutanix storage container name | `default`|
| `fsType` | Type of file system you are using (ext4, xfs) |`xfs`|
| `networkSegmentation` | Activate Volumes Network Segmentation support |`false`|
| `lvmVolume` | Activate LVM to use multiple vdisks by Volume |`false`|
| `lvmDisks` | Number of vdisks by volume if lvm enabled | `4`|
| `fileHost` | NFS server IP address | `10.0.0.3`|
| `filePath` | Path of the NFS share |`share`|
| `fileServerName` | Name of the Nutanix File Server | `file`|
| `kubeletDir` | allows overriding the host location of kubelet's internal state | `/var/lib/kubelet`|
| `nodeSelector` | Add nodeSelector to all pods | `{}` |
| `tolerations` | Add tolerations to all pods | `[]` |
| `imagePullPolicy` | Specify imagePullPolicy for all pods| `IfNotPresent`|
| `provisioner.nodeSelector` | Add nodeSelector to provisioner pod | `{}` |
| `provisioner.tolerations` | Add tolerations to provisioner pod | `[]` |
| `node.nodeSelector` | Add nodeSelector to node pods | `{}` |
| `node.tolerations` | Add tolerations to node pods | `[]` |
| `servicemonitor.enabled` | Create ServiceMonitor to scrape CSI metrics | `false` |
| `servicemonitor.labels` | Labels to add to the ServiceMonitor (to match the Prometheus `serviceMonitorSelector` logic) | `k8s-app: csi-driver`|
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`, or provide a values file with `-f value.yaml`.
### Configuration examples
Install the driver in the `ntnx-system` namespace:
```console
helm install nutanix-storage nutanix/nutanix-csi-storage -n ntnx-system --create-namespace
```
Install the driver in the `ntnx-system` namespace and create a volume storageclass:
```console
helm install nutanix-storage nutanix/nutanix-csi-storage -n ntnx-system --create-namespace --set volumeClass=true --set prismEndPoint=X.X.X.X --set username=admin --set password=xxxxxxxxx --set storageContainer=container_name --set fsType=xfs
```
Install the driver in the `ntnx-system` namespace, create a volume and a dynamic file storageclass and set the volume storage class as default:
```console
helm install nutanix-storage nutanix/nutanix-csi-storage -n ntnx-system --create-namespace --set volumeClass=true --set prismEndPoint=X.X.X.X --set username=admin --set password=xxxxxxxxx --set storageContainer=container_name --set fsType=xfs --set defaultStorageClass=volume --set dynamicFileClass=true --set fileServerName=name_of_the_file_server
```
All the options can also be specified in a value.yaml file:
```console
helm install nutanix-storage nutanix/nutanix-csi-storage -n ntnx-system --create-namespace -f value.yaml
```
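A minimal `value.yaml` sketch for the example above (the endpoint, credentials, container, and file server name are placeholders to replace with your environment's settings):

```yaml
# Placeholder values - replace with your environment's settings.
volumeClass: true
defaultStorageClass: volume
prismEndPoint: 10.0.0.1          # cluster virtual IP address
username: admin
password: changeme
storageContainer: container_name
fsType: xfs
dynamicFileClass: true
fileServerName: name_of_the_file_server
```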
## Support
The Nutanix CSI Volume Driver is fully supported by Nutanix. Please use the standard support procedure to file a ticket [here](https://www.nutanix.com/support-services/product-support).
## Community
Please file any issues, questions or feature requests you may have [here](https://github.com/nutanix/csi-plugin/issues) for the Nutanix CSI Driver or [here](https://github.com/nutanix/helm/issues) for the Helm chart.
## Contributing
We value all feedback and contributions. If you find any issues or want to contribute, please feel free to open an issue or file a PR.

View File

@ -0,0 +1 @@
A Helm chart for installing Nutanix CSI Volume/File Storage Driver

View File

@ -0,0 +1,123 @@
questions:
- variable: volumeClass
label: "Volumes Storage Class"
type: boolean
default: true
description: "Activate Nutanix Volumes Storage Class"
group: "global Settings"
- variable: fileClass
label: "Files Storage Class"
type: boolean
default: false
description: "Activate Nutanix Files Storage Class"
group: "global Settings"
- variable: dynamicFileClass
label: "Dynamic Files Storage Class"
type: boolean
default: false
description: "Activate Nutanix Files Storage Class with dynamic share provisioning"
group: "global Settings"
- variable: legacy
label: "Driver Name Legacy mode"
type: boolean
default: false
description: "Set to true to continue using the old driver name if the initial install used a chart version < 2.2.0"
group: "global Settings"
- variable: defaultStorageClass
label: "Default Storage Class"
type: enum
default: "none"
options: ["none", "volume", "file", "dynfile"]
description: "Select the default Storage Class you want"
group: "global Settings"
show_if: "volumeClass=true||dynamicFileClass=true||fileClass=true"
- variable: prismEndPoint
label: "Prism Endpoint"
type: string
required: true
description: "Please specify the cluster virtual address"
group: "global Settings"
show_if: "volumeClass=true||dynamicFileClass=true"
- variable: username
label: "Username"
type: string
required: true
description: "Specify username with cluster admin permission"
group: "global Settings"
show_if: "volumeClass=true||dynamicFileClass=true"
- variable: password
label: "Password"
type: password
required: true
description: "Specify password of the user"
group: "global Settings"
show_if: "volumeClass=true||dynamicFileClass=true"
- variable: servicemonitor.enabled
label: "Prometheus ServiceMonitor"
type: boolean
default: false
description: "Activate Prometheus ServiceMonitor to scrape CSI metrics"
group: "global Settings"
- variable: storageContainer
label: "Storage Container"
type: string
required: true
description: "Specify Nutanix container name where the Persistent Volume will be stored"
group: "Nutanix Volumes Settings"
show_if: "volumeClass=true"
- variable: fsType
label: "Filesystem"
type: enum
options: ["xfs", "ext4"]
description: "Select the filesystem for the Persistent Volume"
group: "Nutanix Volumes Settings"
show_if: "volumeClass=true"
- variable: networkSegmentation
label: "Volumes Network Segmentation"
type: boolean
default: false
description: "Activate Volumes Network Segmentation support"
group: "Nutanix Volumes Settings"
show_if: "volumeClass=true"
- variable: lvmVolume
label: "LVM Volume"
type: boolean
default: false
description: "Activate LVM to support multi vdisks volume group for PV"
group: "Nutanix Volumes Settings"
show_if: "volumeClass=true"
- variable: lvmDisks
label: "LVM Disks"
type: int
required: true
default: "4"
min: 1
max: 8
description: "Number of vdisks for each PV"
group: "Nutanix Volumes Settings"
show_if: "lvmVolume=true&&volumeClass=true"
- variable: fileHost
label: "File Server Address"
type: string
required: true
description: "Specify Nutanix Files address"
group: "Nutanix Files Settings"
show_if: "fileClass=true"
- variable: filePath
label: "Export share"
type: string
required: true
description: "Specify Nutanix Files share path"
group: "Nutanix Files Settings"
show_if: "fileClass=true"
- variable: fileServerName
label: "NFS File Server Name"
type: string
required: true
description: "Specify Nutanix Files server name"
group: "Nutanix Files Settings"
show_if: "dynamicFileClass=true"

View File

@ -0,0 +1,3 @@
Driver name: {{ include "nutanix-csi-storage.drivername" . }}
Nutanix CSI provider was deployed in namespace {{ .Release.Namespace }}

View File

@ -0,0 +1,43 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nutanix-csi-storage.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nutanix-csi-storage.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nutanix-csi-storage.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create CSI driver name.
*/}}
{{- define "nutanix-csi-storage.drivername" -}}
{{- if .Values.legacy -}}
com.nutanix.csi
{{- else -}}
csi.nutanix.com
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,11 @@
{{- if .Capabilities.APIVersions.Has "storage.k8s.io/v1/CSIDriver" }}
apiVersion: storage.k8s.io/v1
{{- else }}
apiVersion: storage.k8s.io/v1beta1
{{- end }}
kind: CSIDriver
metadata:
name: {{ include "nutanix-csi-storage.drivername" . }}
spec:
attachRequired: false
podInfoOnMount: true

View File

@ -0,0 +1,146 @@
# Copyright 2021 Nutanix Inc
#
# example usage: kubectl create -f <this_file>
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-node-ntnx-plugin
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
app: csi-node-ntnx-plugin
template:
metadata:
labels:
app: csi-node-ntnx-plugin
spec:
serviceAccount: csi-node-ntnx-plugin
hostNetwork: true
containers:
- name: driver-registrar
image: {{ .Values.sidecars.registrar.image }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
args:
- --v=5
- --csi-address=$(ADDRESS)
- --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
env:
- name: ADDRESS
value: /csi/csi.sock
- name: DRIVER_REG_SOCK_PATH
value: {{ .Values.kubeletDir }}/plugins/{{ include "nutanix-csi-storage.drivername" . }}/csi.sock
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: plugin-dir
mountPath: /csi/
- name: registration-dir
mountPath: /registration
- name: csi-node-ntnx-plugin
securityContext:
privileged: true
allowPrivilegeEscalation: true
image: {{ .Values.node.image }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
args :
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(NODE_ID)"
- "--drivername={{ include "nutanix-csi-storage.drivername" . }}"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: pods-mount-dir
mountPath: {{ .Values.kubeletDir }}
# needed so that any mounts setup inside this container are
# propagated back to the host machine.
mountPropagation: "Bidirectional"
- mountPath: /dev
name: device-dir
- mountPath: /etc/iscsi
name: iscsi-dir
- mountPath: /host
name: root-dir
# This is needed because mount is run from host using chroot.
mountPropagation: "Bidirectional"
ports:
- containerPort: 9808
name: http-endpoint
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: http-endpoint
initialDelaySeconds: 10
timeoutSeconds: 3
periodSeconds: 2
failureThreshold: 3
- name: liveness-probe
volumeMounts:
- mountPath: /csi
name: plugin-dir
image: {{ .Values.sidecars.livenessprobe.image }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
args:
- --csi-address=/csi/csi.sock
- --http-endpoint=:9808
{{- with (.Values.node.nodeSelector | default .Values.nodeSelector) }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with (.Values.node.tolerations | default .Values.tolerations) }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: registration-dir
hostPath:
path: {{ .Values.kubeletDir }}/plugins_registry/
type: Directory
- name: plugin-dir
hostPath:
path: {{ .Values.kubeletDir }}/plugins/{{ include "nutanix-csi-storage.drivername" . }}/
type: DirectoryOrCreate
- name: pods-mount-dir
hostPath:
path: {{ .Values.kubeletDir }}
type: Directory
- name: device-dir
hostPath:
path: /dev
- name: iscsi-dir
hostPath:
path: /etc/iscsi
type: Directory
- name: root-dir
hostPath:
path: /
type: Directory

View File

@ -0,0 +1,150 @@
# Copyright 2021 Nutanix Inc
#
# example usage: kubectl create -f <this_file>
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: csi-provisioner-ntnx-plugin
namespace: {{ .Release.Namespace }}
spec:
serviceName: csi-provisioner-ntnx-plugin
replicas: 1
selector:
matchLabels:
app: csi-provisioner-ntnx-plugin
template:
metadata:
labels:
app: csi-provisioner-ntnx-plugin
spec:
serviceAccount: csi-provisioner
hostNetwork: true
containers:
- name: csi-provisioner
image: {{ .Values.sidecars.provisioner.image }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
args:
- --csi-address=$(ADDRESS)
- --timeout=60s
- --worker-threads=16
# This adds PV/PVC metadata to create volume requests
- --extra-create-metadata=true
- --default-fstype=ext4
# This is used to collect CSI operation metrics
- --http-endpoint=:9809
- --v=5
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-resizer
image: {{ .Values.sidecars.resizer.image }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
args:
- --v=5
- --csi-address=$(ADDRESS)
- --timeout=60s
- --leader-election=false
# NTNX CSI driver supports online volume expansion.
- --handle-volume-inuse-error=false
- --http-endpoint=:9810
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-snapshotter
{{- if .Capabilities.APIVersions.Has "snapshot.storage.k8s.io/v1" }}
image: {{ .Values.sidecars.snapshotter.image }}
{{- else }}
image: {{ .Values.sidecars.snapshotter.imageBeta }}
{{- end }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
args:
- --csi-address=$(ADDRESS)
- --leader-election=false
- --logtostderr=true
- --timeout=300s
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: ntnx-csi-plugin
image: {{ .Values.provisioner.image }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
securityContext:
allowPrivilegeEscalation: true
privileged: true
args:
- --endpoint=$(CSI_ENDPOINT)
- --nodeid=$(NODE_ID)
- --drivername={{ include "nutanix-csi-storage.drivername" . }}
env:
- name: CSI_ENDPOINT
value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- mountPath: /var/lib/csi/sockets/pluginproxy/
name: socket-dir
# This is needed for static NFS volume feature.
- mountPath: /host
name: root-dir
ports:
- containerPort: 9807
name: http-endpoint
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: http-endpoint
initialDelaySeconds: 10
timeoutSeconds: 3
periodSeconds: 2
failureThreshold: 3
- name: liveness-probe
volumeMounts:
- mountPath: /csi
name: socket-dir
image: {{ .Values.sidecars.livenessprobe.image }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
args:
- --csi-address=/csi/csi.sock
- --http-endpoint=:9807
{{- with (.Values.provisioner.nodeSelector | default .Values.nodeSelector) }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with (.Values.provisioner.tolerations | default .Values.tolerations) }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- emptyDir: {}
name: socket-dir
- hostPath:
path: /
type: Directory
name: root-dir

View File

@ -0,0 +1,130 @@
# Copyright 2018 Nutanix Inc
#
# Configuration to deploy the Nutanix CSI driver
#
# example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provisioner
namespace: {{ .Release.Namespace }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-runner
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots/status"]
verbs: ["update"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role
namespace: {{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: csi-provisioner
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: external-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
# needed for StatefulSet
kind: Service
apiVersion: v1
metadata:
name: csi-provisioner-ntnx-plugin
namespace: {{ .Release.Namespace }}
labels:
app: csi-provisioner-ntnx-plugin
spec:
selector:
app: csi-provisioner-ntnx-plugin
ports:
- name: dummy
port: 12345
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-node-ntnx-plugin
namespace: {{ .Release.Namespace }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-node-runner
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "update"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-node-role
namespace: {{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: csi-node-ntnx-plugin
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: csi-node-runner
apiGroup: rbac.authorization.k8s.io

View File

@ -0,0 +1,30 @@
{{- if eq .Values.os "openshift4"}}
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
name: ntnx-csi-scc
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: true
allowHostPID: false
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: []
defaultAddCapabilities: []
fsGroup:
type: RunAsAny
groups: []
priority:
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
type: RunAsAny
seLinuxContext:
type: RunAsAny
supplementalGroups:
type: RunAsAny
users:
- system:serviceaccount:{{ .Release.Namespace }}:csi-provisioner
- system:serviceaccount:{{ .Release.Namespace }}:csi-node-ntnx-plugin
{{- end}}

View File

@ -0,0 +1,82 @@
{{- if eq .Values.volumeClass true }}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: {{ .Values.volumeClassName }}
{{- if eq .Values.defaultStorageClass "volume" }}
annotations:
storageclass.kubernetes.io/is-default-class: "true"
{{- end }}
provisioner: {{ include "nutanix-csi-storage.drivername" . }}
parameters:
storageType: NutanixVolumes
csi.storage.k8s.io/provisioner-secret-name: {{ .Values.secretName }}
csi.storage.k8s.io/provisioner-secret-namespace: {{ .Release.Namespace }}
csi.storage.k8s.io/node-publish-secret-name: {{ .Values.secretName }}
csi.storage.k8s.io/node-publish-secret-namespace: {{ .Release.Namespace }}
csi.storage.k8s.io/controller-expand-secret-name: {{ .Values.secretName }}
csi.storage.k8s.io/controller-expand-secret-namespace: {{ .Release.Namespace }}
storageContainer: {{ .Values.storageContainer }}
csi.storage.k8s.io/fstype: {{ .Values.fsType }}
isSegmentedIscsiNetwork: {{ quote .Values.networkSegmentation }}
{{- if eq .Values.lvmVolume true }}
isLVMVolume: "true"
numLVMDisks: {{ quote .Values.lvmDisks }}
{{- end }}
allowVolumeExpansion: true
reclaimPolicy: Delete
---
{{- if .Capabilities.APIVersions.Has "snapshot.storage.k8s.io/v1" }}
apiVersion: snapshot.storage.k8s.io/v1
{{- else }}
apiVersion: snapshot.storage.k8s.io/v1beta1
{{- end }}
kind: VolumeSnapshotClass
metadata:
name: nutanix-snapshot-class
driver: {{ include "nutanix-csi-storage.drivername" . }}
parameters:
storageType: NutanixVolumes
csi.storage.k8s.io/snapshotter-secret-name: {{ .Values.secretName }}
csi.storage.k8s.io/snapshotter-secret-namespace: {{ .Release.Namespace }}
deletionPolicy: Delete
{{- end }}
---
{{- if eq .Values.fileClass true }}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: {{ .Values.fileClassName }}
{{- if eq .Values.defaultStorageClass "file" }}
annotations:
storageclass.kubernetes.io/is-default-class: "true"
{{- end }}
provisioner: {{ include "nutanix-csi-storage.drivername" . }}
parameters:
storageType: NutanixFiles
nfsServer: {{ .Values.fileHost }}
nfsPath: {{ .Values.filePath }}
{{- end }}
---
{{- if eq .Values.dynamicFileClass true }}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: {{ .Values.dynamicFileClassName }}
{{- if eq .Values.defaultStorageClass "dynfile" }}
annotations:
storageclass.kubernetes.io/is-default-class: "true"
{{- end }}
provisioner: {{ include "nutanix-csi-storage.drivername" . }}
parameters:
storageType: NutanixFiles
dynamicProv: ENABLED
nfsServerName: {{ .Values.fileServerName }}
csi.storage.k8s.io/provisioner-secret-name: {{ .Values.secretName }}
csi.storage.k8s.io/provisioner-secret-namespace: {{ .Release.Namespace }}
csi.storage.k8s.io/node-publish-secret-name: {{ .Values.secretName }}
csi.storage.k8s.io/node-publish-secret-namespace: {{ .Release.Namespace }}
csi.storage.k8s.io/controller-expand-secret-name: {{ .Values.secretName }}
csi.storage.k8s.io/controller-expand-secret-namespace: {{ .Release.Namespace }}
allowVolumeExpansion: true
{{- end }}


@@ -0,0 +1,11 @@
{{- if eq .Values.createSecret true }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.secretName }}
namespace: {{ .Release.Namespace }}
data:
# base64 encoded prism-ip:prism-port:admin:password.
# E.g.: echo -n "10.83.0.91:9440:admin:mypassword" | base64
key: {{ printf "%s:9440:%s:%s" .Values.prismEndPoint .Values.username .Values.password | b64enc}}
{{- end }}
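To make the key format above concrete, this is a hypothetical rendering of the template (namespace, endpoint, and credentials are illustrative placeholders, not chart defaults):

```yaml
# Hypothetical rendered Secret, assuming prismEndPoint: 10.83.0.91,
# username: admin, password: mypassword, secretName: ntnx-secret,
# and a release installed into the ntnx-system namespace.
apiVersion: v1
kind: Secret
metadata:
  name: ntnx-secret
  namespace: ntnx-system
data:
  # base64 of "10.83.0.91:9440:admin:mypassword"; the port 9440 is
  # hard-coded by the template's printf
  key: MTAuODMuMC45MTo5NDQwOmFkbWluOm15cGFzc3dvcmQ=
```

Note that only the endpoint, username, and password come from values; the Prism port is always 9440.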


@@ -0,0 +1,46 @@
# Copyright 2021 Nutanix Inc
#
# example usage: kubectl create -f <this_file>
#
apiVersion: v1
kind: Service
metadata:
name: csi-metrics-service
namespace: {{ .Release.Namespace }}
labels:
app: csi-provisioner-ntnx-plugin
spec:
type: ClusterIP
selector:
app: csi-provisioner-ntnx-plugin
ports:
- name: provisioner
port: 9809
targetPort: 9809
protocol: TCP
- name: resizer
port: 9810
targetPort: 9810
protocol: TCP
{{- if eq .Values.servicemonitor.enabled true }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- with .Values.servicemonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: csi-driver
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- interval: 30s
port: provisioner
- interval: 30s
port: resizer
selector:
matchLabels:
app: csi-provisioner-ntnx-plugin
{{- end }}


@@ -0,0 +1,119 @@
# Default values for nutanix-csi-storage.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# parameters
# Legacy mode
#
# if legacy is set to true we keep the old reverse domain notation for the CSI driver name (com.nutanix.csi).
# needs to be set to true only when upgrading a release that was initially installed with a helm package older than 2.2.x
legacy: false
# OS settings
#
# Starting with v2.3.1 the CSI driver is OS independent; this value is reserved
os: none
# kubeletDir allows overriding the host location of kubelet's internal state.
kubeletDir: "/var/lib/kubelet"
# Global Settings for all pods
nodeSelector: {}
tolerations: []
imagePullPolicy: IfNotPresent
# Storage Class settings
#
# choose for which modes (Volume, File, Dynamic File) a StorageClass needs to be created
volumeClass: false
volumeClassName: "nutanix-volume"
fileClass: false
fileClassName: "nutanix-file"
dynamicFileClass: false
dynamicFileClassName: "nutanix-dynamicfile"
# Default Storage Class settings
#
# Decide which StorageClass will be the default
# valid values are: none, volume, file, dynfile
defaultStorageClass: none
# Nutanix Prism Elements settings
#
# Allow dynamic creation of Volumes and File shares
# needed if volumeClass or dynamicFileClass is set to true
prismEndPoint: 10.0.0.1
username: admin
password: nutanix/4u
secretName: ntnx-secret
# Nutanix Prism Elements Existing Secret
#
# if set to false a new secret will not be created
createSecret: true
# Volumes Settings
#
storageContainer: default
fsType: xfs
lvmVolume: false
lvmDisks: 4
networkSegmentation: false
# Files Settings
#
fileHost: 10.0.0.3
filePath: share
# Dynamic Files Settings
#
fileServerName: file
# Volume metrics and CSI operations metrics configuration
#
servicemonitor:
enabled: false
labels:
# This should match the serviceMonitorSelector logic configured
  # on the Prometheus instance.
k8s-app: csi-driver
# Pod specific Settings
#
provisioner:
image: quay.io/karbon/ntnx-csi:v2.5.1
nodeSelector: {}
tolerations: []
node:
image: quay.io/karbon/ntnx-csi:v2.5.1
nodeSelector: {}
tolerations: []
sidecars:
registrar:
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
provisioner:
image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
snapshotter:
image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.1
imageBeta: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3
resizer:
image: k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
livenessprobe:
image: k8s.gcr.io/sig-storage/livenessprobe:v2.3.0
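As a usage sketch, a minimal override file for this values.yaml that enables the Volumes StorageClass and makes it the cluster default might look like the following (endpoint, credentials, and storage container are placeholders for your own Prism Element):

```yaml
# example-values.yaml -- hypothetical override; adjust for your cluster
volumeClass: true
defaultStorageClass: volume
prismEndPoint: 10.83.0.91
username: admin
password: mypassword
storageContainer: default
fsType: xfs
```

It could then be applied with something like `helm install nutanix-storage . -n ntnx-system -f example-values.yaml` (release name and namespace assumed).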


@@ -0,0 +1,49 @@
# These are some examples of commonly ignored file patterns.
# You should customize this list as applicable to your project.
# Learn more about .gitignore:
# https://www.atlassian.com/git/tutorials/saving-changes/gitignore
# Node artifact files
node_modules/
dist/
# Compiled Java class files
*.class
# Compiled Python bytecode
*.py[cod]
# Log files
*.log
# Package files
*.jar
# Maven
target/
dist/
# JetBrains IDE
.idea/
# Unit test reports
TEST*.xml
# Generated by MacOS
.DS_Store
# Generated by Windows
Thumbs.db
# Applications
*.app
*.exe
*.war
# Large media files
*.mp4
*.tiff
*.avi
*.flv
*.mov
*.wmv


@@ -0,0 +1,16 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Vals-Operator
catalog.cattle.io/release-name: vals-operator
apiVersion: v2
appVersion: v0.5.0
description: This helm chart installs the Digitalis Vals Operator to manage and sync
  secrets from supported backends into Kubernetes
icon: https://digitalis.io/wp-content/uploads/2020/06/cropped-Digitalis-512x512-Blue_Digitalis-512x512-Blue-32x32.png
kubeVersion: '>= 1.19'
maintainers:
- email: info@digitalis.io
name: Digitalis.IO
name: vals-operator
type: application
version: 0.4.1


@@ -0,0 +1,33 @@
vals-operator
=============
This helm chart installs the Digitalis Vals Operator to manage and sync secrets from supported backends into Kubernetes
## Chart Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | |
| args | list | `[]` | |
| env | list | `[]` | |
| fullnameOverride | string | `""` | |
| image.pullPolicy | string | `"IfNotPresent"` | |
| image.repository | string | `"digitalisdocker/vals-operator"` | |
| image.tag | string | `""` | |
| imagePullSecrets | list | `[]` | |
| manageCrds | bool | `true` | |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| podSecurityContext | object | `{}` | |
| replicaCount | int | `1` | |
| resources | object | `{}` | |
| secretEnv | list | `[]` | |
| securityContext | object | `{}` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.create | bool | `true` | |
| serviceAccount.name | string | `""` | |
| serviceMonitor.enabled | bool | `false` | |
| serviceMonitor.labels | object | `{}` | |
| tolerations | list | `[]` | |
| volumeMounts | list | `[]` | |
| volumes | list | `[]` | |
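As a usage sketch, the defaults above could be overridden to read backend credentials from an existing secret and expose metrics to Prometheus (the secret name and label value are illustrative):

```yaml
# Hypothetical values override for vals-operator
secretEnv:
  - secretRef:
      name: backend-creds   # assumed pre-existing secret with backend env vars
serviceMonitor:
  enabled: true
  labels:
    release: prometheus     # must match your Prometheus serviceMonitorSelector
```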


@@ -0,0 +1,9 @@
# Vals-Operator
Here at [Digitalis](https://digitalis.io) we love [vals](https://github.com/variantdev/vals), it's a tool we use daily to keep secrets stored securely. We also use [secrets-manager](https://github.com/tuenti/secrets-manager) on the Kubernetes deployment we manage. Inspired by these two wonderful tools we have created this operator.
*vals-operator* syncs secrets from any secrets store supported by [vals](https://github.com/variantdev/vals) into Kubernetes. It works very similarly to [secrets-manager](https://github.com/tuenti/secrets-manager), and the code is actually based on it. Where they differ is that vals-operator supports not just HashiCorp Vault but many other secrets stores.
## Mirroring secrets
We have also added the ability to copy secrets between namespaces. It uses the format `ref+k8s://namespace/secret#key`. This way you can keep secrets generated in one namespace in sync with any other namespace in the cluster.
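Based on the ValsSecret CRD shipped with this chart, a mirrored secret could be declared along these lines (resource, namespace, and key names are illustrative):

```yaml
apiVersion: digitalis.io/v1
kind: ValsSecret
metadata:
  name: mirrored-db-credentials
spec:
  name: mirrored-db-credentials   # name of the Kubernetes Secret to create
  ttl: 300                        # re-check the backend every 300 seconds
  data:
    password:
      ref: ref+k8s://prod/db-credentials#password
```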


@@ -0,0 +1,130 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
"helm.sh/hook": crd-install
"helm.sh/hook-delete-policy": "before-hook-creation"
creationTimestamp: null
name: valssecrets.digitalis.io
spec:
group: digitalis.io
names:
kind: ValsSecret
listKind: ValsSecretList
plural: valssecrets
singular: valssecret
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: ValsSecret is the Schema for the valssecrets API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ValsSecretSpec defines the desired state of ValsSecret
properties:
data:
additionalProperties:
properties:
encoding:
description: Encoding type for the secret. Only base64 supported.
Optional
type: string
ref:
description: Ref value to the secret in the format ref+backend://path
https://github.com/variantdev/vals
type: string
required:
- ref
type: object
type: object
databases:
items:
properties:
driver:
description: Defines the database type
type: string
hosts:
description: List of hosts to connect to, they'll be tried in
sequence until one succeeds
items:
type: string
type: array
loginCredentials:
description: Credentials to access the database
properties:
namespace:
description: Optional namespace of the secret, default current
namespace
type: string
passwordKey:
                          description: Key in the secret containing the database password
type: string
secretName:
description: Name of the secret containing the credentials
to be able to log in to the database
type: string
usernameKey:
description: Key in the secret containing the database username
type: string
required:
- passwordKey
- secretName
type: object
passwordKey:
                      description: Key in the secret containing the database password
type: string
port:
description: Database port number
type: integer
userHost:
description: Used for MySQL only, the host part for the username
type: string
usernameKey:
description: Key in the secret containing the database username
type: string
required:
- driver
- hosts
- passwordKey
type: object
type: array
name:
type: string
ttl:
format: int64
type: integer
type:
type: string
required:
- data
type: object
status:
description: ValsSecretStatus defines the observed state of ValsSecret
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@@ -0,0 +1,26 @@
questions:
#image configurations
- variable: image.repository
default: "digitalisdocker/vals-operator"
description: image registry
type: string
label: Image Registry
group: "Container Images"
- variable: image.tag
default: "v0.3.0"
description: Image tag
type: string
label: Image Tag
group: "Container Images"
- variable: imagePullSecrets
default: ""
description: secret name to pull image
type: string
label: Image Pull Secrets
group: "Container Images"
- variable: environmentSecret
default: ""
description: "The secret containing env variables to access the backend secrets store."
label: Config Secret
type: string
group: "Settings"


@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "vals-operator.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "vals-operator.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "vals-operator.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "vals-operator.labels" -}}
helm.sh/chart: {{ include "vals-operator.chart" . }}
{{ include "vals-operator.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "vals-operator.selectorLabels" -}}
app.kubernetes.io/name: {{ include "vals-operator.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "vals-operator.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "vals-operator.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@@ -0,0 +1,6 @@
{{- if .Values.manageCrds -}}
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" }}
{{ $.Files.Get $path }}
---
{{- end }}
{{- end }}


@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "vals-operator.fullname" . }}
labels:
{{- include "vals-operator.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "vals-operator.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "vals-operator.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "vals-operator.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.args }}
args:
{{- toYaml .Values.args | nindent 12 }}
{{- end }}
{{- if .Values.environmentSecret }}
envFrom:
- secretRef:
name: "{{ .Values.environmentSecret }}"
{{- else }}
envFrom:
{{- toYaml .Values.secretEnv | nindent 12 }}
{{- end }}
{{- if .Values.env }}
env:
{{- toYaml .Values.env | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.volumeMounts }}
volumeMounts:
{{- toYaml .Values.volumeMounts | nindent 12 }}
{{- end }}
ports:
- containerPort: {{ .Values.metricsPort | default 8080 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.volumes }}
volumes:
{{- toYaml .Values.volumes | nindent 8 }}
{{- end }}


@@ -0,0 +1,64 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: vals-operator
labels:
{{- include "vals-operator.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- "secrets"
verbs:
- "get"
- "list"
- "watch"
- "update"
- "delete"
- "create"
- apiGroups:
- ""
resources:
- "events"
verbs:
- "create"
- "patch"
- apiGroups:
- "digitalis.io"
resources:
- "valssecrets"
verbs:
- "get"
- "list"
- "watch"
- "update"
- "delete"
- "create"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: vals-operator
labels:
{{- include "vals-operator.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: vals-operator
subjects:
- kind: ServiceAccount
name: {{ include "vals-operator.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "vals-operator.serviceAccountName" . }}
labels:
{{- include "vals-operator.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,37 @@
{{- if .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "vals-operator.fullname" . }}
labels:
{{- if .Values.serviceMonitor.labels }}
{{ toYaml .Values.serviceMonitor.labels | nindent 4 }}
{{- else }}
app: {{ template "vals-operator.name" . }}
chart: {{ template "vals-operator.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end }}
{{- if .Values.serviceMonitor.namespace }}
namespace: {{ .Values.serviceMonitor.namespace }}
{{- end }}
spec:
endpoints:
- targetPort: "metrics"
{{- if .Values.serviceMonitor.interval }}
interval: {{ .Values.serviceMonitor.interval }}
{{- end }}
{{- if .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
{{- end }}
path: /metrics
port: {{ .Values.metricsPort | default 8080 }}
tlsConfig:
insecureSkipVerify: true
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
{{- include "vals-operator.selectorLabels" . | nindent 6 }}
{{- end }}


@@ -0,0 +1,106 @@
replicaCount: 1
image:
repository: digitalisdocker/vals-operator
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
manageCrds: true
# additional arguments to operator
args: []
# -exclude-namespaces string
# Comma separated list of namespaces to ignore.
# -health-probe-bind-address string
# The address the probe endpoint binds to. (default ":8081")
# -kubeconfig string
# Paths to a kubeconfig. Only required if out-of-cluster.
# -leader-elect
# Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
# -metrics-bind-address string
# The address the metric endpoint binds to. (default ":8080")
# -reconcile-period duration
# How often the controller will re-queue vals-operator events. (default 5s)
# -record-changes
# Records every time a secret has been updated. You can view them with kubectl describe. It may also be disabled globally and enabled per secret via the annotation 'vals-operator.digitalis.io/record: "true"' (default true)
# -ttl duration
# How often to check backend for updates. (default 5m0s)
# -watch-namespaces string
# Comma separated list of namespaces that vals-operator will watch.
# -zap-devel
# Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
# -zap-encoder value
# Zap log encoding (one of 'json' or 'console')
# -zap-log-level value
# Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
# -zap-stacktrace-level value
# Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
environmentSecret: ""
# See https://github.com/variantdev/vals
# for information on setting up your backend environment.
env: []
# - name: VAULT_SKIP_VERIFY
# value: "true"
secretEnv: []
# - secretRef:
# name: aws-creds
volumes: []
# - name: creds
# secret:
# secretName: gcs-credentials
volumeMounts: []
# - name: creds
# mountPath: /secret
# readOnly: true
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
metricsPort: 8080
serviceMonitor:
# When set to true then use a ServiceMonitor to collect metrics
enabled: false
# Custom labels to use in the ServiceMonitor to be matched with a specific Prometheus
labels: {}
# Set the namespace the ServiceMonitor should be deployed to
# namespace: default
# Set how frequently Prometheus should scrape
# interval: 30s
# Set timeout for scrape
# scrapeTimeout: 10s
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}


@@ -2553,6 +2553,47 @@ entries:
- assets/nutanix-csi-snapshot/nutanix-csi-snapshot-1.0.0.tgz
version: 1.0.0
nutanix-csi-storage:
- annotations:
artifacthub.io/changes: |
- Update Nutanix CSI Driver to 2.5.0
artifacthub.io/containsSecurityUpdates: "true"
artifacthub.io/displayName: Nutanix CSI Storage
artifacthub.io/links: |
- name: Nutanix CSI Driver documentation
url: https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_5_0:CSI-Volume-Driver-v2_5_0
artifacthub.io/maintainers: |
- name: Nutanix Cloud Native Team
email: cloudnative@nutanix.com
artifacthub.io/recommendations: |
- url: https://artifacthub.io/packages/helm/nutanix/nutanix-csi-snapshot
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Nutanix CSI Storage
catalog.cattle.io/release-name: nutanix-csi-storage
apiVersion: v1
appVersion: 2.5.1
created: "2022-02-17T11:01:02.445518+01:00"
description: Nutanix Container Storage Interface (CSI) Driver
digest: 9780b825e3298991fb93e6fa7764be7cd8fad51470b8684cd1cac13a5a26e187
home: https://github.com/nutanix/helm
icon: https://avatars2.githubusercontent.com/u/6165865?s=200&v=4
keywords:
- Nutanix
- Storage
- Volumes
- Files
- StorageClass
- RedHat
- CentOS
- Ubuntu
- CSI
kubeVersion: '>= 1.17.0-0'
maintainers:
- email: cloudnative@nutanix.com
name: nutanix-cloud-native-bot
name: nutanix-csi-storage
urls:
- assets/nutanix-csi-storage/nutanix-csi-storage-2.5.100.tgz
version: 2.5.100
- annotations:
artifacthub.io/changes: |
- Update Nutanix CSI Driver to 2.5.0
@@ -3571,6 +3612,26 @@ entries:
- assets/universal-crossplane/universal-crossplane-1.2.200100.tgz
version: 1.2.200100
vals-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Vals-Operator
catalog.cattle.io/release-name: vals-operator
apiVersion: v2
appVersion: v0.5.0
created: "2022-02-18T13:18:49.589482-05:00"
description: This helm chart installs the Digitalis Vals Operator to manage sync
secrets from supported backends into Kubernetes
digest: 48919f4c9e4bf65c84d300466758533ef63ef00023403ce4fcd5189606af7d6a
icon: https://digitalis.io/wp-content/uploads/2020/06/cropped-Digitalis-512x512-Blue_Digitalis-512x512-Blue-32x32.png
kubeVersion: '>= 1.19'
maintainers:
- email: info@digitalis.io
name: Digitalis.IO
name: vals-operator
type: application
urls:
- assets/vals-operator/vals-operator-0.4.1.tgz
version: 0.4.1
- apiVersion: v2
appVersion: v0.4.0
created: "2022-01-07T09:27:48.235665Z"


@@ -8,5 +8,5 @@
+ catalog.cattle.io/release-name: nutanix-csi-storage
+ catalog.cattle.io/display-name: Nutanix CSI Storage
apiVersion: v1
-appVersion: 2.5.0
+appVersion: 2.5.1
description: Nutanix Container Storage Interface (CSI) Driver


@@ -1,6 +1,6 @@
--- charts-original/README.md
+++ charts/README.md
-@@ -41,6 +41,7 @@
+@@ -43,6 +43,7 @@
- Kubernetes 1.17 or later
- Kubernetes worker nodes must have the iSCSI package installed (Nutanix Volumes mode) and/or NFS tools (Nutanix Files mode)
- This chart have been validated on RHEL/CentOS 7/8 and Ubuntu 18.04/20.04/21.04/21.10, but the new architecture enables easy portability to other distributions.


@@ -1,2 +1,2 @@
-url: https://github.com/nutanix/helm/releases/download/nutanix-csi-storage-2.5.0/nutanix-csi-storage-2.5.0.tgz
+url: https://github.com/nutanix/helm/releases/download/nutanix-csi-storage-2.5.1/nutanix-csi-storage-2.5.1.tgz
packageVersion: 00


@@ -3,7 +3,7 @@
@@ -10,3 +10,7 @@
name: vals-operator
type: application
-version: 0.3.0
+version: 0.4.0
+annotations:
+ catalog.cattle.io/certified: partner
+ catalog.cattle.io/display-name: Vals-Operator


@@ -1,2 +1,2 @@
-url: https://digitalis-io.github.io/helm-charts/charts/vals-operator-0.3.0.tgz
+url: https://digitalis-io.github.io/helm-charts/charts/vals-operator-0.4.0.tgz
packageVersion: 01