Merge pull request #352 from samuelattwood/main

Release Partner Charts
Samuel Attwood 2022-02-25 17:10:20 -05:00 committed by GitHub
commit 866ead5a8b
66 changed files with 6784 additions and 0 deletions


@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@ -0,0 +1,17 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: NeuVector
catalog.cattle.io/release-name: neuvector
apiVersion: v1
appVersion: 4.4.4
description: Helm chart for NeuVector's core services
home: https://neuvector.com
icon: https://avatars2.githubusercontent.com/u/19367275?s=200&v=4
keywords:
- security
kubeVersion: '>=1.13.0-0'
maintainers:
- email: support@neuvector.com
name: becitsthere
name: neuvector
version: 1.9.100


@ -0,0 +1,198 @@
# NeuVector Helm Chart
Helm chart for NeuVector container security's core services.
## Preparation if using Helm 2
- Kubernetes 1.7+
- Helm installed and Tiller pod is running
- Cluster role `cluster-admin` available; check with:
```console
$ kubectl get clusterrole cluster-admin
```
If nothing is returned, create the `cluster-admin` role:
cluster-admin.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-admin
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
```
```console
$ kubectl create -f cluster-admin.yaml
```
- If you have not created a service account for Tiller, create one and grant it admin privileges on the cluster:
```console
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ kubectl patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' -n kube-system
```
## CRD
Because the CRD (Custom Resource Definition) policies can be deployed before NeuVector's core product, a separate 'crd' Helm chart is provided. The crd template in the 'core' chart is kept for backward compatibility. Set `crdwebhook.enabled` to false if you use the new 'crd' chart.
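For example, when the separate 'crd' chart is installed first, the core chart can be deployed with an override along these lines (a minimal sketch):
```yaml
# values override for the 'core' chart when the separate 'crd' chart is used
crdwebhook:
  enabled: false
```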
## Choosing container runtime
The NeuVector platform supports Docker, CRI-O and containerd as the container runtime. k3s and Bottlerocket clusters use their own runtime socket paths, so enable the matching runtime with `k3s.enabled` or `bottlerocket.enabled`, respectively.
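For example, a containerd-based cluster might use an override like the following (a minimal sketch; the socket path shown is the chart default and may differ on your distribution):
```yaml
# Example override: select the containerd runtime (enable only one runtime).
containerd:
  enabled: true
  path: /var/run/containerd/containerd.sock   # adjust if your distribution uses a different socket
# For k3s or Bottlerocket, set k3s.enabled or bottlerocket.enabled instead.
```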
## Configuration
The following table lists the configurable parameters of the NeuVector chart and their default values.
Parameter | Description | Default | Notes
--------- | ----------- | ------- | -----
`openshift` | If deploying in OpenShift, set this to true | `false` |
`registry` | NeuVector container registry | `registry.neuvector.com` |
`tag` | image tag for controller enforcer manager | `latest` |
`oem` | OEM release name | `nil` |
`imagePullSecrets` | image pull secret | `nil` |
`psp` | NeuVector Pod Security Policy when psp policy is enabled | `false` |
`serviceAccount` | Service account name for NeuVector components | `default` |
`controller.enabled` | If true, create controller | `true` |
`controller.image.repository` | controller image repository | `neuvector/controller` |
`controller.image.hash` | controller image hash in the format of sha256:xxxx. If present it overwrites the image tag value. | |
`controller.replicas` | controller replicas | `3` |
`controller.schedulerName` | kubernetes scheduler name | `nil` |
`controller.affinity` | controller affinity rules | ... | spread controllers to different nodes |
`controller.tolerations` | List of node taints to tolerate | `nil` |
`controller.resources` | Add resources requests and limits to controller deployment | `{}` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`controller.nodeSelector` | Enable and specify nodeSelector labels | `{}` |
`controller.disruptionbudget` | controller PodDisruptionBudget. 0 to disable. Recommended value: 2. | `0` |
`controller.priorityClassName` | controller priorityClassName. Must exist prior to helm deployment. Leave empty to disable. | `nil` |
`controller.env` | User-defined environment variables for controller. | `[]` |
`controller.pvc.enabled` | If true, enable persistence for controller using PVC | `false` | Require persistent volume type RWX, and storage 1Gi
`controller.pvc.storageClass` | Storage Class to be used | `default` |
`controller.pvc.capacity` | Storage capacity | `1Gi` |
`controller.azureFileShare.enabled` | If true, enable the usage of an existing or statically provisioned Azure File Share | `false` |
`controller.azureFileShare.secretName` | The name of the secret containing the Azure file share storage account name and key | `nil` |
`controller.azureFileShare.shareName` | The name of the Azure file share to use | `nil` |
`controller.apisvc.type` | Controller REST API service type | `nil` |
`controller.apisvc.annotations` | Add annotations to controller REST API service | `{}` |
`controller.apisvc.route.enabled` | If true, create an OpenShift route to expose the Controller REST API service | `false` |
`controller.apisvc.route.termination` | Specify TLS termination for OpenShift route for Controller REST API service. Possible values: passthrough, edge, reencrypt | `passthrough` |
`controller.apisvc.route.host` | Set controller REST API service hostname | `nil` |
`controller.certificate.secret` | Replace controller REST API certificate using secret if secret name is specified | `nil` |
`controller.certificate.keyFile` | Replace controller REST API certificate key file | `tls.key` |
`controller.certificate.pemFile` | Replace controller REST API certificate pem file | `tls.pem` |
`controller.federation.mastersvc.type` | Multi-cluster primary cluster service type. If specified, the deployment will be used to manage other clusters. Possible values include NodePort, LoadBalancer and ClusterIP. | `nil` |
`controller.federation.mastersvc.route.enabled` | If true, create an OpenShift route to expose the Multi-cluster primary cluster service | `false` |
`controller.federation.mastersvc.route.host` | Set OpenShift route host for primary cluster service | `nil` |
`controller.federation.mastersvc.route.termination` | Specify TLS termination for OpenShift route for Multi-cluster primary cluster service. Possible values: passthrough, edge, reencrypt | `passthrough` |
`controller.federation.mastersvc.ingress.enabled` | If true, create ingress for federation master service, must also set ingress host value | `false` | enable this if ingress controller is installed
`controller.federation.mastersvc.ingress.tls` | If true, TLS is enabled for controller federation master ingress service |`false` | If set, the tls-host used is the one set with `controller.federation.mastersvc.ingress.host`.
`controller.federation.mastersvc.ingress.host` | Must set this host value if ingress is enabled | `nil` |
`controller.federation.mastersvc.ingress.secretName` | Name of the secret to be used for TLS-encryption | `nil` | Secret must be created separately (Let's encrypt, manually)
`controller.federation.mastersvc.ingress.path` | Set ingress path |`/` | If set, it might be necessary to set a rewrite rule in annotations.
`controller.federation.mastersvc.ingress.annotations` | Add annotations to ingress to influence behavior | `ingress.kubernetes.io/protocol: https ingress.kubernetes.io/rewrite-target: /` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`controller.federation.managedsvc.type` | Multi-cluster managed cluster service type. If specified, the deployment can be managed by the primary cluster. Possible values include NodePort, LoadBalancer and ClusterIP. | `nil` |
`controller.federation.managedsvc.route.enabled` | If true, create an OpenShift route to expose the Multi-cluster managed cluster service | `false` |
`controller.federation.managedsvc.route.host` | Set OpenShift route host for managed service | `nil` |
`controller.federation.managedsvc.route.termination` | Specify TLS termination for OpenShift route for Multi-cluster managed cluster service. Possible values: passthrough, edge, reencrypt | `passthrough` |
`controller.federation.managedsvc.ingress.enabled` | If true, create ingress for federation managed service, must also set ingress host value | `false` | enable this if ingress controller is installed
`controller.federation.managedsvc.ingress.tls` | If true, TLS is enabled for controller federation managed ingress service |`false` | If set, the tls-host used is the one set with `controller.federation.managedsvc.ingress.host`.
`controller.federation.managedsvc.ingress.host` | Must set this host value if ingress is enabled | `nil` |
`controller.federation.managedsvc.ingress.secretName` | Name of the secret to be used for TLS-encryption | `nil` | Secret must be created separately (Let's encrypt, manually)
`controller.federation.managedsvc.ingress.path` | Set ingress path |`/` | If set, it might be necessary to set a rewrite rule in annotations.
`controller.federation.managedsvc.ingress.annotations` | Add annotations to ingress to influence behavior | `ingress.kubernetes.io/protocol: https ingress.kubernetes.io/rewrite-target: /` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`controller.ingress.enabled` | If true, create ingress for rest api, must also set ingress host value | `false` | enable this if ingress controller is installed
`controller.ingress.tls` | If true, TLS is enabled for controller rest api ingress service |`false` | If set, the tls-host used is the one set with `controller.ingress.host`.
`controller.ingress.host` | Must set this host value if ingress is enabled | `nil` |
`controller.ingress.secretName` | Name of the secret to be used for TLS-encryption | `nil` | Secret must be created separately (Let's encrypt, manually)
`controller.ingress.path` | Set ingress path |`/` | If set, it might be necessary to set a rewrite rule in annotations.
`controller.ingress.annotations` | Add annotations to ingress to influence behavior | `ingress.kubernetes.io/protocol: https ingress.kubernetes.io/rewrite-target: /` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`controller.configmap.enabled` | If true, configure NeuVector global settings using a ConfigMap | `false`
`controller.configmap.data` | NeuVector configuration in YAML format | `{}`
`controller.secret.enabled` | If true, configure NeuVector global settings using secrets | `false`
`controller.secret.data` | NeuVector configuration in key/value pair format | `{}`
`enforcer.enabled` | If true, create enforcer | `true` |
`enforcer.image.repository` | enforcer image repository | `neuvector/enforcer` |
`enforcer.image.hash` | enforcer image hash in the format of sha256:xxxx. If present it overwrites the image tag value. | |
`enforcer.priorityClassName` | enforcer priorityClassName. Must exist prior to helm deployment. Leave empty to disable. | `nil` |
`enforcer.tolerations` | List of node taints to tolerate | `- effect: NoSchedule`<br>`key: node-role.kubernetes.io/master` | other taints can be added after the default
`enforcer.resources` | Add resources requests and limits to enforcer deployment | `{}` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`manager.enabled` | If true, create manager | `true` |
`manager.image.repository` | manager image repository | `neuvector/manager` |
`manager.image.hash` | manager image hash in the format of sha256:xxxx. If present it overwrites the image tag value. | |
`manager.priorityClassName` | manager priorityClassName. Must exist prior to helm deployment. Leave empty to disable. | `nil` |
`manager.env.ssl` | If false, manager will listen on HTTP access instead of HTTPS | `true` |
`manager.svc.type` | set manager service type for native Kubernetes | `NodePort`;<br>if it is OpenShift platform or ingress is enabled, then default is `ClusterIP` | set to LoadBalancer if using cloud providers, such as Azure, Amazon, Google
`manager.svc.loadBalancerIP` | if manager service type is LoadBalancer, this is used to specify the load balancer's IP | `nil` |
`manager.svc.annotations` | Add annotations to manager service | `{}` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`manager.route.enabled` | If true, create an OpenShift route to expose the management console service | `true` |
`manager.route.host` | Set OpenShift route host for management console service | `nil` |
`manager.route.termination` | Specify TLS termination for OpenShift route for management console service. Possible values: passthrough, edge, reencrypt | `passthrough` |
`manager.certificate.secret` | Replace manager UI certificate using secret if secret name is specified | `nil` |
`manager.certificate.keyFile` | Replace manager UI certificate key file | `tls.key` |
`manager.certificate.pemFile` | Replace manager UI certificate pem file | `tls.pem` |
`manager.ingress.enabled` | If true, create ingress, must also set ingress host value | `false` | enable this if ingress controller is installed
`manager.ingress.host` | Must set this host value if ingress is enabled | `nil` |
`manager.ingress.path` | Set ingress path |`/` | If set, it might be necessary to set a rewrite rule in annotations. Currently only supports `/`
`manager.ingress.annotations` | Add annotations to ingress to influence behavior | `{}` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`manager.ingress.tls` | If true, TLS is enabled for manager ingress service |`false` | If set, the tls-host used is the one set with `manager.ingress.host`.
`manager.ingress.secretName` | Name of the secret to be used for TLS-encryption | `nil` | Secret must be created separately (Let's encrypt, manually)
`manager.resources` | Add resources requests and limits to manager deployment | `{}` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml)
`manager.affinity` | manager affinity rules | `{}` |
`manager.tolerations` | List of node taints to tolerate | `nil` |
`manager.nodeSelector` | Enable and specify nodeSelector labels | `{}` |
`cve.updater.enabled` | If true, create cve updater | `true` |
`cve.updater.secure` | If true, the API server's certificate is validated | `false` |
`cve.updater.image.repository` | cve updater image repository | `neuvector/updater` |
`cve.updater.image.tag` | image tag for cve updater | `latest` |
`cve.updater.image.hash` | cve updater image hash in the format of sha256:xxxx. If present it overwrites the image tag value. | |
`cve.updater.priorityClassName` | cve updater priorityClassName. Must exist prior to helm deployment. Leave empty to disable. | `nil` |
`cve.updater.schedule` | cronjob cve updater schedule | `0 0 * * *` |
`cve.scanner.enabled` | If true, cve scanners will be deployed | `true` |
`cve.scanner.image.repository` | cve scanner image repository | `neuvector/scanner` |
`cve.scanner.image.tag` | cve scanner image tag | `latest` |
`cve.scanner.image.hash` | cve scanner image hash in the format of sha256:xxxx. If present it overwrites the image tag value. | |
`cve.scanner.priorityClassName` | cve scanner priorityClassName. Must exist prior to helm deployment. Leave empty to disable. | `nil` |
`cve.scanner.replicas` | external scanner replicas | `3` |
`cve.scanner.dockerPath` | the remote Docker socket if a CI/CD integration needs to scan images before they are pushed to the registry | `nil` |
`cve.scanner.resources` | Add resources requests and limits to scanner deployment | `{}` | see examples in [values.yaml](https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml) |
`cve.scanner.affinity` | scanner affinity rules | `{}` |
`cve.scanner.tolerations` | List of node taints to tolerate | `nil` |
`cve.scanner.nodeSelector` | Enable and specify nodeSelector labels | `{}` |
`docker.path` | docker path | `/var/run/docker.sock` |
`containerd.enabled` | Set to true, if the container runtime is containerd | `false` | **Note**: For k3s cluster, set k3s.enabled to true instead
`containerd.path` | If containerd is enabled, this local containerd socket path will be used | `/var/run/containerd/containerd.sock` |
`crio.enabled` | Set to true, if the container runtime is cri-o | `false` |
`crio.path` | If cri-o is enabled, this local cri-o socket path will be used | `/var/run/crio/crio.sock` |
`k3s.enabled` | Set to true for k3s | `false` |
`k3s.runtimePath` | If k3s is enabled, this local containerd socket path will be used | `/run/k3s/containerd/containerd.sock` |
`bottlerocket.enabled` | Set to true if using AWS bottlerocket | `false` |
`bottlerocket.runtimePath` | If bottlerocket is enabled, this local containerd socket path will be used | `/run/dockershim.sock` |
`admissionwebhook.type` | admission webhook type | `ClusterIP` |
`crdwebhook.enabled` | Enable crd service and create crd related resources | `true` |
`crdwebhook.type` | crd webhook type | `ClusterIP` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install my-release --namespace neuvector ./neuvector-helm/ --set manager.env.ssl=off
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm install my-release --namespace neuvector ./neuvector-helm/ -f values.yaml
```
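For instance, such a values file might look like the sketch below; the keys mirror the table above, and the pull-secret name `regsecret` is only a placeholder for a secret you have created beforehand.
```yaml
# values.yaml -- illustrative overrides only; adjust to your environment
registry: registry.neuvector.com
tag: 4.4.4
imagePullSecrets: regsecret        # placeholder: name of a pre-created image pull secret
controller:
  replicas: 3
  pvc:
    enabled: true                  # requires an RWX-capable storage class
    capacity: 1Gi
manager:
  svc:
    type: LoadBalancer
containerd:
  enabled: true                    # enable only one container runtime
```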
---
Contact <support@neuvector.com> for access to container registry and docs.


@ -0,0 +1,14 @@
### Run-Time Protection Without Compromise
NeuVector delivers a complete run-time security solution with container process/file system protection and vulnerability scanning combined with the only true Layer 7 container firewall. Protect sensitive data with a complete container security platform.
NeuVector integrates tightly with Rancher and Kubernetes to extend the built-in security features for applications that require defense in depth. Security features include:
+ Build phase vulnerability scanning with Jenkins plug-in and registry scanning
+ Admission control to prevent vulnerable or unauthorized image deployments using Kubernetes admission control webhooks
+ Complete run-time scanning with network, process, and file system monitoring and protection
+ The industry's only layer 7 container firewall for multi-protocol threat detection and automated segmentation
+ Advanced network controls including DLP detection, service mesh integration, connection blocking and packet captures
+ Run-time vulnerability scanning and CIS benchmarks
Please Note: Before installing this chart, you will need an image pull secret and license key from NeuVector. Without this data, the chart will not work. Also configure the correct container runtime and runtime path.
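As a rough sketch, the corresponding values might look like this; the secret name below is a placeholder, not a chart default.
```yaml
# Illustrative values for a containerd-based cluster.
imagePullSecrets: my-pull-secret   # placeholder for a pre-created registry pull secret
containerd:
  enabled: true
```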


@ -0,0 +1,213 @@
questions:
#image configurations
- variable: registry
default: "registry.neuvector.com"
description: image registry
type: string
label: Image Registry
group: "Container Images"
- variable: oem
default: ""
description: OEM release name
type: string
label: OEM name
group: "Container Images"
- variable: tag
default: "4.4.4"
description: image tag for controller enforcer manager
type: string
label: Image Tag
group: "Container Images"
- variable: imagePullSecrets
default: ""
description: secret name to pull image
type: string
label: Image Pull Secrets
group: "Container Images"
- variable: controller.image.repository
default: "neuvector/controller"
description: controller image repository
type: string
label: Controller image path
group: "Container Images"
- variable: manager.image.repository
default: "neuvector/manager"
description: manager image repository
type: string
label: Manager image path
group: "Container Images"
- variable: enforcer.image.repository
default: "neuvector/enforcer"
description: enforcer image repository
type: string
label: Enforcer image path
group: "Container Images"
- variable: cve.scanner.image.repository
default: "neuvector/scanner"
description: scanner image repository
type: string
label: Scanner image path
group: "Container Images"
- variable: cve.updater.image.repository
default: "neuvector/updater"
description: cve updater image repository
type: string
label: CVE Updater image path
group: "Container Images"
#Container Runtime configurations
- variable: docker.enabled
default: true
description: Docker runtime. Enable only one runtime.
type: boolean
label: Docker Runtime
show_subquestion_if: true
group: "Container Runtime"
subquestions:
- variable: docker.path
default: "/var/run/docker.sock"
description: "Docker Runtime Path"
type: string
label: Runtime Path
- variable: containerd.enabled
default: "false"
description: Containerd runtime. Enable only one runtime.
type: boolean
label: Containerd Runtime
show_subquestion_if: true
group: "Container Runtime"
subquestions:
- variable: containerd.path
default: " /var/run/containerd/containerd.sock"
description: "Containerd Runtime Path"
type: string
label: Runtime Path
- variable: crio.enabled
default: "false"
description: CRI-O runtime. Enable only one runtime.
type: boolean
label: CRI-O Runtime
show_subquestion_if: true
group: "Container Runtime"
subquestions:
- variable: crio.path
default: "/var/run/crio/crio.sock"
description: "CRI-O Runtime Path"
type: string
label: Runtime Path
- variable: k3s.enabled
default: "false"
description: k3s containerd runtime. Enable only one runtime.
type: boolean
label: k3s Containerd Runtime
show_subquestion_if: true
group: "Container Runtime"
subquestions:
- variable: k3s.runtimePath
default: " /run/k3s/containerd/containerd.sock"
description: "k3s Containerd Runtime Path"
type: string
label: Runtime Path
#storage configurations
- variable: controller.pvc.enabled
default: false
description: If true, enable persistence for controller using PVC
type: boolean
label: PVC status
group: "PVC Configuration"
- variable: controller.pvc.storageClass
default: ""
description: Storage Class to be used
type: string
label: Storage Class Name
group: "PVC Configuration"
#ingress configurations
- variable: manager.ingress.enabled
default: false
description: If true, create ingress, must also set ingress host value
type: boolean
label: Manager ingress status
group: "Ingress Configuration"
- variable: manager.ingress.host
default: ""
description: Must set this host value if ingress is enabled
type: string
label: Manager Ingress host
group: "Ingress Configuration"
- variable: manager.ingress.path
default: "/"
description: Set ingress path
type: string
label: Manager Ingress path
group: "Ingress Configuration"
- variable: manager.ingress.annotations
default: "{}"
description: Add annotations to ingress to influence behavior. Please use the 'Edit as YAML' feature in the Rancher UI to add single or multiple lines of annotation.
type: string
label: Manager Ingress annotations
group: "Ingress Configuration"
- variable: controller.ingress.enabled
default: false
description: If true, create ingress for rest api, must also set ingress host value
type: boolean
label: Controller ingress status
group: "Ingress Configuration"
- variable: controller.ingress.host
default: ""
description: Must set this host value if ingress is enabled
type: string
label: Controller Ingress host
group: "Ingress Configuration"
- variable: controller.ingress.path
default: "/"
description: Set ingress path
type: string
label: Controller Ingress path
group: "Ingress Configuration"
- variable: controller.ingress.annotations
default: "{}"
description: Add annotations to ingress to influence behavior. Please use the 'Edit as YAML' feature in the Rancher UI to add single or multiple lines of annotation.
type: string
label: Controller Ingress annotations
group: "Ingress Configuration"
#service configurations
- variable: manager.svc.type
default: "NodePort"
description: Set manager service type for native Kubernetes
type: enum
label: Manager service type
group: "Service Configuration"
options:
- "NodePort"
- "ClusterIP"
- "LoadBalancer"
- variable: controller.federation.mastersvc.type
default: ""
description: Multi-cluster master cluster service type. If specified, the deployment will be used to manage other clusters. Possible values include NodePort, LoadBalancer and Ingress
type: enum
label: Fed Master Service Type
group: "Service Configuration"
options:
- "NodePort"
- "Ingress"
- "LoadBalancer"
- variable: controller.federation.managedsvc.type
default: ""
description: Multi-cluster managed cluster service type. If specified, the deployment will be managed by the master cluster. Possible values include NodePort, LoadBalancer and Ingress
type: enum
label: Fed Managed service type
group: "Service Configuration"
options:
- "NodePort"
- "Ingress"
- "LoadBalancer"
- variable: controller.apisvc.type
default: "NodePort"
description: Controller REST API service type
type: enum
label: Controller REST API Service Type
group: "Service Configuration"
options:
- "NodePort"
- "ClusterIP"
- "LoadBalancer"


@ -0,0 +1,20 @@
{{- if and .Values.manager.enabled .Values.manager.ingress.enabled }}
From outside the cluster, the NeuVector URL is:
http://{{ .Values.manager.ingress.host }}
{{- else if not .Values.openshift }}
Get the NeuVector URL by running these commands:
{{- if contains "NodePort" .Values.manager.svc.type }}
NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services neuvector-service-webui)
NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo https://$NODE_IP:$NODE_PORT
{{- else if contains "ClusterIP" .Values.manager.svc.type }}
CLUSTER_IP=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.clusterIP}" services neuvector-service-webui)
echo https://$CLUSTER_IP:8443
{{- else if contains "LoadBalancer" .Values.manager.svc.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w neuvector-service-webui'
SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} neuvector-service-webui -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo https://$SERVICE_IP:8443
{{- end }}
{{- end }}


@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "neuvector.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "neuvector.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "neuvector.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-admission-webhook
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- port: 443
targetPort: 20443
protocol: TCP
name: admission-webhook
type: {{ .Values.admissionwebhook.type }}
selector:
app: neuvector-controller-pod


@ -0,0 +1,119 @@
{{- $oc4 := and .Values.openshift (semverCompare ">=1.12-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- $oc3 := and .Values.openshift (not $oc4) (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-app
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- ""
resources:
- nodes
- pods
- services
- namespaces
verbs:
- get
- list
- watch
- update
---
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-rbac
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
{{- if .Values.openshift }}
- apiGroups:
- image.openshift.io
resources:
- imagestreams
verbs:
- get
- list
- watch
{{- end }}
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
- roles
- clusterrolebindings
- clusterroles
verbs:
- get
- list
- watch
---
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-admission
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
- mutatingwebhookconfigurations
verbs:
- get
- list
- watch
- create
- update
- delete
---
{{- if $oc4 }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: neuvector-binding-co
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- config.openshift.io
resources:
- clusteroperators
verbs:
- get
- list
{{- end }}


@ -0,0 +1,145 @@
{{- $oc4 := and .Values.openshift (semverCompare ">=1.12-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- $oc3 := and .Values.openshift (not $oc4) (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-app
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-app
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
---
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-rbac
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-rbac
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
---
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-admission
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-admission
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
---
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-view
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: view
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
---
{{- if $oc4 }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-co
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: neuvector-binding-co
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- end }}


@ -0,0 +1,199 @@
{{- if .Values.controller.enabled -}}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apps/v1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Deployment
metadata:
name: neuvector-controller-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.controller.replicas }}
minReadySeconds: 60
strategy:
{{ toYaml .Values.controller.strategy | indent 4 }}
selector:
matchLabels:
app: neuvector-controller-pod
template:
metadata:
labels:
app: neuvector-controller-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.controller.affinity }}
affinity:
{{ toYaml .Values.controller.affinity | indent 8 }}
{{- end }}
{{- if .Values.controller.tolerations }}
tolerations:
{{ toYaml .Values.controller.tolerations | indent 8 }}
{{- end }}
{{- if .Values.controller.nodeSelector }}
nodeSelector:
{{ toYaml .Values.controller.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.controller.schedulerName }}
schedulerName: {{ .Values.controller.schedulerName }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
{{- if .Values.controller.priorityClassName }}
priorityClassName: {{ .Values.controller.priorityClassName }}
{{- end }}
serviceAccountName: {{ .Values.serviceAccount }}
serviceAccount: {{ .Values.serviceAccount }}
containers:
- name: neuvector-controller-pod
{{ if eq .Values.registry "registry.neuvector.com" }}
{{ if .Values.oem }}
image: "{{ .Values.registry }}/{{ .Values.oem }}/controller:{{ .Values.tag }}"
{{- else }}
image: "{{ .Values.registry }}/controller:{{ .Values.tag }}"
{{- end }}
{{- else }}
{{ if .Values.controller.image.hash }}
image: "{{ .Values.registry }}/{{ .Values.controller.image.repository }}@{{ .Values.controller.image.hash }}"
{{- else }}
image: "{{ .Values.registry }}/{{ .Values.controller.image.repository }}:{{ .Values.tag }}"
{{- end }}
{{- end }}
securityContext:
privileged: true
resources:
{{- if .Values.controller.resources }}
{{ toYaml .Values.controller.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.resources | indent 12 }}
{{- end }}
readinessProbe:
exec:
command:
- cat
- /tmp/ready
initialDelaySeconds: 5
periodSeconds: 5
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.{{ .Release.Namespace }}
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
{{- if or .Values.controller.pvc.enabled .Values.controller.azureFileShare.enabled }}
- name: CTRL_PERSIST_CONFIG
value: "1"
{{- end }}
{{- with .Values.controller.env }}
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- mountPath: /var/neuvector
name: nv-share
readOnly: false
{{- if .Values.containerd.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else if .Values.k3s.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else if .Values.bottlerocket.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else if .Values.crio.enabled }}
- mountPath: /var/run/crio/crio.sock
{{- else }}
- mountPath: /var/run/docker.sock
{{- end }}
name: runtime-sock
readOnly: true
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
- mountPath: /etc/config
name: config-volume
readOnly: true
{{- if .Values.controller.certificate.secret }}
- mountPath: /etc/neuvector/certs/ssl-cert.key
subPath: {{ .Values.controller.certificate.keyFile }}
name: cert
readOnly: true
- mountPath: /etc/neuvector/certs/ssl-cert.pem
subPath: {{ .Values.controller.certificate.pemFile }}
name: cert
readOnly: true
{{- end }}
terminationGracePeriodSeconds: 300
restartPolicy: Always
volumes:
- name: nv-share
{{- if .Values.controller.pvc.enabled }}
persistentVolumeClaim:
claimName: neuvector-data
{{- else if .Values.controller.azureFileShare.enabled }}
azureFile:
secretName: {{ .Values.controller.azureFileShare.secretName }}
shareName: {{ .Values.controller.azureFileShare.shareName }}
readOnly: false
{{- else }}
hostPath:
path: /var/neuvector
{{- end }}
- name: runtime-sock
hostPath:
{{- if .Values.containerd.enabled }}
path: {{ .Values.containerd.path }}
{{- else if .Values.crio.enabled }}
path: {{ .Values.crio.path }}
{{- else if .Values.k3s.enabled }}
path: {{ .Values.k3s.runtimePath }}
{{- else if .Values.bottlerocket.enabled }}
path: {{ .Values.bottlerocket.runtimePath }}
{{- else }}
path: {{ .Values.docker.path }}
{{- end }}
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
- name: config-volume
projected:
sources:
- configMap:
name: neuvector-init
optional: true
- secret:
name: neuvector-init
optional: true
{{- if .Values.controller.certificate.secret }}
- name: cert
secret:
secretName: {{ .Values.controller.certificate.secret }}
{{- end }}
{{- if gt (int .Values.controller.disruptionbudget) 0 }}
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: neuvector-controller-pdb
namespace: neuvector
spec:
minAvailable: {{ .Values.controller.disruptionbudget }}
selector:
matchLabels:
app: neuvector-controller-pod
{{- end }}
{{- end }}


@ -0,0 +1,210 @@
{{- if .Values.controller.enabled }}
{{- if .Values.controller.ingress.enabled }}
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: neuvector-restapi-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.controller.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.ingress.tls }}
tls:
- hosts:
- {{ .Values.controller.ingress.host }}
{{- if .Values.controller.ingress.secretName }}
secretName: {{ .Values.controller.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.controller.ingress.host }}
http:
paths:
- path: {{ .Values.controller.ingress.path }}
pathType: Prefix
backend:
service:
name: neuvector-svc-controller-api
port:
number: 10443
{{- else }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: neuvector-restapi-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.controller.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.ingress.tls }}
tls:
- hosts:
- {{ .Values.controller.ingress.host }}
{{- if .Values.controller.ingress.secretName }}
secretName: {{ .Values.controller.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.controller.ingress.host }}
http:
paths:
- path: {{ .Values.controller.ingress.path }}
backend:
serviceName: neuvector-svc-controller-api
servicePort: 10443
{{- end }}
{{- end }}
{{- if .Values.controller.federation.mastersvc.ingress.enabled }}
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: neuvector-mastersvc-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.controller.federation.mastersvc.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.federation.mastersvc.ingress.tls }}
tls:
- hosts:
- {{ .Values.controller.federation.mastersvc.ingress.host }}
{{- if .Values.controller.federation.mastersvc.ingress.secretName }}
secretName: {{ .Values.controller.federation.mastersvc.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.controller.federation.mastersvc.ingress.host }}
http:
paths:
- path: {{ .Values.controller.federation.mastersvc.ingress.path }}
pathType: Prefix
backend:
service:
name: neuvector-svc-controller-fed-master
port:
number: 11443
{{- else }}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: neuvector-mastersvc-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.controller.federation.mastersvc.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.federation.mastersvc.ingress.tls }}
tls:
- hosts:
- {{ .Values.controller.federation.mastersvc.ingress.host }}
{{- if .Values.controller.federation.mastersvc.ingress.secretName }}
secretName: {{ .Values.controller.federation.mastersvc.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.controller.federation.mastersvc.ingress.host }}
http:
paths:
- path: {{ .Values.controller.federation.mastersvc.ingress.path }}
backend:
serviceName: neuvector-svc-controller-fed-master
servicePort: 11443
{{- end }}
{{- end }}
{{- if .Values.controller.federation.managedsvc.ingress.enabled }}
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: neuvector-managedsvc-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.controller.federation.managedsvc.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.federation.managedsvc.ingress.tls }}
tls:
- hosts:
- {{ .Values.controller.federation.managedsvc.ingress.host }}
{{- if .Values.controller.federation.managedsvc.ingress.secretName }}
secretName: {{ .Values.controller.federation.managedsvc.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.controller.federation.managedsvc.ingress.host }}
http:
paths:
- path: {{ .Values.controller.federation.managedsvc.ingress.path }}
pathType: Prefix
backend:
service:
name: neuvector-svc-controller-fed-managed
port:
number: 10443
{{- else }}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: neuvector-managedsvc-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.controller.federation.managedsvc.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.federation.managedsvc.ingress.tls }}
tls:
- hosts:
- {{ .Values.controller.federation.managedsvc.ingress.host }}
{{- if .Values.controller.federation.managedsvc.ingress.secretName }}
secretName: {{ .Values.controller.federation.managedsvc.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.controller.federation.managedsvc.ingress.host }}
http:
paths:
- path: {{ .Values.controller.federation.managedsvc.ingress.path }}
backend:
serviceName: neuvector-svc-controller-fed-managed
servicePort: 10443
{{- end }}
{{- end }}
{{- end -}}


@ -0,0 +1,82 @@
{{- if .Values.openshift -}}
{{- if .Values.controller.apisvc.route.enabled }}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: route.openshift.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: Route
metadata:
name: neuvector-route-api
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.apisvc.route.host }}
host: {{ .Values.controller.apisvc.route.host }}
{{- end }}
to:
kind: Service
name: neuvector-svc-controller-api
port:
targetPort: controller-api
tls:
termination: {{ .Values.controller.apisvc.route.termination }}
---
{{ end -}}
{{- if .Values.controller.federation.mastersvc.route.enabled }}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: route.openshift.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: Route
metadata:
name: neuvector-route-fed-master
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.federation.mastersvc.route.host }}
host: {{ .Values.controller.federation.mastersvc.route.host }}
{{- end }}
to:
kind: Service
name: neuvector-svc-controller-fed-master
port:
targetPort: fed
tls:
termination: {{ .Values.controller.federation.mastersvc.route.termination }}
---
{{ end -}}
{{- if .Values.controller.federation.managedsvc.route.enabled }}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: route.openshift.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: Route
metadata:
name: neuvector-route-fed-managed
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.controller.federation.managedsvc.route.host }}
host: {{ .Values.controller.federation.managedsvc.route.host }}
{{- end }}
to:
kind: Service
name: neuvector-svc-controller-fed-managed
port:
targetPort: fed
tls:
termination: {{ .Values.controller.federation.managedsvc.route.termination }}
{{ end -}}
{{- end -}}


@ -0,0 +1,89 @@
{{- if .Values.controller.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-controller
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
clusterIP: None
ports:
- port: 18300
protocol: "TCP"
name: "cluster-tcp-18300"
- port: 18301
protocol: "TCP"
name: "cluster-tcp-18301"
- port: 18301
protocol: "UDP"
name: "cluster-udp-18301"
selector:
app: neuvector-controller-pod
{{- if .Values.controller.apisvc.type }}
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-controller-api
namespace: {{ .Release.Namespace }}
{{- with .Values.controller.apisvc.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.controller.apisvc.type }}
ports:
- port: 10443
protocol: "TCP"
name: "controller-api"
selector:
app: neuvector-controller-pod
{{ end -}}
{{- if .Values.controller.federation.mastersvc.type }}
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-controller-fed-master
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.controller.federation.mastersvc.type }}
ports:
- port: 11443
name: fed
protocol: TCP
selector:
app: neuvector-controller-pod
{{ end -}}
{{- if .Values.controller.federation.managedsvc.type }}
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-controller-fed-managed
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.controller.federation.managedsvc.type }}
ports:
- port: 10443
name: fed
protocol: TCP
selector:
app: neuvector-controller-pod
{{ end -}}
{{- end -}}


@ -0,0 +1,926 @@
{{- if .Values.crdwebhook.enabled -}}
{{- $oc4 := and .Values.openshift (semverCompare ">=1.12-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- $oc3 := and .Values.openshift (not $oc4) (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition
metadata:
name: nvsecurityrules.neuvector.com
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
group: neuvector.com
names:
kind: NvSecurityRule
listKind: NvSecurityRuleList
plural: nvsecurityrules
singular: nvsecurityrule
scope: Namespaced
{{- if (semverCompare "<1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
version: v1
{{- end }}
versions:
- name: v1
served: true
storage: true
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
schema:
openAPIV3Schema:
properties:
spec:
properties:
egress:
items:
properties:
action:
enum:
- allow
- deny
type: string
applications:
items:
type: string
type: array
name:
type: string
ports:
type: string
priority:
type: integer
selector:
properties:
comment:
type: string
criteria:
items:
properties:
key:
type: string
op:
type: string
value:
type: string
required:
- key
- op
- value
type: object
type: array
name:
type: string
original_name:
type: string
required:
- name
- criteria
type: object
required:
- action
- name
- selector
type: object
type: array
file:
items:
properties:
app:
items:
type: string
type: array
behavior:
enum:
- monitor_change
- block_access
type: string
filter:
type: string
recursive:
type: boolean
required:
- behavior
- filter
type: object
type: array
ingress:
items:
properties:
action:
enum:
- allow
- deny
type: string
applications:
items:
type: string
type: array
name:
type: string
ports:
type: string
priority:
type: integer
selector:
properties:
comment:
type: string
criteria:
items:
properties:
key:
type: string
op:
type: string
value:
type: string
required:
- key
- op
- value
type: object
type: array
name:
type: string
original_name:
type: string
required:
- name
- criteria
type: object
required:
- action
- name
- selector
type: object
type: array
process:
items:
properties:
action:
enum:
- allow
- deny
type: string
allow_update:
type: boolean
name:
type: string
path:
type: string
required:
- action
type: object
type: array
process_profile:
properties:
baseline:
enum:
- default
- shield
type: string
type: object
target:
properties:
policymode:
enum:
- Discover
- Monitor
- Protect
- N/A
type: string
selector:
properties:
comment:
type: string
criteria:
items:
properties:
key:
type: string
op:
type: string
value:
type: string
required:
- key
- op
- value
type: object
type: array
name:
type: string
original_name:
type: string
required:
- name
- criteria
type: object
required:
- selector
type: object
waf:
properties:
settings:
items:
properties:
action:
enum:
- allow
- deny
type: string
name:
type: string
required:
- name
- action
type: object
type: array
status:
type: boolean
type: object
required:
- target
type: object
type: object
{{- end }}
---
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition
metadata:
name: nvclustersecurityrules.neuvector.com
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
group: neuvector.com
names:
kind: NvClusterSecurityRule
listKind: NvClusterSecurityRuleList
plural: nvclustersecurityrules
singular: nvclustersecurityrule
scope: Cluster
{{- if (semverCompare "<1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
version: v1
{{- end }}
versions:
- name: v1
served: true
storage: true
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
schema:
openAPIV3Schema:
properties:
spec:
properties:
egress:
items:
properties:
action:
enum:
- allow
- deny
type: string
applications:
items:
type: string
type: array
name:
type: string
ports:
type: string
priority:
type: integer
selector:
properties:
comment:
type: string
criteria:
items:
properties:
key:
type: string
op:
type: string
value:
type: string
required:
- key
- op
- value
type: object
type: array
name:
type: string
original_name:
type: string
required:
- name
- criteria
type: object
required:
- action
- name
- selector
type: object
type: array
file:
items:
properties:
app:
items:
type: string
type: array
behavior:
enum:
- monitor_change
- block_access
type: string
filter:
type: string
recursive:
type: boolean
required:
- behavior
- filter
type: object
type: array
ingress:
items:
properties:
action:
enum:
- allow
- deny
type: string
applications:
items:
type: string
type: array
name:
type: string
ports:
type: string
priority:
type: integer
selector:
properties:
comment:
type: string
criteria:
items:
properties:
key:
type: string
op:
type: string
value:
type: string
required:
- key
- op
- value
type: object
type: array
name:
type: string
original_name:
type: string
required:
- name
- criteria
type: object
required:
- action
- name
- selector
type: object
type: array
process:
items:
properties:
action:
enum:
- allow
- deny
type: string
allow_update:
type: boolean
name:
type: string
path:
type: string
required:
- action
type: object
type: array
process_profile:
properties:
baseline:
enum:
- default
- shield
type: string
type: object
target:
properties:
policymode:
enum:
- Discover
- Monitor
- Protect
- N/A
type: string
selector:
properties:
comment:
type: string
criteria:
items:
properties:
key:
type: string
op:
type: string
value:
type: string
required:
- key
- op
- value
type: object
type: array
name:
type: string
original_name:
type: string
required:
- name
- criteria
type: object
required:
- selector
type: object
waf:
properties:
settings:
items:
properties:
action:
enum:
- allow
- deny
type: string
name:
type: string
required:
- name
- action
type: object
type: array
status:
type: boolean
type: object
required:
- target
type: object
type: object
{{- end }}
---
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition
metadata:
name: nvadmissioncontrolsecurityrules.neuvector.com
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
group: neuvector.com
names:
kind: NvAdmissionControlSecurityRule
listKind: NvAdmissionControlSecurityRuleList
plural: nvadmissioncontrolsecurityrules
singular: nvadmissioncontrolsecurityrule
scope: Cluster
{{- if (semverCompare "<1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
version: v1
{{- end }}
versions:
- name: v1
served: true
storage: true
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
schema:
openAPIV3Schema:
properties:
spec:
properties:
config:
properties:
client_mode:
enum:
- service
- url
type: string
enable:
type: boolean
mode:
enum:
- monitor
- protect
type: string
required:
- enable
- mode
- client_mode
type: object
rules:
items:
properties:
action:
enum:
- allow
- deny
type: string
comment:
type: string
criteria:
items:
properties:
name:
type: string
op:
type: string
sub_criteria:
items:
properties:
name:
type: string
op:
type: string
value:
type: string
required:
- name
- op
- value
type: object
type: array
value:
type: string
required:
- name
- op
- value
type: object
type: array
disabled:
type: boolean
id:
type: integer
required:
- action
- criteria
type: object
type: array
type: object
type: object
{{- end }}
---
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition
metadata:
name: nvwafsecurityrules.neuvector.com
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
group: neuvector.com
names:
kind: NvWafSecurityRule
listKind: NvWafSecurityRuleList
plural: nvwafsecurityrules
singular: nvwafsecurityrule
scope: Cluster
{{- if (semverCompare "<1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
version: v1
{{- end }}
versions:
- name: v1
served: true
storage: true
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
schema:
openAPIV3Schema:
properties:
spec:
properties:
sensor:
properties:
comment:
type: string
name:
type: string
rules:
items:
properties:
name:
type: string
patterns:
items:
properties:
context:
enum:
- url
- header
- body
- packet
type: string
key:
enum:
- pattern
type: string
op:
enum:
- regex
- '!regex'
type: string
value:
type: string
required:
- key
- op
- value
- context
type: object
type: array
required:
- name
- patterns
type: object
type: array
required:
- name
type: object
required:
- sensor
type: object
type: object
{{- end }}
---
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-crd-webhook
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- port: 443
targetPort: 30443
protocol: TCP
name: crd-webhook
type: {{ .Values.crdwebhook.type }}
selector:
app: neuvector-controller-pod
---
# ClusterRole for NeuVector to operate CRDs
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-customresourcedefinition
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- update
- watch
- create
- get
---
# ClusterRoleBinding for NeuVector to operate CRDs
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-customresourcedefinition
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-customresourcedefinition
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
---
# ClusterRole for NeuVector to manage user-created network/process CRD rules
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-nvsecurityrules
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- neuvector.com
resources:
- nvsecurityrules
- nvclustersecurityrules
verbs:
- list
- delete
---
# ClusterRoleBinding for NeuVector to manage user-created network/process CRD rules
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-nvsecurityrules
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-nvsecurityrules
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
---
# ClusterRole for NeuVector to manage user-created admission control CRD rules
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-nvadmissioncontrolsecurityrules
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- neuvector.com
resources:
- nvadmissioncontrolsecurityrules
verbs:
- list
- delete
---
# ClusterRoleBinding for NeuVector to manage user-created admission control CRD rules
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-nvadmissioncontrolsecurityrules
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-nvadmissioncontrolsecurityrules
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
---
# ClusterRole for NeuVector to manage user-created WAF CRD rules
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-nvwafsecurityrules
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- neuvector.com
resources:
- nvwafsecurityrules
verbs:
- list
- delete
---
# ClusterRoleBinding for NeuVector to manage user-created WAF CRD rules
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-nvwafsecurityrules
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-nvwafsecurityrules
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,123 @@
{{- if .Values.enforcer.enabled -}}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apps/v1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: DaemonSet
metadata:
name: neuvector-enforcer-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: neuvector-enforcer-pod
template:
metadata:
labels:
app: neuvector-enforcer-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
{{- if .Values.enforcer.tolerations }}
tolerations:
{{ toYaml .Values.enforcer.tolerations | indent 8 }}
{{- end }}
hostPID: true
{{- if .Values.enforcer.priorityClassName }}
priorityClassName: {{ .Values.enforcer.priorityClassName }}
{{- end }}
serviceAccountName: {{ .Values.serviceAccount }}
serviceAccount: {{ .Values.serviceAccount }}
containers:
- name: neuvector-enforcer-pod
{{ if eq .Values.registry "registry.neuvector.com" }}
{{ if .Values.oem }}
image: "{{ .Values.registry }}/{{ .Values.oem }}/enforcer:{{ .Values.tag }}"
{{- else }}
image: "{{ .Values.registry }}/enforcer:{{ .Values.tag }}"
{{- end }}
{{- else }}
{{ if .Values.enforcer.image.hash }}
image: "{{ .Values.registry }}/{{ .Values.enforcer.image.repository }}@{{ .Values.enforcer.image.hash }}"
{{- else }}
image: "{{ .Values.registry }}/{{ .Values.enforcer.image.repository }}:{{ .Values.tag }}"
{{- end }}
{{- end }}
securityContext:
privileged: true
resources:
{{- if .Values.enforcer.resources }}
{{ toYaml .Values.enforcer.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.resources | indent 12 }}
{{- end }}
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.{{ .Release.Namespace }}
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
{{- if .Values.containerd.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else if .Values.k3s.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else if .Values.bottlerocket.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else if .Values.crio.enabled }}
- mountPath: /var/run/crio/crio.sock
{{- else }}
- mountPath: /var/run/docker.sock
{{- end }}
name: runtime-sock
readOnly: true
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
- mountPath: /lib/modules
name: modules-vol
readOnly: true
terminationGracePeriodSeconds: 1200
restartPolicy: Always
volumes:
- name: runtime-sock
hostPath:
{{- if .Values.containerd.enabled }}
path: {{ .Values.containerd.path }}
{{- else if .Values.crio.enabled }}
path: {{ .Values.crio.path }}
{{- else if .Values.k3s.enabled }}
path: {{ .Values.k3s.runtimePath }}
{{- else if .Values.bottlerocket.enabled }}
path: {{ .Values.bottlerocket.runtimePath }}
{{- else }}
path: {{ .Values.docker.path }}
{{- end }}
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
- name: modules-vol
hostPath:
path: /lib/modules
{{- end }}

View File

@@ -0,0 +1,13 @@
{{- if .Values.controller.configmap.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: neuvector-init
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
{{ toYaml .Values.controller.configmap.data | indent 4 }}
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- if .Values.controller.secret.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: neuvector-init
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
{{- range $key, $val := .Values.controller.secret.data }}
{{ $key }}: | {{ toYaml $val | b64enc | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,93 @@
{{- if .Values.manager.enabled -}}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apps/v1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Deployment
metadata:
name: neuvector-manager-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: 1
selector:
matchLabels:
app: neuvector-manager-pod
template:
metadata:
labels:
app: neuvector-manager-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.manager.affinity }}
affinity:
{{ toYaml .Values.manager.affinity | indent 8 }}
{{- end }}
{{- if .Values.manager.tolerations }}
tolerations:
{{ toYaml .Values.manager.tolerations | indent 8 }}
{{- end }}
{{- if .Values.manager.nodeSelector }}
nodeSelector:
{{ toYaml .Values.manager.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
{{- if .Values.manager.priorityClassName }}
priorityClassName: {{ .Values.manager.priorityClassName }}
{{- end }}
serviceAccountName: {{ .Values.serviceAccount }}
serviceAccount: {{ .Values.serviceAccount }}
containers:
- name: neuvector-manager-pod
{{ if eq .Values.registry "registry.neuvector.com" }}
{{ if .Values.oem }}
image: "{{ .Values.registry }}/{{ .Values.oem }}/manager:{{ .Values.tag }}"
{{- else }}
image: "{{ .Values.registry }}/manager:{{ .Values.tag }}"
{{- end }}
{{- else }}
{{ if .Values.manager.image.hash }}
image: "{{ .Values.registry }}/{{ .Values.manager.image.repository }}@{{ .Values.manager.image.hash }}"
{{- else }}
image: "{{ .Values.registry }}/{{ .Values.manager.image.repository }}:{{ .Values.tag }}"
{{- end }}
{{- end }}
env:
- name: CTRL_SERVER_IP
value: neuvector-svc-controller.{{ .Release.Namespace }}
{{- if not .Values.manager.env.ssl }}
- name: MANAGER_SSL
value: "off"
{{- end }}
volumeMounts:
{{- if .Values.manager.certificate.secret }}
- mountPath: /etc/neuvector/certs/ssl-cert.key
subPath: {{ .Values.manager.certificate.keyFile }}
name: cert
readOnly: true
- mountPath: /etc/neuvector/certs/ssl-cert.pem
subPath: {{ .Values.manager.certificate.pemFile }}
name: cert
readOnly: true
{{- end }}
resources:
{{- if .Values.manager.resources }}
{{ toYaml .Values.manager.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.resources | indent 12 }}
{{- end }}
restartPolicy: Always
volumes:
{{- if .Values.manager.certificate.secret }}
- name: cert
secret:
secretName: {{ .Values.manager.certificate.secret }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,68 @@
{{- if and .Values.manager.enabled .Values.manager.ingress.enabled -}}
{{- if (semverCompare ">=1.19-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: neuvector-webui-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.manager.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.manager.ingress.tls }}
tls:
- hosts:
- {{ .Values.manager.ingress.host }}
{{- if .Values.manager.ingress.secretName }}
secretName: {{ .Values.manager.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.manager.ingress.host }}
http:
paths:
- path: {{ .Values.manager.ingress.path }}
pathType: Prefix
backend:
service:
name: neuvector-service-webui
port:
number: 8443
{{- else }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: neuvector-webui-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.manager.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.manager.ingress.tls }}
tls:
- hosts:
- {{ .Values.manager.ingress.host }}
{{- if .Values.manager.ingress.secretName }}
secretName: {{ .Values.manager.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.manager.ingress.host }}
http:
paths:
- path: {{ .Values.manager.ingress.path }}
backend:
serviceName: neuvector-service-webui
servicePort: 8443
{{- end }}
{{- end -}}

View File

@@ -0,0 +1,28 @@
{{- if .Values.openshift -}}
{{- if .Values.manager.route.enabled }}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: route.openshift.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: Route
metadata:
name: neuvector-route-webui
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.manager.route.host }}
host: {{ .Values.manager.route.host }}
{{- end }}
to:
kind: Service
name: neuvector-service-webui
port:
targetPort: manager
tls:
termination: {{ .Values.manager.route.termination }}
{{- end }}
{{- end -}}

View File

@@ -0,0 +1,26 @@
{{- if .Values.manager.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: neuvector-service-webui
namespace: {{ .Release.Namespace }}
{{- with .Values.manager.svc.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.manager.svc.type }}
{{- if and .Values.manager.svc.loadBalancerIP (eq .Values.manager.svc.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.manager.svc.loadBalancerIP }}
{{- end }}
ports:
- port: 8443
name: manager
protocol: TCP
selector:
app: neuvector-manager-pod
{{- end }}

View File

@@ -0,0 +1,77 @@
{{- if .Values.psp -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: neuvector-binding-psp
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
labels:
chart: {{ template "neuvector.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
allowedCapabilities:
- SYS_ADMIN
- NET_ADMIN
- SYS_PTRACE
- IPC_LOCK
requiredDropCapabilities:
- ALL
volumes:
- '*'
hostNetwork: true
hostPorts:
- min: 0
max: 65535
hostIPC: true
hostPID: true
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: neuvector-binding-psp
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups:
- policy
- extensions
resources:
- podsecuritypolicies
verbs:
- use
resourceNames:
- neuvector-binding-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: neuvector-binding-psp
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: neuvector-binding-psp
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,25 @@
{{- if and .Values.controller.enabled .Values.controller.pvc.enabled -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: neuvector-data
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
accessModes:
{{ toYaml .Values.controller.pvc.accessModes | indent 4 }}
volumeMode: Filesystem
{{- if .Values.controller.pvc.storageClass }}
storageClassName: {{ .Values.controller.pvc.storageClass }}
{{- end }}
resources:
requests:
{{- if .Values.controller.pvc.capacity }}
storage: {{ .Values.controller.pvc.capacity }}
{{- else }}
storage: 1Gi
{{- end }}
{{- end }}

View File

@@ -0,0 +1,31 @@
{{- $oc4 := and .Values.openshift (semverCompare ">=1.12-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- $oc3 := and .Values.openshift (not $oc4) (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) -}}
{{- if $oc3 }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: RoleBinding
metadata:
name: neuvector-admin
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not $oc3 }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: admin
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- if $oc3 }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:{{ .Values.serviceAccount }}
{{- end }}

View File

@@ -0,0 +1,74 @@
{{- if .Values.cve.scanner.enabled -}}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: apps/v1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Deployment
metadata:
name: neuvector-scanner-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
strategy:
{{ toYaml .Values.cve.scanner.strategy | indent 4 }}
replicas: {{ .Values.cve.scanner.replicas }}
selector:
matchLabels:
app: neuvector-scanner-pod
template:
metadata:
labels:
app: neuvector-scanner-pod
spec:
{{- if .Values.cve.scanner.affinity }}
affinity:
{{ toYaml .Values.cve.scanner.affinity | indent 8 }}
{{- end }}
{{- if .Values.cve.scanner.tolerations }}
tolerations:
{{ toYaml .Values.cve.scanner.tolerations | indent 8 }}
{{- end }}
{{- if .Values.cve.scanner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.cve.scanner.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
{{- if .Values.cve.scanner.priorityClassName }}
priorityClassName: {{ .Values.cve.scanner.priorityClassName }}
{{- end }}
serviceAccountName: {{ .Values.serviceAccount }}
serviceAccount: {{ .Values.serviceAccount }}
containers:
- name: neuvector-scanner-pod
{{ if eq .Values.registry "registry.neuvector.com" }}
{{ if .Values.oem }}
image: "{{ .Values.registry }}/{{ .Values.oem }}/scanner:{{ .Values.cve.scanner.image.tag }}"
{{- else }}
image: "{{ .Values.registry }}/scanner:{{ .Values.cve.scanner.image.tag }}"
{{- end }}
{{- else }}
{{ if .Values.cve.scanner.image.hash }}
image: "{{ .Values.registry }}/{{ .Values.cve.scanner.image.repository }}@{{ .Values.cve.scanner.image.hash }}"
{{- else }}
image: "{{ .Values.registry }}/{{ .Values.cve.scanner.image.repository }}:{{ .Values.cve.scanner.image.tag }}"
{{- end }}
{{- end }}
imagePullPolicy: Always
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.{{ .Release.Namespace }}
{{- if .Values.cve.scanner.dockerPath }}
- name: SCANNER_DOCKER_URL
value: {{ .Values.cve.scanner.dockerPath }}
{{- end }}
resources:
{{ toYaml .Values.cve.scanner.resources | indent 12 }}
restartPolicy: Always
{{- end }}

View File

@@ -0,0 +1,13 @@
{{- if not .Values.openshift}}
{{- if ne .Values.serviceAccount "default"}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.serviceAccount }}
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,73 @@
{{- if .Values.cve.updater.enabled -}}
{{- if (semverCompare ">=1.21-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: batch/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: batch/v1beta1
{{- else }}
apiVersion: batch/v2alpha1
{{- end }}
kind: CronJob
metadata:
name: neuvector-updater-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
schedule: {{ .Values.cve.updater.schedule | quote }}
jobTemplate:
spec:
template:
metadata:
labels:
app: neuvector-updater-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
{{- if .Values.cve.updater.priorityClassName }}
priorityClassName: {{ .Values.cve.updater.priorityClassName }}
{{- end }}
serviceAccountName: {{ .Values.serviceAccount }}
serviceAccount: {{ .Values.serviceAccount }}
containers:
- name: neuvector-updater-pod
{{ if eq .Values.registry "registry.neuvector.com" }}
{{ if .Values.oem }}
image: "{{ .Values.registry }}/{{ .Values.oem }}/updater:{{ .Values.cve.updater.image.tag }}"
{{- else }}
image: "{{ .Values.registry }}/updater:{{ .Values.cve.updater.image.tag }}"
{{- end }}
{{- else }}
{{ if .Values.cve.updater.image.hash }}
image: "{{ .Values.registry }}/{{ .Values.cve.updater.image.repository }}@{{ .Values.cve.updater.image.hash }}"
{{- else }}
image: "{{ .Values.registry }}/{{ .Values.cve.updater.image.repository }}:{{ .Values.cve.updater.image.tag }}"
{{- end }}
{{- end }}
imagePullPolicy: Always
{{- if .Values.cve.scanner.enabled }}
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
{{- if .Values.cve.updater.secure }}
- /usr/bin/curl -v -X PATCH -H "Authorization:Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'`date +%Y-%m-%dT%H:%M:%S%z`'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/{{ .Release.Namespace }}/deployments/neuvector-scanner-pod'
{{- else }}
- /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'`date +%Y-%m-%dT%H:%M:%S%z`'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/{{ .Release.Namespace }}/deployments/neuvector-scanner-pod'
{{- end }}
{{- else }}
- /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'`date +%Y-%m-%dT%H:%M:%S%z`'"}}}}}' 'https://kubernetes.default/apis/extensions/v1beta1/namespaces/{{ .Release.Namespace }}/deployments/neuvector-scanner-pod'
{{- end }}
{{- end }}
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.{{ .Release.Namespace }}
restartPolicy: Never
{{- end }}

View File

@@ -0,0 +1,292 @@
# Default values for neuvector.
# This is a YAML-formatted file.
# Declare variables to be passed into the templates.
openshift: false
registry: registry.neuvector.com
tag: 4.4.4
oem:
imagePullSecrets:
psp: false
serviceAccount: default
controller:
# If false, controller will not be installed
enabled: true
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
image:
repository: neuvector/controller
hash:
replicas: 3
disruptionbudget: 0
schedulerName:
priorityClassName:
env: []
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- neuvector-controller-pod
topologyKey: "kubernetes.io/hostname"
tolerations: []
nodeSelector: {}
# key1: value1
# key2: value2
apisvc:
type:
annotations: {}
# OpenShift Route configuration
route:
enabled: false
termination: passthrough
host:
pvc:
enabled: false
accessModes:
- ReadWriteMany
storageClass:
capacity:
azureFileShare:
enabled: false
secretName:
shareName:
certificate:
secret:
keyFile: tls.key
pemFile: tls.pem
federation:
mastersvc:
type:
# Federation Master Ingress
ingress:
enabled: false
host: # MUST be set, if ingress is enabled
path: "/" # or this could be "/api", but might need "rewrite-target" annotation
annotations:
ingress.kubernetes.io/protocol: https
# ingress.kubernetes.io/rewrite-target: /
tls: false
secretName:
# OpenShift Route configuration
route:
enabled: false
termination: passthrough
host:
managedsvc:
type:
# Federation Managed Ingress
ingress:
enabled: false
host: # MUST be set, if ingress is enabled
path: "/" # or this could be "/api", but might need "rewrite-target" annotation
annotations:
ingress.kubernetes.io/protocol: https
# ingress.kubernetes.io/rewrite-target: /
tls: false
secretName:
# OpenShift Route configuration
route:
enabled: false
termination: passthrough
host:
ingress:
enabled: false
host: # MUST be set, if ingress is enabled
path: "/" # or this could be "/api", but might need "rewrite-target" annotation
annotations:
ingress.kubernetes.io/protocol: https
# ingress.kubernetes.io/rewrite-target: /
tls: false
secretName:
resources: {}
# limits:
# cpu: 400m
# memory: 2792Mi
# requests:
# cpu: 100m
# memory: 2280Mi
configmap:
enabled: false
data:
# eulainitcfg.yaml: |
# ...
# ldapinitcfg.yaml: |
# ...
# oidcinitcfg.yaml: |
# ...
# samlinitcfg.yaml: |
# ...
# sysinitcfg.yaml: |
# ...
# userinitcfg.yaml: |
# ...
secret:
    # NOTE: files defined here take precedence over the ones defined in the configmap section
enabled: false
data: {}
# eulainitcfg.yaml:
# license_key: 0Bca63Iy2FiXGqjk...
# ...
# ldapinitcfg.yaml:
# directory: OpenLDAP
# ...
# oidcinitcfg.yaml:
# Issuer: https://...
# ...
# samlinitcfg.yaml:
# ...
# sysinitcfg.yaml:
# ...
# userinitcfg.yaml:
# ...
enforcer:
# If false, enforcer will not be installed
enabled: true
image:
repository: neuvector/enforcer
hash:
priorityClassName:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
resources: {}
# limits:
# cpu: 400m
# memory: 2792Mi
# requests:
# cpu: 100m
# memory: 2280Mi
manager:
# If false, manager will not be installed
enabled: true
image:
repository: neuvector/manager
hash:
priorityClassName:
env:
ssl: true
svc:
type: NodePort
loadBalancerIP:
annotations: {}
# azure
# service.beta.kubernetes.io/azure-load-balancer-internal: "true"
# service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet"
# OpenShift Route configuration
route:
enabled: true
termination: passthrough
host:
certificate:
secret:
keyFile: tls.key
pemFile: tls.pem
ingress:
enabled: false
host: # MUST be set, if ingress is enabled
path: "/"
annotations: {}
# kubernetes.io/ingress.class: my-nginx
# nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1"
# nginx.ingress.kubernetes.io/rewrite-target: /
# nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
# only for end-to-end tls conf - ingress-nginx accepts backend self-signed cert
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
tls: false
secretName: # my-tls-secret
resources: {}
# limits:
# cpu: 400m
# memory: 2792Mi
# requests:
# cpu: 100m
# memory: 2280Mi
affinity: {}
tolerations: []
nodeSelector: {}
# key1: value1
# key2: value2
cve:
updater:
# If false, cve updater will not be installed
enabled: true
secure: false
image:
repository: neuvector/updater
tag: latest
hash:
schedule: "0 0 * * *"
priorityClassName:
scanner:
enabled: true
replicas: 3
dockerPath: ""
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
image:
repository: neuvector/scanner
tag: latest
hash:
priorityClassName:
resources: {}
# limits:
# cpu: 400m
# memory: 2792Mi
# requests:
# cpu: 100m
# memory: 2280Mi
affinity: {}
tolerations: []
nodeSelector: {}
# key1: value1
# key2: value2
docker:
path: /var/run/docker.sock
resources: {}
# limits:
# cpu: 400m
# memory: 2792Mi
# requests:
# cpu: 100m
# memory: 2280Mi
k3s:
enabled: false
runtimePath: /run/k3s/containerd/containerd.sock
bottlerocket:
enabled: false
runtimePath: /run/dockershim.sock
containerd:
enabled: false
path: /var/run/containerd/containerd.sock
crio:
enabled: false
path: /var/run/crio/crio.sock
admissionwebhook:
type: ClusterIP
crdwebhook:
enabled: true
type: ClusterIP

View File

@@ -0,0 +1,26 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Ondat Operator
catalog.cattle.io/release-name: ondat-operator
apiVersion: v2
appVersion: v2.6.0
description: Cloud Native storage for containers
home: https://ondat.io
icon: https://docs.ondat.io/images/generic/Ondat_logo.svg
keywords:
- storage
- block-storage
- volume
- operator
kubeVersion: '>= 1.19'
maintainers:
- email: david@ondat.io
name: DavidMarchant
- email: richard.kovacs@ondat.io
name: mhmxs
- email: angelos.perivolaropoulos@ondat.io
name: aeroniero33
name: ondat-operator
sources:
- https://github.com/ondat
version: 0.5.400

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022 StorageOS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,271 @@
# Ondat Operator Helm Chart
> **Note**: This chart requires Helm 3 and defaults to StorageOS v2. To upgrade
> from a previous chart or from StorageOS version 1.x to 2.x, please contact
> support for assistance.
StorageOS is a cloud native, software-defined storage platform that transforms
commodity server or cloud based disk capacity into enterprise-class persistent
storage for containers. StorageOS volumes offer high throughput, low latency
and consistent performance, and are therefore ideal for deploying databases,
message queues, and other mission-critical stateful solutions. StorageOS
Project edition also offers ReadWriteMany volumes that are concurrently
accessible by multiple applications.
The Ondat Operator installs and manages StorageOS within a cluster. Cluster
nodes may contribute local or attached disk-based storage into a distributed
pool, which is then available to all cluster members via a global namespace.
Volumes are available across the cluster so if an application container gets
moved to another node it has immediate access to re-attach its data.
StorageOS is extremely lightweight - minimum requirements are a reserved CPU
core and 2GB of free memory. There are minimal external dependencies, and no
custom kernel modules.
After StorageOS is installed, please register for a free personal license to
enable 1TiB of capacity and HA with synchronous replication by following the
instructions [here](https://docs.ondat.io/docs/operations/licensing). For
additional capacity, features and support plans contact sales@ondat.io.
## Highlighted Features
* High Availability - synchronous replication insulates you from node failure.
* Delta Sync - replicas out of sync due to transient failures only transfer
changed blocks.
* Multiple AccessModes - dynamically provision ReadWriteOnce or ReadWriteMany
volumes.
* Rapid Failover - quickly detects node failure and automates recovery actions
without administrator intervention.
* Data Encryption - both in transit and at rest.
* Scalability - disaggregated consensus means no single scheduling point of
failure.
* Thin provisioning - only consume the space you need in a storage pool.
* Data reduction - transparent inline data compression to reduce the amount of
storage used in a backing store as well as reducing the network bandwidth
requirements for replication.
* Flexible configuration - all features can be enabled per volume, using PVC
and StorageClass labels.
* Multi-tenancy - fully supports standard Namespace and RBAC methods.
* Observability & instrumentation - Log streams for observability and
Prometheus support for instrumentation.
* Deployment flexibility - scale up or scale out storage based on application
requirements. Works with any infrastructure on-premises, VM, bare metal
or cloud.
## About StorageOS
StorageOS is a software-defined cloud native storage platform delivering
persistent storage for Kubernetes. StorageOS is built from the ground-up with
no legacy restrictions to give enterprises working with cloud native workloads
a scalable storage platform with no compromise on performance, availability or
security. For additional information, visit www.ondat.io.
This chart installs an Ondat Cluster Operator, which helps deploy and
configure a StorageOS cluster on Kubernetes.
## Prerequisites
- Helm 3
- Kubernetes 1.18+
- Privileged mode containers (enabled by default)
- Etcd cluster
Refer to the [StorageOS prerequisites
docs](https://docs.ondat.io/docs/prerequisites/) for more information.
## Installing the chart
<!-- TODO: which URL should I use to reference the chart? The below also
works at time of writing -->
```console
# Add ondat charts repo.
$ helm repo add ondat https://charts.ondat.io
# Install the chart in a namespace.
$ kubectl create namespace ondat-operator
$ helm install my-ondat ondat/ondat-operator \
--namespace ondat-operator \
--set cluster.kvBackend.address=<etcd-node-ip>:2379 \
--set cluster.admin.password=<password>
```
This installs the Ondat cluster operator in the `ondat-operator`
namespace and deploys StorageOS with a minimal configuration. The etcd address
(kvBackend) and the admin password are mandatory values for installing the chart.
The password must be at least 8 characters long and the default username is
`storageos`, which can be overridden in the same way as the values above. Find more information
about installing etcd in our [etcd
docs](https://docs.ondat.io/docs/prerequisites/etcd/).
To avoid passing the password as a flag, install the chart with a values file.
Create a `values.yaml` file and pass its name with the `--values` flag.
```yaml
cluster:
kvBackend:
address: <etcd-node-ip>:2379
admin:
password: <password>
```
```console
$ helm install my-ondat ondat/ondat-operator \
--namespace ondat-operator \
--values <values-file>
```
> **Tip**: List all releases using `helm list -A`
## Creating a StorageOS cluster manually
The Helm chart supports a subset of StorageOSCluster custom resource parameters.
For advanced configurations, you may wish to create the cluster resource
manually and only use the Helm chart to install the Operator.
To disable auto-provisioning of the cluster with the Helm chart, set
`cluster.create` to false:
```yaml
cluster:
...
create: false
```
Create a secret to store the StorageOS cluster credentials:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: "storageos-api"
namespace: <storageos-cluster-namespace>
labels:
app: "storageos"
type: "kubernetes.io/storageos"
data:
# echo -n '<secret>' | base64
username: c3RvcmFnZW9z
password: c3RvcmFnZW9z
```
Create a `StorageOSCluster` custom resource and reference the above secret in the
`secretRefName` field.
```yaml
apiVersion: "storageos.com/v1"
kind: "StorageOSCluster"
metadata:
name: "example-storageos"
namespace: <storageos-cluster-namespace>
spec:
secretRefName: "storageos-api"
kvBackend:
address: "etcd-client.etcd.svc.cluster.local:2379"
# address: '10.42.15.23:2379,10.42.12.22:2379,10.42.13.16:2379' # You can set ETCD server IPs.
storageClassName: "storageos"
```
<!--- TODO: replace this when an equivalent specification exists for the new
operator, ticket has been created. Also replace in app-readme -->
Learn more about advanced configuration options
[here](https://github.com/storageos/cluster-operator/blob/master/README.md#storageoscluster-resource-configuration).
To check cluster status, run:
```console
$ kubectl get storageoscluster --namespace <storageos-cluster-namespace>
NAME READY STATUS AGE
example-storageos 3/3 Running 4m
```
All the events related to this cluster are logged as part of the cluster object
and can be viewed by describing the object.
```console
$ kubectl describe storageoscluster example-storageos --namespace <storageos-cluster-namespace>
Name: example-storageos
Namespace: default
Labels: <none>
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ChangedStatus 1m (x2 over 1m) storageos-operator 0/3 StorageOS nodes are functional
Normal ChangedStatus 35s storageos-operator 3/3 StorageOS nodes are functional. Cluster healthy
```
## Configuration
The following table lists the configurable parameters of the StorageOSCluster
Operator chart and their default values.
Parameter | Description | Default
--------- | ----------- | -------
`operator.image.repository` | StorageOS Operator container image repository | `storageos/operator`
`operator.image.tag` | StorageOS Operator container image tag | `v2.5.0`
`operator.image.pullPolicy` | StorageOS Operator container image pull policy | `IfNotPresent`
`podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | `false`
`podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}`
`cluster.create` | If true, auto-create the StorageOS cluster | `true`
`cluster.name` | Name of the storageos deployment | `storageos`
`cluster.namespace` | Namespace to install the StorageOS cluster into | `kube-system`
`cluster.createNamespace` | If true, create the namespace used by the cluster | `true`
`cluster.secretRefName` | Name of the secret containing StorageOS API credentials | `storageos-api`
`cluster.admin.username` | Username to authenticate to the StorageOS API with | `storageos`
`cluster.admin.password` | Password to authenticate to the StorageOS API with |
`cluster.sharedDir` | The path shared into the kubelet container when running kubelet in a container |
`cluster.kvBackend.address` | List of etcd targets, in the form ip[:port], separated by commas |
`cluster.kvBackend.backend` | Key-Value store backend name | `etcd`
`cluster.kvBackend.tlsSecretName` | Name of the secret containing kv backend tls cert |
`cluster.kvBackend.tlsSecretNamespace` | Namespace of the secret containing kv backend tls cert |
`cluster.nodeSelectorTerm.key` | Key of the node selector term used for pod placement |
`cluster.nodeSelectorTerm.value` | Value of the node selector term used for pod placement |
`cluster.toleration.key` | Key of the pod toleration parameter |
`cluster.toleration.value` | Value of the pod toleration parameter |
`cluster.disableTelemetry` | If true, no telemetry data will be collected from the cluster | `false`
`cluster.storageClassName` | Name of the StorageClass to be created | `storageos`
`cluster.images.apiManager.repository` | StorageOS API Manager container image repository |
`cluster.images.apiManager.tag` | StorageOS API Manager container image tag |
`cluster.images.csiV1ExternalAttacherV3.repository` | CSI v1 External Attacher v3 image repository |
`cluster.images.csiV1ExternalAttacherV3.tag` | CSI v1 External Attacher v3 image tag |
`cluster.images.csiV1ExternalProvisioner.repository` | CSI v1 External Provisioner image repository |
`cluster.images.csiV1ExternalProvisioner.tag` | CSI v1 External Provisioner image tag |
`cluster.images.csiV1ExternalResizer.repository` | CSI v1 External Resizer image repository |
`cluster.images.csiV1ExternalResizer.tag` | CSI v1 External Resizer image tag |
`cluster.images.csiV1LivenessProbe.repository` | CSI v1 Liveness Probe image repository |
`cluster.images.csiV1LivenessProbe.tag` | CSI v1 Liveness Probe image tag |
`cluster.images.csiV1NodeDriverRegistrar.repository` | CSI v1 Node Driver Registrar image repository |
`cluster.images.csiV1NodeDriverRegistrar.tag` | CSI v1 Node Driver Registrar image tag |
`cluster.images.init.repository` | StorageOS init container image repository |
`cluster.images.init.tag` | StorageOS init container image tag |
`cluster.images.node.repository` | StorageOS Node container image repository |
`cluster.images.node.tag` | StorageOS Node container image tag |
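As a quick, non-authoritative illustration of how these parameters fit together, the values file below overrides a handful of them (all names and defaults are taken from the table above; the etcd address and admin password are placeholders) and can be passed to `helm install` with the `--values` flag as shown earlier:
```yaml
# Illustrative overrides only; parameter names come from the table above.
operator:
  image:
    pullPolicy: IfNotPresent
cluster:
  create: true
  name: storageos
  namespace: kube-system
  admin:
    username: storageos
    password: <password>            # must be at least 8 characters
  kvBackend:
    address: <etcd-node-ip>:2379    # comma-separated list of etcd targets
    backend: etcd
  disableTelemetry: false
  storageClassName: storageos
```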
## Deleting a StorageOS Cluster
Deleting the `StorageOSCluster` custom resource object deletes the
StorageOS cluster and its associated resources.
In the above example,
```console
$ kubectl delete storageoscluster example-storageos --namespace <storageos-cluster-namespace>
```
would delete the custom resource and the cluster.
## Uninstalling the Chart
To uninstall/delete the StorageOS cluster operator deployment:
```console
$ helm uninstall <release-name> --namespace ondat-operator
```
If the chart was installed with cluster auto-provisioning enabled, chart
uninstall will clean up the installed StorageOS cluster resources as well.
Learn more about configuring the StorageOS Operator on
[GitHub](https://github.com/storageos/operator).

View File

@@ -0,0 +1,75 @@
# Ondat Operator
StorageOS is a cloud native, software-defined storage platform that transforms
commodity server or cloud based disk capacity into enterprise-class persistent
storage for containers. StorageOS volumes offer high throughput, low latency
and consistent performance, and are therefore ideal for deploying databases,
message queues, and other mission-critical stateful solutions. StorageOS
Project edition also offers ReadWriteMany volumes that are concurrently
accessible by multiple applications.
The Ondat Operator installs and manages StorageOS within a cluster. Cluster
nodes may contribute local or attached disk-based storage into a distributed
pool, which is then available to all cluster members via a global namespace.
Volumes are available across the cluster so if an application container gets
moved to another node it has immediate access to re-attach its data.
StorageOS is extremely lightweight - minimum requirements are a reserved CPU
core and 2GB of free memory. There are minimal external dependencies, and no
custom kernel modules.
After StorageOS is installed, please register for a free personal license to
enable 1TiB of capacity and HA with synchronous replication by following the
instructions [here](https://docs.ondat.io/docs/operations/licensing). For
additional capacity, features and support plans contact sales@ondat.io.
## Highlighted Features
* High Availability - synchronous replication insulates you from node failure.
* Delta Sync - replicas out of sync due to transient failures only transfer
changed blocks.
* Multiple AccessModes - dynamically provision ReadWriteOnce or ReadWriteMany
volumes.
* Rapid Failover - quickly detects node failure and automates recovery actions
without administrator intervention.
* Data Encryption - both in transit and at rest.
* Scalability - disaggregated consensus means no single scheduling point of
failure.
* Thin provisioning - only consume the space you need in a storage pool.
* Data reduction - transparent inline data compression to reduce the amount of
storage used in a backing store as well as reducing the network bandwidth
requirements for replication.
* Flexible configuration - all features can be enabled per volume, using PVC
and StorageClass labels.
* Multi-tenancy - fully supports standard Namespace and RBAC methods.
* Observability & instrumentation - Log streams for observability and
Prometheus support for instrumentation.
* Deployment flexibility - scale up or scale out storage based on application
requirements. Works with any infrastructure on-premises, VM, bare metal
or cloud.
## About StorageOS
StorageOS is a software-defined cloud native storage platform delivering
persistent storage for Kubernetes. StorageOS is built from the ground-up with
no legacy restrictions to give enterprises working with cloud native workloads
a scalable storage platform with no compromise on performance, availability or
security. For additional information, visit www.ondat.io.
## Installation
StorageOS requires an etcd cluster in order to function. Find out more about
setting up an etcd cluster in our [etcd
docs](https://docs.ondat.io/docs/prerequisites/etcd/).
By default, a minimal configuration of StorageOS is installed. To set advanced
configurations, disable the default installation of the StorageOS cluster
and create a custom StorageOSCluster resource; see the documentation
[here](https://github.com/ondat/charts/blob/main/charts/ondat-operator/README.md#creating-a-storageos-cluster-manually).
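A minimal sketch of that setup, assuming the `cluster.create` flag documented in the chart README, is to install the operator with cluster creation turned off and then apply a StorageOSCluster resource separately:
```yaml
# values.yaml sketch: install the operator only, without auto-creating the cluster.
cluster:
  create: false
```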
Newly installed StorageOS clusters require a license to function. For
instructions on applying our free developer license, or obtaining a commercial
license, please see our documentation at
https://docs.ondat.io/docs/reference/licence/.

View File

@@ -0,0 +1,5 @@
podSecurityPolicy:
enabled: true
cluster:
  # Disable cluster creation in CI; this should install the operator only.
create: false

View File

@@ -0,0 +1,424 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
labels:
app: storageos
app.kubernetes.io/component: operator
name: storageosclusters.storageos.com
spec:
group: storageos.com
names:
kind: StorageOSCluster
listKind: StorageOSClusterList
plural: storageosclusters
shortNames:
- stos
singular: storageoscluster
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Ready status of the storageos nodes.
jsonPath: .status.ready
name: ready
type: string
- description: Status of the whole cluster.
jsonPath: .status.phase
name: status
type: string
- jsonPath: .metadata.creationTimestamp
name: age
type: date
name: v1
schema:
openAPIV3Schema:
description: StorageOSCluster is the Schema for the storageosclusters API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: StorageOSClusterSpec defines the desired state of StorageOSCluster
properties:
csi:
description: CSI defines the configurations for CSI.
properties:
deploymentStrategy:
type: string
deviceDir:
type: string
driverRegisterationMode:
type: string
driverRequiresAttachment:
type: string
enable:
type: boolean
enableControllerExpandCreds:
type: boolean
enableControllerPublishCreds:
type: boolean
enableNodePublishCreds:
type: boolean
enableProvisionCreds:
type: boolean
endpoint:
type: string
kubeletDir:
type: string
kubeletRegistrationPath:
type: string
pluginDir:
type: string
registrarSocketDir:
type: string
registrationDir:
type: string
version:
type: string
type: object
debug:
description: Debug is to set debug mode of the cluster.
type: boolean
disableFencing:
description: "Disable Pod Fencing. With StatefulSets, Pods are only re-scheduled if the Pod has been marked as killed. In practice this means that failover of a StatefulSet pod is a manual operation. \n By enabling Pod Fencing and setting the `storageos.com/fenced=true` label on a Pod, StorageOS will enable automated Pod failover (by killing the application Pod on the failed node) if the following conditions exist: \n - Pod fencing has not been explicitly disabled. - StorageOS has determined that the node the Pod is running on is offline. StorageOS uses Gossip and TCP checks and will retry for 30 seconds. At this point all volumes on the failed node are marked offline (irrespective of whether fencing is enabled) and volume failover starts. - The Pod has the label `storageos.com/fenced=true` set. - The Pod has at least one StorageOS volume attached. - Each StorageOS volume has at least 1 healthy replica. \n When Pod Fencing is disabled, StorageOS will not perform any interaction with Kubernetes when it detects that a node has gone offline. Additionally, the Kubernetes permissions required for Fencing will not be added to the StorageOS role. Deprecated: Not used any more, fencing is enabled/disabled by storageos.com/fenced label on pod."
type: boolean
disableScheduler:
description: 'Disable StorageOS scheduler extender. Deprecated: Not used any more, scheduler is always enabled on Kubernetes.'
type: boolean
disableTCMU:
description: "Disable TCMU can be set to true to disable the TCMU storage driver. This is required when there are multiple storage systems running on the same node and you wish to avoid conflicts. Only one TCMU-based storage system can run on a node at a time. \n Disabling TCMU will degrade performance. Deprecated: Not used any more."
type: boolean
disableTelemetry:
description: Disable Telemetry.
type: boolean
enablePortalManager:
description: EnablePortalManager enables Portal Manager.
type: boolean
environment:
additionalProperties:
type: string
description: Environment contains environment variables that are passed to StorageOS.
type: object
forceTCMU:
description: "Force TCMU can be set to true to ensure that TCMU is enabled or cause StorageOS to abort startup. \n At startup, StorageOS will automatically fallback to non-TCMU mode if another TCMU-based storage system is running on the node. Since non-TCMU will degrade performance, this may not always be desired. Deprecated: Not used any more."
type: boolean
images:
description: Images defines the various container images used in the cluster.
properties:
apiManagerContainer:
type: string
csiClusterDriverRegistrarContainer:
type: string
csiExternalAttacherContainer:
type: string
csiExternalProvisionerContainer:
type: string
csiExternalResizerContainer:
type: string
csiLivenessProbeContainer:
type: string
csiNodeDriverRegistrarContainer:
type: string
hyperkubeContainer:
type: string
initContainer:
type: string
kubeSchedulerContainer:
type: string
nfsContainer:
type: string
nodeContainer:
type: string
nodeManagerContainer:
type: string
portalManagerContainer:
type: string
upgradeGuardContainer:
type: string
type: object
ingress:
description: 'Ingress defines the ingress configurations used in the cluster. Deprecated: Not used any more, please create your ingress for dashboard on your own.'
properties:
annotations:
additionalProperties:
type: string
type: object
enable:
type: boolean
hostname:
type: string
tls:
type: boolean
type: object
join:
description: 'Join is the join token used for service discovery. Deprecated: Not used any more.'
type: string
k8sDistro:
description: "K8sDistro is the name of the Kubernetes distribution where the operator is being deployed. It should be in the format: `name[-1.0]`, where the version is optional and should only be appended if known. Suitable names include: `openshift`, `rancher`, `aks`, `gke`, `eks`, or the deployment method if using upstream directly, e.g `minishift` or `kubeadm`. \n Setting k8sDistro is optional, and will be used to simplify cluster configuration by setting appropriate defaults for the distribution. The distribution information will also be included in the product telemetry (if enabled), to help focus development efforts."
type: string
kvBackend:
description: KVBackend defines the key-value store backend used in the cluster.
properties:
address:
type: string
backend:
type: string
required:
- address
type: object
namespace:
description: 'Namespace is the kubernetes Namespace where storageos resources are provisioned. Deprecated: StorageOS uses namespace of storageosclusters.storageos.com resource.'
type: string
nodeManagerFeatures:
additionalProperties:
type: string
description: Node manager feature list with optional configurations.
type: object
nodeSelectorTerms:
description: NodeSelectorTerms is to set the placement of storageos pods using node affinity requiredDuringSchedulingIgnoredDuringExecution.
items:
description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
properties:
matchExpressions:
description: A list of node selector requirements by node's labels.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchFields:
description: A list of node selector requirements by node's fields.
items:
description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: The label key that the selector applies to.
type: string
operator:
description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
type: string
values:
description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
type: object
type: array
pause:
description: 'Pause is to pause the operator for the cluster. Deprecated: Not used any more, operator is always running.'
type: boolean
resources:
description: Resources is to set the resource requirements of the storageos containers.
properties:
limits:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
type: object
requests:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
type: object
type: object
secretRefName:
description: SecretRefName is the name of the secret object that contains all the sensitive cluster configurations.
type: string
secretRefNamespace:
description: 'SecretRefNamespace is the namespace of the secret reference. Deprecated: StorageOS uses namespace of storageosclusters.storageos.com resource.'
type: string
service:
description: Service is the Service configuration for the cluster nodes.
properties:
annotations:
additionalProperties:
type: string
type: object
externalPort:
type: integer
internalPort:
type: integer
name:
type: string
type:
type: string
required:
- name
- type
type: object
sharedDir:
description: 'SharedDir is the shared directory to be used when the kubelet is running in a container. Typically: "/var/lib/kubelet/plugins/kubernetes.io~storageos". If not set, defaults will be used.'
type: string
storageClassName:
description: StorageClassName is the name of default StorageClass created for StorageOS volumes.
type: string
tlsEtcdSecretRefName:
description: TLSEtcdSecretRefName is the name of the secret object that contains the etcd TLS certs. This secret is shared with etcd, therefore it's not part of the main storageos secret.
type: string
tlsEtcdSecretRefNamespace:
description: 'TLSEtcdSecretRefNamespace is the namespace of the etcd TLS secret object. Deprecated: StorageOS uses namespace of storageosclusters.storageos.com resource.'
type: string
tolerations:
description: Tolerations is to set the placement of storageos pods using pod toleration.
items:
description: The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.
properties:
effect:
description: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
type: string
key:
description: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
type: string
operator:
description: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
type: string
tolerationSeconds:
description: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
format: int64
type: integer
value:
description: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
type: string
type: object
type: array
required:
- kvBackend
- secretRefName
type: object
status:
description: StorageOSClusterStatus defines the observed state of StorageOSCluster
properties:
conditions:
description: Conditions is a list of status of all the components of StorageOS.
items:
description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }"
properties:
lastTransitionTime:
description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: message is a human readable message indicating details about the transition. This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
members:
description: Members is the list of StorageOS nodes in the cluster.
properties:
ready:
description: Ready are the storageos cluster members that are ready to serve requests. The member names are the same as the node IPs.
items:
type: string
type: array
unready:
description: Unready are the storageos cluster nodes not ready to serve requests.
items:
type: string
type: array
type: object
nodeHealthStatus:
additionalProperties:
description: NodeHealth contains health status of a node.
properties:
directfsInitiator:
type: string
director:
type: string
kv:
type: string
kvWrite:
type: string
nats:
type: string
presentation:
type: string
rdb:
type: string
type: object
type: object
nodes:
items:
type: string
type: array
phase:
description: Phase is the phase of the StorageOS cluster.
type: string
ready:
description: Ready is the ready status of the StorageOS control-plane pods.
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -0,0 +1,177 @@
categories:
- storage
labels:
io.rancher.certified: partner
io.cattle.role: cluster
rancher_min_version: 2.4.0
questions:
- variable: k8sDistro
default: rancher
description: "Kubernetes Distribution is used to fine-tune configuration for
specific Kubernetes distributions. It is also included in anonymized
telemetry data so that we can focus development effort most effectively.
Example values: rancher, openshift"
type: string
label: Kubernetes Distribution
# Operator image configuration.
- variable: defaultImage
default: true
description: "Use default Docker images"
label: Use Default Images
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: operator.image.pullPolicy
default: IfNotPresent
description: "Operator Image pull policy"
type: enum
label: Operator Image pull policy
options:
- IfNotPresent
- Always
- Never
- variable: operator.image.repository
default: "storageos/operator"
description: "StorageOS operator image name"
type: string
label: StorageOS Operator Image Name
- variable: operator.image.tag
default: "v2.5.0"
description: "StorageOS Operator image tag"
type: string
label: StorageOS Operator Image Tag
# Default minimal cluster configuration.
- variable: cluster.create
default: true
type: boolean
description: "Install StorageOS cluster with minimal configurations"
label: "Install StorageOS cluster"
show_subquestion_if: true
group: "StorageOS Cluster"
subquestions:
# Cluster metadata.
- variable: cluster.name
default: "storageos"
description: "Name of the StorageOS cluster deployment"
type: string
label: Cluster Name
- variable: cluster.namespace
default: "storageos"
description: "Namespace of the StorageOS cluster deployment"
type: string
label: Cluster Namespace
- variable: cluster.createNamespace
default: true
description: "If true, create the namespace for the cluster deployment"
type: boolean
label: Create Cluster Namespace
# Node container image.
- variable: cluster.images.node.repository
default: "storageos/node"
description: "StorageOS node container image name"
type: string
label: StorageOS Node Container Image Name
- variable: cluster.images.node.tag
default: "v2.5.0"
description: "StorageOS Node container image tag"
type: string
label: StorageOS Node Container Image Tag
# Telemetry.
- variable: cluster.disableTelemetry
default: false
type: boolean
description: "Disable telemetry data collection. See https://docs.storageos.com/docs/reference/telemetry for more information."
label: Disable Telemetry
# Credentials.
- variable: cluster.admin.username
default: "admin"
description: "Username of the StorageOS administrator account"
type: string
label: Username
- variable: cluster.admin.password
default: ""
description: "Password of the StorageOS administrator account. Must be at
least 8 characters long"
type: password
label: Password
# KV store backend.
- variable: cluster.kvBackend.address
required: true
default: ""
description: "List of etcd targets, in the form ip:port, separated by
commas. Prefer multiple direct endpoints over a single load-balanced
endpoint. See https://docs.storageos.com/docs/prerequisites/etcd/ for more
information."
type: string
label: External etcd address(es)
- variable: cluster.kvBackend.tls
default: false
type: boolean
description: "Enable etcd TLS"
label: "TLS should be configured for external etcd to protect configuration data (Optional)."
- variable: cluster.kvBackend.tlsSecretName
required: false
default: ""
description: "Name of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
type: string
label: External etcd TLS secret name
show_if: "cluster.kvBackend.tls=true"
- variable: cluster.kvBackend.tlsSecretNamespace
required: false
default: ""
description: "Namespace of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
type: string
label: External etcd TLS secret namespace
show_if: "cluster.kvBackend.tls=true"
# Node Selector Term.
- variable: cluster.nodeSelectorTerm.key
required: false
default: ""
description: "Key of the node selector term match expression used to select the nodes to install StorageOS on, e.g. `node-role.kubernetes.io/worker`"
type: string
label: Node selector term key
- variable: cluster.nodeSelectorTerm.value
required: false
default: ""
description: "Value of the node selector term match expression used to select the nodes to install StorageOS on."
type: string
label: Node selector term value
# Pod tolerations.
- variable: cluster.toleration.key
required: false
default: ""
description: "Key of pod toleration with operator 'Equal' and effect 'NoSchedule'"
type: string
label: Pod toleration key
- variable: cluster.toleration.value
required: false
default: ""
description: "Value of pod toleration with operator 'Equal' and effect 'NoSchedule'"
type: string
label: Pod toleration value
# Shared Directory
- variable: cluster.sharedDir
required: false
default: "/var/lib/kubelet/plugins/kubernetes.io~storageos"
description: "Shared Directory should be set if running kubelet in a container. This should be the path shared into the kubelet container, typically: '/var/lib/kubelet/plugins/kubernetes.io~storageos'. If not set, defaults will be used."
type: string
label: Shared Directory
# Cluster metadata.
- variable: cluster.storageClassName
default: "storageos"
description: "Name of the default StorageOS StorageClass"
type: string
label: StorageClass Name

View File

@ -0,0 +1,51 @@
{{- if .Values.cluster.create }}
As you enabled automatic cluster creation, your StorageOS cluster is spinning
up in the {{ .Values.cluster.namespace }} namespace.
{{- else }}
StorageOS Operator deployed.
As you disabled automatic cluster creation, you can deploy a StorageOS cluster
by creating a custom StorageOSCluster resource:
1. Create a secret containing StorageOS cluster credentials. This secret
contains the API username and password that will be used to authenticate to the
StorageOS cluster. Base64 encode the username and password that you want to use
for your StorageOS cluster.
apiVersion: v1
kind: Secret
metadata:
name: storageos-api
namespace: storageos
labels:
app: storageos
type: kubernetes.io/storageos
data:
# echo -n '<secret>' | base64
username: c3RvcmFnZW9z
password: c3RvcmFnZW9z
2. Create a StorageOS custom resource that references the secret created
above (storageos-api in the above example). They must share a namespace.
When the resource is created, the cluster will be deployed.
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
name: example-storageos
namespace: storageos
spec:
secretRefName: storageos-api
storageClassName: storageos
kvBackend:
address: <etcd-endpoint>
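To follow the cluster's progress once the resource has been created, you can,
for example, query the custom resource directly (adjust the namespace and name
to match your manifest):
kubectl -n storageos get storageoscluster example-storageos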
Newly installed StorageOS clusters require a license to function. For
instructions on applying our free developer license, or obtaining a commercial
license, please see our documentation at
https://docs.storageos.com/docs/reference/licence/.
{{- end }}

View File

@ -0,0 +1,67 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "storageos.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "storageos.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "storageos.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "storageos.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "storageos.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Validate the admin username to be of minimum length
*/}}
{{- define "validate-username" -}}
{{ $length := len .Values.cluster.admin.username }}
{{- if ge $length 3 -}}
{{ .Values.cluster.admin.username }}
{{- else -}}
{{- fail "Invalid username. Must be at least 3 characters." -}}
{{- end -}}
{{- end -}}
{{/*
Validate the admin password to be of minimum length
*/}}
{{- define "validate-password" -}}
{{ $length := len .Values.cluster.admin.password }}
{{- if ge $length 8 -}}
{{ .Values.cluster.admin.password }}
{{- else -}}
{{- fail "Invalid password. Must be at least 8 characters." -}}
{{- end -}}
{{- end -}}
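{{/*
Usage note: the validators above are consumed by the chart's API secret
template before being base64-encoded, e.g.
  username: {{ include "validate-username" . | b64enc | quote }}
  password: {{ include "validate-password" . | b64enc | quote }}
*/}}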

View File

@ -0,0 +1,315 @@
# ClusterRole, ClusterRoleBinding and ServiceAccounts have hook-failed in the
# hook-delete-policy to make it easy to rerun the whole setup even after a
# failure; otherwise the rerun fails with an "already exists" error.
# The before-hook-creation delete policy ensures any other leftover resources
# from a previous run get deleted when run again.
# The Job resources will not be deleted, to help investigate the failure.
# Since the resources created by the operator are not managed by the chart, each
# of them must be deleted individually in separate jobs.
apiVersion: v1
kind: ServiceAccount
metadata:
name: storageos-cleanup
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
rules:
# Using apiGroup "apps" for daemonsets fails, and the permission error indicates
# that it's in group "extensions". It's unclear whether this is Job-specific
# behavior, because the daemonsets deployed by the operator use the "apps" apiGroup.
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
verbs:
- delete
- apiGroups:
- apps
resources:
- statefulsets
- deployments
- daemonsets
verbs:
- delete
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs:
- delete
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- delete
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- delete
- apiGroups:
- ""
resources:
- serviceaccounts
- secrets
- services
- configmaps
verbs:
- delete
- apiGroups:
- storageos.com
resources:
- storageosclusters
verbs:
- get
- patch
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "2"
subjects:
- name: storageos-cleanup
kind: ServiceAccount
namespace: {{ .Release.Namespace }}
roleRef:
name: storageos:cleanup
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
---
{{- if .Values.cluster.create }}
# Delete the CR
apiVersion: batch/v1
kind: Job
metadata:
name: "storageos-storageoscluster-cleanup"
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-delete-policy": "hook-succeeded, before-hook-creation"
"helm.sh/hook-weight": "3"
spec:
template:
spec:
serviceAccountName: storageos-cleanup
containers:
- name: "storageos-storageoscluster-cleanup"
image: "{{ $.Values.cleanup.images.kubectl.repository }}:{{ $.Values.cleanup.images.kubectl.tag }}"
command:
- kubectl
- -n
- {{ .Values.cluster.namespace }}
- delete
- storageoscluster
- {{ .Values.cluster.name }}
- --ignore-not-found=true
restartPolicy: Never
backoffLimit: 4
---
# Wait for the operator to appropriately delete resources based on CR deletion
apiVersion: batch/v1
kind: Job
metadata:
name: "storageos-cleanup-wait"
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-delete-policy": "hook-succeeded, before-hook-creation"
"helm.sh/hook-weight": "4"
spec:
template:
spec:
serviceAccountName: storageos-cleanup
containers:
- name: "storageos-cleanup-wait"
image: "{{ $.Values.cleanup.images.kubectl.repository }}:{{ $.Values.cleanup.images.kubectl.tag }}"
command:
- "/bin/bash"
- "-c"
args:
- 'while [ -n "$(kubectl get pods -n {{ .Values.cluster.namespace }} -l app=storageos --ignore-not-found)" ]; do echo "Pods still deleting"; sleep 5; done'
restartPolicy: Never
backoffLimit: 4
---
{{- end }}
# Separation between pre- & post-delete hooks
# The storageoscluster CR must be deleted before the operator, so the operator
# can handle cluster tear down.
# Some resources must be deleted after the operator otherwise the operator
# will re-create them.
apiVersion: v1
kind: ServiceAccount
metadata:
name: storageos-cleanup
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
rules:
# Using apiGroup "apps" for daemonsets fails, and the permission error indicates
# that it's in group "extensions". It's unclear whether this is Job-specific
# behavior, because the daemonsets deployed by the operator use the "apps" apiGroup.
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
verbs:
- delete
- apiGroups:
- apps
resources:
- statefulsets
- deployments
- daemonsets
verbs:
- delete
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs:
- delete
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- delete
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- delete
- apiGroups:
- ""
resources:
- serviceaccounts
- secrets
- services
- configmaps
verbs:
- delete
- apiGroups:
- storageos.com
resources:
- storageosclusters
verbs:
- get
- patch
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "2"
subjects:
- name: storageos-cleanup
kind: ServiceAccount
namespace: {{ .Release.Namespace }}
roleRef:
name: storageos:cleanup
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
---
# Delete miscellaneous operator resources (configmaps and secrets) that aren't
# cleaned up otherwise. This needs to happen in a post-delete hook, as otherwise
# the operator may recreate them before it is destroyed.
apiVersion: batch/v1
kind: Job
metadata:
name: "storageos-operator-data-cleanup"
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, before-hook-creation"
"helm.sh/hook-weight": "3"
spec:
template:
spec:
serviceAccountName: storageos-cleanup
containers:
- name: "storageos-operator-data-cleanup"
image: "{{ $.Values.cleanup.images.kubectl.repository }}:{{ $.Values.cleanup.images.kubectl.tag }}"
command:
- kubectl
- -n
- {{ .Release.Namespace }}
- delete
- configmap/operator
- configmap/storageos-api-manager-leader
- secret/storageos-operator-webhook
- secret/storageos-webhook
- --ignore-not-found=true
restartPolicy: Never
backoffLimit: 4

View File

@ -0,0 +1,75 @@
apiVersion: v1
data:
operator_config.yaml: |
apiVersion: config.storageos.com/v1
kind: OperatorConfig
health:
healthProbeBindAddress: :8081
metrics:
bindAddress: 127.0.0.1:8080
webhook:
port: 9443
leaderElection:
leaderElect: true
resourceName: storageos-operator
webhookCertRefreshInterval: 15m
webhookServiceName: storageos-operator-webhook
webhookSecretRef: storageos-operator-webhook
validatingWebhookConfigRef: storageos-operator-validating-webhook
kind: ConfigMap
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos-operator
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
data:
{{- if and .Values.cluster.images.apiManager.repository .Values.cluster.images.apiManager.tag }}
RELATED_IMAGE_API_MANAGER: "{{ .Values.cluster.images.apiManager.repository }}:{{ .Values.cluster.images.apiManager.tag }}"
{{- end }}
{{- if and .Values.cluster.images.csiV1ExternalAttacherV3.repository .Values.cluster.images.csiV1ExternalAttacherV3.tag }}
RELATED_IMAGE_CSIV1_EXTERNAL_ATTACHER_V3: "{{ .Values.cluster.images.csiV1ExternalAttacherV3.repository }}:{{ .Values.cluster.images.csiV1ExternalAttacherV3.tag }}"
{{- end }}
{{- if and .Values.cluster.images.csiV1ExternalProvisioner.repository .Values.cluster.images.csiV1ExternalProvisioner.tag }}
RELATED_IMAGE_CSIV1_EXTERNAL_PROVISIONER: "{{ .Values.cluster.images.csiV1ExternalProvisioner.repository }}:{{ .Values.cluster.images.csiV1ExternalProvisioner.tag }}"
{{- end }}
{{- if and .Values.cluster.images.csiV1ExternalResizer.repository .Values.cluster.images.csiV1ExternalResizer.tag }}
RELATED_IMAGE_CSIV1_EXTERNAL_RESIZER: "{{ .Values.cluster.images.csiV1ExternalResizer.repository }}:{{ .Values.cluster.images.csiV1ExternalResizer.tag }}"
{{- end }}
{{- if and .Values.cluster.images.csiV1LivenessProbe.repository .Values.cluster.images.csiV1LivenessProbe.tag }}
RELATED_IMAGE_CSIV1_LIVENESS_PROBE: "{{ .Values.cluster.images.csiV1LivenessProbe.repository }}:{{ .Values.cluster.images.csiV1LivenessProbe.tag }}"
{{- end }}
{{- if and .Values.cluster.images.csiV1NodeDriverRegistrar.repository .Values.cluster.images.csiV1NodeDriverRegistrar.tag }}
RELATED_IMAGE_CSIV1_NODE_DRIVER_REGISTRAR: "{{ .Values.cluster.images.csiV1NodeDriverRegistrar.repository }}:{{ .Values.cluster.images.csiV1NodeDriverRegistrar.tag }}"
{{- end }}
{{- if and .Values.cluster.images.init.repository .Values.cluster.images.init.tag }}
RELATED_IMAGE_STORAGEOS_INIT: "{{ .Values.cluster.images.init.repository }}:{{ .Values.cluster.images.init.tag }}"
{{- end }}
{{- if and .Values.cluster.images.node.repository .Values.cluster.images.node.tag }}
RELATED_IMAGE_STORAGEOS_NODE: "{{ .Values.cluster.images.node.repository }}:{{ .Values.cluster.images.node.tag }}"
{{- end }}
{{- if and .Values.cluster.images.nodeManager.repository .Values.cluster.images.nodeManager.tag }}
RELATED_IMAGE_NODE_MANAGER: "{{ .Values.cluster.images.nodeManager.repository }}:{{ .Values.cluster.images.nodeManager.tag }}"
{{- end }}
{{- if and .Values.cluster.images.portalManager.repository .Values.cluster.images.portalManager.tag }}
RELATED_IMAGE_PORTAL_MANAGER: "{{ .Values.cluster.images.portalManager.repository }}:{{ .Values.cluster.images.portalManager.tag }}"
{{- end }}
{{- if and .Values.cluster.images.upgradeGuard.repository .Values.cluster.images.upgradeGuard.tag }}
RELATED_IMAGE_UPGRADE_GUARD: "{{ .Values.cluster.images.upgradeGuard.repository }}:{{ .Values.cluster.images.upgradeGuard.tag }}"
{{- end }}
kind: ConfigMap
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos-related-images
namespace: {{ .Release.Namespace }}

View File

@ -0,0 +1,22 @@
{{- if .Values.cluster.createNamespace }}
# Don't attempt to create the namespace if the user has specified the same
# namespace for both the release and the StorageOS cluster, as that would fail
# and be confusing UX for them.
{{- if not (eq .Release.Namespace .Values.cluster.namespace) }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.cluster.namespace }}
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
control-plane: storageos-operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,87 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "storageos.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
control-plane: storageos-operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
control-plane: storageos-operator
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
control-plane: storageos-operator
release: {{ .Release.Name }}
spec:
containers:
- args:
- --config=operator_config.yaml
command:
- /manager
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
envFrom:
- configMapRef:
name: storageos-related-images
image: "{{ .Values.operator.image.repository }}:{{ .Values.operator.image.tag }}"
imagePullPolicy: {{ .Values.operator.image.pullPolicy }}
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
name: manager
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
resources:
limits:
cpu: 250m
memory: 200Mi
requests:
cpu: 10m
memory: 100Mi
securityContext:
allowPrivilegeEscalation: false
volumeMounts:
- mountPath: /operator_config.yaml
name: storageos-operator
subPath: operator_config.yaml
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: quay.io/brancz/kube-rbac-proxy:v0.10.0
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
securityContext:
runAsNonRoot: true
serviceAccountName: {{ template "storageos.serviceAccountName" . }}
terminationGracePeriodSeconds: 10
volumes:
- configMap:
name: storageos-operator
name: storageos-operator

View File

@ -0,0 +1,29 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "storageos.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- if .Values.podSecurityPolicy.annotations }}
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
volumes:
- '*'
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
{{- end }}

View File

@ -0,0 +1,840 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:metrics-reader
rules:
- nonResourceURLs:
- /metrics
verbs:
- get
---
# Role for storageos operator
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- configmaps
- configmaps/status
- endpoints
- endpoints/status
- events
- namespaces
- persistentvolumeclaims
- persistentvolumeclaims/status
- persistentvolumes
- pods/binding
- pods/status
- replicationcontrollers
- secrets
- serviceaccounts
- services
- services/finalizers
- services/status
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- '*'
- apiGroups:
- api.storageos.com
resources:
- nodes
- volumes
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups:
- api.storageos.com
resources:
- nodes/status
- volumes/status
verbs:
- get
- patch
- update
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- delete
- get
- patch
- apiGroups:
- apps
resources:
- daemonsets
- deployments
- replicasets
- statefulsets
verbs:
- '*'
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- csi.storage.k8s.io
resources:
- csidrivers
- csistoragecapacities
verbs:
- create
- delete
- list
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- clusterroles
- rolebindings
- roles
verbs:
- bind
- create
- delete
- get
- patch
- apiGroups:
- security.openshift.io
resourceNames:
- privileged
resources:
- securitycontextconstraints
verbs:
- create
- delete
- get
- update
- use
- apiGroups:
- storage.k8s.io
resources:
- csidrivers
- csinodeinfos
- csinodes
- csistoragecapacities
- storageclasses
- volumeattachments
- volumeattachments/status
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- storageos.com
resources:
- storageosclusters
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- storageos.com
resources:
- storageosclusters/finalizers
verbs:
- update
- apiGroups:
- storageos.com
resources:
- storageosclusters/status
verbs:
- get
- patch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:api-manager
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- endpoints/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- node
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- get
- list
- update
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- services/status
verbs:
- get
- patch
- update
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- '*'
- apiGroups:
- api.storageos.com
resources:
- nodes
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups:
- api.storageos.com
resources:
- nodes/status
verbs:
- get
- patch
- update
- apiGroups:
- api.storageos.com
resources:
- volumes
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups:
- api.storageos.com
resources:
- volumes/status
verbs:
- get
- patch
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- volumeattachments
verbs:
- delete
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:node-manager
rules:
- apiGroups:
- api.storageos.com
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- api.storageos.com
resources:
- volumes
verbs:
- get
- list
- watch
- apiGroups:
- storageos.com
resources:
- storageosclusters
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:portal-manager
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes
- persistentvolumeclaims
- persistentvolumes
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- api.storageos.com
resources:
- nodes
- volumes
verbs:
- list
- watch
- apiGroups:
- apps
resources:
- daemonsets
- deployments
- replicasets
- statefulsets
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
verbs:
- list
- watch
- apiGroups:
- storageos.com
resources:
- storageosclusters
verbs:
- list
- watch
- apiGroups:
- storageos.com
resources:
- storageosportals
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:scheduler-extender
rules:
- apiGroups:
- events.k8s.io
resources:
- events
verbs:
- create
- patch
- apiGroups:
- scheduling.k8s.io
resources:
- priorityclasses
verbs:
- get
- list
- create
- update
- patch
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:proxy:operator
rules:
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
---
# Bind operator service account to storageos-operator role
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: storageos:operator
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:api-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:operator:api-manager
subjects:
- kind: ServiceAccount
name: storageos-operator
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:node-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:operator:node-manager
subjects:
- kind: ServiceAccount
name: storageos-operator
namespace: storageos
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:portal-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:operator:portal-manager
subjects:
- kind: ServiceAccount
name: storageos-operator
namespace: storageos
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:operator:scheduler-extender
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:operator:scheduler-extender
subjects:
- kind: ServiceAccount
name: storageos-operator
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos:proxy:operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:proxy:operator
subjects:
- kind: ServiceAccount
name: storageos-operator
namespace: {{ .Release.Namespace }}
{{- if .Values.podSecurityPolicy.enabled }}
---
# ClusterRole for using pod security policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames:
- {{ template "storageos.fullname" . }}-psp
---
# Bind pod security policy cluster role to the operator service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:psp-user
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,19 @@
{{- if .Values.cluster.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.cluster.secretRefName }}
namespace: {{ .Values.cluster.namespace }}
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
type: "kubernetes.io/storageos"
data:
username: {{ include "validate-username" . | b64enc | quote }}
password: {{ include "validate-password" . | b64enc | quote }}
{{- end }}

View File

@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end }}

View File

@ -0,0 +1,42 @@
apiVersion: v1
kind: Service
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
control-plane: storageos-operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos-operator
namespace: {{ .Release.Namespace }}
spec:
ports:
- name: https
port: 8443
targetPort: https
selector:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
control-plane: storageos-operator
---
apiVersion: v1
kind: Service
metadata:
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos-operator-webhook
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 443
targetPort: 9443
selector:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
control-plane: storageos-operator

View File

@ -0,0 +1,52 @@
{{- if .Values.cluster.create }}
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
name: {{ .Values.cluster.name }}
namespace: {{ .Values.cluster.namespace }}
spec:
secretRefName: {{ .Values.cluster.secretRefName }}
disableTelemetry: {{ .Values.cluster.disableTelemetry }}
storageClassName: {{ .Values.cluster.storageClassName }}
{{- if .Values.k8sDistro }}
k8sDistro: {{ .Values.k8sDistro }}
{{- end }}
{{- if .Values.cluster.sharedDir }}
sharedDir: {{ .Values.cluster.sharedDir }}
{{- end }}
kvBackend:
address: {{ required "kv backend address must be set" .Values.cluster.kvBackend.address }}
backend: {{ .Values.cluster.kvBackend.backend }}
{{- if .Values.cluster.kvBackend.tlsSecretName }}
tlsEtcdSecretRefName: {{ .Values.cluster.kvBackend.tlsSecretName }}
{{- end }}
{{- if .Values.cluster.kvBackend.tlsSecretNamespace }}
tlsEtcdSecretRefNamespace: {{ .Values.cluster.kvBackend.tlsSecretNamespace }}
{{- end }}
resources:
{{ toYaml .Values.cluster.resources | indent 4 }}
{{- if .Values.cluster.nodeSelectorTerm.key }}
nodeSelectorTerms:
- matchExpressions:
- key: {{ .Values.cluster.nodeSelectorTerm.key }}
operator: In
values:
- "{{ .Values.cluster.nodeSelectorTerm.value }}"
{{- end }}
{{- if .Values.cluster.toleration.key }}
tolerations:
- key: {{ .Values.cluster.toleration.key }}
operator: "Equal"
value: {{ .Values.cluster.toleration.value }}
effect: "NoSchedule"
{{- end }}
{{- end }}

View File

@ -0,0 +1,31 @@
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
labels:
app: {{ template "storageos.name" . }}
app.kubernetes.io/component: operator
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
name: storageos-operator-validating-webhook
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: storageos-operator-webhook
namespace: {{ .Release.Namespace }}
path: /validate-storageoscluster
failurePolicy: Fail
name: cluster-validator.storageos.com
rules:
- apiGroups:
- storageos.com
apiVersions:
- v1
operations:
- CREATE
resources:
- storageosclusters
sideEffects: None

View File

@ -0,0 +1,144 @@
# Default values for storageos.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
name: ondat-operator
k8sDistro: default
serviceAccount:
create: true
name: storageos-operator
podSecurityPolicy:
enabled: false
annotations:
{}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
# Operator-specific configuration parameters.
operator:
image:
repository: storageos/operator
tag: v2.6.0
pullPolicy: IfNotPresent
# Cluster-specific configuration parameters.
cluster:
# set create to true if the operator should auto-create the StorageOS cluster.
create: true
# Name of the deployment.
name: storageos
# Namespace to install the StorageOS cluster into.
# This is distinct from the namespace of the operator, which is referred to
# with .Release.Namespace
namespace: storageos
# Set to false if you'd like to use a pre-existing namespace
createNamespace: true
# Name of the secret containing StorageOS API credentials.
secretRefName: storageos-api
# Default admin account.
admin:
# Username to authenticate to the StorageOS API with.
username: storageos
# Password to authenticate to the StorageOS API with. This must be at least
# 8 characters long.
password:
# sharedDir should be set if running kubelet in a container. This should
# be the path shared into the kubelet container, typically:
# "/var/lib/kubelet/plugins/kubernetes.io~storageos". If not set, defaults
# will be used.
sharedDir:
# Key-Value store backend.
kvBackend:
address:
backend: etcd
tlsSecretName:
tlsSecretNamespace:
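# Illustrative example (hypothetical endpoints): a three-node etcd cluster is
# given as a comma-separated list of ip:port targets, e.g.
# address: "10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379"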
# Resource requests and limits for the node container
resources: {}
# requests:
# cpu: 1
# memory: 2Gi
# limits:
# cpu:
# memory:
# Node selector terms to install StorageOS on.
nodeSelectorTerm:
key:
value:
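# Illustrative example (hypothetical label/value), e.g. to target worker nodes:
# nodeSelectorTerm:
#   key: "node-role.kubernetes.io/worker"
#   value: "true"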
# Pod toleration for the StorageOS pods.
toleration:
key:
value:
# To disable anonymous usage reporting across the cluster, set to true.
# Defaults to false. To help improve the product, data such as API usage and
# StorageOS configuration information is collected.
disableTelemetry: false
# The name of the StorageClass to be created
# Using a YAML anchor to allow deletion of the custom storageClass
storageClassName: storageos
images:
apiManager:
repository: storageos/api-manager
tag: v1.2.5
csiV1ExternalAttacherV3:
repository: quay.io/k8scsi/csi-attacher
tag: v3.1.0
csiV1ExternalProvisioner:
repository: storageos/csi-provisioner
tag: v2.1.1-patched
csiV1ExternalResizer:
repository: quay.io/k8scsi/csi-resizer
tag: v1.1.0
csiV1LivenessProbe:
repository: quay.io/k8scsi/livenessprobe
tag: v2.2.0
csiV1NodeDriverRegistrar:
repository: quay.io/k8scsi/csi-node-driver-registrar
tag: v2.1.0
init:
repository: storageos/init
tag: v2.1.1
# node is the StorageOS node image to use, available from the
# [Docker Hub](https://hub.docker.com/r/storageos/node/).
node:
repository: storageos/node
tag: v2.6.0
nodeManager:
repository: storageos/node-manager
tag: v0.0.2
portalManager:
repository: storageos/portal-manager
tag: v1.0.1
upgradeGuard:
repository: storageos/upgrade-guard
tag: v0.0.2
# The following is used for cleaning up unmanaged cluster resources when
# auto-install is enabled.
cleanup:
images:
kubectl:
repository: bitnami/kubectl
tag: 1.18.2

View File

@ -0,0 +1,26 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Speedscale Operator
catalog.cattle.io/release-name: speedscale-operator
apiVersion: v1
appVersion: 0.11.43
description: Stress test your APIs with real world scenarios. Collect and replay
traffic without scripting.
home: https://speedscale.com
icon: https://raw.githubusercontent.com/speedscale/assets/main/logo/gold_logo_only.png
keywords:
- speedscale
- test
- testing
- regression
- reliability
- load
- replay
- network
- traffic
kubeVersion: '>= 1.17.0-0'
maintainers:
- email: support@speedscale.com
name: Speedscale Support
name: speedscale-operator
version: 0.11.4300

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2021 Speedscale
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,16 @@
# Speedscale Operator
The [Speedscale](https://www.speedscale.com) Operator is a [Kubernetes operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
that watches for deployments being applied to the cluster and takes action based on their annotations. The operator
can inject a proxy to capture traffic into or out of applications, or set up an isolation test environment around
a deployment for testing. The operator itself is a deployment that will always be present on the cluster once
the Helm chart is installed.
## Install
Install the operator through this chart and annotate deployments to record traffic or replay snapshots.
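A minimal install might look like the following sketch (the release name and local chart path are assumptions, and annotation keys are omitted here because they are documented at docs.speedscale.com rather than in this chart):
```console
$ helm install speedscale-operator . --set apiKey=<SPEEDSCALE_API_KEY>
$ kubectl -n speedscale get pods   # verify the operator and webhook are up
```
After the operator is running, annotate the deployments you want to capture traffic from or replay snapshots against, using the annotations described in the Speedscale docs.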
## Help
Speedscale docs are available at [docs.speedscale.com](https://docs.speedscale.com), or join us
on the [Speedscale community Slack](https://join.slack.com/t/speedscalecommunity/shared_invite/zt-x5rcrzn4-XHG1QqcHNXIM~4yozRrz8A)!

View File

@ -0,0 +1,16 @@
# Speedscale Operator
The [Speedscale](https://www.speedscale.com) Operator is a [Kubernetes operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
that watches for deployments being applied to the cluster and takes action based on their annotations. The operator
can inject a proxy to capture traffic into or out of applications, or set up an isolation test environment around
a deployment for testing. The operator itself is a deployment that will always be present on the cluster once
the Helm chart is installed.
## Install
Install the operator through this chart and annotate deployments to record traffic or replay snapshots.
## Help
Speedscale docs are available at [docs.speedscale.com](https://docs.speedscale.com), or join us
on the [Speedscale community Slack](https://join.slack.com/t/speedscalecommunity/shared_invite/zt-x5rcrzn4-XHG1QqcHNXIM~4yozRrz8A)!

View File

@ -0,0 +1,9 @@
questions:
- variable: apiKey
default: "fffffffffffffffffffffffffffffffffffffffffffff"
description: "An API key is required to connect to the Speedscale cloud."
required: true
type: string
label: API Key
group: Authentication

View File

@ -0,0 +1,20 @@
apiVersion: v1
data:
CLUSTER_NAME: MY_CLUSTER
CONTAINER_REGISTRY: gcr.io/speedscale
CONTAINER_TYPE: {{ .Values.image.tag }}
LOG_LEVEL: info
SLACK_WEBHOOK_URL: ""
SUB_TENANT_NAME: ""
SUB_TENANT_STREAM: ""
TELEMETRY_INTERVAL: "60"
TELEMETRY_TEST_INTERVAL: "1"
TENANT_BUCKET: ""
TENANT_ID: ""
TENANT_NAME: ""
TENANT_REGION: ""
kind: ConfigMap
metadata:
creationTimestamp: null
name: speedscale-controller
namespace: speedscale

View File

@ -0,0 +1,76 @@
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
control-plane: controller-manager
name: speed-operator-controller-manager
namespace: speedscale
spec:
replicas: 1
selector:
matchLabels:
control-plane: controller-manager
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
control-plane: controller-manager
spec:
containers:
- args:
- --metrics-addr=127.0.0.1:8080
command:
- /manager
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
envFrom:
- configMapRef:
name: speedscale-controller
- secretRef:
name: speedscale-apikey
optional: true
image: gcr.io/speedscale/speed-operator:{{ .Values.image.tag }}
imagePullPolicy: Always
name: speedscale-manager
ports:
- containerPort: 9443
name: webhook-server
resources:
limits:
cpu: 100m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: operator-cert
- mountPath: /etc/ssl/speedscale
name: ss-certs
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
name: speedscale-kube-rbac-proxy
ports:
- containerPort: 8443
name: https
resources: {}
serviceAccountName: speedscale-control-sa
terminationGracePeriodSeconds: 10
volumes:
- name: operator-cert
secret:
defaultMode: 420
secretName: operator-cert
- name: ss-certs
secret:
secretName: ss-certs
status: {}

View File

@ -0,0 +1,57 @@
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: null
name: speedscale-keys-create
namespace: speedscale
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- /bin/sh
- -ce
- "speedctl init --force --api-key {{ .Values.apiKey }} --app-url {{ .Values.appUrl
}} \\\n && speedctl deploy operator --dir ./manifests \\\n && kubectl
apply -f ./manifests/webhook.yaml \\\n && kubectl apply -f ./manifests/configmap.yaml
\\\n && kubectl apply -f ./manifests/secret.yaml \\\n\t|| echo 'manifest
apply failed, verify API key is correct'\n"
image: gcr.io/sspublic/speedscale-cli:v0.11.43
imagePullPolicy: Always
name: speedscale-cli
resources: {}
restartPolicy: Never
serviceAccountName: speedscale-control-sa
status: {}
---
apiVersion: batch/v1
kind: Job
metadata:
annotations:
helm.sh/hook: pre-delete
creationTimestamp: null
name: speedscale-keys-cleanup
namespace: speedscale
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- /bin/sh
- -ce
- |
speedctl init --force --api-key {{ .Values.apiKey }} --app-url {{ .Values.appUrl }} \
&& speedctl deploy operator --dir ./manifests \
&& kubectl delete -f ./manifests/webhook.yaml \
|| echo 'cleanup failed, quitting'
image: gcr.io/sspublic/speedscale-cli:v0.11.43
imagePullPolicy: Always
name: speedscale-cli
resources: {}
restartPolicy: Never
serviceAccountName: speedscale-control-sa
status: {}

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: speedscale

View File

@ -0,0 +1,197 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: speed-operator-manager-role
rules:
- apiGroups:
- apps
resources:
- deployments
- statefulsets
- jobs
- namespaces
- secrets
- daemonsets
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- configmaps
- jobs
- namespaces
- pods
- pods/log
- secrets
- services
- serviceaccounts
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- metrics.k8s.io
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- policy
resources:
- podsecuritypolicies
verbs:
- create
- delete
- deletecollection
- use
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
- roles
verbs:
- create
- delete
- deletecollection
- list
- update
- watch
- apiGroups:
- speedscale.com
resources:
- test-reports
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- apiGroups:
- networking.istio.io
resources:
- envoyfilters
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- create
- delete
- deletecollection
- update
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
verbs:
- get
- create
- update
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: speed-operator-manager-rolebinding
namespace: speedscale
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: speed-operator-manager-role
subjects:
- kind: ServiceAccount
name: speedscale-control-sa
namespace: speedscale
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: speed-operator-proxy-role
rules:
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- tokenreviews
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: speed-operator-proxy-rolebinding
namespace: speedscale
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: speed-operator-proxy-role
subjects:
- kind: ServiceAccount
name: speedscale-control-sa
namespace: speedscale
---
apiVersion: v1
automountServiceAccountToken: true
imagePullSecrets:
- name: gcrcred
kind: ServiceAccount
metadata:
creationTimestamp: null
name: speedscale-control-sa
namespace: speedscale

View File

@ -0,0 +1,33 @@
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
control-plane: controller-manager
name: speed-operator-controller-manager-metrics-service
namespace: speedscale
spec:
ports:
- port: 8443
protocol: TCP
targetPort: https
selector:
control-plane: controller-manager
status:
loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: speed-operator-webhook-service
namespace: speedscale
spec:
ports:
- port: 443
protocol: TCP
targetPort: 9443
selector:
control-plane: controller-manager
status:
loadBalancer: {}

View File

@ -0,0 +1,8 @@
# An API key is required to connect to the Speedscale cloud.
# If you need a key, email support@speedscale.com.
apiKey: <SPEEDSCALE_API_KEY>
image:
tag: stable
appUrl: app.speedscale.com
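# Example of overriding these values at install time (a sketch; the release
# name and local chart path "." are assumptions, not part of this chart):
#   helm install speedscale-operator . \
#     --set apiKey=<SPEEDSCALE_API_KEY> \
#     --set appUrl=app.speedscale.com \
#     --set image.tag=stable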

View File

@ -2326,6 +2326,27 @@ entries:
- assets/nats/nats-0.10.0.tgz
version: 0.10.0
neuvector:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: NeuVector
catalog.cattle.io/release-name: neuvector
apiVersion: v1
appVersion: 4.4.4
created: "2022-02-23T16:15:47.730445764-08:00"
description: Helm chart for NeuVector's core services
digest: 3acc84eae24466ea0e60c6044059173c693aeeb049484bbcb8730874efef589f
home: https://neuvector.com
icon: https://avatars2.githubusercontent.com/u/19367275?s=200&v=4
keywords:
- security
kubeVersion: '>=1.13.0-0'
maintainers:
- email: support@neuvector.com
name: becitsthere
name: neuvector
urls:
- assets/neuvector/neuvector-1.9.100.tgz
version: 1.9.100
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: NeuVector
@ -2698,6 +2719,36 @@ entries:
- assets/nutanix-csi-storage/nutanix-csi-storage-2.3.100.tgz
version: 2.3.100
ondat-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Ondat Operator
catalog.cattle.io/release-name: ondat-operator
apiVersion: v2
appVersion: v2.6.0
created: "2022-02-24T15:13:07.677580962Z"
description: Cloud Native storage for containers
digest: edfbee79757a2403fab03bcb3f220a205ac31c95330045a215c9a49d2c03c65a
home: https://ondat.io
icon: https://docs.ondat.io/images/generic/Ondat_logo.svg
keywords:
- storage
- block-storage
- volume
- operator
kubeVersion: '>= 1.19'
maintainers:
- email: david@ondat.io
name: DavidMarchant
- email: richard.kovacs@ondat.io
name: mhmxs
- email: angelos.perivolaropoulos@ondat.io
name: aeroniero33
name: ondat-operator
sources:
- https://github.com/ondat
urls:
- assets/ondat-operator/ondat-operator-0.5.400.tgz
version: 0.5.400
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Ondat Operator
@ -3169,6 +3220,36 @@ entries:
- assets/shipa/shipa-1.4.0.tgz
version: 1.4.0
speedscale-operator:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Speedscale Operator
catalog.cattle.io/release-name: speedscale-operator
apiVersion: v1
appVersion: 0.11.43
created: "2022-02-18T12:22:53.941072-05:00"
description: Stress test your APIs with real world scenarios. Collect and replay
traffic without scripting.
digest: f9257740c46c1c0b9b16e38a98458249409e4b1bdb5b5fdec932ebc4b72d65d4
home: https://speedscale.com
icon: https://raw.githubusercontent.com/speedscale/assets/main/logo/gold_logo_only.png
keywords:
- speedscale
- test
- testing
- regression
- reliability
- load
- replay
- network
- traffic
kubeVersion: '>= 1.17.0-0'
maintainers:
- email: support@speedscale.com
name: Speedscale Support
name: speedscale-operator
urls:
- assets/speedscale-operator/speedscale-operator-0.11.4300.tgz
version: 0.11.4300
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Speedscale Operator