Migrating crowdstrike/falcon-sensor chart

pull/615/head
Samuel Attwood 2022-12-16 20:00:54 -05:00
parent 1a6eec7105
commit 478d681db5
45 changed files with 2498 additions and 531 deletions

Binary file not shown.

View File

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -1,9 +1,10 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: CrowdStrike Falcon Platform
catalog.cattle.io/release-name: falcon-helm
catalog.cattle.io/kube-version: '>1.15.0-0'
catalog.cattle.io/release-name: falcon-sensor
apiVersion: v2
appVersion: 0.9.3
appVersion: 1.18.1
description: A Helm chart to deploy CrowdStrike Falcon sensors into Kubernetes clusters.
home: https://crowdstrike.com
icon: https://raw.githubusercontent.com/CrowdStrike/falcon-helm/main/images/crowdstrike-logo.svg
@ -15,12 +16,12 @@ keywords:
- security
- monitoring
- alerting
kubeVersion: '>1.15.0-0'
maintainers:
- name: CrowdStrike Solution Architecture
- email: gabriel.alford@crowdstrike.com
name: Gabe Alford
- email: integrations@crowdstrike.com
name: CrowdStrike Solutions Architecture
name: falcon-sensor
sources:
- https://github.com/CrowdStrike/falcon-helm
type: application
version: 0.9.300
version: 1.18.1

View File

@ -0,0 +1,298 @@
# CrowdStrike Falcon Helm Chart
[Falcon](https://www.crowdstrike.com/) is the [CrowdStrike](https://www.crowdstrike.com/)
platform purpose-built to stop breaches via a unified set of cloud-delivered
technologies that prevent all types of attacks — including malware and much
more.
# Kubernetes Cluster Compatibility
The Falcon Helm chart has been tested to deploy on the following Kubernetes distributions:
* Amazon Elastic Kubernetes Service (EKS)
* Azure Kubernetes Service (AKS)
* Google Kubernetes Engine (GKE) - DaemonSet support for Ubuntu nodes only; Container sensor for Google Container-Optimized OS (COS) nodes.
* Rancher K3s
* Red Hat OpenShift Container Platform 4.6+
# Dependencies
1. Requires an x86_64 Kubernetes cluster
1. Must be a CrowdStrike customer with access to the Falcon Linux Sensor (container image) and Falcon Container from the CrowdStrike Container Registry.
1. Kubernetes nodes must be Linux distributions supported by CrowdStrike.
1. Before deploying the Helm chart, the Falcon Linux Sensor and/or Falcon Container sensor image must be available in your own container registry, or you can use CrowdStrike's registry. See the Deployment Considerations below for more.
1. Helm 3.x is installed and supported by the Kubernetes vendor.
## Helm Chart Support for Falcon Sensor Versions
| Helm chart Version | Falcon Sensor Version |
|:------------------------|:----------------------------------|
| `<= 1.6.x` | `<= 6.34.x` |
| `>= 1.7.x && <= 1.17.x` | `>= 6.35.x && < 6.49.x` |
| `>= 1.18.x` | `>= 6.49.x` |
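Once the CrowdStrike Helm repository has been added (see Installation below), you can list the published chart versions to pick one in the supported range for your sensor. A minimal sketch using standard Helm flags:
```bash
# List published chart versions so you can stay within the supported range
helm search repo crowdstrike/falcon-sensor --versions
```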
# Installation
### Add the CrowdStrike Falcon Helm repository
```
helm repo add crowdstrike https://crowdstrike.github.io/falcon-helm
```
### Update the local Helm repository Cache
```
helm repo update
```
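If you need the chart to stay within a particular row of the compatibility table above, you can pin the chart version at install time with `--version`. A sketch using this chart's current version and placeholder values:
```bash
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--version 1.18.1 \
--set falcon.cid="<CrowdStrike_CID>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor"
```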
# Falcon Configuration Options
The following table lists the configurable Falcon Sensor parameters and their default values.
| Parameter | Description | Default |
|:----------------------------|:----------------------------------------------------------|:----------------------|
| `falcon.cid` | CrowdStrike Customer ID (CID) | None (Required) |
| `falcon.apd` | App Proxy Disable (APD) | None |
| `falcon.aph` | App Proxy Hostname (APH) | None |
| `falcon.app` | App Proxy Port (APP) | None |
| `falcon.trace` | Set trace level. (`none`,`err`,`warn`,`info`,`debug`) | `none` |
| `falcon.feature` | Sensor Feature options | None |
| `falcon.backend` | Choose sensor backend (`kernel`,`bpf`). Sensor 6.49+ only | None |
| `falcon.message_log` | Enable message log (true/false) | None |
| `falcon.billing` | Utilize default or metered billing | None |
| `falcon.tags` | Comma separated list of tags for sensor grouping | None |
| `falcon.provisioning_token` | Provisioning token value | None |
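These options map directly to `--set` flags (or entries under `falcon:` in a values file). A minimal sketch with placeholder values that enables debug tracing and an installation token:
```bash
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set falcon.cid="<CrowdStrike_CID>" \
--set falcon.trace="debug" \
--set falcon.provisioning_token="<Provisioning_Token>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor"
```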
## Installing on Kubernetes Cluster Nodes
### Deployment Considerations
To ensure a successful deployment, keep the following in mind:
1. By default, the Helm Chart installs in the `default` namespace. Best practice for deploying to Kubernetes is to create a new namespace. This can be done by adding `-n falcon-system --create-namespace` to your `helm install` command. The namespace can be any name that you wish to use.
1. The Falcon Linux Sensor (not the Falcon Container) should be used as the container image to deploy to Kubernetes nodes.
1. You must be a cluster administrator to deploy Helm Charts to the cluster.
1. When deploying the Falcon Linux Sensor (container image) to Kubernetes nodes, it is a requirement that the Falcon Sensor run as a privileged container so that the Sensor can properly work with the kernel. This is a requirement for any kernel module that gets deployed to any container-optimized operating system regardless of whether it is a security sensor, graphics card driver, etc.
1. The Falcon Linux Sensor should be deployed to Kubernetes environments that allow node access or installation via a Kubernetes DaemonSet.
1. The Falcon Linux Sensor will create `/opt/CrowdStrike` on the Kubernetes nodes. DO NOT DELETE this folder.
1. CrowdStrike's Helm Chart is a project, not a product, and released to the community as a way to automate sensor deployment to Kubernetes clusters. The upstream repository for this project is [https://github.com/CrowdStrike/falcon-helm](https://github.com/CrowdStrike/falcon-helm).
### Pod Security Standards
Starting with Kubernetes 1.25, Pod Security Standards will be enforced. Set the appropriate Pod Security Standards policy by adding a label to the namespace. Run the following command, replacing `my-existing-namespace` with the namespace where you have installed the Falcon sensors, e.g. `falcon-system`.
```
kubectl label --overwrite ns my-existing-namespace \
pod-security.kubernetes.io/enforce=privileged
```
If your cluster is OpenShift version 4.11+, you will need to add an additional label to disable the OpenShift functionality that syncs Pod Security Standard policies based on the default Security Context Constraints (SCC).
Run the following command, replacing `my-existing-namespace` with the namespace where you have installed the Falcon sensors, e.g. `falcon-system`.
```
kubectl label --overwrite ns my-existing-namespace \
security.openshift.io/scc.podSecurityLabelSync=false
```
To silence the warning and change the audit level for the Pod Security Standard, add the following labels:
```
kubectl label ns --overwrite my-existing-namespace pod-security.kubernetes.io/audit=privileged
kubectl label ns --overwrite my-existing-namespace pod-security.kubernetes.io/warn=privileged
```
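To confirm the labels were applied, you can list the namespace labels (replace `my-existing-namespace` as above):
```bash
kubectl get ns my-existing-namespace --show-labels
```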
### Install CrowdStrike Falcon Helm Chart on Kubernetes Nodes
```
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set falcon.cid="<CrowdStrike_CID>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor"
```
The above command installs the CrowdStrike Falcon Helm Chart with the release name `falcon-helm` in the namespace your `kubectl` context is currently set to.
You can also install into a custom namespace by running the following:
```
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
-n falcon-system --create-namespace \
--set falcon.cid="<CrowdStrike_CID>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor"
```
For more details please see the [falcon-helm](https://github.com/CrowdStrike/falcon-helm) repository.
### Node Configuration
The following table lists the more common configurable parameters of the chart and their default values when installing on Kubernetes nodes.
| Parameter | Description | Default |
|:--------------------------------|:---------------------------------------------------------------------|:---------------------------------------------------------------------- |
| `node.enabled` | Enable installation on the Kubernetes node | `true` |
| `node.image.repository` | Falcon Sensor Node registry/image name | `falcon-node-sensor` |
| `node.image.tag` | The version of the official image to use | `latest` (Use node.image.digest instead for security and production) |
| `node.image.digest` | The sha256 digest of the official image to use | None (Use instead of the image tag for security and production) |
| `node.image.pullPolicy` | Policy for updating images | `Always` |
| `node.image.pullSecrets` | Pull secrets for private registry | None (Conflicts with node.image.registryConfigJSON) |
| `node.image.registryConfigJSON` | base64 encoded docker config json for the pull secret | None (Conflicts with node.image.pullSecrets) |
| `falcon.cid` | CrowdStrike Customer ID (CID) | None (Required) |
`falcon.cid` and `node.image.repository` are required values.
For a complete listing of configurable parameters, run the following command:
```
helm show values crowdstrike/falcon-sensor
```
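For production, the table above recommends pinning the image by digest instead of a tag. A sketch with a placeholder digest:
```bash
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
-n falcon-system --create-namespace \
--set falcon.cid="<CrowdStrike_CID>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor" \
--set node.image.digest="sha256:<image_digest>"
```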
## Installing in Kubernetes Cluster as a Sidecar
### Deployment Considerations
To ensure a successful deployment, keep the following in mind:
1. You must be a cluster administrator to deploy Helm Charts to the cluster.
1. When deploying the Falcon Container as a sidecar sensor, make sure that there are no firewall rules blocking communication to the Mutating Webhook; blocked communication will most likely result in a `context deadline exceeded` error. The default port for the Webhook is `4433`.
1. The Falcon Container as a sidecar sensor should be deployed to Kubernetes managed environments, or environments that do not allow node access or installation via a Kubernetes DaemonSet.
1. CrowdStrike's Helm Chart is a project, not a product, and released to the community as a way to automate sensor deployment to Kubernetes clusters. The upstream repository for this project is [https://github.com/CrowdStrike/falcon-helm](https://github.com/CrowdStrike/falcon-helm).
1. Be aware that this chart uses advanced Helm functionality that may not work fully with GitOps tools like ArgoCD, because ArgoCD does not support Helm as completely as FluxCD. If a feature does not work in this setup, disable it until ArgoCD's Helm support improves.
### Install CrowdStrike Falcon Helm Chart in Kubernetes Cluster as a Sidecar
```
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set node.enabled=false \
--set container.enabled=true \
--set falcon.cid="<CrowdStrike_CID>" \
--set container.image.repository="<Your_Registry>/falcon-sensor"
```
The above command installs the CrowdStrike Falcon Helm Chart with the release name `falcon-helm` in the namespace your `kubectl` context is currently set to.
You can also install into a custom namespace by running the following:
```
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
-n falcon-system --create-namespace \
--set node.enabled=false \
--set container.enabled=true \
--set falcon.cid="<CrowdStrike_CID>" \
--set container.image.repository="<Your_Registry>/falcon-sensor"
```
#### Note about installation namespace
For Kubernetes clusters <1.22 (or 1.21 where the NamespaceDefaultLabelName feature gate is NOT enabled), be sure to label your namespace for injector exclusion before installing the Container sensor:
```
kubectl create namespace falcon-system
kubectl label namespace falcon-system kubernetes.io/metadata.name=falcon-system
```
### Container Sensor Configuration
The following table lists the more common configurable parameters of the chart and their default values when installing the Container sensor as a sidecar.
| Parameter | Description | Default |
|:------------------------------------------------ |:--------------------------------------------------------------------------- |:---------------------------- |
| `container.enabled` | Enable installation on the Kubernetes node | `false` |
| `container.azure.enabled` | For AKS without the pulltoken option | `false` |
| `container.azure.azureConfig` | Path to the Kubernetes Azure config file on worker nodes | `/etc/kubernetes/azure.json` |
| `container.disableNSInjection` | Disable injection for all Namespaces | `false` |
| `container.disablePodInjection` | Disable injection for all Pods | `false` |
| `container.certExpiration` | Certificate validity duration in number of days | `3650` |
| `container.registryCertSecret` | Name of generic Secret with additional CAs for external registries | None |
| `container.image.repository` | Falcon Sensor Node registry/image name | `falcon-sensor` |
| `container.image.tag` | The version of the official image to use. | `latest` (Use container.image.digest instead for security and production.) |
| `container.image.digest` | The sha256 digest of the official image to use. | None (Use instead of image tag for security and production.) |
| `container.image.pullPolicy` | Policy for updating images | `Always` |
| `container.image.pullSecrets.enable` | Enable pull secrets for private registry | `false` |
| `container.image.pullSecrets.namespaces` | List of Namespaces to pull the Falcon sensor from an authenticated registry | None |
| `container.image.pullSecrets.allNamespaces` | Use Helm's lookup function to deploy the pull secret to all namespaces | `false` |
| `container.image.pullSecrets.registryConfigJSON` | base64 encoded docker config json for the pull secret | None |
| `container.sensorResources` | The requests and limits of the sensor ([see example below](#example-using-containersensorresources)) | None |
| `falcon.cid` | CrowdStrike Customer ID (CID) | None (Required) |
`falcon.cid` and `container.image.repository` are required values.
For a complete listing of configurable parameters, run the following command:
```
helm show values crowdstrike/falcon-sensor
```
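If the sensor image is hosted in a registry that requires authentication, the chart can manage the pull secret for you. A sketch with placeholder values; the base64 command matches the parameter descriptions elsewhere in this chart:
```bash
# Base64-encode the registry credentials (strip newlines so --set accepts the value)
REGISTRY_CONFIG_JSON=$(cat ~/.docker/config.json | base64 - | tr -d '\n')

helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set node.enabled=false \
--set container.enabled=true \
--set falcon.cid="<CrowdStrike_CID>" \
--set container.image.repository="<Your_Registry>/falcon-sensor" \
--set container.image.pullSecrets.enable=true \
--set container.image.pullSecrets.namespaces="ns1\,ns2" \
--set container.image.pullSecrets.registryConfigJSON="$REGISTRY_CONFIG_JSON"
```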
#### Note about using --set with lists
If you need to provide a list of values to a `--set` argument, escape the commas between the values, e.g. `--set falcon.tags="tag1\,tag2\,tag3"`.
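For example, a full install command that sets several sensor grouping tags (a sketch with placeholder values):
```bash
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set falcon.cid="<CrowdStrike_CID>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor" \
--set falcon.tags="tag1\,tag2\,tag3"
```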
#### Example using container.sensorResources
When setting `container.sensorResources`, the simplest method is to provide a values file to the `helm install` command.
Example:
```bash
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set node.enabled=false \
--set container.enabled=true \
--set falcon.cid="<CrowdStrike_CID>" \
--set container.image.repository="<Your_Registry>/falcon-sensor" \
--values values.yaml
```
Where `values.yaml` is
```yaml
container:
sensorResources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 10m
memory: 20Mi
```
Of course, one could specify all options in the `values.yaml` file and skip the `--set` options altogether:
```yaml
node:
enabled: false
container:
enabled: true
image:
repository: "<Your_Registry>/falcon-sensor"
sensorResources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 10m
memory: 20Mi
falcon:
cid: "<CrowdStrike_CID>"
```
If using a local values file is not an option, you could do this:
```bash
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set node.enabled=false \
--set container.enabled=true \
--set falcon.cid="<CrowdStrike_CID>" \
--set container.image.repository="<Your_Registry>/falcon-sensor" \
--set container.sensorResources.limits.memory="128Mi" \
--set container.sensorResources.limits.cpu="100m" \
--set container.sensorResources.requests.memory="20Mi" \
--set container.sensorResources.requests.cpu="10m"
```
### Uninstall Helm Chart
To uninstall, run the following command:
```
helm uninstall falcon-helm
```
To uninstall from a custom namespace, run the following command:
```
helm uninstall falcon-helm -n falcon-system
```
You may also want to delete the `falcon-system` namespace, since Helm will not delete it for you:
```
kubectl delete ns falcon-system
```

View File

@ -0,0 +1,9 @@
# CrowdStrike Falcon Platform
The [CrowdStrike Falcon Platform](https://www.crowdstrike.com/)
provides comprehensive breach protection for containers with scanless vulnerability management,
continuous threat detection and response, and runtime protection. When combined with compliance
enforcement and automated continuous integration/continuous delivery (CI/CD) pipeline security,
DevOps teams can securely build and run applications with speed and confidence in the cloud.
For more information, please visit [https://www.crowdstrike.com/cloud-security-products/falcon-cloud-workload-protection/](https://www.crowdstrike.com/cloud-security-products/falcon-cloud-workload-protection/)

View File

@ -0,0 +1,2 @@
falcon:
cid: 1234567890ABCDEF1234567890ABCDEF-12

View File

@ -0,0 +1,294 @@
questions:
- variable: node.enabled
description: "Deploy the Falcon Sensor to the Kubernetes nodes"
required: true
type: boolean
default: true
label: Deploy daemonset to nodes
group: "Falcon Node settings"
- variable: node.daemonset.updateStrategy
description: "Update strategy to role out new daemonset configuration to the nodes."
required: false
type: enum
options:
- RollingUpdate
- OnDelete
label: Update Strategy
group: "Falcon Node settings"
- variable: node.daemonset.maxUnavailable
description: "Sets the max unavailable nodes. Default is 1 when no value exists."
required: false
type: int
default: 1
label: Max number of unavailable nodes
group: "Falcon Node settings"
- variable: node.image.repository
description: "URL of container image repository holding containerized Falcon sensor. Defaults to 'falcon-node-sensor'."
required: true
type: string
default: falcon-node-sensor
label: Image Repository
group: "Falcon Node settings"
- variable: node.image.tag
description: "Container registry image tag. Defaults to 'latest'."
required: true
type: string
default: "latest"
label: Image Tag
group: "Falcon Node settings"
- variable: node.image.pullPolicy
description: "The default image pullPolicy. Defaults to 'Always'."
required: false
type: enum
options:
- IfNotPresent
- Always
- Never
default: Always
label: Image pullPolicy
group: "Falcon Node settings"
- variable: node.image.pullSecrets
description: "Name of the pull secret to pull the container image. Conflicts with node.image.registryConfigJSON"
required: false
type: string
label: Pull Secret Name
group: "Falcon Node settings"
- variable: node.image.registryConfigJSON
description: "Value must be base64. This setting conflicts with node.image.pullSecrets. The base64 encoded string of the docker config json for the pull secret can be gotten through `$ cat ~/.docker/config.json | base64 -`"
required: false
type: string
label: Pull Secret as a base64 string
group: "Falcon Node settings"
- variable: container.enabled
description: "Deploy the Falcon Sensor to the Kubernetes pods as a sidecar"
required: true
type: boolean
default: false
label: Deploy sidecar sensor to pods
group: "Falcon Container settings"
- variable: container.image.repository
description: "URL of container image repository holding containerized Falcon sensor. Defaults to 'falcon-sensor'."
required: true
type: string
default: falcon-sensor
label: Image Repository
group: "Falcon Container settings"
- variable: container.image.tag
description: "Container registry image tag. Defaults to 'latest'."
required: true
type: string
default: "latest"
label: Image Tag
group: "Falcon Container settings"
- variable: container.image.pullPolicy
description: "The default image pullPolicy. Defaults to 'Always'."
required: false
type: enum
options:
- IfNotPresent
- Always
- Never
default: Always
label: Image pullPolicy
group: "Falcon Container settings"
- variable: container.image.pullSecrets.enable
description: "Enable pullSecrets to get container from registry that requires authentication."
required: false
type: boolean
default: false
label: Enable pullSecrets
group: "Falcon Container settings"
- variable: container.image.pullSecrets.namespaces
description: "Configure the list of namespaces that should have access to pull the Falcon sensor from a registry that requires authentication. This is a comma separated."
required: false
type: string
show_if: "container.image.pullSecrets.enable=true"
label: List of Namespaces for pullSecret
group: "Falcon Container settings"
- variable: container.image.pullSecrets.allNamespaces
description: "Attempt to create the Falcon sensor pull secret in all Namespaces instead of using 'container.image.pullSecrets.namespaces'"
required: false
type: boolean
default: false
show_if: "container.image.pullSecrets.enable=true"
label: Create pullSecret in all Namespaces
group: "Falcon Container settings"
- variable: container.image.pullSecrets.registryConfigJSON
description: "Value must be base64. The base64 encoded string of the docker config json for the pull secret can be gotten through `$ cat ~/.docker/config.json | base64 -`"
required: false
type: string
show_if: "container.image.pullSecrets.enable=true"
label: Pull Secret as a base64 string
group: "Falcon Container settings"
- variable: container.autoCertificateUpdate
description: "Auto-update the certificates every time there is an update"
required: false
type: boolean
default: true
label: Auto-update certificates
group: "Falcon Container settings"
- variable: container.autoDeploymentUpdate
description: "Update Webhook and roll out new Deployment on upgrade"
required: false
type: boolean
default: true
label: Update the webhook on upgrade
group: "Falcon Container settings"
- variable: container.azure.enabled
description: "Enable for AKS without the pulltoken option"
required: false
type: boolean
default: false
label: Configure AKS registry configuration
group: "Falcon Container settings"
- variable: container.azure.azureConfig
description: "Path to the Kubernetes Azure config file on worker nodes"
required: false
type: string
default: "/etc/kubernetes/azure.json"
show_if: "container.azure.enabled=true"
label: Azure config file path
group: "Falcon Container settings"
- variable: container.disableNSInjection
description: "Disable injection for all Namespaces"
required: false
type: boolean
default: false
label: Disable Namespace injection
group: "Falcon Container settings"
- variable: container.disablePodInjection
description: "Disable injection for all Pods"
required: false
type: boolean
default: false
label: Disable Pod injection
group: "Falcon Container settings"
- variable: container.certExpiration
description: "Certificate validity duration in number of days"
required: false
type: int
default: 3650
label: Certificate validity
group: "Falcon Container settings"
- variable: container.injectorPort
description: "Configure the Injector Port"
required: false
type: int
default: 4433
label: Injector Port
group: "Falcon Container settings"
- variable: container.domainName
description: "For custom DNS configurations when .svc requires a domain for services"
required: false
type: string
label: Custom DNS domain name for webhook
group: "Falcon Container settings"
- variable: falcon.cid
description: "Configure your CrowdStrike Customer ID (CID)"
required: true
type: string
label: CrowdStrike Customer ID (CID)
group: "Falcon Sensor Settings"
- variable: falcon.apd
description: "App Proxy Disable (APD). Disables the Falcon sensor from using a proxy."
required: false
type: boolean
default: true
label: Enable using a proxy
group: "Falcon Sensor Settings"
- variable: falcon.aph
description: "App Proxy Hostname (APH). Uncommon in container-based deployments."
required: false
type: string
show_if: "falcon.apd=false"
label: Configure Proxy Host
group: "Falcon Sensor Settings"
- variable: falcon.app
description: "App Proxy Port (APP). Uncommon in container-based deployments."
required: false
type: string
show_if: "falcon.apd=false"
label: Configure Proxy Port
group: "Falcon Sensor Settings"
- variable: falcon.trace
description: "Options are [none|err|warn|info|debug]."
required: false
type: enum
options:
- none
- err
- warn
- info
- debug
label: Set logging trace level
default: none
group: "Falcon Sensor Settings"
- variable: falcon.feature
description: "Options to pass to the \"--feature\" flag. Options are [none,[enableLog[,disableLogBuffer[,disableOsfm[,emulateUpdate]]]]]"
required: false
type: string
label: Enable or disable certain sensor features
group: "Falcon Sensor Settings"
- variable: falcon.message_log
description: "Enable message log (true/false)"
required: false
type: boolean
default: false
label: Enable logging
group: "Falcon Sensor Settings"
- variable: falcon.billing
description: "Utilize default or metered billing. Should only be configured when needing to switch between the two."
required: false
type: enum
options:
- default
- metered
default: default
label: Configure Billing
group: "Falcon Sensor Settings"
- variable: falcon.tags
description: "Comma separated list of tags for sensor grouping. Allowed characters: all alphanumerics, '/', '-', '_', and ','."
required: false
type: string
label: Configure tags for sensor grouping
group: "Falcon Sensor Settings"
- variable: falcon.provisioning_token
description: "Used to protect the CID. Provisioning token value."
required: false
type: string
label: Set a provisioning installation token
group: "Falcon Sensor Settings"

View File

@ -0,0 +1,26 @@
Thank you for installing the CrowdStrike Falcon Helm Chart!
Access to the Falcon Linux and Container Sensor downloads at registry.crowdstrike.com is
required to complete the install of this Helm chart. If an internal registry is used instead of registry.crowdstrike.com,
the containerized sensor must be present in a container registry accessible from the Kubernetes installation.
{{- if .Values.node.enabled }}
CrowdStrike Falcon sensors will deploy across all nodes in your Kubernetes cluster after
installing this Helm chart. The default image name to deploy a kernel sensor to a node is `falcon-node-sensor`.
{{- end }}
{{- if .Values.container.enabled }}
CrowdStrike Falcon sensors will deploy across all pods as sidecars in your Kubernetes cluster after
installing this Helm chart and starting a new pod deployment for all your applications.
The default image name to deploy the pod sensor is `falcon-sensor`.
{{- end }}
When using your own registry, a very common installation error is forgetting to push the containerized
sensor to your registry prior to executing `helm install`. Please read the Helm Chart's readme
for more deployment considerations.
{{ if and (.Capabilities.APIVersions.Has "security.openshift.io/v1") .Values.container.enabled -}}
If deploying the Falcon Container Sensor on Red Hat OpenShift, push the Falcon Container sensor image
after you install the Helm Chart if you are using OpenShift's internal registry.
This is due to OpenShift requiring a valid ImageStream Tag to pull from a valid image hash in
the internal registry.
{{- end }}

View File

@ -60,3 +60,27 @@ Create the name of the service account to use
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
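{{/*
Build the sensor image reference. When node.enabled is true, use the node image
values; otherwise use the container image values. If an image digest is set it
takes precedence over the tag, and a missing "sha256:" prefix is added
automatically.
*/}}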
{{- define "falcon-sensor.image" -}}
{{- if .Values.node.enabled -}}
{{- if .Values.node.image.digest -}}
{{- if contains "sha256:" .Values.node.image.digest -}}
{{- printf "%s@%s" .Values.node.image.repository .Values.node.image.digest -}}
{{- else -}}
{{- printf "%s@%s:%s" .Values.node.image.repository "sha256" .Values.node.image.digest -}}
{{- end -}}
{{- else -}}
{{- printf "%s:%s" .Values.node.image.repository .Values.node.image.tag -}}
{{- end -}}
{{- else -}}
{{- if .Values.container.image.digest -}}
{{- if contains "sha256:" .Values.container.image.digest -}}
{{- printf "%s@%s" .Values.container.image.repository .Values.container.image.digest -}}
{{- else -}}
{{- printf "%s@%s:%s" .Values.container.image.repository "sha256" .Values.container.image.digest -}}
{{- end -}}
{{- else -}}
{{- printf "%s:%s" .Values.container.image.repository .Values.container.image.tag -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,62 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "falcon-sensor.fullname" . }}-access-role
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
rules:
{{- if .Values.container.enabled }}
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
{{- end }}
{{- if .Capabilities.APIVersions.Has "image.openshift.io/v1" }}
- apiGroups:
- ""
- image.openshift.io
resources:
- imagestreams/layers
verbs:
- get
{{- end }}
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1" }}
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
resourceNames:
{{- if .Values.node.enabled }}
- privileged
{{- end }}
{{- if .Values.container.enabled }}
- {{ include "falcon-sensor.fullname" . }}-container
{{- end }}
verbs:
- use
{{- end }}
{{- if not (.Capabilities.APIVersions.Has "security.openshift.io/v1") }}
{{- if .Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy" }}
- apiGroups:
- policy
resourceNames:
{{- if .Values.node.enabled }}
- {{ include "falcon-sensor.fullname" . }}-node
{{- end }}
{{- if .Values.container.enabled }}
- {{ include "falcon-sensor.fullname" . }}-container
{{- end }}
resources:
- podsecuritypolicies
verbs:
- use
{{- end }}
{{- end }}

View File

@ -0,0 +1,25 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "falcon-sensor.fullname" . }}-access-binding
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
subjects:
{{- if .Values.container.enabled }}
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:authenticated
{{- end }}
- kind: ServiceAccount
name: {{ .Values.serviceAccount.name }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "falcon-sensor.fullname" . }}-access-role
apiGroup: rbac.authorization.k8s.io

View File

@ -0,0 +1,35 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "falcon-sensor.fullname" . }}-config
namespace: {{ .Release.Namespace }}
labels:
app: "{{ include "falcon-sensor.name" . }}"
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
data:
FALCONCTL_OPT_CID: {{ .Values.falcon.cid }}
{{- range $key, $value := .Values.falcon }}
{{- if and ($value) (ne $key "cid") }}
FALCONCTL_OPT_{{ $key | upper }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.container.enabled }}
CP_NAMESPACE: {{ .Release.Namespace }}
FALCON_IMAGE_PULL_POLICY: "{{ .Values.container.image.pullPolicy }}"
FALCON_IMAGE: "{{ .Values.container.image.repository }}:{{ .Values.container.image.tag }}"
FALCON_INJECTOR_LISTEN_PORT: "{{ .Values.container.injectorPort }}"
{{- if .Values.container.image.pullSecrets.enable }}
FALCON_IMAGE_PULL_SECRET: {{ .Values.container.image.pullSecrets.name | default (printf "%s-pull-secret" (include "falcon-sensor.fullname" .)) }}
{{- end }}
{{- if .Values.container.disablePodInjection }}
INJECTION_DEFAULT_DISABLED: T
{{- end }}
{{- if .Values.container.sensorResources }}
FALCON_RESOURCES: '{{ toJson .Values.container.sensorResources | b64enc }}'
{{- end }}
{{- end }}

View File

@ -0,0 +1,285 @@
{{- if .Values.container.enabled }}
{{- $name := (printf "%s-injector" (include "falcon-sensor.name" .)) -}}
{{- $fullName := (printf "%s.%s.svc" $name .Release.Namespace) -}}
{{- if .Values.container.domainName }}
{{- $fullName = (printf "%s.%s.svc.%s" $name .Release.Namespace .Values.container.domainName) -}}
{{- end }}
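{{/*
Generate a CA and a signed serving certificate for the injector webhook. When
container.autoCertificateUpdate is false, reuse the TLS material already in the
cluster (the existing -tls Secret and the CA bundle on the current
MutatingWebhookConfiguration, retrieved via lookup) instead of rotating the
certificates on this render.
*/}}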
{{- $certValid := (.Values.container.certExpiration | int) -}}
{{- $altNames := list ( printf "%s" $fullName ) ( printf "%s.%s.svc" $name .Release.Namespace ) ( printf "%s.%s.svc.cluster.local" $name .Release.Namespace ) ( printf "%s.%s" $name .Release.Namespace ) ( printf "%s" $name ) -}}
{{- $ca := genCA ( printf "%s ca" .Release.Namespace ) $certValid -}}
{{- $cert := genSignedCert $fullName nil $altNames $certValid $ca -}}
{{- if not .Values.container.autoCertificateUpdate }}
{{- $tlscrt := (lookup "v1" "Secret" .Release.Namespace (printf "%s-tls" (include "falcon-sensor.name" .))).data -}}
{{- if kindIs "map" $tlscrt }}
{{- $cert = dict "Cert" (index $tlscrt "tls.crt" | b64dec ) "Key" (index $tlscrt "tls.key" | b64dec ) -}}
{{- end }}
{{- $tlsca := (lookup "admissionregistration.k8s.io/v1" "MutatingWebhookConfiguration" .Release.Namespace $name).webhooks -}}
{{- if kindIs "slice" $tlsca }}
{{- range $index, $wca := $tlsca -}}
{{- $ca = dict "Cert" ($wca.clientConfig.caBundle | b64dec) }}
{{- end }}
{{- end }}
{{- end }}
{{- $tlsCert := $cert.Cert | b64enc }}
{{- $tlsKey := $cert.Key | b64enc }}
{{- $caCert := $ca.Cert | b64enc }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "falcon-sensor.name" . }}-injector
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "falcon-sensor.name" . }}-injector
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
{{- if .Values.container.labels }}
{{- range $key, $value := .Values.container.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.container.annotations }}
annotations:
{{- range $key, $value := .Values.container.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
replicas: {{ .Values.container.replicas }}
selector:
matchLabels:
app: {{ include "falcon-sensor.name" . }}-injector
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
template:
metadata:
labels:
app: {{ include "falcon-sensor.name" . }}-injector
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
crowdstrike.com/component: crowdstrike-falcon-injector
{{- if .Values.container.labels }}
{{- range $key, $value := .Values.container.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if or (.Values.container.autoDeploymentUpdate) (.Values.container.podAnnotations) }}
annotations:
{{- if .Values.container.autoDeploymentUpdate }}
rollme: {{ randAlphaNum 5 | quote }}
{{- end }}
{{- if .Values.container.podAnnotations }}
{{- range $key, $value := .Values.container.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- end }}
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: node-role.kubernetes.io/master
operator: DoesNotExist
securityContext:
runAsNonRoot: true
{{- if .Values.container.image.pullSecrets.enable }}
imagePullSecrets:
- name: {{ .Values.container.image.pullSecrets.name | default (printf "%s-pull-secret" (include "falcon-sensor.fullname" .)) }}
{{- end }}
{{- if .Values.container.azure.enabled }}
initContainers:
- name: {{ include "falcon-sensor.name" . }}-init-container
image: "{{ include "falcon-sensor.image" . }}"
imagePullPolicy: "{{ .Values.container.image.pullPolicy }}"
command: ['bash', '-c', "cp /run/azure.json /tmp/CrowdStrike/; chmod a+r /tmp/CrowdStrike/azure.json"]
securityContext:
runAsUser: 0
runAsNonRoot: false
privileged: false
volumeMounts:
- name: {{ include "falcon-sensor.name" . }}-volume
mountPath: /tmp/CrowdStrike
- name: {{ include "falcon-sensor.name" . }}-azure-config
mountPath: /run/azure.json
readOnly: true
{{- end }}
{{- if .Values.container.gcp.enabled }}
initContainers:
- name: {{ include "falcon-sensor.name" . }}-init-container
image: "gcr.io/google.com/cloudsdktool/cloud-sdk:alpine"
imagePullPolicy: "Always"
command:
- '/bin/bash'
- '-c'
- |
curl -sS -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token' --retry 30 --retry-connrefused --retry-max-time 60 --connect-timeout 3 --fail --retry-all-errors > /dev/null && exit 0 || echo 'Retry limit exceeded. Failed to wait for metadata server to be available. Check if the gke-metadata-server Pod in the kube-system namespace is healthy.' >&2; exit 1
securityContext:
runAsUser: 0
runAsNonRoot: false
privileged: false
{{- end }}
containers:
- name: {{ include "falcon-sensor.name" . }}-injector
image: "{{ include "falcon-sensor.image" . }}"
imagePullPolicy: "{{ .Values.container.image.pullPolicy }}"
command: ["injector"]
envFrom:
- configMapRef:
name: {{ include "falcon-sensor.fullname" . }}-config
ports:
- name: https
containerPort: {{ .Values.container.injectorPort }}
volumeMounts:
- name: {{ include "falcon-sensor.name" . }}-tls-certs
mountPath: /run/secrets/tls
readOnly: true
{{- if or (.Files.Glob "certs/*.crt") (.Values.container.registryCertSecret) }}
- name: {{ include "falcon-sensor.name" . }}-registry-certs
mountPath: /etc/docker/certs.d/{{ .Release.Namespace }}-certs
readOnly: true
{{- end }}
{{- if .Values.container.azure.enabled }}
- name: {{ include "falcon-sensor.name" . }}-volume
mountPath: /tmp/CrowdStrike
readOnly: true
{{- end }}
readinessProbe:
httpGet:
path: /live
port: {{ .Values.container.injectorPort }}
scheme: HTTPS
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
path: /live
port: {{ .Values.container.injectorPort }}
scheme: HTTPS
initialDelaySeconds: 5
periodSeconds: 10
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.container.tolerations }}
tolerations:
{{- with .Values.container.tolerations }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
volumes:
- name: {{ include "falcon-sensor.name" . }}-tls-certs
secret:
secretName: {{ include "falcon-sensor.name" . }}-tls
{{- if (.Files.Glob "certs/*.crt") }}
- name: {{ include "falcon-sensor.name" . }}-registry-certs
configMap:
name: {{ include "falcon-sensor.name" . }}-registry-certs-config
{{- else if .Values.container.registryCertSecret }}
- name: {{ include "falcon-sensor.name" . }}-registry-certs
secret:
secretName: {{ .Values.container.registryCertSecret }}
{{- end }}
{{- if .Values.container.azure.enabled }}
- emptyDir: {}
name: {{ include "falcon-sensor.name" . }}-volume
- name: {{ include "falcon-sensor.name" . }}-azure-config
hostPath:
path: {{ .Values.container.azure.azureConfig }}
type: File
{{- end }}
serviceAccountName: {{ .Values.serviceAccount.name }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falcon-sensor.name" . }}-tls
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
type: Opaque
data:
tls.crt: {{ $tlsCert }}
tls.key: {{ $tlsKey }}
ca.crt: {{ $caCert }}
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: {{ include "falcon-sensor.name" . }}-injector
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
webhooks:
- name: {{ $name }}.{{ .Release.Namespace }}.svc
admissionReviewVersions:
- v1
{{- if lt (int (semver .Capabilities.KubeVersion.Version).Minor) 22 }}
- v1beta1
{{- end }}
sideEffects: None
namespaceSelector:
matchExpressions:
- key: {{ .Values.container.namespaceLabelKey }}
operator: {{ if .Values.container.disableNSInjection }}In{{ else }}NotIn{{- end }}
values:
- {{ if .Values.container.disableNSInjection }}enabled{{ else }}disabled{{- end }}
{{- if lt (int (semver .Capabilities.KubeVersion.Version).Minor) 22 }}
- key: "name"
{{- else }}
- key: kubernetes.io/metadata.name
{{- end }}
operator: "NotIn"
values:
- {{ .Release.Namespace }}
- kube-system
- kube-public
clientConfig:
{{- if .Values.container.domainName }}
url: https://{{ $fullName }}:443/mutate
{{- else }}
service:
name: {{ include "falcon-sensor.name" . }}-injector
namespace: {{ .Release.Namespace }}
path: "/mutate"
{{- end }}
caBundle: {{ $caCert }}
failurePolicy: Fail
rules:
- operations:
- CREATE
apiGroups:
- ""
apiVersions:
- v1
resources:
- pods
timeoutSeconds: 30
{{- end }}

View File

@ -0,0 +1,60 @@
{{- if not (.Capabilities.APIVersions.Has "security.openshift.io/v1") }}
{{- if lt (int (semver .Capabilities.KubeVersion.Version).Minor) 25 }}
{{- if .Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy" }}
{{- if .Values.container.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "falcon-sensor.fullname" . }}-container
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
spec:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
- AUDIT_WRITE
- CHOWN
- DAC_OVERRIDE
- FOWNER
- FSETID
- NET_BIND_SERVICE
- NET_RAW
- SETGID
- SETPCAP
- SETUID
defaultAddCapabilities:
- SYS_PTRACE
allowedCapabilities:
- SYS_PTRACE
fsGroup:
rule: RunAsAny
hostIPC: false
hostNetwork: false
hostPID: false
privileged: false
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,17 @@
{{- if and .Values.container.enabled (.Files.Glob "certs/*.crt") }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "falcon-sensor.name" . }}-registry-certs-config
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
data:
{{ (.Files.Glob "certs/*.crt").AsConfig | indent 2 }}
{{- end }}

View File

@ -0,0 +1,58 @@
{{- if .Values.container.enabled }}
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1" }}
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
name: {{ include "falcon-sensor.fullname" . }}-container
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
allowPrivilegedContainer: false
runAsUser:
type: RunAsAny
seLinuxContext:
type: MustRunAs
fsGroup:
type: MustRunAs
supplementalGroups:
type: MustRunAs
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
- AUDIT_WRITE
- CHOWN
- DAC_OVERRIDE
- FOWNER
- FSETID
- NET_BIND_SERVICE
- NET_RAW
- SETGID
- SETPCAP
- SETUID
defaultAddCapabilities:
- SYS_PTRACE
allowedCapabilities:
- SYS_PTRACE
users:
groups:
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
{{- end }}
{{- end }}

View File

@ -0,0 +1,36 @@
{{- if .Values.container.enabled }}
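{{/*
When pull secrets are enabled and no existing secret name is supplied, create a
dockerconfigjson Secret in the release namespace and, if pullSecrets.namespaces
or pullSecrets.allNamespaces is set, replicate the same Secret into those
namespaces (allNamespaces enumerates every Namespace via lookup).
*/}}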
{{- if .Values.container.image.pullSecrets.enable }}
{{- if not .Values.container.image.pullSecrets.name }}
{{- $registry := .Values.container.image.pullSecrets.registryConfigJSON }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.container.image.pullSecrets.name | default (printf "%s-pull-secret" (include "falcon-sensor.fullname" .)) }}
namespace: {{ .Release.Namespace }}
data:
.dockerconfigjson: {{ $registry }}
type: kubernetes.io/dockerconfigjson
{{- if .Values.container.image.pullSecrets.namespaces }}
{{- $name := ( .Values.container.image.pullSecrets.name | default (printf "%s-pull-secret" (include "falcon-sensor.fullname" .))) }}
{{- $myns := split "," .Values.container.image.pullSecrets.namespaces -}}
{{- if .Values.container.image.pullSecrets.allNamespaces }}
{{- $myns = list -}}
{{- range $index, $ns := (lookup "v1" "Namespace" "" "").items -}}
{{ $myns = append $myns $ns.metadata.name }}
{{- end }}
{{- end }}
{{- range $value := $myns }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ $name }}
namespace: {{ $value }}
data:
.dockerconfigjson: {{ $registry }}
type: kubernetes.io/dockerconfigjson
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,25 @@
{{- if .Values.container.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "falcon-sensor.name" . }}-injector
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
spec:
selector:
app: {{ include "falcon-sensor.name" . }}-injector
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: "container_sensor"
ports:
- name: https
port: 443
targetPort: https
{{- end }}

View File

@ -0,0 +1,72 @@
{{- if and .Values.container.enabled .Values.container.autoDeploymentUpdate }}
{{- $name := (printf "%s-injector" (include "falcon-sensor.name" .)) -}}
{{- $fullName := (printf "%s.%s.svc" $name .Release.Namespace) -}}
{{- $caCert := "" -}}
{{- $tlsca := (lookup "admissionregistration.k8s.io/v1" "MutatingWebhookConfiguration" .Release.Namespace $name).webhooks -}}
{{- if kindIs "slice" $tlsca }}
{{- $ca := dict }}
{{- range $index, $wca := $tlsca -}}
{{- $ca = dict "Cert" ($wca.clientConfig.caBundle | b64dec) }}
{{- end }}
{{- $caCert = $ca.Cert | b64enc }}
{{- end }}
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: {{ include "falcon-sensor.name" . }}-injector
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
webhooks:
- name: {{ $name }}.{{ .Release.Namespace }}.svc
failurePolicy: Ignore
admissionReviewVersions:
- v1
{{- if lt (int (semver .Capabilities.KubeVersion.Version).Minor) 22 }}
- v1beta1
{{- end }}
sideEffects: None
namespaceSelector:
matchExpressions:
- key: {{ .Values.container.namespaceLabelKey }}
operator: {{ if .Values.container.disableNSInjection }}In{{ else }}NotIn{{- end }}
values:
- {{ if .Values.container.disableNSInjection }}enabled{{ else }}disabled{{- end }}
{{- if lt (int (semver .Capabilities.KubeVersion.Version).Minor) 22 }}
- key: "name"
{{- else }}
- key: kubernetes.io/metadata.name
{{- end }}
operator: "NotIn"
values:
- {{ .Release.Namespace }}
clientConfig:
{{- if .Values.container.domainName }}
url: https://{{ $fullName }}:443/mutate
{{- else }}
service:
name: {{ include "falcon-sensor.name" . }}-injector
namespace: {{ .Release.Namespace }}
path: "/mutate"
{{- end }}
caBundle: {{ $caCert }}
rules:
- operations:
- CREATE
apiGroups:
- ""
apiVersions:
- v1
resources:
- pods
timeoutSeconds: 30
{{- end }}

View File

@ -0,0 +1,145 @@
{{- if .Values.node.enabled }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "falcon-sensor.fullname" . }}
labels:
app: "{{ include "falcon-sensor.name" . }}"
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "kernel_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
{{- if .Values.node.daemonset.labels }}
{{- range $key, $value := .Values.node.daemonset.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.node.daemonset.annotations }}
annotations:
{{- range $key, $value := .Values.node.daemonset.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: "kernel_sensor"
crowdstrike.com/provider: crowdstrike
updateStrategy:
type: {{ .Values.node.daemonset.updateStrategy }}
{{- if and (eq .Values.node.daemonset.updateStrategy "RollingUpdate") (ne (int .Values.node.daemonset.maxUnavailable) 1) }}
rollingUpdate:
maxUnavailable: {{ .Values.node.daemonset.maxUnavailable }}
{{- end }}
template:
metadata:
annotations:
{{ .Values.node.daemonset.podAnnotationKey }}: disabled
{{- range $key, $value := .Values.node.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
labels:
app: "{{ include "falcon-sensor.name" . }}"
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "kernel_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
{{- if .Values.node.daemonset.labels }}
{{- range $key, $value := .Values.node.daemonset.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
{{- if and (.Values.node.image.pullSecrets) (.Values.node.image.registryConfigJSON) }}
{{- fail "node.image.pullSecrets and node.image.registryConfigJSON cannot be used together." }}
{{- else -}}
{{- if or (.Values.node.image.pullSecrets) (.Values.node.image.registryConfigJSON) }}
imagePullSecrets:
{{- if .Values.node.image.pullSecrets }}
- name: {{ .Values.node.image.pullSecrets }}
{{- end }}
{{- if .Values.node.image.registryConfigJSON }}
- name: {{ include "falcon-sensor.fullname" . }}-pull-secret
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.node.daemonset.tolerations }}
tolerations:
{{- with .Values.node.daemonset.tolerations }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
nodeSelector:
kubernetes.io/os: linux
{{- if .Values.node.daemonset.nodeAffinity }}
affinity:
nodeAffinity:
{{- with .Values.node.daemonset.nodeAffinity }}
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
initContainers:
# This init container creates empty falconstore file so that when
# it's mounted into the sensor-node-container, k8s would just use it
# rather than creating a directory. Mounting falconstore file as
# a file volume ensures that AID is preserved across container
# restarts.
- name: init-falconstore
image: "{{ include "falcon-sensor.image" . }}"
imagePullPolicy: "{{ .Values.node.image.pullPolicy }}"
command: ["/bin/bash"]
args: [-c, 'if [ -d "/opt/CrowdStrike/falconstore" ] ; then echo "Re-creating /opt/CrowdStrike/falconstore as it is a directory instead of a file"; rm -rf /opt/CrowdStrike/falconstore; fi; mkdir -p /opt/CrowdStrike && touch /opt/CrowdStrike/falconstore']
volumeMounts:
- name: falconstore-dir
mountPath: /opt
securityContext:
runAsUser: 0
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
containers:
- name: falcon-node-sensor
image: "{{ include "falcon-sensor.image" . }}"
imagePullPolicy: "{{ .Values.node.image.pullPolicy }}"
# Various pod security context settings. Bear in mind that many of these have an impact
# on the Falcon Sensor working correctly.
#
# - User that the container will execute as. Typically necessary to run as root (0).
# - Runs the Falcon Sensor containers as privileged containers. This is required when
# running the Falcon Linux Sensor on Kubernetes nodes to properly run in the node's
# kernel and to actually protect the node.
securityContext:
runAsUser: 0
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
envFrom:
- configMapRef:
name: {{ include "falcon-sensor.fullname" . }}-config
volumeMounts:
- name: falconstore
mountPath: /opt/CrowdStrike/falconstore
volumes:
- name: falconstore-dir
hostPath:
path: /opt
type: DirectoryOrCreate
- name: falconstore
hostPath:
path: /opt/CrowdStrike/falconstore
serviceAccountName: {{ .Values.serviceAccount.name }}
terminationGracePeriodSeconds: {{ .Values.node.terminationGracePeriod }}
{{- if .Values.node.daemonset.priorityClassName }}
priorityClassName: {{ .Values.node.daemonset.priorityClassName }}
{{- end }}
hostNetwork: true
hostPID: true
hostIPC: true
{{- end }}

View File

@ -0,0 +1,45 @@
{{- if .Values.container.enabled -}}
{{- if .Values.container.networkPolicy.enabled -}}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ include "falcon-sensor.fullname" . }}-default-deny-ingress
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
spec:
podSelector: {}
policyTypes:
- Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ include "falcon-sensor.fullname" . }}-network-policy
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
spec:
ingress:
- from:
- podSelector:
matchLabels:
component: apiserver
provider: kubernetes
podSelector: {}
policyTypes:
- Ingress
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,115 @@
{{- if .Values.node.enabled }}
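{{/*
Helm post-delete hook: deploys a temporary DaemonSet that removes
/opt/CrowdStrike from each node after the chart is uninstalled, and is then
cleaned up by the hook-delete-policy.
*/}}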
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "falcon-sensor.fullname" . }}-node-cleanup
labels:
app: "{{ include "falcon-sensor.name" . }}"
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}-node-cleanup
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "kernel_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
{{- if .Values.node.daemonset.labels }}
{{- range $key, $value := .Values.node.daemonset.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
{{- if .Values.node.daemonset.annotations }}
{{- range $key, $value := .Values.node.daemonset.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}-node-cleanup
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: "kernel_sensor"
crowdstrike.com/provider: crowdstrike
template:
metadata:
annotations:
{{ .Values.node.daemonset.podAnnotationKey }}: disabled
{{- range $key, $value := .Values.node.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
labels:
app: "{{ include "falcon-sensor.name" . }}"
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}-node-cleanup
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "kernel_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
{{- if .Values.node.daemonset.labels }}
{{- range $key, $value := .Values.node.daemonset.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
{{- if and (.Values.node.image.pullSecrets) (.Values.node.image.registryConfigJSON) }}
{{- fail "node.image.pullSecrets and node.image.registryConfigJSON cannot be used together." }}
{{- else -}}
{{- if or (.Values.node.image.pullSecrets) (.Values.node.image.registryConfigJSON) }}
imagePullSecrets:
{{- if .Values.node.image.pullSecrets }}
- name: {{ .Values.node.image.pullSecrets }}
{{- end }}
{{- if .Values.node.image.registryConfigJSON }}
- name: {{ include "falcon-sensor.fullname" . }}-pull-secret
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.node.daemonset.tolerations }}
tolerations:
{{- with .Values.node.daemonset.tolerations }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
nodeSelector:
kubernetes.io/os: linux
{{- if .Values.node.daemonset.nodeAffinity }}
affinity:
nodeAffinity:
{{- with .Values.node.daemonset.nodeAffinity }}
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
initContainers:
- name: cleanup-opt-crowdstrike
image: "{{ include "falcon-sensor.image" . }}"
imagePullPolicy: "{{ .Values.node.image.pullPolicy }}"
command: ["rm", "-rf", "/opt/CrowdStrike"]
volumeMounts:
- name: opt-crowdstrike
mountPath: /opt
securityContext:
runAsUser: 0
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
containers:
- name: cleanup-sleep
image: "{{ include "falcon-sensor.image" . }}"
imagePullPolicy: "{{ .Values.node.image.pullPolicy }}"
command: ["sleep", "10"]
securityContext:
runAsUser: 0
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
volumes:
- name: opt-crowdstrike
hostPath:
path: /opt
serviceAccountName: {{ .Values.serviceAccount.name }}-node-cleanup
terminationGracePeriodSeconds: {{ .Values.node.terminationGracePeriod }}
{{- end }}

View File

@@ -0,0 +1,39 @@
{{- if not (.Capabilities.APIVersions.Has "security.openshift.io/v1") }}
{{- if .Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy" }}
{{- if lt (int (semver .Capabilities.KubeVersion.Version).Minor) 25 }}
{{- if .Values.node.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "falcon-sensor.fullname" . }}-node
labels:
app: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/name: {{ include "falcon-sensor.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "container_sensor"
crowdstrike.com/provider: crowdstrike
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
spec:
allowPrivilegeEscalation: true
readOnlyRootFilesystem: false
allowedCapabilities:
- '*'
fsGroup:
rule: RunAsAny
hostIPC: true
hostNetwork: true
hostPID: true
privileged: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,13 @@
{{- if .Values.node.enabled }}
{{- if .Values.node.image.registryConfigJSON }}
{{- $registry := .Values.node.image.registryConfigJSON }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falcon-sensor.fullname" . }}-pull-secret
namespace: {{ .Release.Namespace }}
data:
.dockerconfigjson: {{ $registry }}
type: kubernetes.io/dockerconfigjson
{{- end }}
{{- end }}

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.serviceAccount.name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "all_sensors"
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
{{- if .Values.serviceAccount.annotations }}
annotations: {{ toYaml .Values.serviceAccount.annotations | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.serviceAccount.name }}-node-cleanup
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: "kernel_sensor"
helm.sh/chart: {{ include "falcon-sensor.chart" . }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "0"

View File

@@ -0,0 +1,40 @@
{{- if .Values.testing.enabled -}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "falcon-sensor.fullname" . }}-test-access-role
labels:
{{- include "falcon-sensor.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "falcon-sensor.fullname" . }}-test-access-binding
labels:
{{- include "falcon-sensor.labels" . | nindent 4 }}
subjects:
{{- if .Values.container.enabled }}
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:authenticated
{{- end }}
- kind: ServiceAccount
name: {{ .Values.serviceAccount.name }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "falcon-sensor.fullname" . }}-test-access-role
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@@ -0,0 +1,39 @@
{{- if .Values.testing.enabled -}}
{{- if .Values.node.enabled }}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "falcon-sensor.fullname" . }}-test-ds-sensor-running"
namespace: {{ .Release.Namespace }}
labels:
{{- include "falcon-sensor.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: kubectl
image: docker.io/bitnami/kubectl
command:
- /bin/sh
- -c
- |
echo "Waiting 10 seconds to allow pod time to initialize before running test"
sleep 10
KUBECMD=$(kubectl get pods -n "{{ .Release.Namespace }}" -l "app.kubernetes.io/component=kernel_sensor" --field-selector=status.phase!=Running --no-headers 2>&1)
if ! echo "${KUBECMD}" | grep -q "No resources found"; then
echo "[\033[0;31mFAIL\033[0m]: Not all sensor pods are running"
echo "${KUBECMD}"
exit 1
else
echo "[\033[0;32mOK\033[0m]: Sensor pods are running"
exit 0
fi
securityContext:
runAsUser: 0
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
serviceAccountName: {{ .Values.serviceAccount.name }}
restartPolicy: Never
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,61 @@
{{- if .Values.testing.enabled -}}
{{- if .Values.container.enabled }}
---
apiVersion: v1
kind: Namespace
metadata:
name: busybox
namespace: busybox
---
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "falcon-sensor.fullname" . }}-test-sidecar-sensor-running"
namespace: {{ .Release.Namespace }}
labels:
{{- include "falcon-sensor.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: kubectl
image: docker.io/bitnami/kubectl
command:
- /bin/sh
- -c
- |
echo "Waiting 10 seconds to allow pod time to initialize before running test"
sleep 10
KUBECMD=$(kubectl get pods -n "{{ .Release.Namespace }}" -l "app.kubernetes.io/component=container_sensor" --field-selector=status.phase!=Running --no-headers 2>&1)
if ! echo "${KUBECMD}" | grep -q "No resources found"; then
echo "[\033[0;31mFAIL\033[0m]: Injector pod is NOT running"
echo "${KUBECMD}"
exit 1
fi
echo "[\033[0;32mOK\033[0m]: Injector pod is running"
echo "Running test pod to verify sidecar injection"
kubectl run busybox -n busybox --image=busybox --restart=Never --command sleep 120
echo "Waiting 15 seconds to allow pod time to initialize before running test"
sleep 15
KUBECMD2=$(kubectl get pods -n busybox --field-selector=status.phase!=Running -o jsonpath="{.items[*].spec.containers[*].name}")
if echo "${KUBECMD2}" | grep -q "crowdstrike-falcon-container"; then
echo "[\033[0;31mFAIL\033[0m]: crowdstrike-falcon-container sidecar container is NOT injected"
echo "${KUBECMD2}"
exit 1
fi
echo "[\033[0;32mOK\033[0m]: crowdstrike-falcon-container sidecar container is injected"
exit 0
securityContext:
runAsUser: 0
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
serviceAccountName: {{ .Values.serviceAccount.name }}
restartPolicy: Never
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,360 @@
{
"$schema": "http://json-schema.org/schema#",
"type": "object",
"properties": {
"falcon": {
"type": "object",
"required": [
"cid"
],
"properties": {
"cid": {
"type": "string",
"pattern": "^[0-9a-fA-F]{32}-[0-9a-fA-F]{2}$",
"example": [
"1234567890ABCDEF1234567890ABCDEF-12"
]
},
"backend": {
"type": [
"null",
"string"
],
"pattern": "^(kernel|bpf)$"
},
"trace": {
"type": [
"null",
"string"
],
"pattern": "^(|none|err|warn|info|debug)$"
}
}
},
"node": {
"type": "object",
"required": [
"enabled"
],
"properties": {
"daemonset": {
"type": "object",
"required": [
"updateStrategy"
],
"properties": {
"annotations": {
"type": "object"
},
"podAnnotationKey": {
"type": "string"
},
"labels": {
"type": "object"
},
"tolerations": {
"type": "array"
},
"nodeAffinity": {
"type": "object"
},
"priorityClassName": {
"type": "string"
},
"updateStrategy": {
"type": "string",
"default": "RollingUpdate",
"pattern": "^(RollingUpdate|OnDelete)$"
},
"maxUnavailable": {
"type": "integer",
"default": "1",
"pattern": "^[0-9]+$"
},
"serviceAccountName": {
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "crowdstrike-falcon-sa"
},
"annotations": {
"type": "object",
"default": {}
}
}
}
}
},
"enabled": {
"type": "boolean",
"default": "true"
},
"image": {
"type": "object",
"required": [
"repository",
"pullPolicy",
"tag"
],
"properties": {
"registryConfigJSON": {
"type": [
"null",
"string"
]
},
"pullPolicy": {
"type": "string",
"default": "Always",
"pattern": "^(Always|Never|IfNotPresent)$"
},
"pullSecrets": {
"type": [
"null",
"string"
]
},
"repository": {
"type": "string"
},
"tag": {
"type": "string",
"default": "latest"
},
"digest": {
"type": [
"null",
"string"
],
"pattern": "^sha256:[0-9a-f]{64}$"
}
}
},
"podAnnotations": {
"type": "object"
},
"terminationGracePeriod": {
"type": "integer",
"default": "30",
"pattern": "^[0-9]+$"
}
}
},
"container": {
"type": "object",
"required": [
"enabled"
],
"properties": {
"tolerations": {
"type": "array"
},
"annotations": {
"type": "object"
},
"podAnnotations": {
"type": "object"
},
"labels": {
"type": "object"
},
"azure": {
"type": "object",
"required": [
"enabled",
"azureConfig"
],
"properties": {
"enabled": {
"type": "boolean",
"default": "false"
},
"azureConfig": {
"type": "string",
"default": "/etc/kubernetes/azure.json"
}
}
},
"gcp": {
"type": "object",
"required": [
"enabled"
],
"properties": {
"enabled": {
"type": "boolean",
"default": "false"
}
}
},
"networkPolicy": {
"type": "object",
"required": [
"enabled"
],
"properties": {
"enabled": {
"type": "boolean",
"default": "false"
}
}
},
"autoCertificateUpdate": {
"type": "boolean",
"default": "true"
},
"registryCertSecret": {
"type": [
"null",
"string"
]
},
"namespaceLabelKey": {
"type": "string"
},
"autoDeploymentUpdate": {
"type": "boolean",
"default": "true"
},
"certExpiration": {
"type": "integer",
"default": "3650",
"minimum": 0
},
"injectorPort": {
"type": "integer",
"default": "4433",
"minimum": 1024,
"maximum": 32767
},
"disableNSInjection": {
"type": "boolean",
"default": "false"
},
"disablePodInjection": {
"type": "boolean",
"default": "false"
},
"enabled": {
"type": "boolean",
"default": "true"
},
"image": {
"type": "object",
"required": [
"repository",
"pullPolicy",
"tag"
],
"properties": {
"pullPolicy": {
"type": "string",
"default": "Always",
"pattern": "^(Always|Never|IfNotPresent)$"
},
"pullSecrets": {
"type": "object",
"properties": {
"enable": {
"type": "boolean",
"default": "false"
},
"name": {
"type": [
"null",
"string"
]
},
"allNamespaces": {
"type": "boolean",
"default": "false"
},
"namespaces": {
"type": [
"null",
"string"
]
},
"registryConfigJSON": {
"type": [
"null",
"string"
]
}
}
},
"repository": {
"type": "string"
},
"tag": {
"type": "string",
"default": "latest"
},
"digest": {
"type": [
"null",
"string"
],
"pattern": "^sha256:[0-9a-f]{64}$"
}
}
},
"replicas": {
"type": "integer",
"default": 1,
"minimum": 1
},
"resources": {
"type": "object",
"properties": {
"requests": {
"type": "object",
"properties": {
"cpu": {
"type": "string"
},
"memory": {
"type": "string"
}
}
}
}
}
}
},
"serviceAccount": {
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "crowdstrike-falcon-sa"
},
"annotations": {
"type": "object",
"default": {},
"examples": [
{
"iam.gke.io/gcp-service-account": "my-service-account@my-project.iam.gserviceaccount.com"
}
]
}
}
},
"testing": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"default": "false"
}
}
},
"nameOverride": {
"type": "string"
},
"fullnameOverride": {
"type": "string"
}
}
}

View File

@@ -0,0 +1,220 @@
# Default values for falcon-sensor.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
node:
# When enabled, the Helm chart deploys the Falcon Sensors to Kubernetes nodes
enabled: true
daemonset:
# Annotations to apply to the daemonset
annotations: {}
# The key that is used to handle enabling/disabling sensor injection at the pod/node level
podAnnotationKey: sensor.falcon-system.crowdstrike.com/injection
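# For example, the chart's own daemonset pods are annotated with
#   sensor.falcon-system.crowdstrike.com/injection: disabled
# so that the container sensor injector skips them.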
# Additional labels
labels: {}
# Assign a PriorityClassName to pods if set
priorityClassName: ""
tolerations:
# We want to schedule on control plane nodes where they are accessible
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
# Future taint for K8s >=1.24
- key: "node-role.kubernetes.io/control-plane"
operator: "Exists"
effect: "NoSchedule"
- key: "kubernetes.azure.com/scalesetpriority"
operator: "Equal"
value: "spot"
effect: "NoSchedule"
# Daemonsets automatically get additional tolerations: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
# Allow setting additional node selections e.g. processor type
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/arch
# operator: In
# values:
# - amd64
nodeAffinity: {}
# Update strategy to roll out new daemonset configuration to the nodes.
updateStrategy: RollingUpdate
# Sets the maximum number of nodes that can be unavailable during a rolling update. Defaults to 1 when no value is set.
maxUnavailable: 1
image:
repository: falcon-node-sensor
pullPolicy: Always
pullSecrets:
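# Example (hypothetical name of an existing image pull secret in the release namespace):
# pullSecrets: my-pull-secret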
# Overrides the image tag. In general, tags should not be used (including semver tags or `latest`). This variable is provided for those
# who have yet to move off of using tags. The sha256 digest should be used in place of tags for increased security and image immutability.
tag: "latest"
# Setting a digest will override any tag and should be used instead of tags.
#
# Example digest variable configuration:
# digest: sha256:ffdc91f66ef8570bd7612cf19145563a787f552656f5eec43cd80ef9caca0398
digest:
# Value must be base64 encoded. This setting conflicts with node.image.pullSecrets.
# The base64 encoded string of the Docker config JSON for the pull secret can be
# generated with:
# $ cat ~/.docker/config.json | base64 -
registryConfigJSON:
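# A sketch of setting this at install time (assumes GNU coreutils base64, where
# -w 0 disables line wrapping so the value stays on one line):
# helm install ... --set node.image.registryConfigJSON=$(base64 -w 0 < ~/.docker/config.json)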
podAnnotations: {}
# How long to wait for Falcon pods to stop gracefully
terminationGracePeriod: 30
container:
# When enabled, the Helm chart deploys the Falcon Container Sensor to Pods through Webhooks
enabled: false
# Configure the number of replicas for the mutating webhook backend
replicas: 1
# Auto update the certificates every time there is an update
autoCertificateUpdate: true
# Update Webhook and roll out new Deployment on upgrade
autoDeploymentUpdate: true
# For AKS without the pulltoken option
azure:
enabled: false
# Path to the Kubernetes Azure config file on worker nodes
azureConfig: /etc/kubernetes/azure.json
# GCP GKE workload identity init container
gcp:
enabled: false
# Enable Network Policies within the Injector namespace to allow ingress
networkPolicy:
enabled: false
# Disable injection for all Namespaces
disableNSInjection: false
# Disable injection for all Pods
disablePodInjection: false
# Certificate validity duration in number of days
certExpiration: 3650
# Configure the Injector Port
injectorPort: 4433
# Configure the requests and limits of the sensor
sensorResources:
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 10m
# memory: 20Mi
# For custom DNS configurations when .svc requires a domain for services.
# For example, if service.my-namespace.svc doesn't resolve and the cluster uses
# service.my-namespace.svc.testing.io, you would add testing.io as the value below.
# Otherwise, keep this blank.
domainName:
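# Example (from the scenario above):
# domainName: testing.io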
# Provide a Secret containing CA certificate files.
# All CA certificates need to be a valid secret key, and have the extension ".crt"
# Example: kubectl create secret generic external-registry-cas --from-file=/tmp/thawte-Primary-Root-CA.crt --from-file=/tmp/DigiCert-Global-Root-CA.crt
#
# registryCertSecret: external-registry-cas
registryCertSecret:
# The key that is used to handle enabling/disabling sensor injection at the namespace level
namespaceLabelKey: sensor.falcon-system.crowdstrike.com/injection
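# Illustrative usage (assumes a label value of "disabled" excludes the namespace,
# mirroring the pod annotation key above):
#   kubectl label namespace my-namespace sensor.falcon-system.crowdstrike.com/injection=disabled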
image:
repository: falcon-sensor
pullPolicy: Always
# Set to true if connecting to a registry that requires authentication
pullSecrets:
enable: false
name:
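# Example (hypothetical secret name):
# name: my-registry-secret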
# Configure the list of namespaces that should have access to pull the Falcon
# sensor from a registry that requires authentication. This is a comma separated
# list. For example:
#
# namespaces: ns1,ns2,ns3
namespaces:
# Attempt to create the Falcon sensor pull secret in all Namespaces
# instead of using "container.image.pullSecrets.namespaces"
allNamespaces: false
# Value must be base64 encoded.
# The base64 encoded string of the Docker config JSON for the pull secret can be
# generated with:
# $ cat ~/.docker/config.json | base64 -
registryConfigJSON:
# Overrides the image tag. In general, tags should not be used (including semver tags or `latest`). This variable is provided for those
# who have yet to move off of using tags. The sha256 digest should be used in place of tags for increased security and image immutability.
tag: "latest"
# Setting a digest will override any tag and should be used instead of tags.
#
# Example digest variable configuration:
# digest: sha256:ffdc91f66ef8570bd7612cf19145563a787f552656f5eec43cd80ef9caca0398
digest:
# Annotations to apply to the injector deployment
annotations: {}
# Additional labels to apply to the injector deployment
labels: {}
# Annotations to apply to the injector pods
podAnnotations: {}
tolerations: []
resources:
# limits:
# cpu: 100m
# memory: 128Mi
requests:
cpu: 10m
memory: 20Mi
serviceAccount:
name: crowdstrike-falcon-sa
annotations: {}
# Deploys the test suite during install for testing purposes.
testing:
enabled: false
falcon:
cid:
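# Example CID format (taken from the chart's values schema):
# cid: 1234567890ABCDEF1234567890ABCDEF-12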
apd:
aph:
app:
trace: none
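# Allowed trace values (per the chart's values schema): none, err, warn, info, debug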
feature:
backend: kernel
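# Allowed backend values (per the chart's values schema): kernel or bpf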
message_log:
billing:
tags:
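# Example (illustrative values; allowed characters are alphanumerics, '/', '-', '_', and ','):
# tags: "cluster1,production"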
provisioning_token:
# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""

View File

@@ -1,90 +0,0 @@
# CrowdStrike Falcon Helm Chart
[Falcon](https://www.crowdstrike.com/) is the [CrowdStrike](https://www.crowdstrike.com/)
platform purpose-built to stop breaches via a unified set of cloud-delivered
technologies that prevent all types of attacks — including malware and much
more.
# Kubernetes Cluster Compatibility
The Falcon Helm chart has been tested to deploy on the following Kubernetes distributions:
* Amazon Elastic Kubernetes Service (EKS)
* Azure Kubernetes Service (AKS) - Linux Nodes Only
* Google Kubernetes Engine (GKE)
* Rancher K3s
* Nodes must be Linux distributions supported by CrowdStrike. See [https://falcon.crowdstrike.com/support/documentation/20/falcon-sensor-for-linux#operating-systems](https://falcon.crowdstrike.com/support/documentation/20/falcon-sensor-for-linux#operating-systems) for supported Linux distributions and kernels.
* Red Hat OpenShift Container Platform 4.6+
# Dependencies
1. Requires an x86_64 Kubernetes cluster
1. Must be a CrowdStrike customer with access to the Falcon Linux Sensor and Falcon Container downloads.
1. Before deploying the Helm chart, you should have a Falcon Linux Sensor in the container registry before installing the Helm Chart. See the Deployment Considerations for more.
1. Helm 3.x is installed and supported by the Kubernetes vendor.
# Deployment Considerations
To ensure a successful deployment, you will want to ensure that:
1. By default, the Helm Chart installs in the `default` namespace. Best practice for deploying to Kubernetes is to create a new namespace. This can be done by adding `-n falcon-system --create-namespace` to your `helm install` command.
1. You have access to a containerized falcon sensor image. This is most likely through a private image registry on your network or cloud provider. See [https://github.com/CrowdStrike/Dockerfiles](https://github.com/CrowdStrike/Dockerfiles) as an example of how to build a Falcon sensor for your registry.
1. The Falcon Linux Sensor (not the Falcon Container) should be used in the container image to deploy to Kubernetes nodes.
1. When deploying the Falcon Linux Sensor to a node, the container image should match the node's operating system. For example, if the node is running Red Hat Enterprise Linux 8.2, the container image should be based on Red Hat Enterprise Linux 8.2, etc. This is important to ensure sensor and image compatibility with the base node operating system.
1. You must have sufficient permissions to deploy Helm Charts to the cluster. This typically requires cluster admin privileges.
1. Only deploying to Kubernetes nodes is supported at this time.
1. When deploying the Falcon Linux Sensor as a container to Kubernetes nodes, it is a requirement that the Falcon Sensor run as a privileged container so that the Sensor can properly work with the kernel. If this is unacceptable, you can install the Falcon Linux Sensor (still runs with privileges) using an RPM or DEB package on the nodes themselves. This assumes that you have the capability to actually install RPM or DEB packages on the nodes. If you do not have this capability and you want to protect the nodes, you have to install using a privileged container.
1. CrowdStrike's Helm Operator is a project, not a product, and is released to the community as a way to automate sensor deployment to Kubernetes clusters. The upstream repository for this project is [https://github.com/CrowdStrike/falcon-helm](https://github.com/CrowdStrike/falcon-helm).
# Installation
### Add the CrowdStrike Falcon Helm repository
```
helm repo add crowdstrike https://crowdstrike.github.io/falcon-helm
```
### Install CrowdStrike Falcon Helm Chart
```
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
--set falcon.cid="<CrowdStrike_CID>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor"
```
The above command installs the CrowdStrike Falcon Helm Chart with the release name `falcon-helm` in the namespace your `kubectl` context is currently set to.
You can also install into a custom namespace by running the following:
```
helm upgrade --install falcon-helm crowdstrike/falcon-sensor \
-n falcon-system --create-namespace \
--set falcon.cid="<CrowdStrike_CID>" \
--set node.image.repository="<Your_Registry>/falcon-node-sensor"
```
For more details please see the [falcon-helm](https://github.com/CrowdStrike/falcon-helm) repository.
## Node Configuration
The following table lists the more common configurable parameters of the chart and their default values for installing on a Kubernetes node.
| Parameter | Description | Default |
|:--------------------------------|:---------------------------------------------------------------------|:----------------------------------------- |
| `node.enabled` | Enable installation on the Kubernetes node | `true` |
| `node.image.repository` | Falcon Sensor Node registry/image name | `falcon-node-sensor` |
| `node.image.tag` | The version of the official image to use | `latest` |
| `node.image.pullPolicy` | Policy for updating images | `Always` |
| `node.image.pullSecrets` | Pull secrets for private registry | `{}` |
| `falcon.cid` | CrowdStrike Customer ID (CID) | None (Required) |
`falcon.cid` and `node.image.repository` are required values.
### Uninstall Helm Chart
To uninstall, run the following command:
```
helm uninstall falcon-helm
```
To uninstall from a custom namespace, run the following command:
```
helm uninstall falcon-helm -n falcon-system
```

View File

@@ -1,9 +0,0 @@
# CrowdStrike Falcon
[CrowdStrike](https://www.crowdstrike.com/) [Container Security](https://www.crowdstrike.com/cloud-security-products/falcon-cloud-workload-protection/)
comes complete with vulnerability management, continuous
threat detection and response, and runtime protection, combined with compliance
enforcement and automated continuous integration/continuous delivery (CI/CD) pipeline security, enabling
DevOps teams to stay secure while building in the cloud.
For more information, please visit [https://www.crowdstrike.com/cloud-security-products/falcon-cloud-workload-protection/](https://www.crowdstrike.com/cloud-security-products/falcon-cloud-workload-protection/)

View File

@@ -1,2 +0,0 @@
falcon:
cid: 123456789TESTS-00

View File

@@ -1,97 +0,0 @@
questions:
- variable: node.image.repository
description: "URL of container image repository holding containerized Falcon sensor. Defaults to 'falcon-node-sensor'."
required: true
type: string
default: falcon-node-sensor
label: Container Image Repository
group: "Node Container Images"
- variable: node.image.tag
description: "Container registry image tag. Defaults to 'latest'."
required: true
type: string
default: "latest"
label: Container Image Tag
group: "Node Container Images"
- variable: falcon.cid
description: "Passed to falconctl as \"--cid=\"{uuid string}\"\""
required: true
type: string
label: CrowdStrike Customer ID (CID)
group: "Falcon Sensor Node Settings"
- variable: falcon.apd
description: "App Proxy Disable. Passed to falconctl as \"--apt=true\" or \"--apt=false\"."
required: false
type: boolean
default: false
label: Disable using a proxy
group: "Falcon Sensor Node Settings"
- variable: falcon.aph
description: "App Proxy Hostname (APH). Uncommon in container-based deployments. Passed to falconctl as \"--aph <app proxy host name>\""
required: false
type: string
label: Configure Proxy Host
group: "Falcon Sensor Node Settings"
- variable: falcon.app
description: "App Proxy Port (APP). Uncommon in container-based deployments. Passed to falconctl as \"--app=<app proxy port>\""
required: false
type: string
label: Configure Proxy Port
group: "Falcon Sensor Node Settings"
- variable: falcon.trace
description: "Options are [none|err|warn|info|debug]. Passed to falconctl as \"--trace=[none|err|warn|info|debug]\""
required: false
type: string
label: Set logging trace level
default: "none"
group: "Falcon Sensor Node Settings"
- variable: falcon.feature
description: "Options to pass to the \"--feature\" flag. Options are [none,[enableLog[,disableLogBuffer[,disableOsfm[,emulateUpdate]]]]]"
required: false
type: string
label: Enable or disable certain sensor features
group: "Falcon Sensor Node Settings"
- variable: falcon.update
description: "SIGHUP the sensor for immediate trace/feature update."
required: false
type: boolean
default: false
label: Update sensor immediately
group: "Falcon Sensor Node Settings"
- variable: falcon.message_log
description: "Enable message log (true/false)"
required: false
type: boolean
default: false
label: Enable logging
group: "Falcon Sensor Node Settings"
- variable: falcon.billing
description: "Utilize default or metered billing. Should only be configured when needing to switch between the two. Options are: [default|metered]"
required: false
type: string
label: Configure Billing
group: "Falcon Sensor Node Settings"
- variable: falcon.tags
description: "Comma separated list of tags for sensor grouping. Allowed characters: all alphanumerics, '/', '-', '_', and ','."
required: false
type: string
label: Configure tags for sensor grouping
group: "Falcon Sensor Node Settings"
- variable: falcon.provisioning_token
description: "Used to protect the CID. Provisioning token value."
required: false
type: string
label: Set a provisioning installation token
group: "Falcon Sensor Node Settings"

View File

@@ -1,10 +0,0 @@
Thank you for installing the CrowdStrike Falcon Helm Chart!
Access to the Falcon Linux and Container Sensor downloads at https://falcon.crowdstrike.com/hosts/sensor-downloads is
required to complete the install of this Helm chart. This access is provided automatically to all active CrowdStrike customers.
Additionally, a containerized sensor must be present in a container registry accessible from the Kubernetes installation.
Sample Dockerfiles are available at https://github.com/CrowdStrike/Dockerfiles.
CrowdStrike Falcon sensors will deploy across all nodes in your Kubernetes cluster after
installing this Helm chart. An extremely common installation error is forgetting to add
your containerized sensor to your local image registry prior to executing
`helm install`. The default image name to deploy a kernel sensor to a node is `falcon-node-sensor`.

View File

@@ -1,17 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "falcon-sensor.fullname" . }}-config
namespace: {{ .Release.Namespace }}
labels:
app: "{{ include "falcon-sensor.fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
data:
FALCONCTL_OPT_CID: {{ .Values.falcon.cid }}
{{- range $key, $value := .Values.falcon }}
{{- if and ($value) (ne $key "cid") }}
FALCONCTL_OPT_{{ $key | upper }}: {{ $value | quote }}
{{- end }}
{{- end }}

View File

@@ -1,134 +0,0 @@
{{- if .Values.node.enabled }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "falcon-sensor.fullname" . }}
labels:
name: {{ include "falcon-sensor.fullname" . }}
app: {{ include "falcon-sensor.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- if .Values.node.daemonset.labels }}
{{- range $key, $value := .Values.node.daemonset.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.node.daemonset.annotations }}
annotations:
{{- range $key, $value := .Values.node.daemonset.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
name: {{ include "falcon-sensor.fullname" . }}
app: {{ include "falcon-sensor.fullname" . }}
release: {{ .Release.Name | quote }}
updateStrategy:
type: {{ .Values.node.daemonset.updateStrategy }}
template:
metadata:
annotations:
sensor.falcon-system.crowdstrike.com/injection: disabled
{{- range $key, $value := .Values.node.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
labels:
name: {{ include "falcon-sensor.fullname" . }}
app: {{ include "falcon-sensor.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- if .Values.node.daemonset.labels }}
{{- range $key, $value := .Values.node.daemonset.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
{{- with .Values.node.image.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
tolerations:
# this toleration is to have the daemonset runnable on master nodes
- key: node-role.kubernetes.io/master
effect: NoSchedule
nodeSelector:
beta.kubernetes.io/os: linux
initContainers:
# This init container creates empty falconstore file so that when
# it's mounted into the sensor-node-container, k8s would just use it
# rather than creating a directory. Mounting falconstore file as
# a file volume ensures that AID is preserved across container
# restarts.
- name: init-falconstore
image: busybox
args: [/bin/sh, -c, 'touch /var/lib/crowdstrike/falconstore']
volumeMounts:
- name: falconstore-dir
mountPath: /var/lib/crowdstrike
containers:
- name: falcon-node-sensor
image: "{{ .Values.node.image.repository }}:{{ .Values.node.image.tag }}"
imagePullPolicy: "{{ .Values.node.image.pullPolicy }}"
volumeMounts:
- name: dev
mountPath: /dev
- name: var-run
mountPath: /var/run
- name: etc
mountPath: /etc
- name: var-log
mountPath: /var/log
- name: falconstore
mountPath: /opt/CrowdStrike/falconstore
# Various pod security context settings. Bear in mind that many of these have an impact
# on the Falcon Sensor working correctly.
#
# - User that the container will execute as. Typically necessary to run as root (0).
# - Runs the Falcon Sensor containers as privileged containers. This is required when
# running the Falcon Linux Sensor on Kubernetes nodes to properly run in the node's
# kernel and to actually protect the node.
securityContext:
runAsUser: 0
privileged: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
envFrom:
- configMapRef:
name: {{ include "falcon-sensor.fullname" . }}-config
# This spits out logs from sensor-node-container to stdout so that they
# are routed through k8s log driver.
- name: log
image: busybox
args: [/bin/sh, -c, 'tail -n1 -f /var/log/falcon-sensor.log']
volumeMounts:
- name: var-log
mountPath: /var/log
readOnly: True
volumes:
- name: dev
hostPath:
path: /dev
- name: etc
hostPath:
path: /etc
- name: var-run
hostPath:
path: /var/run
- name: var-log
emptyDir: {}
- name: falconstore
hostPath:
path: /var/lib/crowdstrike/falconstore
- name: falconstore-dir
hostPath:
path: /var/lib/crowdstrike
terminationGracePeriodSeconds: {{ .Values.node.terminationGracePeriod }}
hostNetwork: true
hostPID: true
hostIPC: true
{{- end }}

View File

@@ -1,50 +0,0 @@
# Default values for falcon-sensor.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
node:
# When enabled, the Helm chart deploys the Falcon Sensors to Kubernetes nodes
enabled: true
daemonset:
# Annotations to apply to the daemonset
annotations: {}
# Additional labels
labels: {}
updateStrategy: RollingUpdate
image:
repository: falcon-node-sensor
pullPolicy: Always
pullSecrets: {}
# Overrides the image tag whose default is the chart appVersion.
tag: "latest"
# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""
podAnnotations: {}
# How long to wait for Falcon pods to stop gracefully
terminationGracePeriod: 10
falcon:
cid:
aid:
apd:
aph:
app:
trace:
feature:
update:
message_log:
billing:
tags:
assert:
memfail_grace_period:
memfail_every_n:
provisioning_token:

View File

@@ -5320,6 +5320,38 @@ entries:
- assets/f5/f5-bigip-ctlr-0.0.1901.tgz
version: 0.0.1901
falcon-sensor:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: CrowdStrike Falcon Platform
catalog.cattle.io/kube-version: '>1.15.0-0'
catalog.cattle.io/release-name: falcon-sensor
apiVersion: v2
appVersion: 1.18.1
created: "2022-12-16T19:58:18.802102-05:00"
description: A Helm chart to deploy CrowdStrike Falcon sensors into Kubernetes
clusters.
digest: 6eb0029c3e7866bcd0aab8c094d34c9bf5dd13146a5270b264ac5e3ce8192844
home: https://crowdstrike.com
icon: https://raw.githubusercontent.com/CrowdStrike/falcon-helm/main/images/crowdstrike-logo.svg
keywords:
- CrowdStrike
- Falcon
- EDR
- kubernetes
- security
- monitoring
- alerting
kubeVersion: '>1.15.0-0'
maintainers:
- email: integrations@crowdstrike.com
name: CrowdStrike Solutions Architecture
name: falcon-sensor
sources:
- https://github.com/CrowdStrike/falcon-helm
type: application
urls:
- assets/crowdstrike/falcon-sensor-1.18.1.tgz
version: 1.18.1
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: CrowdStrike Falcon Platform
@@ -5349,7 +5381,7 @@ entries:
- https://github.com/CrowdStrike/falcon-helm
type: application
urls:
- assets/falcon-sensor/falcon-sensor-0.9.300.tgz
- assets/crowdstrike/falcon-sensor-0.9.300.tgz
version: 0.9.300
federatorai:
- annotations:

View File

@@ -0,0 +1,4 @@
HelmRepo: https://crowdstrike.github.io/falcon-helm/
HelmChart: falcon-sensor
Vendor: CrowdStrike
DisplayName: CrowdStrike Falcon Platform

View File

@@ -1,10 +0,0 @@
--- charts-original/Chart.yaml
+++ charts/Chart.yaml
@@ -20,3 +20,7 @@
- https://github.com/CrowdStrike/falcon-helm
type: application
version: 0.9.3
+annotations:
+ catalog.cattle.io/certified: partner
+ catalog.cattle.io/release-name: falcon-helm
+ catalog.cattle.io/display-name: CrowdStrike Falcon Platform

View File

@@ -1,103 +0,0 @@
--- charts-original/questions.yaml
+++ charts/questions.yaml
@@ -16,39 +16,39 @@
group: "Node Container Images"
- variable: falcon.cid
- description: "CrowdStrike Customer ID (CID). Passed to falconctl as \"--cid=\"{uuid string}\"\""
+ description: "Passed to falconctl as \"--cid=\"{uuid string}\"\""
required: true
type: string
- label: --cid
+ label: CrowdStrike Customer ID (CID)
group: "Falcon Sensor Node Settings"
- variable: falcon.apd
- description: "Description goes here. Passed to falconctl as \"--apt=true\" or \"--apt=false\"."
+ description: "App Proxy Disable. Passed to falconctl as \"--apt=true\" or \"--apt=false\"."
required: false
type: boolean
default: false
- label: --apt
+ label: Disable using a proxy
group: "Falcon Sensor Node Settings"
- variable: falcon.aph
description: "App Proxy Hostname (APH). Uncommon in container-based deployments. Passed to falconctl as \"--aph <app proxy host name>\""
required: false
type: string
- label: --aph
+ label: Configure Proxy Host
group: "Falcon Sensor Node Settings"
- variable: falcon.app
description: "App Proxy Port (APP). Uncommon in container-based deployments. Passed to falconctl as \"--app=<app proxy port>\""
required: false
type: string
- label: --app
+ label: Configure Proxy Port
group: "Falcon Sensor Node Settings"
- variable: falcon.trace
- description: "Set trace level. Options are [none|err|warn|info|debug]. Passed to falconctl as \"--trace=[none|err|warn|info|debug]\""
+ description: "Options are [none|err|warn|info|debug]. Passed to falconctl as \"--trace=[none|err|warn|info|debug]\""
required: false
type: string
- label: --trace
+ label: Set logging trace level
default: "none"
group: "Falcon Sensor Node Settings"
@@ -56,7 +56,7 @@
description: "Options to pass to the \"--feature\" flag. Options are [none,[enableLog[,disableLogBuffer[,disableOsfm[,emulateUpdate]]]]]"
required: false
type: string
- label: --feature
+ label: Enable or disable certain sensor features
group: "Falcon Sensor Node Settings"
- variable: falcon.update
@@ -64,7 +64,7 @@
required: false
type: boolean
default: false
- label: --update
+ label: Update sensor immediately
group: "Falcon Sensor Node Settings"
- variable: falcon.message_log
@@ -72,27 +72,26 @@
required: false
type: boolean
default: false
- label: --message-log
+ label: Enable logging
group: "Falcon Sensor Node Settings"
- variable: falcon.billing
- description: "Utilize default or metered billing."
+ description: "Utilize default or metered billing. Should only be configured when needing to switch between the two. Options are: [default|metered]"
required: false
- type: boolean
- default: true
- label: --billing
+ type: string
+ label: Configure Billing
group: "Falcon Sensor Node Settings"
- variable: falcon.tags
description: "Comma separated list of tags for sensor grouping. Allowed characters: all alphanumerics, '/', '-', '_', and ','."
required: false
type: string
- label: --tags
+ label: Configure tags for sensor grouping
group: "Falcon Sensor Node Settings"
- variable: falcon.provisioning_token
- description: "Provisioning token value."
+ description: "Used to protect the CID. Provisioning token value."
required: false
type: string
- label: --provisioning-token
+ label: Set a provisioning installation token
group: "Falcon Sensor Node Settings"

View File

@@ -1,2 +0,0 @@
url: https://github.com/CrowdStrike/falcon-helm/releases/download/0.9.3/falcon-sensor-0.9.3.tgz
packageVersion: 00