Added chart versions:

  amd/amd-gpu:
    - 0.16.0
  instana/instana-agent:
    - 2.0.9
  new-relic/nri-bundle:
    - 5.0.106
  traefik/traefik:
    - 34.1.0
pull/1099/head
github-actions[bot] 2025-01-16 00:05:22 +00:00
parent 9dbba7b23b
commit 9ec6f738f5
738 changed files with 95181 additions and 1 deletion

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,6 @@
dependencies:
- name: node-feature-discovery
  repository: https://kubernetes-sigs.github.io/node-feature-discovery/charts
  version: 0.17.1
digest: sha256:5e32e06e85ae6df9d5e0c3f433a23eede7952d243cdd92e013f4c4279e5a08ea
generated: "2025-01-15T21:25:40.048914523Z"

View File

@@ -0,0 +1,28 @@
annotations:
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: AMD GPU Device Plugin
  catalog.cattle.io/kube-version: '>= 1.18.0-0'
  catalog.cattle.io/release-name: ""
apiVersion: v2
appVersion: 1.31.0.2
dependencies:
- condition: nfd.enabled
  name: node-feature-discovery
  repository: https://kubernetes-sigs.github.io/node-feature-discovery/charts
  version: '>= 0.8.1-0'
description: A Helm chart for deploying Kubernetes AMD GPU device plugin
home: https://github.com/ROCm/k8s-device-plugin
icon: file://assets/icons/amd-gpu.png
keywords:
- kubernetes
- cluster
- hardware
- gpu
kubeVersion: '>= 1.18.0-0'
maintainers:
- name: Kenny Ho <Kenny.Ho@amd.com>
name: amd-gpu
sources:
- https://github.com/ROCm/k8s-device-plugin
type: application
version: 0.16.0
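
The `nfd.enabled` condition above gates the bundled node-feature-discovery subchart. A minimal values sketch (key names taken from the chart's values table further down; the node label is the one NFD publishes for AMD GPU PCI devices) that enables NFD and restricts the plugin to GPU nodes:

# values.yaml -- a sketch, assuming the chart's default values layout
nfd:
  enabled: true               # deploy the node-feature-discovery subchart
node_selector_enabled: true   # schedule the plugin only on matching nodes
node_selector:
  feature.node.kubernetes.io/pci-0300_1002.present: "true"   # NFD label for AMD (vendor 1002) display-class devices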

View File

@@ -0,0 +1,40 @@
# AMD GPU Helm Chart
![Version: 0.16.0](https://img.shields.io/badge/Version-0.16.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.31.0.2](https://img.shields.io/badge/AppVersion-1.31.0.2-informational?style=flat-square)
A Helm chart for deploying Kubernetes AMD GPU device plugin
## Requirements
Kubernetes: `>= 1.18.0`
## Optional Dependencies
| Repository | Name | Version |
|------------|------|---------|
| https://kubernetes-sigs.github.io/node-feature-discovery/charts | node-feature-discovery | 0.8.1 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| dp.image.repository | string | `"docker.io/rocm/k8s-device-plugin"` | |
| dp.image.tag | string | `""` | |
| imagePullSecrets | list | `[]` | |
| labeller.enabled | bool | `false` | |
| lbl.image.repository | string | `"docker.io/rocm/k8s-device-plugin"` | |
| lbl.image.tag | string | `"labeller-latest"` | |
| nfd.enabled | bool | `false` | |
| node_selector_enabled | bool | `false` | |
| node_selector."feature.node.kubernetes.io/pci-0300_1002.present" | string | `"true"` | |
| securityContext.allowPrivilegeEscalation | bool | `false` | |
| securityContext.capabilities.drop[0] | string | `"ALL"` | |
| tolerations[0].key | string | `"CriticalAddonsOnly"` | |
| tolerations[0].operator | string | `"Exists"` | |
## More information
https://github.com/ROCm/k8s-device-plugin
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.5.0](https://github.com/norwoodj/helm-docs/releases/v1.5.0)
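
To complement the table, a hedged override sketch for the image- and labeller-related values (the pinned tag matches the chart's appVersion; treat that as an assumption, the default empty tag falls back to the chart's own choice):

# values.yaml -- a sketch using only keys documented above
dp:
  image:
    repository: docker.io/rocm/k8s-device-plugin
    tag: "1.31.0.2"          # assumption: pin to the chart's appVersion rather than the default ""
labeller:
  enabled: true              # also run the node labeller
lbl:
  image:
    tag: labeller-latest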

View File

@@ -0,0 +1,17 @@
# AMD GPU Helm Chart
[Kubernetes][k8s] [device plugin][dp] implementation that enables the registration of AMD GPUs in a container cluster for compute workloads.
More information about [RadeonOpenCompute (ROCm)][rocm]
## Prerequisites
* [ROCm capable machines][sysreq]
* [ROCm kernel][rock] ([Installation guide][rocminstall]) or latest AMD GPU Linux driver ([Installation guide][amdgpuinstall])
[dp]: https://kubernetes.io/docs/concepts/cluster-administration/device-plugins/
[k8s]: https://kubernetes.io
[rocm]: https://docs.amd.com/en/latest/what-is-rocm.html
[rock]: https://github.com/ROCm/ROCK-Kernel-Driver
[rocminstall]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html
[amdgpuinstall]: https://amdgpu-install.readthedocs.io/en/latest/
[sysreq]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html
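
Once the plugin registers with the kubelet, AMD GPUs become schedulable as an extended resource; a minimal pod sketch requesting one GPU (`amd.com/gpu` is the resource name this plugin advertises; the image and names are assumptions, any ROCm-capable image works):

apiVersion: v1
kind: Pod
metadata:
  name: rocm-test                           # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: rocm-test
      image: docker.io/rocm/rocm-terminal   # assumption: example ROCm userspace image
      command: ["rocminfo"]                 # prints the GPUs visible to the container
      resources:
        limits:
          amd.com/gpu: 1                    # request one AMD GPU from the device plugin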

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,14 @@
apiVersion: v2
appVersion: v0.17.1
description: 'Detects hardware features available on each node in a Kubernetes cluster,
  and advertises those features using node labels. '
home: https://github.com/kubernetes-sigs/node-feature-discovery
keywords:
- feature-discovery
- feature-detection
- node-labels
name: node-feature-discovery
sources:
- https://github.com/kubernetes-sigs/node-feature-discovery
type: application
version: 0.17.1

View File

@@ -0,0 +1,10 @@
# Node Feature Discovery
Node Feature Discovery (NFD) is a Kubernetes add-on for detecting hardware
features and system configuration. Detected features are advertised as node
labels. NFD provides flexible configuration and extension points for a wide
range of vendor and application specific node labeling needs.
See
[NFD documentation](https://kubernetes-sigs.github.io/node-feature-discovery/v0.17/deployment/helm.html)
for deployment instructions.
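
As a concrete illustration, a node labeled by nfd-worker ends up carrying entries like the following (a sketch; the actual labels depend on the hardware detected):

apiVersion: v1
kind: Node
metadata:
  name: worker-0                                              # hypothetical node
  labels:
    feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"
    feature.node.kubernetes.io/kernel-version.major: "6"
    feature.node.kubernetes.io/pci-0300_1002.present: "true"  # the label the amd-gpu chart selects on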

View File

@@ -0,0 +1,711 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.3
name: nodefeatures.nfd.k8s-sigs.io
spec:
group: nfd.k8s-sigs.io
names:
kind: NodeFeature
listKind: NodeFeatureList
plural: nodefeatures
singular: nodefeature
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: |-
NodeFeature resource holds the features discovered for one node in the
cluster.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Specification of the NodeFeature, containing features discovered
for a node.
properties:
features:
description: Features is the full "raw" features data that has been
discovered.
properties:
attributes:
additionalProperties:
description: AttributeFeatureSet is a set of features having
string value.
properties:
elements:
additionalProperties:
type: string
description: Individual features of the feature set.
type: object
required:
- elements
type: object
description: Attributes contains all the attribute-type features
of the node.
type: object
flags:
additionalProperties:
description: FlagFeatureSet is a set of simple features only
containing names without values.
properties:
elements:
additionalProperties:
description: |-
Nil is a dummy empty struct for protobuf compatibility.
NOTE: protobuf definitions have been removed but this is kept for API compatibility.
type: object
description: Individual features of the feature set.
type: object
required:
- elements
type: object
description: Flags contains all the flag-type features of the
node.
type: object
instances:
additionalProperties:
description: InstanceFeatureSet is a set of features each of
which is an instance having multiple attributes.
properties:
elements:
description: Individual features of the feature set.
items:
description: InstanceFeature represents one instance of
a complex feature, e.g. a device.
properties:
attributes:
additionalProperties:
type: string
description: Attributes of the instance feature.
type: object
required:
- attributes
type: object
type: array
required:
- elements
type: object
description: Instances contains all the instance-type features
of the node.
type: object
type: object
labels:
additionalProperties:
type: string
description: Labels is the set of node labels that are requested to
be created.
type: object
type: object
required:
- spec
type: object
served: true
storage: true
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.3
name: nodefeaturegroups.nfd.k8s-sigs.io
spec:
group: nfd.k8s-sigs.io
names:
kind: NodeFeatureGroup
listKind: NodeFeatureGroupList
plural: nodefeaturegroups
shortNames:
- nfg
singular: nodefeaturegroup
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: NodeFeatureGroup resource holds Node pools by featureGroup
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Spec defines the rules to be evaluated.
properties:
featureGroupRules:
description: List of rules to evaluate to determine nodes that belong
in this group.
items:
description: GroupRule defines a rule for nodegroup filtering.
properties:
matchAny:
description: MatchAny specifies a list of matchers one of which
must match.
items:
description: MatchAnyElem specifies one sub-matcher of MatchAny.
properties:
matchFeatures:
description: MatchFeatures specifies a set of matcher
terms all of which must match.
items:
description: |-
FeatureMatcherTerm defines requirements against one feature set. All
requirements (specified as MatchExpressions) are evaluated against each
element in the feature set.
properties:
feature:
description: Feature is the name of the feature
set to match against.
type: string
matchExpressions:
additionalProperties:
description: |-
MatchExpression specifies an expression to evaluate against a set of input
values. It contains an operator that is applied when matching the input and
an array of values that the operator evaluates the input against.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
description: |-
MatchExpressions is the set of per-element expressions evaluated. These
match against the value of the specified elements.
type: object
matchName:
description: |-
MatchName is an expression that is matched against the name of each
element in the feature set.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
required:
- feature
type: object
type: array
required:
- matchFeatures
type: object
type: array
matchFeatures:
description: MatchFeatures specifies a set of matcher terms
all of which must match.
items:
description: |-
FeatureMatcherTerm defines requirements against one feature set. All
requirements (specified as MatchExpressions) are evaluated against each
element in the feature set.
properties:
feature:
description: Feature is the name of the feature set to
match against.
type: string
matchExpressions:
additionalProperties:
description: |-
MatchExpression specifies an expression to evaluate against a set of input
values. It contains an operator that is applied when matching the input and
an array of values that the operator evaluates the input against.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
description: |-
MatchExpressions is the set of per-element expressions evaluated. These
match against the value of the specified elements.
type: object
matchName:
description: |-
MatchName is an expression that is matched against the name of each
element in the feature set.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
required:
- feature
type: object
type: array
name:
description: Name of the rule.
type: string
required:
- name
type: object
type: array
required:
- featureGroupRules
type: object
status:
description: |-
Status of the NodeFeatureGroup after the most recent evaluation of the
specification.
properties:
nodes:
description: Nodes is a list of FeatureGroupNode in the cluster that
match the featureGroupRules
items:
properties:
name:
description: Name of the node.
type: string
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.3
name: nodefeaturerules.nfd.k8s-sigs.io
spec:
group: nfd.k8s-sigs.io
names:
kind: NodeFeatureRule
listKind: NodeFeatureRuleList
plural: nodefeaturerules
shortNames:
- nfr
singular: nodefeaturerule
scope: Cluster
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: |-
NodeFeatureRule resource specifies a configuration for feature-based
customization of node objects, such as node labeling.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Spec defines the rules to be evaluated.
properties:
rules:
description: Rules is a list of node customization rules.
items:
description: Rule defines a rule for node customization such as
labeling.
properties:
annotations:
additionalProperties:
type: string
description: Annotations to create if the rule matches.
type: object
extendedResources:
additionalProperties:
type: string
description: ExtendedResources to create if the rule matches.
type: object
labels:
additionalProperties:
type: string
description: Labels to create if the rule matches.
type: object
labelsTemplate:
description: |-
LabelsTemplate specifies a template to expand for dynamically generating
multiple labels. Data (after template expansion) must be keys with an
optional value (<key>[=<value>]) separated by newlines.
type: string
matchAny:
description: MatchAny specifies a list of matchers one of which
must match.
items:
description: MatchAnyElem specifies one sub-matcher of MatchAny.
properties:
matchFeatures:
description: MatchFeatures specifies a set of matcher
terms all of which must match.
items:
description: |-
FeatureMatcherTerm defines requirements against one feature set. All
requirements (specified as MatchExpressions) are evaluated against each
element in the feature set.
properties:
feature:
description: Feature is the name of the feature
set to match against.
type: string
matchExpressions:
additionalProperties:
description: |-
MatchExpression specifies an expression to evaluate against a set of input
values. It contains an operator that is applied when matching the input and
an array of values that the operator evaluates the input against.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
description: |-
MatchExpressions is the set of per-element expressions evaluated. These
match against the value of the specified elements.
type: object
matchName:
description: |-
MatchName is an expression that is matched against the name of each
element in the feature set.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
required:
- feature
type: object
type: array
required:
- matchFeatures
type: object
type: array
matchFeatures:
description: MatchFeatures specifies a set of matcher terms
all of which must match.
items:
description: |-
FeatureMatcherTerm defines requirements against one feature set. All
requirements (specified as MatchExpressions) are evaluated against each
element in the feature set.
properties:
feature:
description: Feature is the name of the feature set to
match against.
type: string
matchExpressions:
additionalProperties:
description: |-
MatchExpression specifies an expression to evaluate against a set of input
values. It contains an operator that is applied when matching the input and
an array of values that the operator evaluates the input against.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
description: |-
MatchExpressions is the set of per-element expressions evaluated. These
match against the value of the specified elements.
type: object
matchName:
description: |-
MatchName is an expression that is matched against the name of each
element in the feature set.
properties:
op:
description: Op is the operator to be applied.
enum:
- In
- NotIn
- InRegexp
- Exists
- DoesNotExist
- Gt
- Lt
- GtLt
- IsTrue
- IsFalse
type: string
value:
description: |-
Value is the list of values that the operand evaluates the input
against. Value should be empty if the operator is Exists, DoesNotExist,
IsTrue or IsFalse. Value should contain exactly one element if the
operator is Gt or Lt and exactly two elements if the operator is GtLt.
In other cases Value should contain at least one element.
items:
type: string
type: array
required:
- op
type: object
required:
- feature
type: object
type: array
name:
description: Name of the rule.
type: string
taints:
description: Taints to create if the rule matches.
items:
description: |-
The node this Taint is attached to has the "effect" on
any pod that does not tolerate the Taint.
properties:
effect:
description: |-
Required. The effect of the taint on pods
that do not tolerate the taint.
Valid effects are NoSchedule, PreferNoSchedule and NoExecute.
type: string
key:
description: Required. The taint key to be applied to
a node.
type: string
timeAdded:
description: |-
TimeAdded represents the time at which the taint was added.
It is only written for NoExecute taints.
format: date-time
type: string
value:
description: The taint value corresponding to the taint
key.
type: string
required:
- effect
- key
type: object
type: array
vars:
additionalProperties:
type: string
description: |-
Vars is the variables to store if the rule matches. Variables do not
directly inflict any changes in the node object. However, they can be
referenced from other rules enabling more complex rule hierarchies,
without exposing intermediary output values as labels.
type: object
varsTemplate:
description: |-
VarsTemplate specifies a template to expand for dynamically generating
multiple variables. Data (after template expansion) must be keys with an
optional value (<key>[=<value>]) separated by newlines.
type: string
required:
- name
type: object
type: array
required:
- rules
type: object
required:
- spec
type: object
served: true
storage: true
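
To show how the NodeFeatureRule schema above is used, a minimal rule sketch that labels nodes exposing a given kernel module (rule, module, and label names are hypothetical):

apiVersion: nfd.k8s-sigs.io/v1alpha1
kind: NodeFeatureRule
metadata:
  name: example-kmod-rule
spec:
  rules:
    - name: "example kmod rule"
      labels:
        "example.com/kmod-dummy": "true"   # created on every node the rule matches
      matchFeatures:
        - feature: kernel.loadedmodule     # a flag-type feature set produced by nfd-worker
          matchExpressions:
            dummy: {op: Exists}            # true when the 'dummy' module is loaded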

View File

@@ -0,0 +1,107 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "node-feature-discovery.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "node-feature-discovery.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "node-feature-discovery.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "node-feature-discovery.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "node-feature-discovery.labels" -}}
helm.sh/chart: {{ include "node-feature-discovery.chart" . }}
{{ include "node-feature-discovery.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "node-feature-discovery.selectorLabels" -}}
app.kubernetes.io/name: {{ include "node-feature-discovery.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account which the nfd master will use
*/}}
{{- define "node-feature-discovery.master.serviceAccountName" -}}
{{- if .Values.master.serviceAccount.create -}}
{{ default (include "node-feature-discovery.fullname" .) .Values.master.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.master.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account which the nfd worker will use
*/}}
{{- define "node-feature-discovery.worker.serviceAccountName" -}}
{{- if .Values.worker.serviceAccount.create -}}
{{ default (printf "%s-worker" (include "node-feature-discovery.fullname" .)) .Values.worker.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.worker.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account which topologyUpdater will use
*/}}
{{- define "node-feature-discovery.topologyUpdater.serviceAccountName" -}}
{{- if .Values.topologyUpdater.serviceAccount.create -}}
{{ default (printf "%s-topology-updater" (include "node-feature-discovery.fullname" .)) .Values.topologyUpdater.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.topologyUpdater.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account which nfd-gc will use
*/}}
{{- define "node-feature-discovery.gc.serviceAccountName" -}}
{{- if .Values.gc.serviceAccount.create -}}
{{ default (printf "%s-gc" (include "node-feature-discovery.fullname" .)) .Values.gc.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.gc.serviceAccount.name }}
{{- end -}}
{{- end -}}
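
The naming helpers above resolve from three optional values; a sketch of the overrides they honor:

# values.yaml -- a sketch; all three keys are read by the helpers above
nameOverride: ""           # replaces .Chart.Name inside "node-feature-discovery.name"
fullnameOverride: "nfd"    # short-circuits "node-feature-discovery.fullname" entirely
namespaceOverride: ""      # lets a parent chart pin the namespace the resources render into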

View File

@@ -0,0 +1,140 @@
{{- if and .Values.master.enable .Values.master.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "node-feature-discovery.fullname" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- watch
- list
- apiGroups:
- ""
resources:
- nodes
- nodes/status
verbs:
- get
- patch
- update
- list
- apiGroups:
- nfd.k8s-sigs.io
resources:
- nodefeatures
- nodefeaturerules
- nodefeaturegroups
verbs:
- get
- list
- watch
- apiGroups:
- nfd.k8s-sigs.io
resources:
- nodefeaturegroups/status
verbs:
- patch
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
resourceNames:
- "nfd-master.nfd.kubernetes.io"
verbs:
- get
- update
{{- end }}
{{- if and .Values.topologyUpdater.enable .Values.topologyUpdater.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-topology-updater
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- nodes/proxy
verbs:
- get
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- topology.node.k8s.io
resources:
- noderesourcetopologies
verbs:
- create
- get
- update
{{- end }}
{{- if and .Values.gc.enable .Values.gc.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-gc
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/proxy
verbs:
- get
- apiGroups:
- topology.node.k8s.io
resources:
- noderesourcetopologies
verbs:
- delete
- list
- apiGroups:
- nfd.k8s-sigs.io
resources:
- nodefeatures
verbs:
- delete
- list
{{- end }}

View File

@@ -0,0 +1,52 @@
{{- if and .Values.master.enable .Values.master.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "node-feature-discovery.fullname" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "node-feature-discovery.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ include "node-feature-discovery.master.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
{{- end }}
{{- if and .Values.topologyUpdater.enable .Values.topologyUpdater.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-topology-updater
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "node-feature-discovery.fullname" . }}-topology-updater
subjects:
- kind: ServiceAccount
name: {{ include "node-feature-discovery.topologyUpdater.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
{{- end }}
{{- if and .Values.gc.enable .Values.gc.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-gc
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "node-feature-discovery.fullname" . }}-gc
subjects:
- kind: ServiceAccount
name: {{ include "node-feature-discovery.gc.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
{{- end }}

View File

@@ -0,0 +1,170 @@
{{- if .Values.master.enable }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-master
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
role: master
{{- with .Values.master.deploymentAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.master.replicaCount }}
revisionHistoryLimit: {{ .Values.master.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 6 }}
role: master
template:
metadata:
labels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 8 }}
role: master
annotations:
checksum/config: {{ include (print $.Template.BasePath "/nfd-master-conf.yaml") . | sha256sum }}
{{- with .Values.master.annotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "node-feature-discovery.master.serviceAccountName" . }}
enableServiceLinks: false
securityContext:
{{- toYaml .Values.master.podSecurityContext | nindent 8 }}
hostNetwork: {{ .Values.master.hostNetwork }}
containers:
- name: master
securityContext:
{{- toYaml .Values.master.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
startupProbe:
grpc:
port: {{ .Values.master.healthPort | default "8082" }}
{{- with .Values.master.startupProbe.initialDelaySeconds }}
initialDelaySeconds: {{ . }}
{{- end }}
{{- with .Values.master.startupProbe.failureThreshold }}
failureThreshold: {{ . }}
{{- end }}
{{- with .Values.master.startupProbe.periodSeconds }}
periodSeconds: {{ . }}
{{- end }}
{{- with .Values.master.startupProbe.timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
livenessProbe:
grpc:
port: {{ .Values.master.healthPort | default "8082" }}
{{- with .Values.master.livenessProbe.initialDelaySeconds }}
initialDelaySeconds: {{ . }}
{{- end }}
{{- with .Values.master.livenessProbe.failureThreshold }}
failureThreshold: {{ . }}
{{- end }}
{{- with .Values.master.livenessProbe.periodSeconds }}
periodSeconds: {{ . }}
{{- end }}
{{- with .Values.master.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
readinessProbe:
grpc:
port: {{ .Values.master.healthPort | default "8082" }}
{{- with .Values.master.readinessProbe.initialDelaySeconds }}
initialDelaySeconds: {{ . }}
{{- end }}
{{- with .Values.master.readinessProbe.failureThreshold }}
failureThreshold: {{ . }}
{{- end }}
{{- with .Values.master.readinessProbe.periodSeconds }}
periodSeconds: {{ . }}
{{- end }}
{{- with .Values.master.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- with .Values.master.readinessProbe.successThreshold }}
successThreshold: {{ . }}
{{- end }}
ports:
- containerPort: {{ .Values.master.metricsPort | default "8081" }}
name: metrics
- containerPort: {{ .Values.master.healthPort | default "8082" }}
name: health
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- with .Values.master.extraEnvs }}
{{- toYaml . | nindent 8 }}
{{- end}}
command:
- "nfd-master"
resources:
{{- toYaml .Values.master.resources | nindent 12 }}
args:
{{- if .Values.master.instance | empty | not }}
- "-instance={{ .Values.master.instance }}"
{{- end }}
- "-enable-leader-election"
{{- if .Values.master.extraLabelNs | empty | not }}
- "-extra-label-ns={{- join "," .Values.master.extraLabelNs }}"
{{- end }}
{{- if .Values.master.denyLabelNs | empty | not }}
- "-deny-label-ns={{- join "," .Values.master.denyLabelNs }}"
{{- end }}
{{- if .Values.master.enableTaints }}
- "-enable-taints"
{{- end }}
{{- if .Values.master.featureRulesController | kindIs "invalid" | not }}
- "-featurerules-controller={{ .Values.master.featureRulesController }}"
{{- end }}
{{- if .Values.master.resyncPeriod }}
- "-resync-period={{ .Values.master.resyncPeriod }}"
{{- end }}
{{- if .Values.master.nfdApiParallelism | empty | not }}
- "-nfd-api-parallelism={{ .Values.master.nfdApiParallelism }}"
{{- end }}
# Go over featureGates and add the feature-gate flag
{{- range $key, $value := .Values.featureGates }}
- "-feature-gates={{ $key }}={{ $value }}"
{{- end }}
- "-metrics={{ .Values.master.metricsPort | default "8081" }}"
- "-grpc-health={{ .Values.master.healthPort | default "8082" }}"
{{- with .Values.master.extraArgs }}
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- name: nfd-master-conf
mountPath: "/etc/kubernetes/node-feature-discovery"
readOnly: true
volumes:
- name: nfd-master-conf
configMap:
name: {{ include "node-feature-discovery.fullname" . }}-master-conf
items:
- key: nfd-master.conf
path: nfd-master.conf
{{- with .Values.master.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.master.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.master.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
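
The args block above maps values onto nfd-master flags, emitting one `-feature-gates` flag per entry of the `featureGates` map; a hedged values sketch exercising those paths (the gate name is an assumption, any NFD feature gate works):

# values.yaml -- a sketch of the master-side knobs referenced in the template
master:
  extraLabelNs: ["example.com"]   # rendered as -extra-label-ns=example.com
  denyLabelNs: []
  enableTaints: false             # when true, adds -enable-taints
  resyncPeriod: "1h"              # rendered as -resync-period=1h
featureGates:
  NodeFeatureGroupAPI: true       # rendered as -feature-gates=NodeFeatureGroupAPI=true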

View File

@@ -0,0 +1,88 @@
{{- if and .Values.gc.enable -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-gc
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
role: gc
{{- with .Values.gc.deploymentAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.gc.replicaCount | default 1 }}
revisionHistoryLimit: {{ .Values.gc.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 6 }}
role: gc
template:
metadata:
labels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 8 }}
role: gc
{{- with .Values.gc.annotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ include "node-feature-discovery.gc.serviceAccountName" . }}
dnsPolicy: ClusterFirstWithHostNet
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.gc.podSecurityContext | nindent 8 }}
hostNetwork: {{ .Values.gc.hostNetwork }}
containers:
- name: gc
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- with .Values.gc.extraEnvs }}
{{- toYaml . | nindent 8 }}
{{- end}}
command:
- "nfd-gc"
args:
{{- if .Values.gc.interval | empty | not }}
- "-gc-interval={{ .Values.gc.interval }}"
{{- end }}
{{- with .Values.gc.extraArgs }}
{{- toYaml . | nindent 10 }}
{{- end }}
resources:
{{- toYaml .Values.gc.resources | nindent 12 }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [ "ALL" ]
readOnlyRootFilesystem: true
runAsNonRoot: true
ports:
- name: metrics
containerPort: {{ .Values.gc.metricsPort | default "8081"}}
{{- with .Values.gc.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.gc.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.gc.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.master.enable }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-master-conf
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
data:
nfd-master.conf: |-
{{- .Values.master.config | toYaml | nindent 4 }}
{{- end }}
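
The ConfigMap simply serializes `.Values.master.config` with `toYaml`; a sketch of a fragment as it would land in nfd-master.conf (keys follow the nfd-master configuration file format; treat the exact set shown as an assumption):

# values.yaml -- a sketch
master:
  config:
    extraLabelNs: []        # extra namespaces allowed for labels
    denyLabelNs: []         # namespaces explicitly denied
    resyncPeriod: "1h"      # periodic re-sync of NodeFeature objects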

View File

@@ -0,0 +1,12 @@
{{- if .Values.topologyUpdater.enable -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-topology-updater-conf
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
data:
nfd-topology-updater.conf: |-
{{- .Values.topologyUpdater.config | toYaml | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.worker.enable }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-worker-conf
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
data:
nfd-worker.conf: |-
{{- .Values.worker.config | toYaml | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,94 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-prune
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-prune
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
rules:
- apiGroups:
- ""
resources:
- nodes
- nodes/status
verbs:
- get
- patch
- update
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-prune
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "node-feature-discovery.fullname" . }}-prune
subjects:
- kind: ServiceAccount
name: {{ include "node-feature-discovery.fullname" . }}-prune
namespace: {{ include "node-feature-discovery.namespace" . }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-prune
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
template:
metadata:
labels:
{{- include "node-feature-discovery.labels" . | nindent 8 }}
role: prune
spec:
serviceAccountName: {{ include "node-feature-discovery.fullname" . }}-prune
containers:
- name: nfd-master
securityContext:
{{- toYaml .Values.master.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- "nfd-master"
args:
- "-prune"
{{- if .Values.master.instance | empty | not }}
- "-instance={{ .Values.master.instance }}"
{{- end }}
restartPolicy: Never
{{- with .Values.master.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.master.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.master.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -0,0 +1,26 @@
{{- if .Values.prometheus.enable }}
# Prometheus Monitor Service (Metrics)
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ include "node-feature-discovery.fullname" . }}
labels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 4 }}
{{- with .Values.prometheus.labels }}
{{ toYaml . | nindent 4 }}
{{- end }}
spec:
podMetricsEndpoints:
- honorLabels: true
interval: {{ .Values.prometheus.scrapeInterval }}
path: /metrics
port: metrics
scheme: http
namespaceSelector:
matchNames:
- {{ include "node-feature-discovery.namespace" . }}
selector:
matchExpressions:
- {key: app.kubernetes.io/instance, operator: In, values: ["{{ .Release.Name }}"]}
- {key: app.kubernetes.io/name, operator: In, values: ["{{ include "node-feature-discovery.name" . }}"]}
{{- end }}
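
The PodMonitor is only rendered when metrics scraping is switched on, and it presumes the Prometheus Operator CRDs already exist in the cluster; a minimal sketch:

# values.yaml -- a sketch
prometheus:
  enable: true
  scrapeInterval: 10s   # fills .Values.prometheus.scrapeInterval above
  labels: {}            # extra PodMonitor labels, e.g. a "release" selector for the operator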

View File

@@ -0,0 +1,25 @@
{{- if and .Values.worker.enable .Values.worker.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-worker
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
rules:
- apiGroups:
- nfd.k8s-sigs.io
resources:
- nodefeatures
verbs:
- create
- get
- update
- delete
- apiGroups:
- ""
resources:
- pods
verbs:
- get
{{- end }}

View File

@@ -0,0 +1,18 @@
{{- if and .Values.worker.enable .Values.worker.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-worker
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "node-feature-discovery.fullname" . }}-worker
subjects:
- kind: ServiceAccount
name: {{ include "node-feature-discovery.worker.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
{{- end }}

View File

@@ -0,0 +1,58 @@
{{- if and .Values.master.enable .Values.master.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "node-feature-discovery.master.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
{{- with .Values.master.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- if and .Values.topologyUpdater.enable .Values.topologyUpdater.serviceAccount.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "node-feature-discovery.topologyUpdater.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
{{- with .Values.topologyUpdater.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- if and .Values.gc.enable .Values.gc.serviceAccount.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "node-feature-discovery.gc.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
{{- with .Values.gc.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- if and .Values.worker.enable .Values.worker.serviceAccount.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "node-feature-discovery.worker.serviceAccountName" . }}
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
{{- with .Values.worker.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,278 @@
{{- if and .Values.topologyUpdater.enable .Values.topologyUpdater.createCRDs -}}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
api-approved.kubernetes.io: https://github.com/kubernetes/enhancements/pull/1870
controller-gen.kubebuilder.io/version: v0.11.2
creationTimestamp: null
name: noderesourcetopologies.topology.node.k8s.io
spec:
group: topology.node.k8s.io
names:
kind: NodeResourceTopology
listKind: NodeResourceTopologyList
plural: noderesourcetopologies
shortNames:
- node-res-topo
singular: noderesourcetopology
scope: Cluster
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: NodeResourceTopology describes node resources and their topology.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
topologyPolicies:
items:
type: string
type: array
zones:
description: ZoneList contains an array of Zone objects.
items:
description: Zone represents a resource topology zone, e.g. socket,
node, die or core.
properties:
attributes:
description: AttributeList contains an array of AttributeInfo objects.
items:
description: AttributeInfo contains one attribute of a Zone.
properties:
name:
type: string
value:
type: string
required:
- name
- value
type: object
type: array
costs:
description: CostList contains an array of CostInfo objects.
items:
description: CostInfo describes the cost (or distance) between
two Zones.
properties:
name:
type: string
value:
format: int64
type: integer
required:
- name
- value
type: object
type: array
name:
type: string
parent:
type: string
resources:
description: ResourceInfoList contains an array of ResourceInfo
objects.
items:
description: ResourceInfo contains information about one resource
type.
properties:
allocatable:
anyOf:
- type: integer
- type: string
description: Allocatable quantity of the resource, corresponding
to allocatable in node status, i.e. total amount of this
resource available to be used by pods.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
available:
anyOf:
- type: integer
- type: string
description: Available is the amount of this resource currently
available for new (to be scheduled) pods, i.e. Allocatable
minus the resources reserved by currently running pods.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
capacity:
anyOf:
- type: integer
- type: string
description: Capacity of the resource, corresponding to capacity
in node status, i.e. total amount of this resource that
the node has.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
name:
description: Name of the resource.
type: string
required:
- allocatable
- available
- capacity
- name
type: object
type: array
type:
type: string
required:
- name
- type
type: object
type: array
required:
- topologyPolicies
- zones
type: object
served: true
storage: false
- name: v1alpha2
schema:
openAPIV3Schema:
description: NodeResourceTopology describes node resources and their topology.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
attributes:
description: AttributeList contains an array of AttributeInfo objects.
items:
description: AttributeInfo contains one attribute of a Zone.
properties:
name:
type: string
value:
type: string
required:
- name
- value
type: object
type: array
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
topologyPolicies:
description: 'DEPRECATED (to be removed in v1beta1): use top level attributes
if needed'
items:
type: string
type: array
zones:
description: ZoneList contains an array of Zone objects.
items:
description: Zone represents a resource topology zone, e.g. socket,
node, die or core.
properties:
attributes:
description: AttributeList contains an array of AttributeInfo objects.
items:
description: AttributeInfo contains one attribute of a Zone.
properties:
name:
type: string
value:
type: string
required:
- name
- value
type: object
type: array
costs:
description: CostList contains an array of CostInfo objects.
items:
description: CostInfo describes the cost (or distance) between
two Zones.
properties:
name:
type: string
value:
format: int64
type: integer
required:
- name
- value
type: object
type: array
name:
type: string
parent:
type: string
resources:
description: ResourceInfoList contains an array of ResourceInfo
objects.
items:
description: ResourceInfo contains information about one resource
type.
properties:
allocatable:
anyOf:
- type: integer
- type: string
description: Allocatable quantity of the resource, corresponding
to allocatable in node status, i.e. total amount of this
resource available to be used by pods.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
available:
anyOf:
- type: integer
- type: string
description: Available is the amount of this resource currently
available for new (to be scheduled) pods, i.e. Allocatable
minus the resources reserved by currently running pods.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
capacity:
anyOf:
- type: integer
- type: string
description: Capacity of the resource, corresponding to capacity
in node status, i.e. total amount of this resource that
the node has.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
name:
description: Name of the resource.
type: string
required:
- allocatable
- available
- capacity
- name
type: object
type: array
type:
type: string
required:
- name
- type
type: object
type: array
required:
- zones
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
{{- end }}

View File

@@ -0,0 +1,188 @@
{{- if .Values.topologyUpdater.enable -}}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-topology-updater
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
role: topology-updater
{{- with .Values.topologyUpdater.daemonsetAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
revisionHistoryLimit: {{ .Values.topologyUpdater.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 6 }}
role: topology-updater
template:
metadata:
labels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 8 }}
role: topology-updater
annotations:
checksum/config: {{ include (print $.Template.BasePath "/nfd-topologyupdater-conf.yaml") . | sha256sum }}
{{- with .Values.topologyUpdater.annotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ include "node-feature-discovery.topologyUpdater.serviceAccountName" . }}
dnsPolicy: ClusterFirstWithHostNet
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.topologyUpdater.podSecurityContext | nindent 8 }}
hostNetwork: {{ .Values.topologyUpdater.hostNetwork }}
containers:
- name: topology-updater
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
livenessProbe:
grpc:
port: {{ .Values.topologyUpdater.healthPort | default "8082" }}
{{- with .Values.topologyUpdater.livenessProbe.initialDelaySeconds }}
initialDelaySeconds: {{ . }}
{{- end }}
{{- with .Values.topologyUpdater.livenessProbe.failureThreshold }}
failureThreshold: {{ . }}
{{- end }}
{{- with .Values.topologyUpdater.livenessProbe.periodSeconds }}
periodSeconds: {{ . }}
{{- end }}
{{- with .Values.topologyUpdater.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
readinessProbe:
grpc:
port: {{ .Values.topologyUpdater.healthPort | default "8082" }}
{{- with .Values.topologyUpdater.readinessProbe.initialDelaySeconds }}
initialDelaySeconds: {{ . }}
{{- end }}
{{- with .Values.topologyUpdater.readinessProbe.failureThreshold }}
failureThreshold: {{ . }}
{{- end }}
{{- with .Values.topologyUpdater.readinessProbe.periodSeconds }}
periodSeconds: {{ . }}
{{- end }}
{{- with .Values.topologyUpdater.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- with .Values.topologyUpdater.readinessProbe.successThreshold }}
successThreshold: {{ . }}
{{- end }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NODE_ADDRESS
valueFrom:
fieldRef:
fieldPath: status.hostIP
{{- with .Values.topologyUpdater.extraEnvs }}
{{- toYaml . | nindent 8 }}
{{- end}}
command:
- "nfd-topology-updater"
args:
- "-podresources-socket=/host-var/lib/kubelet-podresources/kubelet.sock"
{{- if .Values.topologyUpdater.updateInterval | empty | not }}
- "-sleep-interval={{ .Values.topologyUpdater.updateInterval }}"
{{- else }}
- "-sleep-interval=3s"
{{- end }}
{{- if .Values.topologyUpdater.watchNamespace | empty | not }}
- "-watch-namespace={{ .Values.topologyUpdater.watchNamespace }}"
{{- else }}
- "-watch-namespace=*"
{{- end }}
{{- if not .Values.topologyUpdater.podSetFingerprint }}
- "-pods-fingerprint=false"
{{- end }}
{{- if .Values.topologyUpdater.kubeletConfigPath | empty | not }}
- "-kubelet-config-uri=file:///host-var/kubelet-config"
{{- end }}
{{- if .Values.topologyUpdater.kubeletStateDir | empty }}
# Disable kubelet state tracking by giving an empty path
- "-kubelet-state-dir="
{{- end }}
- "-metrics={{ .Values.topologyUpdater.metricsPort | default "8081"}}"
- "-grpc-health={{ .Values.topologyUpdater.healthPort | default "8082" }}"
{{- with .Values.topologyUpdater.extraArgs }}
{{- toYaml . | nindent 10 }}
{{- end }}
ports:
- containerPort: {{ .Values.topologyUpdater.metricsPort | default "8081"}}
name: metrics
- containerPort: {{ .Values.topologyUpdater.healthPort | default "8082" }}
name: health
volumeMounts:
{{- if .Values.topologyUpdater.kubeletConfigPath | empty | not }}
- name: kubelet-config
mountPath: /host-var/kubelet-config
{{- end }}
- name: kubelet-podresources-sock
mountPath: /host-var/lib/kubelet-podresources/kubelet.sock
- name: host-sys
mountPath: /host-sys
{{- if .Values.topologyUpdater.kubeletStateDir | empty | not }}
- name: kubelet-state-files
mountPath: /host-var/lib/kubelet
readOnly: true
{{- end }}
- name: nfd-topology-updater-conf
mountPath: "/etc/kubernetes/node-feature-discovery"
readOnly: true
resources:
{{- toYaml .Values.topologyUpdater.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.topologyUpdater.securityContext | nindent 12 }}
volumes:
- name: host-sys
hostPath:
path: "/sys"
{{- if .Values.topologyUpdater.kubeletConfigPath | empty | not }}
- name: kubelet-config
hostPath:
path: {{ .Values.topologyUpdater.kubeletConfigPath }}
{{- end }}
- name: kubelet-podresources-sock
hostPath:
{{- if .Values.topologyUpdater.kubeletPodResourcesSockPath | empty | not }}
path: {{ .Values.topologyUpdater.kubeletPodResourcesSockPath }}
{{- else }}
path: /var/lib/kubelet/pod-resources/kubelet.sock
{{- end }}
{{- if .Values.topologyUpdater.kubeletStateDir | empty | not }}
- name: kubelet-state-files
hostPath:
path: {{ .Values.topologyUpdater.kubeletStateDir }}
{{- end }}
- name: nfd-topology-updater-conf
configMap:
name: {{ include "node-feature-discovery.fullname" . }}-topology-updater-conf
items:
- key: nfd-topology-updater.conf
path: nfd-topology-updater.conf
{{- with .Values.topologyUpdater.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.topologyUpdater.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.topologyUpdater.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,195 @@
{{- if .Values.worker.enable }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "node-feature-discovery.fullname" . }}-worker
namespace: {{ include "node-feature-discovery.namespace" . }}
labels:
{{- include "node-feature-discovery.labels" . | nindent 4 }}
role: worker
{{- with .Values.worker.daemonsetAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
revisionHistoryLimit: {{ .Values.worker.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 6 }}
role: worker
template:
metadata:
labels:
{{- include "node-feature-discovery.selectorLabels" . | nindent 8 }}
role: worker
annotations:
checksum/config: {{ include (print $.Template.BasePath "/nfd-worker-conf.yaml") . | sha256sum }}
{{- with .Values.worker.annotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
dnsPolicy: ClusterFirstWithHostNet
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "node-feature-discovery.worker.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.worker.podSecurityContext | nindent 8 }}
hostNetwork: {{ .Values.worker.hostNetwork }}
containers:
- name: worker
securityContext:
{{- toYaml .Values.worker.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
livenessProbe:
grpc:
port: {{ .Values.worker.healthPort | default "8082" }}
{{- with .Values.worker.livenessProbe.initialDelaySeconds }}
initialDelaySeconds: {{ . }}
{{- end }}
{{- with .Values.worker.livenessProbe.failureThreshold }}
failureThreshold: {{ . }}
{{- end }}
{{- with .Values.worker.livenessProbe.periodSeconds }}
periodSeconds: {{ . }}
{{- end }}
{{- with .Values.worker.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
readinessProbe:
grpc:
port: {{ .Values.worker.healthPort | default "8082" }}
{{- with .Values.worker.readinessProbe.initialDelaySeconds }}
initialDelaySeconds: {{ . }}
{{- end }}
{{- with .Values.worker.readinessProbe.failureThreshold }}
failureThreshold: {{ . }}
{{- end }}
{{- with .Values.worker.readinessProbe.periodSeconds }}
periodSeconds: {{ . }}
{{- end }}
{{- with .Values.worker.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ . }}
{{- end }}
{{- with .Values.worker.readinessProbe.successThreshold }}
successThreshold: {{ . }}
{{- end }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
{{- with .Values.worker.extraEnvs }}
{{- toYaml . | nindent 8 }}
{{- end}}
resources:
{{- toYaml .Values.worker.resources | nindent 12 }}
command:
- "nfd-worker"
args:
          # Iterate over featureGates and add the corresponding feature-gate flags
{{- range $key, $value := .Values.featureGates }}
- "-feature-gates={{ $key }}={{ $value }}"
{{- end }}
- "-metrics={{ .Values.worker.metricsPort | default "8081"}}"
- "-grpc-health={{ .Values.worker.healthPort | default "8082" }}"
        {{- with .Values.worker.extraArgs }}
        {{- toYaml . | nindent 10 }}
{{- end }}
ports:
- containerPort: {{ .Values.worker.metricsPort | default "8081"}}
name: metrics
- containerPort: {{ .Values.worker.healthPort | default "8082" }}
name: health
volumeMounts:
- name: host-boot
mountPath: "/host-boot"
readOnly: true
- name: host-os-release
mountPath: "/host-etc/os-release"
readOnly: true
- name: host-sys
mountPath: "/host-sys"
readOnly: true
- name: host-usr-lib
mountPath: "/host-usr/lib"
readOnly: true
- name: host-lib
mountPath: "/host-lib"
readOnly: true
- name: host-proc-swaps
mountPath: "/host-proc/swaps"
readOnly: true
{{- if .Values.worker.mountUsrSrc }}
- name: host-usr-src
mountPath: "/host-usr/src"
readOnly: true
{{- end }}
- name: features-d
mountPath: "/etc/kubernetes/node-feature-discovery/features.d/"
readOnly: true
- name: nfd-worker-conf
mountPath: "/etc/kubernetes/node-feature-discovery"
readOnly: true
volumes:
- name: host-boot
hostPath:
path: "/boot"
- name: host-os-release
hostPath:
path: "/etc/os-release"
- name: host-sys
hostPath:
path: "/sys"
- name: host-usr-lib
hostPath:
path: "/usr/lib"
- name: host-lib
hostPath:
path: "/lib"
- name: host-proc-swaps
hostPath:
path: "/proc/swaps"
{{- if .Values.worker.mountUsrSrc }}
- name: host-usr-src
hostPath:
path: "/usr/src"
{{- end }}
- name: features-d
hostPath:
path: "/etc/kubernetes/node-feature-discovery/features.d/"
- name: nfd-worker-conf
configMap:
name: {{ include "node-feature-discovery.fullname" . }}-worker-conf
items:
- key: nfd-worker.conf
path: nfd-worker.conf
{{- with .Values.worker.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.worker.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.worker.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.worker.priorityClassName }}
priorityClassName: {{ . | quote }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,599 @@
image:
repository: registry.k8s.io/nfd/node-feature-discovery
# This should be set to 'IfNotPresent' for released version
pullPolicy: IfNotPresent
# tag, if defined will use the given image tag, else Chart.AppVersion will be used
# tag
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
namespaceOverride: ""
featureGates:
NodeFeatureGroupAPI: false
priorityClassName: ""
master:
enable: true
extraArgs: []
extraEnvs: []
hostNetwork: false
config: ### <NFD-MASTER-CONF-START-DO-NOT-REMOVE>
# noPublish: false
# autoDefaultNs: true
  #  extraLabelNs: ["added.ns.io","added.kubernetes.io"]
# denyLabelNs: ["denied.ns.io","denied.kubernetes.io"]
# enableTaints: false
# labelWhiteList: "foo"
# resyncPeriod: "2h"
# restrictions:
# disableLabels: true
# disableTaints: true
# disableExtendedResources: true
# disableAnnotations: true
# allowOverwrite: false
# denyNodeFeatureLabels: true
# nodeFeatureNamespaceSelector:
# matchLabels:
# kubernetes.io/metadata.name: "node-feature-discovery"
# matchExpressions:
# - key: "kubernetes.io/metadata.name"
# operator: "In"
# values:
# - "node-feature-discovery"
# klog:
# addDirHeader: false
# alsologtostderr: false
# logBacktraceAt:
# logtostderr: true
# skipHeaders: false
# stderrthreshold: 2
# v: 0
# vmodule:
## NOTE: the following options are not dynamically run-time configurable
  ## and require an nfd-master restart to take effect after being changed
# logDir:
# logFile:
# logFileMaxSize: 1800
# skipLogHeaders: false
# leaderElection:
# leaseDuration: 15s
# # this value has to be lower than leaseDuration and greater than retryPeriod*1.2
# renewDeadline: 10s
# # this value has to be greater than 0
# retryPeriod: 2s
# nfdApiParallelism: 10
### <NFD-MASTER-CONF-END-DO-NOT-REMOVE>
metricsPort: 8081
healthPort: 8082
instance:
featureApi:
resyncPeriod:
denyLabelNs: []
extraLabelNs: []
enableTaints: false
featureRulesController: null
nfdApiParallelism: null
deploymentAnnotations: {}
replicaCount: 1
podSecurityContext: {}
# fsGroup: 2000
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [ "ALL" ]
readOnlyRootFilesystem: true
runAsNonRoot: true
# runAsUser: 1000
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
# specify how many old ReplicaSets for the Deployment to retain.
revisionHistoryLimit:
rbac:
create: true
resources:
limits:
memory: 4Gi
requests:
cpu: 100m
# You may want to use the same value for `requests.memory` and `limits.memory`. The “requests” value affects scheduling to accommodate pods on nodes.
# If there is a large difference between “requests” and “limits” and nodes experience memory pressure, the kernel may invoke
# the OOM Killer, even if the memory does not exceed the “limits” threshold. This can cause unexpected pod evictions. Memory
# cannot be compressed and once allocated to a pod, it can only be reclaimed by killing the pod.
# Natan Yellin 22/09/2022 https://home.robusta.dev/blog/kubernetes-memory-limit
memory: 128Mi
nodeSelector: {}
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Equal"
value: ""
effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
operator: "Equal"
value: ""
effect: "NoSchedule"
annotations: {}
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: "node-role.kubernetes.io/master"
operator: In
values: [""]
- weight: 1
preference:
matchExpressions:
- key: "node-role.kubernetes.io/control-plane"
operator: In
values: [""]
startupProbe:
grpc:
port: 8082
failureThreshold: 30
# periodSeconds: 10
livenessProbe:
grpc:
port: 8082
# failureThreshold: 3
# initialDelaySeconds: 0
# periodSeconds: 10
# timeoutSeconds: 1
readinessProbe:
grpc:
port: 8082
failureThreshold: 10
# initialDelaySeconds: 0
# periodSeconds: 10
# timeoutSeconds: 1
# successThreshold: 1
worker:
enable: true
extraArgs: []
extraEnvs: []
hostNetwork: false
config: ### <NFD-WORKER-CONF-START-DO-NOT-REMOVE>
#core:
# labelWhiteList:
# noPublish: false
# noOwnerRefs: false
# sleepInterval: 60s
# featureSources: [all]
# labelSources: [all]
# klog:
# addDirHeader: false
# alsologtostderr: false
# logBacktraceAt:
# logtostderr: true
# skipHeaders: false
# stderrthreshold: 2
# v: 0
# vmodule:
## NOTE: the following options are not dynamically run-time configurable
  ## and require an nfd-worker restart to take effect after being changed
# logDir:
# logFile:
# logFileMaxSize: 1800
# skipLogHeaders: false
#sources:
# cpu:
# cpuid:
## NOTE: whitelist has priority over blacklist
# attributeBlacklist:
# - "AVX10"
# - "BMI1"
# - "BMI2"
# - "CLMUL"
# - "CMOV"
# - "CX16"
# - "ERMS"
# - "F16C"
# - "HTT"
# - "LZCNT"
# - "MMX"
# - "MMXEXT"
# - "NX"
# - "POPCNT"
# - "RDRAND"
# - "RDSEED"
# - "RDTSCP"
# - "SGX"
# - "SSE"
# - "SSE2"
# - "SSE3"
# - "SSE4"
# - "SSE42"
# - "SSSE3"
# - "TDX_GUEST"
# attributeWhitelist:
# kernel:
# kconfigFile: "/path/to/kconfig"
# configOpts:
# - "NO_HZ"
# - "X86"
# - "DMI"
# pci:
# deviceClassWhitelist:
# - "0200"
# - "03"
# - "12"
# deviceLabelFields:
# - "class"
# - "vendor"
# - "device"
# - "subsystem_vendor"
# - "subsystem_device"
# usb:
# deviceClassWhitelist:
# - "0e"
# - "ef"
# - "fe"
# - "ff"
# deviceLabelFields:
# - "class"
# - "vendor"
# - "device"
# custom:
# # The following feature demonstrates the capabilities of the matchFeatures
# - name: "my custom rule"
# labels:
# "vendor.io/my-ng-feature": "true"
# # matchFeatures implements a logical AND over all matcher terms in the
# # list (i.e. all of the terms, or per-feature matchers, must match)
# matchFeatures:
# - feature: cpu.cpuid
# matchExpressions:
# AVX512F: {op: Exists}
# - feature: cpu.cstate
# matchExpressions:
# enabled: {op: IsTrue}
# - feature: cpu.pstate
# matchExpressions:
# no_turbo: {op: IsFalse}
# scaling_governor: {op: In, value: ["performance"]}
# - feature: cpu.rdt
# matchExpressions:
# RDTL3CA: {op: Exists}
# - feature: cpu.sst
# matchExpressions:
# bf.enabled: {op: IsTrue}
# - feature: cpu.topology
# matchExpressions:
# hardware_multithreading: {op: IsFalse}
#
# - feature: kernel.config
# matchExpressions:
# X86: {op: Exists}
# LSM: {op: InRegexp, value: ["apparmor"]}
# - feature: kernel.loadedmodule
# matchExpressions:
# e1000e: {op: Exists}
# - feature: kernel.selinux
# matchExpressions:
# enabled: {op: IsFalse}
# - feature: kernel.version
# matchExpressions:
# major: {op: In, value: ["5"]}
# minor: {op: Gt, value: ["10"]}
#
# - feature: storage.block
# matchExpressions:
# rotational: {op: In, value: ["0"]}
# dax: {op: In, value: ["0"]}
#
# - feature: network.device
# matchExpressions:
# operstate: {op: In, value: ["up"]}
# speed: {op: Gt, value: ["100"]}
#
# - feature: memory.numa
# matchExpressions:
# node_count: {op: Gt, value: ["2"]}
# - feature: memory.nv
# matchExpressions:
# devtype: {op: In, value: ["nd_dax"]}
# mode: {op: In, value: ["memory"]}
#
# - feature: system.osrelease
# matchExpressions:
# ID: {op: In, value: ["fedora", "centos"]}
# - feature: system.name
# matchExpressions:
# nodename: {op: InRegexp, value: ["^worker-X"]}
#
# - feature: local.label
# matchExpressions:
# custom-feature-knob: {op: Gt, value: ["100"]}
#
# # The following feature demonstrates the capabilities of the matchAny
# - name: "my matchAny rule"
# labels:
# "vendor.io/my-ng-feature-2": "my-value"
  #      # matchAny implements a logical OR over all elements (sub-matchers) in
# # the list (i.e. at least one feature matcher must match)
# matchAny:
# - matchFeatures:
# - feature: kernel.loadedmodule
# matchExpressions:
# driver-module-X: {op: Exists}
# - feature: pci.device
# matchExpressions:
# vendor: {op: In, value: ["8086"]}
# class: {op: In, value: ["0200"]}
# - matchFeatures:
# - feature: kernel.loadedmodule
# matchExpressions:
# driver-module-Y: {op: Exists}
# - feature: usb.device
# matchExpressions:
# vendor: {op: In, value: ["8086"]}
# class: {op: In, value: ["02"]}
#
# - name: "avx wildcard rule"
# labels:
# "my-avx-feature": "true"
# matchFeatures:
# - feature: cpu.cpuid
# matchName: {op: InRegexp, value: ["^AVX512"]}
#
  #    # # The following features demonstrate label templating capabilities
# - name: "my template rule"
# labelsTemplate: |
# {{ range .system.osrelease }}vendor.io/my-system-feature.{{ .Name }}={{ .Value }}
# {{ end }}
# matchFeatures:
# - feature: system.osrelease
# matchExpressions:
# ID: {op: InRegexp, value: ["^open.*"]}
# VERSION_ID.major: {op: In, value: ["13", "15"]}
#
# - name: "my template rule 2"
# labelsTemplate: |
# {{ range .pci.device }}vendor.io/my-pci-device.{{ .class }}-{{ .device }}=with-cpuid
# {{ end }}
# matchFeatures:
# - feature: pci.device
# matchExpressions:
# class: {op: InRegexp, value: ["^06"]}
  #            vendor: {op: In, value: ["8086"]}
# - feature: cpu.cpuid
# matchExpressions:
# AVX: {op: Exists}
#
# # The following examples demonstrate vars field and back-referencing
# # previous labels and vars
# - name: "my dummy kernel rule"
# labels:
# "vendor.io/my.kernel.feature": "true"
# matchFeatures:
# - feature: kernel.version
# matchExpressions:
# major: {op: Gt, value: ["2"]}
#
# - name: "my dummy rule with no labels"
# vars:
# "my.dummy.var": "1"
# matchFeatures:
# - feature: cpu.cpuid
# matchExpressions: {}
#
# - name: "my rule using backrefs"
# labels:
# "vendor.io/my.backref.feature": "true"
# matchFeatures:
# - feature: rule.matched
# matchExpressions:
# vendor.io/my.kernel.feature: {op: IsTrue}
# my.dummy.var: {op: Gt, value: ["0"]}
#
# - name: "kconfig template rule"
# labelsTemplate: |
# {{ range .kernel.config }}kconfig-{{ .Name }}={{ .Value }}
# {{ end }}
# matchFeatures:
# - feature: kernel.config
# matchName: {op: In, value: ["SWAP", "X86", "ARM"]}
### <NFD-WORKER-CONF-END-DO-NOT-REMOVE>
metricsPort: 8081
healthPort: 8082
daemonsetAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [ "ALL" ]
readOnlyRootFilesystem: true
runAsNonRoot: true
# runAsUser: 1000
livenessProbe:
grpc:
port: 8082
initialDelaySeconds: 10
# failureThreshold: 3
# periodSeconds: 10
# timeoutSeconds: 1
readinessProbe:
grpc:
port: 8082
initialDelaySeconds: 5
failureThreshold: 10
# periodSeconds: 10
# timeoutSeconds: 1
# successThreshold: 1
serviceAccount:
# Specifies whether a service account should be created.
# We create this by default to make it easier for downstream users to apply PodSecurityPolicies.
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
# specify how many old ControllerRevisions for the DaemonSet to retain.
revisionHistoryLimit:
rbac:
create: true
# Allow users to mount the hostPath /usr/src, useful for RHCOS on s390x
# Does not work on systems without /usr/src AND a read-only /usr, such as Talos
mountUsrSrc: false
resources:
limits:
memory: 512Mi
requests:
cpu: 5m
memory: 64Mi
nodeSelector: {}
tolerations: []
annotations: {}
affinity: {}
priorityClassName: ""
topologyUpdater:
config: ### <NFD-TOPOLOGY-UPDATER-CONF-START-DO-NOT-REMOVE>
## key = node name, value = list of resources to be excluded.
## use * to exclude from all nodes.
  ## an example of what the exclude list should look like
#excludeList:
# node1: [cpu]
# node2: [memory, example/deviceA]
# *: [hugepages-2Mi]
### <NFD-TOPOLOGY-UPDATER-CONF-END-DO-NOT-REMOVE>
enable: false
createCRDs: false
extraArgs: []
extraEnvs: []
hostNetwork: false
serviceAccount:
create: true
annotations: {}
name:
# specify how many old ControllerRevisions for the DaemonSet to retain.
revisionHistoryLimit:
rbac:
create: true
metricsPort: 8081
healthPort: 8082
kubeletConfigPath:
kubeletPodResourcesSockPath:
updateInterval: 60s
watchNamespace: "*"
kubeletStateDir: /var/lib/kubelet
podSecurityContext: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [ "ALL" ]
readOnlyRootFilesystem: true
runAsUser: 0
livenessProbe:
grpc:
port: 8082
initialDelaySeconds: 10
# failureThreshold: 3
# periodSeconds: 10
# timeoutSeconds: 1
readinessProbe:
grpc:
port: 8082
initialDelaySeconds: 5
failureThreshold: 10
# periodSeconds: 10
# timeoutSeconds: 1
# successThreshold: 1
resources:
limits:
memory: 60Mi
requests:
cpu: 50m
memory: 40Mi
nodeSelector: {}
tolerations: []
annotations: {}
daemonsetAnnotations: {}
affinity: {}
podSetFingerprint: true
gc:
enable: true
extraArgs: []
extraEnvs: []
hostNetwork: false
replicaCount: 1
serviceAccount:
create: true
annotations: {}
name:
rbac:
create: true
interval: 1h
podSecurityContext: {}
resources:
limits:
memory: 1Gi
requests:
cpu: 10m
memory: 128Mi
metricsPort: 8081
nodeSelector: {}
tolerations: []
annotations: {}
deploymentAnnotations: {}
affinity: {}
# specify how many old ReplicaSets for the Deployment to retain.
revisionHistoryLimit:
prometheus:
enable: false
scrapeInterval: 10s
labels: {}

View File

@ -0,0 +1,4 @@
{{ .Chart.Name }}-device-plugin-daemonset deployed in namespace '{{ .Release.Namespace }}'
{{- if .Values.labeller.enabled }}
{{ .Chart.Name }}-labeller-daemonset deployed in namespace '{{ .Release.Namespace }}'
{{- end }}

View File

@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "amd-gpu.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "amd-gpu.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "amd-gpu.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "amd-gpu.labels" -}}
helm.sh/chart: {{ include "amd-gpu.chart" . }}
{{ include "amd-gpu.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "amd-gpu.selectorLabels" -}}
app.kubernetes.io/name: {{ include "amd-gpu.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "amd-gpu.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "amd-gpu.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ .Chart.Name }}-device-plugin-daemonset
spec:
selector:
matchLabels:
name: {{ .Chart.Name }}-dp-ds
template:
metadata:
labels:
name: {{ .Chart.Name }}-dp-ds
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.node_selector_enabled }}
{{- with .Values.node_selector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
priorityClassName: system-node-critical
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}-dp-cntr
image: {{ .Values.dp.image.repository }}:{{ .Values.dp.image.tag | default .Chart.AppVersion }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
volumeMounts:
- name: dp
mountPath: /var/lib/kubelet/device-plugins
- name: sys
mountPath: /sys
resources:
{{- toYaml .Values.dp.resources | nindent 12 }}
volumes:
- name: dp
hostPath:
path: /var/lib/kubelet/device-plugins
- name: sys
hostPath:
path: /sys

View File

@ -0,0 +1,78 @@
{{- if .Values.labeller.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cr-{{ .Chart.Name }}-node-labeller
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "list", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: crb-{{ .Chart.Name }}-labeller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cr-{{ .Chart.Name }}-node-labeller
subjects:
- kind: ServiceAccount
name: default
namespace: {{ .Release.Namespace }}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ .Chart.Name }}-labeller-daemonset
spec:
selector:
matchLabels:
name: amdgpu-lr-ds
template:
metadata:
labels:
name: amdgpu-lr-ds
spec:
{{- if .Values.node_selector_enabled }}
{{- with .Values.node_selector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
priorityClassName: system-node-critical
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- image: {{ .Values.lbl.image.repository }}:{{ .Values.lbl.image.tag }}
name: {{ .Chart.Name }}-lr-cntr
imagePullPolicy: Always
workingDir: /root
command: ["./k8s-node-labeller"]
args: ["-vram", "-cu-count", "-simd-count", "-device-id", "-family"]
env:
- name: DS_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
          privileged: true # Needed for /dev
capabilities:
drop: ["ALL"]
volumeMounts:
- name: sys
mountPath: /sys
- name: dev
mountPath: /dev
resources:
{{- toYaml .Values.lbl.resources | nindent 10 }}
volumes:
- name: sys
hostPath:
path: /sys
- name: dev
hostPath:
path: /dev
{{- end }}

View File

@ -0,0 +1,35 @@
nfd:
enabled: false
labeller:
enabled: false
dp:
image:
repository: docker.io/rocm/k8s-device-plugin
# Overrides the image tag whose default is the chart appVersion.
tag: "1.31.0.2"
resources: {}
lbl:
image:
repository: docker.io/rocm/k8s-device-plugin
tag: "labeller-1.31.0.2"
resources: {}
imagePullSecrets: []
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
tolerations:
- key: CriticalAddonsOnly
operator: Exists
node_selector_enabled: false
node_selector:
feature.node.kubernetes.io/pci-0300_1002.present: "true"
kubernetes.io/arch: amd64

View File

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# OWNERS file for helm
OWNERS

View File

@ -0,0 +1,39 @@
annotations:
artifacthub.io/links: |
- name: Instana website
url: https://www.ibm.com/products/instana
- name: Instana Helm charts
url: https://github.com/instana/helm-charts
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Instana Agent
catalog.cattle.io/kube-version: '>=1.21-0'
catalog.cattle.io/release-name: instana-agent
apiVersion: v2
appVersion: 1.288.0
description: Instana Agent for Kubernetes
home: https://www.instana.com/
icon: file://assets/icons/instana-agent.png
kubeVersion: '>=1.21-0'
maintainers:
- email: felix.marx@ibm.com
name: FelixMarxIBM
- email: henning.treu@ibm.com
name: htreu
- email: konrad.ohms@de.ibm.com
name: Konrad-Ohms
- email: fredrik.gundersen@ibm.com
name: FredrikAtIBM
- email: jefiyamj@ibm.com
name: Jefiya-MJ
- email: milica.cvrkota@ibm.com
name: Milica-Cvrkota-IBM
- email: Nagaraj.Kandoor@ibm.com
name: nagaraj-kandoor
- email: Vineeth.Soman@ibm.com
name: vineethsoman03
- email: Rashmi.Swamy@ibm.com
name: rashmiswamyibm
name: instana-agent
sources:
- https://github.com/instana/instana-agent-docker
version: 2.0.9

View File

@ -0,0 +1,54 @@
# Kubernetes Deployment Mode (tech preview)
Instana has always endeavored to make the experience of using Instana as seamless as possible, from auto-instrumentation to one-liner installs. To date, for our customers with Kubernetes clusters containing more than 1,000 entities, this wasn't the case. The Kubernetes sensor as a deployment is one of many steps we're taking to improve the experience of operating Instana in Kubernetes. This is a tech preview; however, we have a high degree of confidence it will work well in your production workloads. The fundamental change moves the Kubernetes sensor from the DaemonSet responsible for monitoring your hosts and processes into its own dedicated Deployment, where it does not contend for resources with other sensors. An overview of this deployment is below:
![kubernetes.deployment.enabled=true](kubernetes.deployment.enabled.png)
This change provides a few primary benefits including:
* Lower load on the Kubernetes api-server as it eliminates per-node pod monitoring.
* Lower load on the Kubernetes api-server as it reduces the endpoint watch to two leader-elector sidecars.
* Lower memory and CPU requests in the DaemonSet as it is no longer responsible for monitoring Kubernetes.
* Elimination of the leader elector sidecar in the DaemonSet as it is only required for the Kubernetes sensor.
* Better performance of the Kubernetes sensor as it is isolated from other sensors and does not contend for CPU and memory.
* Better scaling behaviour, as you can adjust the memory and CPU requirements needed to monitor your clusters without overprovisioning resources cluster-wide.
The primary drawbacks of this model in the tech preview include:
* Reduced control and observability of the Kubernetes-specific agents in the Agent dashboard.
* Some unnecessary features are still enabled in the Kubernetes sensor (e.g. trace sinks and host monitoring).
Some limitations remain unchanged from the previous sensor:
* Clusters with a high number of entities (e.g. pods, deployments, etc.) are likely to exhibit non-deterministic behaviour due to limitations we impose on message sizes. This is unlikely to be experienced in clusters with fewer than 500 hosts.
* The ServiceAccount is shared between both the DaemonSet and the Deployment, meaning there is no change in the security posture. We plan to add an additional service account to limit api-server access to only the Kubernetes sensor Deployment.
## Installation
For clusters with minimal controls, you can install the tech preview with the following Helm command:
```
helm template instana-agent \
--repo https://agents.instana.io/helm \
--namespace instana-agent \
--create-namespace \
--set agent.key=${AGENT_KEY} \
--set agent.endpointHost=${BACKEND_URL} \
--set agent.endpointPort=443 \
--set cluster.name=${CLUSTER_NAME} \
--set zone.name=${ZONE_NAME} \
--set kubernetes.deployment.enabled=true \
instana-agent
```
If your cluster employs Pod Security Policies, you will need the following additional flag:
```
--set podSecurityPolicy.enable=true
```
If you are deploying into an OpenShift 4.x cluster, you will need the following additional flag:
```
--set openshift=true
```

View File

@ -0,0 +1,793 @@
# Instana
Instana is an [APM solution](https://www.instana.com/) built for microservices that enables IT Ops to build applications faster and deliver higher quality services by automating monitoring, tracing and root cause analysis.
This solution is optimized for [Kubernetes](https://www.instana.com/automatic-kubernetes-monitoring/).
This chart adds the Instana Agent to all schedulable nodes in your cluster via a privileged `DaemonSet` and accompanying resources like `ConfigMap`s, `Secret`s and RBAC settings.
## Prerequisites
* Kubernetes 1.21+ OR OpenShift 4.8+
* Helm 3
## Installation
To configure the installation you can either specify the options on the command line using the **--set** switch, or you can edit **values.yaml**.
First, create a namespace for the instana-agent:
```bash
$ kubectl create namespace instana-agent
```
**OpenShift:** When targeting an OpenShift 4.x cluster, ensure the proper permissions are in place before installing the Helm chart; otherwise, the agent pods will not be scheduled correctly.
```bash
$ oc adm policy add-scc-to-user privileged -z instana-agent
```
To install the chart with the release name `instana-agent` and set the values on the command line run:
```bash
$ helm install instana-agent \
--namespace instana-agent \
--repo https://agents.instana.io/helm \
--set agent.key=INSTANA_AGENT_KEY \
--set agent.endpointHost=HOST \
--set zone.name=ZONE_NAME \
--set cluster.name="CLUSTER_NAME" \
instana-agent
```
## Upgrade
The Helm chart internally deploys a Kubernetes Operator that reconciles the agent resources from an agent CustomResource (CR) generated from the Helm values. As the Operator pattern requires a CustomResourceDefinition (CRD) to be present in the cluster before any CRs can be defined, the CRD definition is included in the Helm chart. On initial installations, the chart deploys the CRD before submitting the rest of the artifacts.
Helm has known limitations around handling the CRD lifecycle as outlined in their [documentation](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/).
This leads to the problem that CRD updates are only submitted to the cluster on **initial installation**, but are **NOT applied automatically during upgrades**.
It is also worth noting that CRDs must be removed manually if the chart is to be removed completely; see more details in the [uninstall](#uninstallation) section.
To ensure a proper update, apply the CRD updates before running the upgrade:
```
helm pull --repo https://agents.instana.io/helm --untar instana-agent
kubectl apply -f instana-agent/crds
helm upgrade instana-agent instana-agent \
--namespace instana-agent \
--repo https://agents.instana.io/helm \
--reuse-values
```
This is especially important when migrating from helm charts v1 to v2, as the upgrade will otherwise fail because the CR artifact cannot be created.
### Required Settings
#### Configuring the Instana Backend
In order to report the data it collects to the Instana backend for analysis, the Instana agent must know which backend to report to, and which credentials to use to authenticate, known as "agent key".
As described by the [Install Using the Helm Chart](https://www.instana.com/docs/setup_and_manage/host_agent/on/kubernetes#install-using-the-helm-chart) documentation, you will find the right values for the following fields inside Instana itself:
* `agent.endpointHost`
* `agent.endpointPort`
* `agent.key`
_Note:_ You can find the options mentioned in the [configuration section below](#configuration-reference)
If your agents report into a self-managed Instana unit (also known as "on-prem"), you will also need to configure a "download key", which allows the agent to fetch its components from the Instana repository.
The download key is set via the following value, as shown in the combined example below:
* `agent.downloadKey`
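As a reference, a minimal `values.yaml` combining these settings might look like the following sketch (all values are placeholders; pick the endpoint host for your region):
```yaml
# Sketch of a minimal backend configuration; all values are placeholders.
agent:
  key: "YOUR_AGENT_KEY"                      # required unless agent.keysSecret is used
  endpointHost: ingress-red-saas.instana.io  # US/ROW; Europe uses ingress-blue-saas.instana.io
  endpointPort: 443
  downloadKey: "YOUR_DOWNLOAD_KEY"           # only needed for self-managed (on-prem) units
```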
#### Zone and Cluster
Instana needs to know how to name your Kubernetes cluster and, optionally, how to group your Instana agents in [Custom zones](https://www.instana.com/docs/setup_and_manage/host_agent/configuration/#custom-zones) using the following fields:
* `zone.name`
* `cluster.name`
Either `zone.name` or `cluster.name` is required.
If you omit `cluster.name`, the value of `zone.name` will be used as the cluster name as well.
If you omit `zone.name`, the host zone will be automatically determined by the availability zone information provided by the [supported Cloud providers](https://www.instana.com/docs/setup_and_manage/cloud_service_agents).
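For example, both settings together in a values file (names are placeholders):
```yaml
zone:
  name: my-zone    # placeholder zone name
cluster:
  name: my-cluster # placeholder cluster name
```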
## Uninstallation
To uninstall/delete the `instana-agent` release:
```bash
helm uninstall instana-agent -n instana-agent && kubectl patch agent instana-agent -n instana-agent -p '{"metadata":{"finalizers":null}}' --type=merge &&
kubectl delete crd/agents.instana.io
```
## Configuration Reference
The following table lists the configurable parameters of the Instana chart and their default values.
| Parameter | Description | Default |
| --------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| `agent.configuration_yaml` | Custom content for the agent configuration.yaml file | `nil` See [below](#agent-configuration) for more details |
| `agent.endpointHost` | Instana Agent backend endpoint host | `ingress-red-saas.instana.io` (US and ROW). If in Europe, please override with `ingress-blue-saas.instana.io` |
| `agent.endpointPort` | Instana Agent backend endpoint port | `443` |
| `agent.key` | Your Instana Agent key | `nil` You must provide your own key unless `agent.keysSecret` is specified |
| `agent.downloadKey` | Your Instana Download key | `nil` Usually not required |
| `agent.keysSecret` | As an alternative to specifying `agent.key` and, optionally, `agent.downloadKey`, you can instead specify the name of the secret in the namespace in which you install the Instana agent that carries the agent key and download key | `nil` Usually not required, see [Bring your own Keys secret](#bring-your-own-keys-secret) for more details |
| `agent.additionalBackends` | List of additional backends to report to; it must specify the `endpointHost` and `key` fields, and optionally `endpointPort` | `[]` Usually not required; see [Configuring Additional Backends](#configuring-additional-backends) for more info and examples |
| `agent.tls.secretName` | The name of the secret of type `kubernetes.io/tls` which contains the TLS relevant data. If the name is provided, `agent.tls.certificate` and `agent.tls.key` will be ignored. | `nil` |
| `agent.tls.certificate` | The certificate data, encoded as base64, which will be used to create a new secret of type `kubernetes.io/tls`. | `nil` |
| `agent.tls.key` | The private key data, encoded as base64, which will be used to create a new secret of type `kubernetes.io/tls`. | `nil` |
| `agent.image.name` | The image name to pull | `instana/agent` |
| `agent.image.digest` | The image digest to pull; if specified, it causes `agent.image.tag` to be ignored | `nil` |
| `agent.image.tag` | The image tag to pull; this property is ignored if `agent.image.digest` is specified | `latest` |
| `agent.image.pullPolicy` | Image pull policy | `Always` |
| `agent.image.pullSecrets` | Image pull secrets; if not specified (default) _and_ `agent.image.name` starts with `containers.instana.io`, it will be automatically set to `[{ "name": "containers-instana-io" }]` to match the default secret created in this case. | `nil` |
| `agent.listenAddress` | List of addresses to listen on, or "*" for all interfaces | `nil` |
| `agent.mode` | Agent mode. Supported values are `APM`, `INFRASTRUCTURE`, `AWS` | `APM` |
| `agent.instanaMvnRepoUrl` | Override for the Maven repository URL when the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
| `agent.instanaMvnRepoFeaturesPath` | Override for the Maven repository features path the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
| `agent.instanaMvnRepoSharedPath` | Override for the Maven repository shared path when the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
| `agent.agentReleaseRepoMirrorUrl` | The URL of the agent features repository mirror. For more information, see [Configuring the agent repository as the mirror](https://www.ibm.com/docs/en/instana-observability/current?topic=agents-setting-up-agent-repositories-dynamic-host#configuring-the-agent-repository-as-the-mirror). | `nil` |
| `agent.agentReleaseRepoMirrorUsername` | The username for authentication for the agent features repository mirror. For more information, see [Configuring the agent repository as the mirror](https://www.ibm.com/docs/en/instana-observability/current?topic=agents-setting-up-agent-repositories-dynamic-host#configuring-the-agent-repository-as-the-mirror). | `nil` |
| `agent.agentReleaseRepoMirrorPassword` | The password for authentication for the agent features repository mirror. For more information, see [Configuring the agent repository as the mirror](https://www.ibm.com/docs/en/instana-observability/current?topic=agents-setting-up-agent-repositories-dynamic-host#configuring-the-agent-repository-as-the-mirror). | `nil` |
| `agent.instanaSharedRepoMirrorUrl` | The URL of the agent shared repository mirror. For more information, see [Configuring the agent repository as the mirror](https://www.ibm.com/docs/en/instana-observability/current?topic=agents-setting-up-agent-repositories-dynamic-host#configuring-the-agent-repository-as-the-mirror). | `nil` |
| `agent.instanaSharedRepoMirrorUsername` | The username for authentication for the agent shared repository mirror. For more information, see [Configuring the agent repository as the mirror](https://www.ibm.com/docs/en/instana-observability/current?topic=agents-setting-up-agent-repositories-dynamic-host#configuring-the-agent-repository-as-the-mirror). | `nil` |
| `agent.instanaSharedRepoMirrorPassword` | The password for authentication for the agent shared repository mirror. For more information, see [Configuring the agent repository as the mirror](https://www.ibm.com/docs/en/instana-observability/current?topic=agents-setting-up-agent-repositories-dynamic-host#configuring-the-agent-repository-as-the-mirror). | `nil` |
| `agent.updateStrategy.type` | [DaemonSet update strategy type](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/); valid values are `OnDelete` and `RollingUpdate` | `RollingUpdate` |
| `agent.updateStrategy.rollingUpdate.maxUnavailable` | How many agent pods can be updated at once; this value is ignored if `agent.updateStrategy.type` is different than `RollingUpdate` | `1` |
| `agent.minReadySeconds` | The minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available | `0` |
| `agent.pod.annotations` | Additional annotations to apply to the pod | `{}` |
| `agent.pod.labels` | Additional labels to apply to the Agent pod | `{}` |
| `agent.pod.priorityClassName` | Name of an _existing_ PriorityClass that should be set on the agent pods | `nil` |
| `agent.proxyHost` | Hostname/address of a proxy | `nil` |
| `agent.proxyPort` | Port of a proxy | `nil` |
| `agent.proxyProtocol` | Proxy protocol. Supported proxy types are `http` (for both HTTP and HTTPS proxies), `socks4`, `socks5`. | `nil` |
| `agent.proxyUser` | Username of the proxy auth | `nil` |
| `agent.proxyPassword` | Password of the proxy auth | `nil` |
| `agent.proxyUseDNS` | Boolean if proxy also does DNS | `nil` |
| `agent.pod.limits.cpu` | Container cpu limits in cpu cores | `1.5` |
| `agent.pod.limits.memory` | Container memory limits in MiB | `768Mi` |
| `agent.pod.requests.cpu` | Container cpu requests in cpu cores | `0.5` |
| `agent.pod.requests.memory` | Container memory requests in MiB | `768Mi` |
| `agent.pod.tolerations` | Tolerations for pod assignment | `[]` |
| `agent.pod.affinity` | Affinity for pod assignment | `{}` |
| `agent.pod.volumes` | Custom volumes of the agent pod, see https://kubernetes.io/docs/concepts/storage/volumes/ | `[]` |
| `agent.pod.volumeMounts` | Custom volume mounts of the agent pod, see https://kubernetes.io/docs/concepts/storage/volumes/ | `[]` |
| `agent.serviceMesh.enabled` | Activate Instana Agent JVM monitoring service mesh support for Istio or OpenShift ServiceMesh | `true` |
| `agent.env` | Additional environment variables for the agent | `{}` |
| `agent.redactKubernetesSecrets` | Enable additional secrets redaction for selected Kubernetes resources | `nil` See [Kubernetes secrets](https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/#secrets) for more details. |
| `cluster.name` | Display name of the monitored cluster | Value of `zone.name` |
| `k8s_sensor.deployment.enabled` | Isolate k8sensor with a deployment | `true` |
| `k8s_sensor.deployment.minReadySeconds` | The minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available | `0` |
| `k8s_sensor.image.name` | The k8sensor image name to pull | `gcr.io/instana/k8sensor` |
| `k8s_sensor.image.digest` | The image digest to pull; if specified, it causes `k8s_sensor.image.tag` to be ignored | `nil` |
| `k8s_sensor.image.tag` | The image tag to pull; this property is ignored if `k8s_sensor.image.digest` is specified | `latest` |
| `k8s_sensor.deployment.pod.limits.cpu` | CPU limit for the `k8sensor` pods | `4` |
| `k8s_sensor.deployment.pod.limits.memory` | Memory limit for the `k8sensor` pods | `6144Mi` |
| `k8s_sensor.deployment.pod.requests.cpu` | CPU request for the `k8sensor` pods | `1.5` |
| `k8s_sensor.deployment.pod.requests.memory` | Memory request for the `k8sensor` pods | `1024Mi` |
| `podSecurityPolicy.enable` | Whether a PodSecurityPolicy should be authorized for the Instana Agent pods. Requires `rbac.create` to be `true` as well and it is available until Kubernetes version v1.25. | `false` See [PodSecurityPolicy](https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/#podsecuritypolicy) for more details. |
| `podSecurityPolicy.name` | Name of an _existing_ PodSecurityPolicy to authorize for the Instana Agent pods. If not provided and `podSecurityPolicy.enable` is `true`, a PodSecurityPolicy will be created for you. | `nil` |
| `rbac.create` | Whether RBAC resources should be created | `true` |
| `opentelemetry.grpc.enabled` | Whether to configure the agent to accept telemetry from OpenTelemetry applications via gRPC. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `true` |
| `opentelemetry.http.enabled` | Whether to configure the agent to accept telemetry from OpenTelemetry applications via HTTP. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `true` |
| `prometheus.remoteWrite.enabled` | Whether to configure the agent to accept metrics over its implementation of the `remote_write` Prometheus endpoint. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `false` |
| `service.create` | Whether to create a service that exposes the agents' Prometheus, OpenTelemetry and other APIs inside the cluster. Requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. The `ServiceInternalTrafficPolicy` feature gate needs to be enabled (default: enabled). | `true` |
| `serviceAccount.create` | Whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | Name of the ServiceAccount to use | `instana-agent` |
| `serviceAccount.annotations` | Annotations to add to the service account | `{}` |
| `zone.name` | Zone that detected technologies will be assigned to | `nil` You must provide either `zone.name` or `cluster.name`, see [above](#installation) for details |
| `zones` | Multi-zone daemonset configuration. | `nil` see [below](#multiple-zones) for details |
| `k8s_sensor.podDisruptionBudget.enabled` | Whether to create DisruptionBudget for k8sensor to limit the number of concurrent disruptions | `false` |
| `k8s_sensor.deployment.pod.affinity` | `k8sensor` deployment affinity format | `podAntiAffinity` defined in `values.yaml` |
### Agent Modes
The agent can run in either `APM` or `INFRASTRUCTURE` mode.
The default is `APM`; if you want to override that, set the value:
* `agent.mode`
For more information on agent modes, refer to the [Host Agent Modes](https://www.instana.com/docs/setup_and_manage/host_agent#host-agent-modes) documentation.
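For example, a minimal values snippet switching the agents to infrastructure-only monitoring:
```yaml
agent:
  mode: INFRASTRUCTURE  # default is APM
```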
### Agent Configuration
Besides the settings listed above, there are many more settings that can be applied to the agent via the so-called "Agent Configuration File", often also referred to as `configuration.yaml` file.
An overview of the settings that can be applied is provided in the [Agent Configuration File](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#agent-configuration-file) documentation.
To configure the agent, you need to provide the configuration via the `agent.configuration_yaml` parameter in [values.yaml](values.yaml). Like all other settings, the agent configuration is handled by the Operator and stored internally in Kubernetes Secret resources. This way, even plain-text passwords are not exposed in any ConfigMap after deployment.
This configuration will be used for all Instana Agents on all nodes. Visit the [agent configuration documentation](https://docs.instana.io/setup_and_manage/host_agent/#agent-configuration-file) for more details on configuration options.
_Note:_ This Helm Chart does not support configuring [Multiple Configuration Files](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#multiple-configuration-files).
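As an illustrative sketch, host tags could be set via `agent.configuration_yaml` as follows (the tag values are placeholders):
```yaml
agent:
  configuration_yaml: |
    # Contents of the agent configuration.yaml file; tags are placeholders.
    com.instana.plugin.host:
      tags:
        - 'dev'
        - 'team-green'
```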
### Agent Pod Sizing
The `agent.pod.requests.cpu`, `agent.pod.requests.memory`, `agent.pod.limits.cpu` and `agent.pod.limits.memory` settings allow you to change the sizing of the `instana-agent` pods.
If you are using the [Kubernetes Sensor Deployment](#kubernetes-sensor-deployment) functionality, you may be able to reduce the default amount of resources, and especially memory, allocated to the Instana agents that monitor your applications.
Actual sizing data depends very much on how many pods, containers and applications are monitored, and how many traces they generate, so we cannot really provide a rule of thumb for the sizing.
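As a starting point, a sketch overriding these four settings in a values file (numbers are examples, not recommendations):
```yaml
agent:
  pod:
    requests:
      cpu: 0.5        # chart default
      memory: 768Mi   # chart default
    limits:
      cpu: 2          # example override of the 1.5 default
      memory: 1024Mi  # example override of the 768Mi default
```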
### Bring your own Keys secret
In case you have automation that creates secrets for you, it may not be desirable for this Helm chart to create a secret containing the `agent.key` and `agent.downloadKey`.
In this case, you can instead specify the name of an already-existing secret, in the namespace in which you install the Instana agent, that carries the agent key and download key.
The secret you specify _must_ have a field called `key`, which would contain the value you would otherwise set to `agent.key`, and _may_ contain a field called `downloadKey`, which would contain the value you would otherwise set to `agent.downloadKey`.
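A sketch of such a secret (the name `instana-agent-keys` is a placeholder; reference it via `agent.keysSecret`):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent-keys   # placeholder; pass this name as agent.keysSecret
  namespace: instana-agent
type: Opaque
stringData:
  key: "YOUR_AGENT_KEY"            # mandatory field
  downloadKey: "YOUR_DOWNLOAD_KEY" # optional field
```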
### Configuring Additional Configuration Files
[Multiple configuration files](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#multiple-configuration-files) is a capability of the Instana agent that allows for modularity in its configurations files.
The experimental `agent.configuration.autoMountConfigEntries` setting uses functionality available in Helm 3.1+ to automatically look up the entries of the default `instana-agent` ConfigMap, and to mount, as agent configuration files in the `instana-agent` container under the `/opt/instana/agent/etc/instana` directory, all ConfigMap entries with keys that match the `configuration-*.yaml` scheme.
**IMPORTANT:** Needs Helm 3.1+ as it is built on the `lookup` function
**IMPORTANT:** Editing the ConfigMap to add keys requires a `helm upgrade` to take effect
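Assuming the experimental flag behaves as described, enabling it is a single values entry:
```yaml
agent:
  configuration:
    autoMountConfigEntries: true
```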
### Configuring Additional Backends
You may want to have your Instana agents report to multiple backends.
The first backend must be configured as shown in the [Configuring the Instana Backend](#configuring-the-instana-backend); every backend after the first, is configured in the `agent.additionalBackends` list in the [values.yaml](values.yaml) as follows:
```yaml
agent:
additionalBackends:
# Second backend
- endpointHost: my-instana.instana.io # endpoint host; e.g., my-instana.instana.io
endpointPort: 443 # default is 443, so this line could be omitted
key: ABCDEFG # agent key for this backend
# Third backend
- endpointHost: another-instana.instana.io # endpoint host; e.g., my-instana.instana.io
endpointPort: 1444 # default is 443, so this line could be omitted
key: LMNOPQR # agent key for this backend
```
The snippet above configures the agent to report to two additional backends.
The same effect as the above can be accomplished via the command line via:
```sh
$ helm install -n instana-agent instana-agent ... \
--repo https://agents.instana.io/helm \
--set 'agent.additionalBackends[0].endpointHost=my-instana.instana.io' \
--set 'agent.additionalBackends[0].endpointPort=443' \
--set 'agent.additionalBackends[0].key=ABCDEFG' \
--set 'agent.additionalBackends[1].endpointHost=another-instana.instana.io' \
--set 'agent.additionalBackends[1].endpointPort=1444' \
--set 'agent.additionalBackends[1].key=LMNOPQR' \
instana-agent
```
_Note:_ There is no hard limitation on the number of backends an Instana agent can report to, although each comes at the cost of a slight increase in CPU and memory consumption.
### Configuring a Proxy between the Instana agents and the Instana backend
If your infrastructure uses a proxy, you should ensure that you set values for:
* `agent.proxyHost`
* `agent.proxyPort`
* `agent.proxyProtocol`
* `agent.proxyUser`
* `agent.proxyPassword`
* `agent.proxyUseDNS`
#### Same Proxy for Repository and the Instana backend
If the same proxy is utilized for both backend and repository, configure only the 'Agent' proxy settings using the following parameter:
```
--set agent.proxyHost='<Hostname/address of a proxy>'
```
#### Separate Proxies for Repository and the Instana backend
In scenarios where distinct proxy settings are employed for the backend and repository, both proxies must be configured separately. The key is to ensure that `INSTANA_REPOSITORY_PROXY_ENABLED=true` is set.
To use this variant, execute helm install with the following additional parameters:
```
--set agent.proxyHost='Hostname/address of a proxy'
--set agent.env.INSTANA_REPOSITORY_PROXY_ENABLED='true'
--set agent.env.INSTANA_REPOSITORY_PROXY_HOST='Hostname/address of a proxy'
```
Make sure to replace 'Hostname/address of a proxy' with the actual hostname or address of your proxy.
### Configuring which Networks the Instana Agent should listen on
If your infrastructure has multiple networks defined, you might need to allow the agent to listen on all addresses (typically with the value set to `*`):
* `agent.listenAddress`
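In values form, that is simply:
```yaml
agent:
  listenAddress: "*"  # listen on all interfaces
```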
### Setup TLS Encryption for Agent Endpoint
TLS encryption can be added via two variants.
Either an existing secret can be used or a certificate and a private key can be used during the installation.
#### Using existing secret
An existing secret of type `kubernetes.io/tls` can be used.
Only the `secretName` must be provided during the installation with `--set 'agent.tls.secretName=<YOUR_SECRET_NAME>'`.
The files from the provided secret are then mounted into the agent.
#### Provide certificate and private key
Alternatively, a certificate and a private key can be provided during the installation.
The certificate and private key must be base64 encoded.
To use this variant, execute `helm install` with the following additional parameters:
```
--set 'agent.tls.certificate=<YOUR_CERTIFICATE_BASE64_ENCODED>'
--set 'agent.tls.key=<YOUR_PRIVATE_KEY_BASE64_ENCODED>'
```
If `agent.tls.secretName` is set, then `agent.tls.certificate` and `agent.tls.key` are ignored.
### Development and debugging options
These options will be rarely used outside of development or debugging of the agent.
| Parameter | Description | Default |
| ----------------------- | ------------------------------------------------ | ------- |
| `agent.host.repository` | Host path to mount as the agent maven repository | `nil` |
### Kubernetes Sensor Deployment
_Note: leader-elector and kubernetes sensor are fully deprecated and can no longer be chosen. Instead, the k8s_sensor will be used._
The Helm chart will schedule additional Instana agents running _only_ the Kubernetes sensor that runs in a dedicated `k8sensor` Deployment inside the `instana-agent` namespace.
The pods containing agents that run only the Kubernetes sensor are called `k8sensor` pods.
When `k8s_sensor.deployment.enabled=true`, the `instana-agent` pods running inside the daemonset do _not_ contain the `leader-elector` container, which is instead scheduled inside the `k8sensor` pods.
The `instana-agent` and `k8sensor` pods share the same configurations in terms of backend-related configurations (including [additional backends](#configuring-additional-backends)).
The `k8s_sensor.deployment.pod.requests.cpu`, `k8s_sensor.deployment.pod.requests.memory`, `k8s_sensor.deployment.pod.limits.cpu` and `k8s_sensor.deployment.pod.limits.memory` settings, allow you to change the sizing of the `k8sensor` pods.
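Putting the deployment toggle and sizing together, a values sketch using the chart defaults from the table above could look like:
```yaml
k8s_sensor:
  deployment:
    enabled: true
    pod:
      requests:
        cpu: 1.5        # chart default
        memory: 1024Mi  # chart default
      limits:
        cpu: 4          # chart default
        memory: 6144Mi  # chart default
```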
### Multiple Zones
You can list zones, using affinities and tolerations to associate a dedicated daemonset with each (possibly tainted) node pool. Each zone supports the following fields:
* `name` (required) - zone name.
* `mode` (optional) - instana agent mode (e.g. APM, INFRASTRUCTURE, etc).
* `affinity` (optional) - standard kubernetes pod affinity list for the daemonset.
* `tolerations` (optional) - standard kubernetes pod toleration list for the daemonset.
The following example creates two zones, `workers` and `api-server`:
```yaml
zones:
- name: workers
mode: APM
- name: api-server
mode: INFRASTRUCTURE
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/control-plane
operator: Exists
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
```
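Such a zones definition can be saved to a custom values file (the name `zones.yaml` is a placeholder) and applied with the same installation pattern used elsewhere in this document:
```bash
helm install instana-agent \
  --repo https://agents.instana.io/helm \
  --namespace instana-agent \
  --set agent.key='<your_agent_key>' \
  --set agent.endpointHost='<your_host_agent_endpoint>' \
  --set cluster.name='<your_cluster_name>' \
  -f zones.yaml \
  instana-agent
```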
### Volumes and volumeMounts
You can define volumes and volumeMounts in the helm configuration to make files available to the agent pod, e.g. to provide client certificates or custom certificate authorities for a sensor to reach a monitored target.
Example:
An application requires the usage of a customer-provided Java keystore (JKS) to interact with a monitored process, e.g. IBM MQ. The keystore file is created as a secret in the cluster.
To create the secret, the file can be uploaded with the Kubernetes `kubectl` or OpenShift `oc` command line tools.
```bash
kubectl create secret generic keystore-secret-name --from-file=./application.jks -n instana-agent
```
Create a custom values file for the helm installation, e.g. `custom-values.yaml`, and adjust the following content accordingly.
```yaml
agent:
pod:
volumeMounts:
- mountPath: /opt/instana/agent/etc/application.jks
name: jks-mount
subPath: application.jks
volumes:
- name: jks-mount
secret:
secretName: keystore-secret-name
```
To deploy the helm chart with the custom mount, specify the configuration as an additional parameter.
```bash
helm install instana-agent \
--repo https://agents.instana.io/helm \
--namespace instana-agent \
--set agent.key='<your_agent_key>' \
--set agent.endpointHost='<your_host_agent_endpoint>' \
--set agent.endpointPort=443 \
--set cluster.name='<your_cluster_name>' \
--set zone.name='<your_zone_name>' \
-f custom-values.yaml \
instana-agent
```
See [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/) for other examples of volume options.
The mounted file will be available inside the agent pods after the installation.
```
$ kubectl exec instana-agent-xxxxx -- ls /opt/instana/agent/etc/application.jks
/opt/instana/agent/etc/application.jks
```
## Changelog
### 2.0.9
* Fix rendering of the agent zones
### 2.0.8
* Add option to define custom volumes and volumeMounts for the agent pod
### 2.0.7
* Fix handling of opentelemetry settings, if `spec.opentelemetry.grpc.enabled` or `spec.opentelemetry.http.enabled` are set to false
* Update operator to v2.1.13
### 2.0.6
* Rename flags for the agent repository mirror configuration
### 2.0.5
* Add flags for the agent repository mirror configuration
### 2.0.4
* Update to operator v2.1.10: Add roles for node metrics and stats for k8sensor
### 2.0.3
* Fix k8sensor deployment rendering
### 2.0.2
* Hardening for endpointPort and configuration parsing
### 2.0.1
* Fix rendering of the `spec.agent.env`, `spec.configuration_yaml`, `spec.agent.image.pullSecrets`
### 2.0.0
* Deploy the instana-agent operator instead of managing agent artifacts directly
* Always use the k8sensor, the deprecated kubernetes sensor is no longer supported (this is an internal change, Kubernetes clusters will still report into the Instana backend)
* BREAKING CHANGE: Due to limitations of helm to manage Custom Resource Definition (CRD) updates, the upgrade requires applying the CRD from the helm chart's crds folder manually. Find more details in the [upgrade](#upgrade) section.
### 1.2.74
* Enable OTLP by default
### 1.2.73
* Fix label for `io.instana/zone` to reflect the real agent mode
* Change the charts flag from ENABLE_AGENT_SOCKET to serviceMesh.enabled
* Add type: DirectoryOrCreate to DaemonSet definitions to ensure required directories exist
### 1.2.72
* Add minReadySeconds field to agent daemonset yaml
### 1.2.71
* Fix usage of digest for pulling images
### 1.2.70
* Allow the configuration of `minReadySeconds` for the agent daemonset and deployment
### 1.2.69
* Add possibility to set annotations for the serviceAccount.
### 1.2.68
* Add leader elector configuration back to allow for proper deprecation
### 1.2.67
* Fix variable name in the K8s deployment
### 1.2.66
* Align the default memory requests to 768Mi for the agent container.
### 1.2.65
* Ensure we have appropriate SCC when running with new K8s sensor.
### 1.2.64
* Remove RBAC not required by the agent when kubernetes-sensor is disabled.
* Add settings override for k8s-sensor affinity
* Add optional pod disruption budget for k8s-sensor
### 1.2.63
* Add RBAC required to allow access to /metrics endpoints.
### 1.2.62
* Include k8s-sensor resources in the default static YAML definitions
### 1.2.61
* Increase timeout and initialDelay for the Agent container
* Add OTLP ports to headless service
### 1.2.60
* Enable the k8s_sensor by default
### 1.2.59
* Introduce unique selectorLabels and commonLabels for k8s-sensor deployment
### 1.2.58
* Default to `internalTrafficPolicy` instead of `topologyKeys` for rendering of static YAMLs
### 1.2.57
* Fix vulnerability in the leader-elector image
### 1.2.49
* Add zone name to label `io.instana/zone` in daemonset
### 1.2.48
* Set env var `INSTANA_KUBERNETES_REDACT_SECRETS` true if `agent.redactKubernetesSecrets` is enabled.
* Use feature PSP flag in k8sensor ClusterRole only when `podsecuritypolicy.enable` is true.
### 1.2.47
* Roll back the changes from version 1.2.46 to be compatible with the Agent Operator installation
### 1.2.46
* Use K8sensor by default.
* `kubernetes.deployment.enabled` setting overrides the `k8s_sensor.deployment.enabled` setting.
* Use feature PSP flag in k8sensor ClusterRole only when `podsecuritypolicy.enable` is true.
* Throw failure if customer specifies proxy with k8sensor.
* Set env var `INSTANA_KUBERNETES_REDACT_SECRETS` true if `agent.redactKubernetesSecrets` is enabled.
### 1.2.45
* Use agent key secret in k8sensor deployment.
### 1.2.44
* Add support for enabling the hot-reload of `configuration.yaml` when the default `instana-agent` ConfigMap changes
* Enablement is done via the flag `--set agent.configuration.hotreloadEnabled=true`
### 1.2.43
* Bump leader-elector image to v0.5.16 (Update dependencies)
### 1.2.42
* Add support for creating multiple zones within the same cluster using affinity and tolerations.
### 1.2.41
* Add additional permissions (HPA, ResourceQuotas, etc) to k8sensor clusterrole.
### 1.2.40
* Mount all system mounts with `mountPropagation: HostToContainer`.
### 1.2.39
* Add NO_PROXY to k8sensor deployment to prevent api-server requests from being routed to the proxy.
### 1.2.38
* Fix issue related to EKS version format when enabling OTel service.
### 1.2.37
* Fix issue where cluster_zone is used as cluster_name when `k8s_sensor.deployment.enabled=true`.
* Set `HTTPS_PROXY` in k8s deployment when proxy information is set.
### 1.2.36
* Remove Service `topologyKeys`, which was removed in Kubernetes v1.22. Replaced by `internalTrafficPolicy` which is available with Kubernetes v1.21+.
### 1.2.35
* Fix invalid backend port for new Kubernetes sensor (k8sensor)
### 1.2.34
* Add support for new Kubernetes sensor (k8sensor)
* New Kubernetes sensor can be used via the flag `--set k8s_sensor.deployment.enabled=true`
### 1.2.33
* Bump leader-elector image to v0.5.15 (Update dependencies)
### 1.2.32
* Add support for containerd monitoring on TKGI
### 1.2.31
* Bump leader-elector image to v0.5.14 (Update dependencies)
### 1.2.30
* Pull agent image from IBM Cloud Container Registry (icr.io/instana/agent). No code changes have been made.
* Bump leader-elector image to v0.5.13 and pull from IBM Cloud Container Registry (icr.io/instana/leader-elector). No code changes have been made.
### 1.2.29
* Add an additional port to the Instana Agent `Service` definition, for the OpenTelemetry registered IANA port 4317.
### 1.2.28
* Fix deployment when `cluster.name` is not specified. Should be allowed according to docs but previously broke the Pod
when starting up.
### 1.2.27
* Update leader elector image to `0.5.10` to tone down logging and make it configurable
### 1.2.26
* Add TLS support. An existing secret can be used of type `kubernetes.io/tls`. Or provide a certificate and a private key, which creates a new secret.
* Update leader elector image version to 0.5.9 to support PPCle
### 1.2.25
* Add `agent.pod.labels` to add custom labels to the Instana Agent pods
### 1.2.24
* Bump leader-elector image to v0.5.8 which includes a health-check endpoint. Update the `livenessProbe`
correspondingly.
### 1.2.23
* Bump leader-elector image to v0.5.7 to fix a potential Golang bug in the elector
### 1.2.22
* Fix templating scope when defining multiple backends
### 1.2.21
* Internal updates
### 1.2.20
* Upgrade leader-elector image to v0.5.6 to enable usage on s390x and arm64
### 1.2.18 / 1.2.19
* Internal change on generated DaemonSet YAML from the Helm charts
### 1.2.17
* Update Pod Security Policies as the `readOnly: true` appears not to be working for the mount points and
actually causes the Agent deployment to fail when these policies are enforced in the cluster.
### 1.2.16
* Add configuration option for `INSTANA_MVN_REPOSITORY_URL` setting on the Agent container.
### 1.2.15
* Internal pipeline changes. No significant changes to the Helm charts
### v1.2.14
* Update Agent container mounts. Make some read-only as we don't need all mounts with read-write permissions.
Additionally add the mount for `/var/data` which is needed in certain environments for the Agent to function
properly.
### v1.2.13
* Update memory settings specifically for the Kubernetes sensor (Technical Preview)
### v1.2.11
* Simplify setup for using OpenTelemetry and the Prometheus `remote_write` endpoint using the `opentelemetry.enabled` and `prometheus.remoteWrite.enabled` settings, respectively.
### v1.2.9
* **Technical Preview:** Introduce a new mode of running to the Kubernetes sensor using a dedicated deployment.
See the [Kubernetes Sensor Deployment](#kubernetes-sensor-deployment) section for more information.
### v1.2.7
* Fix: Make service opt-in, as it uses functionality (`topologyKeys`) that is available only in K8S 1.17+.
### v1.2.6
* Fix bug that might cause some OpenShift-specific resources to be created in other flavours of Kubernetes.
### v1.2.5
* Introduce the `instana-agent:instana-agent` Kubernetes service that allows you to talk to the Instana agent on the same node.
### v1.2.3
* Bug fix: Extend the built-in Pod Security Policy to cover the Docker socket mount for Tanzu Kubernetes Grid systems.
### v1.2.1
* Support OpenShift 4.x: just add `--set openshift=true` to the usual settings, and off you go :-)
* Restructure documentation for consistency and readability
* Deprecation: Helm 2 is no longer supported; the minimum Helm API version is now v2, which will make Helm 2 refuse to process the chart.
### v1.1.10
* Some linting of the whitespaces in the generated YAML
### v1.1.9
* Update the README to replace all references of `stable/instana-agent` with specifically setting the repo flag to `https://agents.instana.io/helm`.
* Add support for TKGI and PKS systems, providing a workaround for the [unexpected Docker socket location](https://github.com/cloudfoundry-incubator/kubo-release/issues/329).
### v1.1.7
* Store the cluster name in a new `cluster-name` entry of the `instana-agent` ConfigMap rather than directly as the value of the `INSTANA_KUBERNETES_CLUSTER_NAME`, so that you can edit the cluster name in the ConfigMap in deployments like VMware Tanzu Kubernetes Grid in which, when installing the Instana agent over the [Instana tile](https://www.instana.com/docs/setup_and_manage/host_agent/on/vmware_tanzu), you do not have direct control over the configuration of the cluster name.
If you edit the ConfigMap, you will need to delete the `instana-agent` pods for its new value to take effect.
### v1.1.6
* Allow user-specified memory measurement units in `agent.pod.requests.memory` and `agent.pod.limits.memory`.
If the value set is numerical, the Chart will assume it to be expressed in `Mi` for backwards compatibility.
* Exposed `agent.updateStrategy.type` and `agent.updateStrategy.rollingUpdate.maxUnavailable` settings.
### v1.1.5
Restore compatibility with Helm 2 that was broken in v1.1.4 by the usage of the `lookup` function, a function actually introduced only with Helm 3.1.
Coincidentally, this has been an _excellent_ opportunity to introduce `helm lint` to our validation pipeline and end-to-end tests with Helm 2 ;-)
### v1.1.4
* Bring-your-own secret for agent keys: using the new `agent.keysSecret` setting, you can specify the name of the secret that contains the agent key and, optionally, the download key; refer to [Bring your own Keys secret](#bring-your-own-keys-secret) for more details.
* Add support for affinities for the instana agent pod via the `agent.pod.affinity` setting.
* Put some love into the ArtifactHub.io metadata; likely to add some more improvements related to this over time.
### v1.1.3
* No new features, just ironing some wrinkles out of our release automation.
### v1.1.2
* Improvement: Seamless support for Instana static agent images: When using an `agent.image.name` starting with `containers.instana.io`, automatically create a secret called `containers-instana-io` containing the `.dockerconfigjson` for `containers.instana.io`, using `_` as username and `agent.downloadKey` or, if missing, `agent.key` as password. If you want to control the creation of the image pull secret, or disable it, you can use `agent.image.pullSecrets`, passing to it the YAML to use for the `imagePullSecrets` field of the Daemonset spec, including an empty array `[]` to mount no pull secrets, no matter what.
### v1.1.1
* Fix: Recreate the `instana-agent` pods when there is a change in one of the following configurations, which are mapped to the chart-managed ConfigMap:
* `agent.configuration_yaml`
* `agent.additional_backends`
The pod recreation is achieved by annotating the `instana-agent` Pod with a new `instana-configuration-hash` annotation that has, as value, the SHA-1 hash of the configurations used to populate the ConfigMap.
This way, when the configuration changes, the respective change in the `instana-configuration-hash` annotation will cause the agent pods to be recreated.
This technique has been described at [1] (or, at least, that is where we learned about it) and it is pretty cool :-)
### v1.1.0
* Improvement: The `instana-agent` Helm chart has a new home at `https://agents.instana.io/helm` and `https://github.com/instana/helm-charts/instana-agent`!
This release is functionally equivalent to `1.0.34`, but we bumped the major to denote the new location ;-)
## References
[1] ["Using Kubernetes Helm to push ConfigMap changes to your Deployments", by Sander Knape; Mar 7, 2019](https://sanderknape.com/2019/03/kubernetes-helm-configmaps-changes-deployments/)

View File

@ -0,0 +1,5 @@
# Instana
Instana is an [APM solution](https://www.instana.com/) built for microservices that enables IT Ops to build applications faster and deliver higher quality services by automating monitoring, tracing, and root cause analysis. This solution is optimized for [Rancher](https://www.instana.com/rancher/).
This chart adds the Instana Agent to all schedulable nodes in your cluster via a `DaemonSet`.

Binary file not shown.


View File

@ -0,0 +1,20 @@
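# Architecture diagram for the "kubernetes.deployment.enabled" mode, rendered
# with the Python "diagrams" package (requires Graphviz to be installed).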
from diagrams import Cluster, Diagram
from diagrams.k8s.compute import Pod
from diagrams.k8s.podconfig import ConfigMap
with Diagram("kubernetes.deployment.enabled", show=True, direction="LR"):
ds = None
deploy = None
with Cluster("Namespace\ninstana-agent"):
with Cluster("Deployment\nkubernetes-sensor"):
deploy = Pod("2 Replicas\nKubernetes Sensor")
with Cluster("DaemonSet\ninstana-agent"):
ds = Pod('Per Node\nHost & APM')
cm = ConfigMap("instana-agent")
dcm = ConfigMap("instana-agent-deployment")
cm >> deploy
cm >> ds
dcm >> deploy

View File

@ -0,0 +1,236 @@
questions:
# Basic agent configuration
- variable: agent.key
label: agent.key
description: "Your Instana Agent key is the secret token which your agent uses to authenticate to Instana's servers"
type: string
required: true
group: "Agent Configuration"
- variable: agent.endpointHost
label: agent.endpointHost
description: "The hostname of the Instana server your agents will connect to. Defaults to ingress-red-saas.instana.io for US and ROW. If in Europe, please use ingress-blue-saas.instana.io"
type: string
required: true
default: "ingress-red-saas.instana.io"
group: "Agent Configuration"
- variable: zone.name
label: zone.name
description: "Custom zone that detected technologies will be assigned to"
type: string
required: true
group: "Agent Configuration"
# Advanced agent configuration
- variable: advancedAgentConfiguration
description: "Show advanced configuration for the Instana Agent"
label: Show advanced configuration
type: boolean
default: false
show_subquestion_if: true
group: "Advanced Agent Configuration"
subquestions:
- variable: agent.configuration_yaml
label: agent.configuration_yaml (Optional)
description: "Custom content for the agent configuration.yaml file in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.downloadKey
label: agent.downloadKey (Optional)
description: "Your Instana download key"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.endpointPort
label: agent.endpointPort
description: "The Agent backend port number (as a string) of the Instana server your agents will connect to"
type: string
required: true
default: "443"
group: "Advanced Agent Configuration"
- variable: agent.image.name
label: agent.image.name
description: "The name of the Instana Agent container image"
type: string
required: true
default: "instana/agent"
group: "Advanced Agent Configuration"
- variable: agent.image.tag
label: agent.image.tag
description: "The tag name of the Instana Agent container image"
type: string
required: true
default: "latest"
group: "Advanced Agent Configuration"
- variable: agent.image.pullPolicy
label: agent.image.pullPolicy
description: "Specifies when to pull the Instana Agent image container"
type: string
required: true
default: "Always"
group: "Advanced Agent Configuration"
- variable: agent.listenAddress
label: agent.listenAddress (Optional)
description: "The IP address the agent HTTP server will listen to, or '*' for all interfaces"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.mode
label: agent.mode (Optional)
description: "Agent mode. Possible options are: APM, INFRASTRUCTURE or AWS"
type: enum
options:
- "APM"
- "INFRASTRUCTURE"
- "AWS"
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.annotations
label: agent.pod.annotations (Optional)
description: "Additional annotations to be added to the agent pods in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.limits.cpu
label: agent.pod.limits.cpu
description: "CPU units allocation limits for the agent pods"
type: string
required: true
default: "1.5"
group: "Advanced Agent Configuration"
- variable: agent.pod.limits.memory
label: agent.pod.limits.memory
description: "Memory allocation limits in MiB for the agent pods"
type: int
required: true
default: 512
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyHost
label: agent.pod.proxyHost (Optional)
description: "Hostname/address of a proxy. Sets the INSTANA_AGENT_PROXY_HOST environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyPort
label: agent.pod.proxyPort (Optional)
description: "Port of a proxy. Sets the INSTANA_AGENT_PROXY_PORT environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyProtocol
label: agent.pod.proxyProtocol (Optional)
description: "Proxy protocol. Sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable. Supported proxy types are http, socks4, socks5"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyUser
label: agent.pod.proxyUser (Optional)
description: "Username of the proxy auth. Sets the INSTANA_AGENT_PROXY_USER environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyPassword
label: agent.pod.proxyPassword (Optional)
description: "Password of the proxy auth. Sets the INSTANA_AGENT_PROXY_PASSWORD environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyUseDNS
label: agent.pod.proxyUseDNS (Optional)
description: "Boolean if proxy also does DNS. Sets the INSTANA_AGENT_PROXY_USE_DNS environment variable"
type: enum
options:
- "true"
- "false"
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.requests.cpu
label: agent.pod.requests.cpu
description: "Requested CPU units allocation for the agent pods"
type: string
required: true
default: "0.5"
group: "Advanced Agent Configuration"
- variable: agent.pod.requests.memory
label: agent.pod.requests.memory
description: "Requested memory allocation in MiB for the agent pods"
type: int
required: true
default: 512
group: "Advanced Agent Configuration"
- variable: agent.pod.tolerations
label: agent.pod.tolerations (Optional)
description: "Tolerations to influence agent pod assignment in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.redactKubernetesSecrets
label: agent.redactKubernetesSecrets (Optional)
description: "Enable additional secrets redaction for selected Kubernetes resources"
type: boolean
required: false
default: false
group: "Advanced Agent Configuration"
- variable: cluster.name
label: cluster.name (Optional)
description: "The name that will be assigned to this cluster in Instana. See the 'Installing the Chart' section in the 'Detailed Descriptions' tab for more details"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: leaderElector.image.name
label: leaderElector.image.name
description: "The name of the leader elector container image"
type: string
required: true
default: "instana/leader-elector"
group: "Advanced Agent Configuration"
- variable: leaderElector.image.tag
label: leaderElector.image.tag
description: "The tag name of the leader elector container image"
type: string
required: true
default: "0.5.4"
group: "Advanced Agent Configuration"
- variable: leaderElector.port
label: leaderElector.port
description: "The port on which the leader elector sidecar is exposed"
type: int
required: true
default: 42655
group: "Advanced Agent Configuration"
- variable: podSecurityPolicy.enable
label: podSecurityPolicy.enable (Optional)
description: "Specifies whether a PodSecurityPolicy should be authorized for the Instana Agent pods. Requires `rbac.create` to also be `true`"
type: boolean
show_if: "rbac.create=true"
required: false
default: false
group: "Pod Security Policy Configuration"
- variable: podSecurityPolicy.name
label: podSecurityPolicy.name (Optional)
description: "The name of an existing PodSecurityPolicy you would like to authorize for the Instana Agent pods. If not set and `podSecurityPolicy.enable` is `true`, a PodSecurityPolicy will be created with a name generated using the fullname template"
type: string
show_if: "rbac.create=true&&podSecurityPolicy.enable=true"
required: false
group: "Pod Security Policy Configuration"
- variable: rbac.create
label: rbac.create
description: "Specifies whether RBAC resources should be created"
type: boolean
required: true
default: true
group: "RBAC Configuration"
- variable: serviceAccount.create
label: serviceAccount.create
description: "Specifies whether a ServiceAccount should be created"
type: boolean
required: true
default: true
show_subquestion_if: true
group: "RBAC Configuration"
subquestions:
- variable: serviceAccount.name
label: Name of the ServiceAccount (Optional)
description: "The name of the ServiceAccount to use. If not set and `serviceAccount.create` is true, a name is generated using the fullname template."
type: string
required: false
group: "RBAC Configuration"

View File

@ -0,0 +1,73 @@
{{- if (and (not (or .Values.agent.key .Values.agent.keysSecret )) (and (not .Values.zone.name) (not .Values.cluster.name))) }}
##############################################################################
#### ERROR: You did not specify your secret agent key. ####
#### ERROR: You also did not specify a zone or name for this cluster. ####
##############################################################################
This agent deployment will be incomplete until you set your agent key and zone or name for this cluster:
helm upgrade {{ .Release.Name }} --reuse-values \
--repo https://agents.instana.io/helm \
--set agent.key=$(YOUR_SECRET_AGENT_KEY) \
--set zone.name=$(YOUR_ZONE_NAME) instana-agent
Alternatively, you may specify a cluster name and the zone will be detected from availability zone information on the host:
helm upgrade {{ .Release.Name }} --reuse-values \
--repo https://agents.instana.io/helm \
--set agent.key=$(YOUR_SECRET_AGENT_KEY) \
--set cluster.name=$(YOUR_CLUSTER_NAME) instana-agent
- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.
- YOUR_ZONE_NAME should be the zone that detected technologies will be assigned to.
- YOUR_CLUSTER_NAME should be the custom name of your cluster.
At least one of zone.name or cluster.name is required. This cluster will be reported with the name of the zone unless you specify a cluster name.
{{- else if (and (not .Values.zone.name) (not .Values.cluster.name)) }}
##############################################################################
#### ERROR: You did not specify a zone or name for this cluster. ####
##############################################################################
This agent deployment will be incomplete until you set a zone for this cluster:
helm upgrade {{ .Release.Name }} --reuse-values \
--repo https://agents.instana.io/helm \
--set zone.name=$(YOUR_ZONE_NAME) instana-agent
Alternatively, you may specify a cluster name and the zone will be detected from availability zone information on the host:
helm upgrade {{ .Release.Name }} --reuse-values \
--repo https://agents.instana.io/helm \
--set cluster.name=$(YOUR_CLUSTER_NAME) instana-agent
- YOUR_ZONE_NAME should be the zone that detected technologies will be assigned to.
- YOUR_CLUSTER_NAME should be the custom name of your cluster.
At least one of zone.name or cluster.name is required. This cluster will be reported with the name of the zone unless you specify a cluster name.
{{- else if not (or .Values.agent.key .Values.agent.keysSecret )}}
##############################################################################
#### ERROR: You did not specify your secret agent key. ####
##############################################################################
This agent deployment will be incomplete until you set your agent key:
helm upgrade {{ .Release.Name }} --reuse-values \
--repo https://agents.instana.io/helm \
--set agent.key=$(YOUR_SECRET_AGENT_KEY) instana-agent
- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.
{{- else -}}
Ensure to run `oc adm policy add-scc-to-user privileged -z instana-agent -n instana-agent` if running on OCP, otherwise agent pods will not be scheduled correctly.
It may take a few moments for the agents to fully deploy. You can see what agents are running by listing resources in the {{ .Release.Namespace }} namespace:
kubectl get all -n {{ .Release.Namespace }}
You can get the logs for all of the agents with `kubectl logs`:
kubectl logs -l app.kubernetes.io/name={{ .Release.Name }} -n {{ .Release.Namespace }} -c instana-agent
{{- end }}

View File

@ -0,0 +1,307 @@
---
apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
name: instana-agent
namespace: instana-agent
spec:
{{- if .Values.zone }}
zone:
name: {{ .Values.zone.name }}
{{- end }}
{{- if .Values.zones }}
zones:
{{- toYaml $.Values.zones | nindent 4 }}
{{- end }}
cluster:
name: {{ .Values.cluster.name }}
agent:
{{- if .Values.agent.mode }}
mode: {{ .Values.agent.mode }}
{{- end }}
{{- if .Values.agent.key }}
key: {{ .Values.agent.key }}
{{- end }}
{{- if .Values.agent.downloadKey }}
downloadKey: {{ .Values.agent.downloadKey }}
{{- end }}
{{- if .Values.agent.keysSecret }}
keysSecret: {{ .Values.agent.keysSecret }}
{{- end }}
{{- if .Values.agent.listenAddress }}
listenAddress: {{ .Values.agent.listenAddress }}
{{- end }}
endpointHost: {{ .Values.agent.endpointHost }}
{{- if eq (typeOf .Values.agent.endpointPort) "string" }}
endpointPort: {{ .Values.agent.endpointPort }}
{{- else }}
endpointPort: {{ .Values.agent.endpointPort | quote }}
{{- end }}
{{- if .Values.agent.instanaMvnRepoUrl }}
instanaMvnRepoUrl: {{ .Values.agent.instanaMvnRepoUrl }}
{{- end }}
{{- if .Values.agent.instanaMvnRepoFeaturesPath }}
instanaMvnRepoFeaturesPath: {{ .Values.agent.instanaMvnRepoFeaturesPath }}
{{- end }}
{{- if .Values.agent.instanaMvnRepoSharedPath }}
instanaMvnRepoSharedPath: {{ .Values.agent.instanaMvnRepoSharedPath }}
{{- end }}
{{- if .Values.agent.agentReleaseRepoMirrorUrl }}
agentReleaseRepoMirrorUrl: {{ .Values.agent.agentReleaseRepoMirrorUrl }}
{{- end }}
{{- if .Values.agent.agentReleaseRepoMirrorUsername }}
agentReleaseRepoMirrorUsername: {{ .Values.agent.agentReleaseRepoMirrorUsername }}
{{- end }}
{{- if .Values.agent.agentReleaseRepoMirrorPassword }}
agentReleaseRepoMirrorPassword: {{ .Values.agent.agentReleaseRepoMirrorPassword }}
{{- end }}
{{- if .Values.agent.instanaSharedRepoMirrorUrl }}
instanaSharedRepoMirrorUrl: {{ .Values.agent.instanaSharedRepoMirrorUrl }}
{{- end }}
{{- if .Values.agent.instanaSharedRepoMirrorUsername }}
instanaSharedRepoMirrorUsername: {{ .Values.agent.instanaSharedRepoMirrorUsername }}
{{- end }}
{{- if .Values.agent.instanaSharedRepoMirrorPassword }}
instanaSharedRepoMirrorPassword: {{ .Values.agent.instanaSharedRepoMirrorPassword }}
{{- end }}
{{- if .Values.agent.additionalBackends }}
additionalBackends:
{{- range $.Values.agent.additionalBackends }}
- endpointHost: {{ .endpointHost }}
{{- if eq (typeOf .endpointPort) "string" }}
endpointPort: {{ .endpointPort }}
{{- else }}
endpointPort: {{ .endpointPort | quote }}
{{- end }}
{{- if .key }}
key: {{ .key }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.tls }}
{{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
tls:
{{- if .Values.agent.tls.secretName }}
secretName: {{ .Values.agent.tls.secretName }}
{{- end }}
{{- if .Values.agent.tls.certificate }}
certificate: {{ .Values.agent.tls.certificate }}
{{- end }}
{{- if .Values.agent.tls.key }}
key: {{ .Values.agent.tls.key }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.image }}
{{- if or .Values.agent.image.name .Values.agent.image.digest .Values.agent.image.tag .Values.agent.image.pullPolicy .Values.agent.image.pullSecrets }}
image:
{{- if .Values.agent.image.name }}
name: {{ .Values.agent.image.name }}
{{- end }}
{{- if .Values.agent.image.digest }}
digest: {{ .Values.agent.image.digest }}
{{- end }}
{{- if .Values.agent.image.tag }}
tag: {{ .Values.agent.image.tag }}
{{- end }}
{{- if .Values.agent.image.pullPolicy }}
pullPolicy: {{ .Values.agent.image.pullPolicy }}
{{- end }}
{{- if .Values.agent.image.pullSecrets }}
pullSecrets:
{{- toYaml $.Values.agent.image.pullSecrets | nindent 6 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.minReadySeconds }}
minReadySeconds: {{ .Values.agent.minReadySeconds }}
{{- end }}
{{- if .Values.agent.updateStrategy }}
updateStrategy:
{{- if .Values.agent.updateStrategy.type }}
type: {{ .Values.agent.updateStrategy.type }}
{{- end }}
{{- if .Values.agent.updateStrategy.rollingUpdate }}
{{- if .Values.agent.updateStrategy.rollingUpdate.maxUnavailable }}
rollingUpdate:
maxUnavailable: {{ .Values.agent.updateStrategy.rollingUpdate.maxUnavailable }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.pod }}
{{- if or .Values.agent.pod.annotations .Values.agent.pod.labels .Values.agent.pod.tolerations .Values.agent.pod.affinity .Values.agent.pod.priorityClassName .Values.agent.pod.requests .Values.agent.pod.limits .Values.agent.pod.nodeSelector .Values.agent.pod.volumeMounts .Values.agent.pod.mounts }}
pod:
{{- if .Values.agent.pod.annotations }}
annotations:
{{- toYaml $.Values.agent.pod.annotations | nindent 8 }}
{{- end }}
{{- if .Values.agent.pod.labels }}
labels:
{{- toYaml $.Values.agent.pod.labels | nindent 8 }}
{{- end }}
{{- if .Values.agent.pod.tolerations }}
tolerations:
{{- toYaml $.Values.agent.pod.tolerations | nindent 8 }}
{{- end }}
{{- if .Values.agent.pod.affinity }}
affinity:
{{- toYaml $.Values.agent.pod.affinity | nindent 8 }}
{{- end }}
{{- if .Values.agent.pod.priorityClassName }}
priorityClassName: {{ .Values.agent.pod.priorityClassName }}
{{- end }}
{{- if .Values.agent.pod.requests }}
{{- if or .Values.agent.pod.requests.memory .Values.agent.pod.requests.cpu }}
requests:
{{- if .Values.agent.pod.requests.memory }}
memory: {{ .Values.agent.pod.requests.memory }}
{{- end }}
{{- if .Values.agent.pod.requests.cpu }}
cpu: {{ .Values.agent.pod.requests.cpu | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.pod.limits }}
{{- if or .Values.agent.pod.limits.memory .Values.agent.pod.limits.cpu }}
limits:
{{- if .Values.agent.pod.limits.memory }}
memory: {{ .Values.agent.pod.limits.memory }}
{{- end }}
{{- if .Values.agent.pod.limits.cpu }}
cpu: {{ .Values.agent.pod.limits.cpu | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.pod.nodeSelector }}
nodeSelector:
{{- toYaml $.Values.agent.pod.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.agent.pod.volumeMounts }}
volumeMounts:
{{- toYaml $.Values.agent.pod.volumeMounts | nindent 8 }}
{{- end }}
{{- if .Values.agent.pod.volumes }}
volumes:
{{- toYaml $.Values.agent.pod.volumes | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.proxyHost }}
proxyHost: {{ .Values.agent.proxyHost }}
{{- end }}
{{- if .Values.agent.proxyPort }}
proxyPort: {{ .Values.agent.proxyPort }}
{{- end }}
{{- if .Values.agent.proxyProtocol }}
proxyProtocol: {{ .Values.agent.proxyProtocol }}
{{- end }}
{{- if .Values.agent.proxyUser }}
proxyUser: {{ .Values.agent.proxyUser }}
{{- end }}
{{- if .Values.agent.proxyPassword }}
proxyPassword: {{ .Values.agent.proxyPassword }}
{{- end }}
{{- if .Values.agent.proxyUseDNS }}
proxyUseDNS: {{ .Values.agent.proxyUseDNS }}
{{- end }}
{{- if .Values.agent.env }}
env:
{{- range $key, $value := .Values.agent.env }}
{{- if eq (typeOf $value) "string" }}
{{ $key }}: {{ $value }}
{{- else }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.agent.configuration_yaml }}
{{ $configuration_yaml_string := .Values.agent.configuration_yaml }}
configuration_yaml: |-
{{ $configuration_yaml_string | indent 6}}
{{- end }}
{{- if and .Values.agent.host .Values.agent.host.repository }}
host:
repository: {{ .Values.agent.host.repository }}
{{- end }}
{{- if .Values.agent.serviceMesh}}
{{- if .Values.agent.serviceMesh.enabled }}
serviceMesh:
enabled: {{ .Values.agent.serviceMesh.enabled }}
{{- end }}
{{- end }}
{{- if .Values.opentelemetry }}
{{- if or ( and (hasKey .Values.opentelemetry "grpc") (hasKey .Values.opentelemetry.grpc "enabled")) ( and (hasKey .Values.opentelemetry "http") (hasKey .Values.opentelemetry.http "enabled")) }}
opentelemetry:
{{- if and (hasKey .Values.opentelemetry "grpc") (hasKey .Values.opentelemetry.grpc "enabled") }}
grpc:
enabled: {{ .Values.opentelemetry.grpc.enabled }}
{{- end }}
{{- if and (hasKey .Values.opentelemetry "http") (hasKey .Values.opentelemetry.http "enabled") }}
http:
enabled: {{ .Values.opentelemetry.http.enabled }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.prometheus }}
{{- if .Values.prometheus.remoteWrite }}
{{- if .Values.prometheus.remoteWrite.enabled }}
prometheus:
remoteWrite:
enabled: {{ .Values.prometheus.remoteWrite.enabled }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.serviceAccount }}
{{- if or .Values.serviceAccount.create .Values.serviceAccount.annotations }}
serviceAccount:
{{- if .Values.serviceAccount.create }}
create: {{ .Values.serviceAccount.create }}
{{- end }}
{{- if .Values.serviceAccount.name }}
name: {{ .Values.serviceAccount.name }}
{{- end }}
{{- if .Values.serviceAccount.annotations }}
annotations:
{{- toYaml $.Values.serviceAccount.annotations | nindent 6 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.podSecurityPolicy }}
{{- if or .Values.podSecurityPolicy.enable .Values.podSecurityPolicy.name }}
podSecurityPolicy:
{{- if .Values.podSecurityPolicy.enable }}
enable: {{ .Values.podSecurityPolicy.enable }}
{{- end }}
{{- if .Values.podSecurityPolicy.name }}
name: {{ .Values.podSecurityPolicy.name }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.k8s_sensor }}
{{- if or .Values.k8s_sensor.image .Values.k8s_sensor.deployment .Values.k8s_sensor.podDisruptionBudget }}
k8s_sensor:
{{- if .Values.k8s_sensor.image }}
image:
{{- if .Values.k8s_sensor.image.name }}
name: {{ .Values.k8s_sensor.image.name }}
{{- end }}
{{- if .Values.k8s_sensor.image.digest }}
digest: {{ .Values.k8s_sensor.image.digest }}
{{- end }}
{{- if .Values.k8s_sensor.image.tag }}
tag: {{ .Values.k8s_sensor.image.tag }}
{{- end }}
{{- if .Values.k8s_sensor.image.pullPolicy }}
pullPolicy: {{ .Values.k8s_sensor.image.pullPolicy }}
{{- end }}
{{- end }}
{{- if .Values.k8s_sensor.deployment }}
deployment:
{{- toYaml $.Values.k8s_sensor.deployment | nindent 6 }}
{{- end }}
{{- if .Values.k8s_sensor.podDisruptionBudget }}
podDisruptionBudget:
{{- toYaml $.Values.k8s_sensor.podDisruptionBudget | nindent 6 }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,27 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: leader-election-role
rules:
- apiGroups:
- ""
- coordination.k8s.io
resources:
- configmaps
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch

View File

@ -0,0 +1,178 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: manager-role
rules:
- nonResourceURLs:
- /healthz
- /version
verbs:
- get
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- daemonsets
- deployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- daemonsets
- deployments
- replicasets
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- apps.openshift.io
resources:
- deploymentconfigs
verbs:
- get
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- events
- namespaces
- nodes
- nodes/metrics
- nodes/stats
- persistentvolumeclaims
- persistentvolumes
- pods
- pods/log
- replicationcontrollers
- resourcequotas
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
- secrets
- serviceaccounts
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- extensions
resources:
- deployments
- ingresses
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- instana.io
resources:
- agents
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- instana.io
resources:
- agents/finalizers
verbs:
- update
- apiGroups:
- instana.io
resources:
- agents/status
verbs:
- get
- patch
- update
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- policy
resourceNames:
- instana-agent-k8sensor
resources:
- podsecuritypolicies
verbs:
- use
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- clusterroles
verbs:
- bind
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- security.openshift.io
resourceNames:
- privileged
resources:
- securitycontextconstraints
verbs:
- use

View File

@ -0,0 +1,13 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: leader-election-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: leader-election-role
subjects:
- kind: ServiceAccount
name: instana-agent-operator
namespace: instana-agent

View File

@ -0,0 +1,13 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: manager-role
subjects:
- kind: ServiceAccount
name: instana-agent-operator
namespace: instana-agent

View File

@ -0,0 +1,17 @@
---
apiVersion: v1
data:
controller_manager_config.yaml: |
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
health:
healthProbeBindAddress: :8081
metrics:
bindAddress: 127.0.0.1:8080
leaderElection:
leaderElect: true
resourceName: 819a9291.instana.io
kind: ConfigMap
metadata:
name: manager-config
namespace: instana-agent

View File

@ -0,0 +1,66 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: instana-agent-operator
name: controller-manager
namespace: instana-agent
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: instana-agent-operator
template:
metadata:
labels:
app.kubernetes.io/name: instana-agent-operator
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
- arm64
containers:
- args:
- --leader-elect
command:
- /manager
image: icr.io/instana/instana-agent-operator:2.1.14
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
name: manager
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
resources:
limits:
cpu: 200m
memory: 600Mi
requests:
cpu: 200m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: true
serviceAccountName: instana-agent-operator
terminationGracePeriodSeconds: 10

View File

@ -0,0 +1,6 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: instana-agent-operator
namespace: instana-agent

View File

@ -0,0 +1,282 @@
# name is the value which will be used as the base resource name for various resources associated with the agent.
# name: instana-agent
agent:
# agent.mode is used to set agent mode and it can be APM, INFRASTRUCTURE or AWS
# mode: APM
# agent.key is the secret token which your agent uses to authenticate to Instana's servers.
key: null
# agent.downloadKey is the key, sometimes known as the "sales key", that allows you to download
# software from Instana.
# downloadKey: null
# Rather than specifying the agent key and optionally the download key, you can "bring your
# own secret" by creating it in the namespace in which you install the `instana-agent` and
# specifying its name in the `keysSecret` field. The secret you create must contain
# a field called `key` and optionally one called `downloadKey`, which contain, respectively,
# the values you'd otherwise set in `.agent.key` and `agent.downloadKey`.
# keysSecret: null
# agent.listenAddress is the IP address the agent HTTP server will listen to.
# listenAddress: "*"
# agent.endpointHost is the hostname of the Instana server your agents will connect to.
# endpointHost: ingress-red-saas.instana.io
# agent.endpointPort is the port number (as a String) of the Instana server your agents will connect to.
# endpointPort: 443
# These are additional backends the Instana agent will report to besides
# the one configured via the `agent.endpointHost`, `agent.endpointPort` and `agent.key` setting
# additionalBackends: []
# - endpointHost: ingress.instana.io
# endpointPort: 443
# key: <agent_key>
# TLS for end-to-end encryption between Instana agent and clients accessing the agent.
# The Instana agent does not yet allow enforcing TLS encryption.
# TLS is only enabled on a connection when requested by the client.
# tls:
# In order to enable TLS, a secret of type kubernetes.io/tls must be specified.
# secretName is the name of the secret that has the relevant files.
# secretName: null
# Otherwise, the certificate and the private key must be provided as base64 encoded.
# certificate: null
# key: null
# image:
# agent.image.name is the name of the container image of the Instana agent.
# name: icr.io/instana/agent
# agent.image.digest is the digest (a.k.a. Image ID) of the agent container image; if specified, it has priority over agent.image.tag, which will be ignored.
# digest:
# agent.image.tag is the tag name of the agent container image; if agent.image.digest is specified, this property is ignored.
# tag: latest
# agent.image.pullPolicy specifies when to pull the image container.
# pullPolicy: Always
# agent.image.pullSecrets allows you to override the default pull secret that is created when agent.image.name starts with "containers.instana.io"
# Setting agent.image.pullSecrets prevents the creation of the default "containers-instana-io" secret.
# pullSecrets:
# - name: my_awesome_secret_instead
# If you want no imagePullSecrets to be specified in the agent pod, you can just pass an empty array to agent.image.pullSecrets
# pullSecrets: []
# The minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available
# minReadySeconds: 0
# updateStrategy:
# type: RollingUpdate
# rollingUpdate:
# maxUnavailable: 1
# pod:
# agent.pod.annotations are additional annotations to be added to the agent pods.
# annotations: {}
# agent.pod.labels are additional labels to be added to the agent pods.
# labels: {}
# agent.pod.tolerations are tolerations to influence agent pod assignment.
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
# tolerations: []
# agent.pod.affinity are affinities to influence agent pod assignment.
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
# affinity: {}
# agent.pod.priorityClassName is the name of an existing PriorityClass that should be set on the agent pods
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: null
# agent.pod.nodeSelector are selectors to influence where agent pods should be scheduled.
# nodeSelector:
# location: 'us-central1-c'
#nodeSelector: null
# agent.pod.requests and agent.pod.limits adjusts the resource assignments for the DaemonSet agent
# regardless of the kubernetes.deployment.enabled setting
# requests:
# agent.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
# memory: 768Mi
# agent.pod.requests.cpu are the requested CPU units allocation for the agent pods.
# cpu: 0.5
# limits:
# agent.pod.limits.memory set the memory allocation limits in MiB for the agent pods.
# memory: 768Mi
# agent.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
# cpu: 1.5
# agent.pod.volumes and agent.pod.volumeMounts are additional volumes and volumeMounts for user-specific files.
# For example, a certificate may need to be mounted for an agent sensor to connect to the monitored target.
# https://kubernetes.io/docs/concepts/storage/volumes/
# volumes:
# - name: my-secret-volume
# secret:
# secretName: instana-agent-key
# volumeMounts:
# - name: my-secret-volume
# mountPath: /secrets
# agent.proxyHost sets the INSTANA_AGENT_PROXY_HOST environment variable.
# proxyHost: null
# agent.proxyPort sets the INSTANA_AGENT_PROXY_PORT environment variable.
# proxyPort: 80
# agent.proxyProtocol sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable.
# proxyProtocol: HTTP
# agent.proxyUser sets the INSTANA_AGENT_PROXY_USER environment variable.
# proxyUser: null
# agent.proxyPassword sets the INSTANA_AGENT_PROXY_PASSWORD environment variable.
# proxyPassword: null
# agent.proxyUseDNS sets the INSTANA_AGENT_PROXY_USE_DNS environment variable.
# proxyUseDNS: false
# use this to set additional environment variables for the instana agent
# for example:
# env:
# INSTANA_AGENT_TAGS: dev
# env: {}
configuration_yaml: |
# Manual a-priori configuration. Configuration will be only used when the sensor
# is actually installed by the agent.
# The commented out example values represent example configuration and are not
# necessarily defaults. Defaults are usually 'absent' or mentioned separately.
# Changes are hot reloaded unless otherwise mentioned.
# It is possible to create files called 'configuration-abc.yaml' which are
# merged with this file in file system order. So 'configuration-cde.yaml' comes
# after 'configuration-abc.yaml'. Only nested structures are merged, values are
# overwritten by subsequent configurations.
# Secrets
# To filter sensitive data from collection by the agent, all sensors respect
# the following secrets configuration. If a key collected by a sensor matches
# an entry from the list, the value is redacted.
#com.instana.secrets:
# matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
# list:
# - 'key'
# - 'password'
# - 'secret'
# Host
#com.instana.plugin.host:
# tags:
# - 'dev'
# - 'app1'
# agent.redactKubernetesSecrets sets the INSTANA_KUBERNETES_REDACT_SECRETS environment variable.
# redactKubernetesSecrets: null
# agent.host.repository sets a host path to be mounted as the agent maven repository (for debugging or development purposes)
host:
repository: null
# agent.serviceMesh.enabled sets the ENABLE_AGENT_SOCKET environment variable.
serviceMesh:
# enabled: true
cluster:
# cluster.name represents the name that will be assigned to this cluster in Instana
name: null
# openshift specifies whether the cluster role should include openshift permissions and other tweaks to the YAML.
# The chart will try to auto-detect if the cluster is OpenShift, so you will likely not even need to set this explicitly.
# openshift: true
# rbac:
# Specifies whether RBAC resources should be created
# create: true
# opentelemetry:
# enabled: false # legacy setting, will only enable grpc, defaults to false
# grpc:
# enabled: true # takes precedence over legacy settings above, defaults to true if "grpc:" is present
# http:
# enabled: true # allows to enable http endpoints, defaults to true if "http:" is present
# prometheus:
# remoteWrite:
# enabled: false # If true, it will also apply `service.create=true`
# serviceAccount:
# Specifies whether a ServiceAccount should be created
# create: true
# The name of the ServiceAccount to use.
# If not set and `create` is true, a name is generated using the fullname template
# name: instana-agent
# Annotations to add to the service account
# annotations: {}
# podSecurityPolicy:
# Specifies whether a PodSecurityPolicy should be authorized for the Instana Agent pods.
# Requires `rbac.create` to be `true` as well and K8s version below v1.25.
# enable: false
# The name of an existing PodSecurityPolicy you would like to authorize for the Instana Agent pods.
# If not set and `enable` is true, a PodSecurityPolicy will be created with a name generated using the fullname template.
# name: null
zone:
# zone.name is the custom zone that detected technologies will be assigned to
name: null
# k8s_sensor:
# image:
# k8s_sensor.image.name is the name of the container image of the Instana agent.
# name: icr.io/instana/k8sensor
# k8s_sensor.image.digest is the digest (a.k.a. Image ID) of the agent container image; if specified, it has priority over agent.image.tag, which will be ignored.
#digest:
# k8s_sensor.image.tag is the tag name of the agent container image; if agent.image.digest is specified, this property is ignored.
# tag: latest
# k8s_sensor.image.pullPolicy specifies when to pull the image container.
# pullPolicy: Always
# deployment:
# Specifies whether or not to enable the Deployment and turn off the Kubernetes sensor in the DaemonSet
# enabled: true
# Use three replicas to ensure HA by default.
# replicas: 3
# k8s_sensor.deployment.pod adjusts the resource assignments for the agent independently of the DaemonSet agent when k8s_sensor.deployment.enabled=true
# pod:
# requests:
# k8s_sensor.deployment.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
# memory: 128Mi
# k8s_sensor.deployment.pod.requests.cpu are the requested CPU units allocation for the agent pods.
# cpu: 120m
# limits:
# k8s_sensor.deployment.pod.limits.memory set the memory allocation limits in MiB for the agent pods.
# memory: 2048Mi
# k8s_sensor.deployment.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
# cpu: 500m
# affinity:
# podAntiAffinity:
# Soft anti-affinity policy: try not to schedule multiple kubernetes-sensor pods on the same node.
# If the policy is set to "requiredDuringSchedulingIgnoredDuringExecution", if the cluster has
# fewer nodes than the amount of desired replicas, `helm install/upgrade --wait` will not return.
# preferredDuringSchedulingIgnoredDuringExecution:
# - weight: 100
# podAffinityTerm:
# labelSelector:
# matchExpressions:
# - key: instana/agent-mode
# operator: In
# values: [ KUBERNETES ]
# topologyKey: "kubernetes.io/hostname"
# The minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available
# minReadySeconds: 0
# podDisruptionBudget:
# Specifies whether or not to setup a pod disruption budget for the k8sensor deployment
# enabled: false
# zones:
# # Configure use of zones to use tolerations as the basis to associate a specific daemonset per tainted node pool
# - name: pool-01
# tolerations:
# - key: "pool"
# operator: "Equal"
# value: "pool-01"
# effect: "NoExecute"
# - name: pool-02
# tolerations:
# - key: "pool"
# operator: "Equal"
# value: "pool-02"
# effect: "NoExecute"

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,39 @@
dependencies:
- name: newrelic-infrastructure
repository: https://newrelic.github.io/nri-kubernetes
version: 3.37.4
- name: nri-prometheus
repository: https://newrelic.github.io/nri-prometheus
version: 2.1.19
- name: newrelic-prometheus-agent
repository: https://newrelic.github.io/newrelic-prometheus-configurator
version: 1.15.6
- name: nri-metadata-injection
repository: https://newrelic.github.io/k8s-metadata-injection
version: 4.22.5
- name: newrelic-k8s-metrics-adapter
repository: https://newrelic.github.io/newrelic-k8s-metrics-adapter
version: 1.13.4
- name: kube-state-metrics
repository: https://prometheus-community.github.io/helm-charts
version: 5.26.0
- name: nri-kube-events
repository: https://newrelic.github.io/nri-kube-events
version: 3.11.5
- name: newrelic-logging
repository: https://newrelic.github.io/helm-charts
version: 1.23.5
- name: newrelic-pixie
repository: https://newrelic.github.io/helm-charts
version: 2.1.6
- name: k8s-agents-operator
repository: https://newrelic.github.io/k8s-agents-operator
version: 0.19.0
- name: pixie-operator-chart
repository: https://pixie-operator-charts.storage.googleapis.com
version: 0.1.7
- name: newrelic-infra-operator
repository: https://newrelic.github.io/newrelic-infra-operator
version: 2.13.4
digest: sha256:c36c2fee765ab81cf0c8c2962500bc567428003d5faf5699279233279083ad5b
generated: "2025-01-14T07:35:33.886018057Z"

View File

@ -0,0 +1,85 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: New Relic
catalog.cattle.io/release-name: nri-bundle
apiVersion: v2
dependencies:
- condition: infrastructure.enabled,newrelic-infrastructure.enabled
name: newrelic-infrastructure
repository: https://newrelic.github.io/nri-kubernetes
version: 3.37.4
- condition: prometheus.enabled,nri-prometheus.enabled
name: nri-prometheus
repository: https://newrelic.github.io/nri-prometheus
version: 2.1.19
- condition: newrelic-prometheus-agent.enabled
name: newrelic-prometheus-agent
repository: https://newrelic.github.io/newrelic-prometheus-configurator
version: 1.15.6
- condition: webhook.enabled,nri-metadata-injection.enabled
name: nri-metadata-injection
repository: https://newrelic.github.io/k8s-metadata-injection
version: 4.22.5
- condition: metrics-adapter.enabled,newrelic-k8s-metrics-adapter.enabled
name: newrelic-k8s-metrics-adapter
repository: https://newrelic.github.io/newrelic-k8s-metrics-adapter
version: 1.13.4
- condition: ksm.enabled,kube-state-metrics.enabled
name: kube-state-metrics
repository: https://prometheus-community.github.io/helm-charts
version: 5.26.0
- condition: kubeEvents.enabled,nri-kube-events.enabled
name: nri-kube-events
repository: https://newrelic.github.io/nri-kube-events
version: 3.11.5
- condition: logging.enabled,newrelic-logging.enabled
name: newrelic-logging
repository: https://newrelic.github.io/helm-charts
version: 1.23.5
- condition: newrelic-pixie.enabled
name: newrelic-pixie
repository: https://newrelic.github.io/helm-charts
version: 2.1.6
- condition: k8s-agents-operator.enabled
name: k8s-agents-operator
repository: https://newrelic.github.io/k8s-agents-operator
version: 0.19.0
- alias: pixie-chart
condition: pixie-chart.enabled
name: pixie-operator-chart
repository: https://pixie-operator-charts.storage.googleapis.com
version: 0.1.7
- condition: newrelic-infra-operator.enabled
name: newrelic-infra-operator
repository: https://newrelic.github.io/newrelic-infra-operator
version: 2.13.4
description: Groups together the individual charts for the New Relic Kubernetes solution
for a more comfortable deployment.
home: https://github.com/newrelic/helm-charts
icon: file://assets/icons/nri-bundle.svg
keywords:
- infrastructure
- newrelic
- monitoring
maintainers:
- name: juanjjaramillo
url: https://github.com/juanjjaramillo
- name: csongnr
url: https://github.com/csongnr
- name: dbudziwojskiNR
url: https://github.com/dbudziwojskiNR
name: nri-bundle
sources:
- https://github.com/newrelic/nri-bundle/
- https://github.com/newrelic/nri-bundle/tree/master/charts/nri-bundle
- https://github.com/newrelic/nri-kubernetes/tree/master/charts/newrelic-infrastructure
- https://github.com/newrelic/nri-prometheus/tree/master/charts/nri-prometheus
- https://github.com/newrelic/newrelic-prometheus-configurator/tree/master/charts/newrelic-prometheus-agent
- https://github.com/newrelic/k8s-metadata-injection/tree/master/charts/nri-metadata-injection
- https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/master/charts/newrelic-k8s-metrics-adapter
- https://github.com/newrelic/nri-kube-events/tree/master/charts/nri-kube-events
- https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging
- https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie
- https://github.com/newrelic/newrelic-infra-operator/tree/master/charts/newrelic-infra-operator
- https://github.com/newrelic/k8s-agents-operator/tree/master/charts/k8s-agents-operator
version: 5.0.106

View File

@ -0,0 +1,200 @@
# nri-bundle
Groups together the individual charts for the New Relic Kubernetes solution for a more comfortable deployment.
**Homepage:** <https://github.com/newrelic/helm-charts>
## Bundled charts
This chart does not deploy anything by itself but has many charts as dependencies. This allows you to easily install and upgrade the New Relic
Kubernetes Integration using only one chart.
In case you need more information about each component this chart installs, or you are an advanced user who wants to install each component separately,
here is a list of components that this chart installs and where you can find more information about them:
| Component | Installed by default? | Description |
|------------------------------|-----------------------|-------------|
| [newrelic-infrastructure](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) | Yes | Sends metrics about nodes, cluster objects (e.g. Deployments, Pods), and the control plane to New Relic. |
| [nri-metadata-injection](https://github.com/newrelic/k8s-metadata-injection/tree/main/charts/nri-metadata-injection) | Yes | Enriches New Relic-instrumented applications (APM) with Kubernetes information. |
| [kube-state-metrics](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics) | | Required for `newrelic-infrastructure` to gather cluster-level metrics. |
| [nri-kube-events](https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events) | | Reports Kubernetes events to New Relic. |
| [newrelic-infra-operator](https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator) | | (Beta) Used with Fargate or serverless environments to inject `newrelic-infrastructure` as a sidecar instead of the usual DaemonSet. |
| [newrelic-k8s-metrics-adapter](https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter) | | (Beta) Provides a source of data for Horizontal Pod Autoscalers (HPA) based on a NRQL query from New Relic. |
| [newrelic-logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) | | Sends logs for Kubernetes components and workloads running on the cluster to New Relic. |
| [nri-prometheus](https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus) | | Sends metrics from applications exposing Prometheus metrics to New Relic. |
| [newrelic-prometheus-configurator](https://github.com/newrelic/newrelic-prometheus-configurator/tree/master/charts/newrelic-prometheus-agent) | | Configures instances of Prometheus in Agent mode to send metrics to the New Relic Prometheus endpoint. |
| [newrelic-pixie](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie) | | Connects to the Pixie API and enables the New Relic plugin in Pixie. The plugin allows you to export data from Pixie to New Relic for long-term data retention. |
| [Pixie](https://docs.pixielabs.ai/installing-pixie/install-schemes/helm/#3.-deploy) | | Is an open source observability tool for Kubernetes applications that uses eBPF to automatically capture telemetry data without the need for manual instrumentation. |
| [k8s-agents-operator](https://github.com/newrelic/k8s-agents-operator/tree/main/charts/k8s-agents-operator) | | (Preview) Streamlines full-stack observability for Kubernetes environments by automating APM instrumentation alongside Kubernetes agent deployment. |
## Configure components
It is possible to configure settings for the individual charts this chart groups by specifying values for them under a key using the name of the chart,
as specified in [helm documentation](https://helm.sh/docs/chart_template_guide/subcharts_and_globals).
For example, by adding the following to the `values.yml` file:
```yaml
# Configuration settings for the newrelic-infrastructure chart
newrelic-infrastructure:
# Any key defined in the values.yml file for the newrelic-infrastructure chart can be configured here:
# https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml
verboseLog: false
resources:
limits:
memory: 512M
```
It is possible to override any entry of the [`newrelic-infrastructure`](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure)
chart, as defined in their [`values.yml` file](https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml).
The same approach can be followed to update any of the subcharts.
After making these changes to the `values.yml` file, or a custom values file, make sure to apply them using:
```
$ helm upgrade --reuse-values -f values.yaml [RELEASE] newrelic/nri-bundle
```
Where `[RELEASE]` is the name of the helm release, e.g. `newrelic-bundle`.
## Monitor on host integrations
If you wish to monitor services running on Kubernetes you can provide integrations
configuration under `integrations_config`, which will be passed down to the `newrelic-infrastructure` chart.
You just need to create a new entry where the "name" is the filename of the configuration file and the data is the content of
the integration configuration. The name must end in ".yaml", as this will be the
generated filename and the Infrastructure agent only looks for YAML files.
The data part is the actual integration configuration as described in the spec here:
https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180
In the following example you can see how to monitor a Redis integration with autodiscovery:
```yaml
newrelic-infrastructure:
integrations:
nri-redis-sampleapp:
discovery:
command:
exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250
match:
label.app: sampleapp
integrations:
- name: nri-redis
env:
# using the discovered IP as the hostname address
HOSTNAME: ${discovery.ip}
PORT: 6379
labels:
env: test
```
## Bring your own KSM
The New Relic Kubernetes Integration requires an instance of kube-state-metrics (KSM) to be running in the cluster, which this chart pulls as a dependency. If you are already running or want to run your own KSM instance, you will need to make some small adjustments as described below.
### Bring your own KSM
If you already have one KSM instance running, you can point `nri-kubernetes` to your instance:
```yaml
kube-state-metrics:
# Disable bundled KSM.
enabled: false
newrelic-infrastructure:
ksm:
config:
# Selector for your pre-installed KSM Service. You may need to adjust this to fit your existing installation.
selector: "app.kubernetes.io/name=kube-state-metrics"
# Alternatively, you can specify a fixed URL where KSM is available. Doing so will bypass autodiscovery.
#staticUrl: http://ksm.ksm.svc.cluster.local:8080/metrics
```
### <span id="ksm-different-version">Run KSM alongside a different version</span>
If you need to run a different instance of KSM in your cluster, you can still run a separate instance for the Kubernetes Integration to work as intended:
```yaml
kube-state-metrics:
# Enable bundled KSM.
enabled: true
prometheusScrape: false
customLabels:
# Label unique to this KSM instance.
newrelic.com/custom-ksm: "true"
newrelic-infrastructure:
ksm:
config:
# Use label above as a selector.
selector: "newrelic.com/custom-ksm=true"
```
For more information on supported KSM versions, visit the [requirements documentation](https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements#reqs).
## Values managed globally
Some of the subcharts implement the [New Relic common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which
means they honor a wide range of defaults and globals common to most New Relic Helm charts.
Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at
[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md).
At the time of writing this document, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this library and
honor global options as described below.
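For example, a hypothetical `values.yaml` could set several of these globals once for every subchart that implements the library (key names are taken from the table below; the concrete values are placeholders):
```yaml
global:
  # Cluster name reported by every component
  cluster: my-cluster
  # License key shared by all subcharts (placeholder value)
  licenseKey: "<YOUR_NEW_RELIC_LICENSE_KEY>"
  # Scheduling defaults honored by every subchart implementing the library
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "monitoring"
      effect: "NoSchedule"
  # Route all HTTP/HTTPS requests through this proxy
  proxy: "https://user:password@hostname:port"
```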
Note, the value table below is automatically generated from `values.yaml` by `helm-docs`. If you need to add new fields or update existing fields, please update the `values.yaml` and then run `helm-docs` to update this value table.
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global | object | See [`values.yaml`](values.yaml) | Change the behaviour globally for all the supported Helm charts. See the [user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md) for further information. |
| global.affinity | object | `{}` | Sets pod/node affinities |
| global.cluster | string | `""` | The cluster name for the Kubernetes cluster. |
| global.containerSecurityContext | object | `{}` | Sets security context (at container level) |
| global.customAttributes | object | `{}` | Adds extra attributes to the cluster and all the metrics emitted to the backend |
| global.customSecretLicenseKey | string | `""` | Key in the Secret object where the license key is stored |
| global.customSecretName | string | `""` | Name of the Secret object where the license key is stored |
| global.dnsConfig | object | `{}` | Sets pod's dnsConfig |
| global.fargate | bool | false | Must be set to `true` when deploying in an EKS Fargate environment |
| global.hostNetwork | bool | false | Sets pod's hostNetwork |
| global.images.pullSecrets | list | `[]` | Set secrets to be able to fetch images |
| global.images.registry | string | `""` | Changes the registry where to get the images. Useful when there is an internal image cache/proxy |
| global.insightsKey | string | `""` | The license key for your New Relic Account. This will be the preferred configuration option if both `insightsKey` and `customSecret` are specified. |
| global.labels | object | `{}` | Additional labels for chart objects |
| global.licenseKey | string | `""` | The license key for your New Relic Account. This will be the preferred configuration option if both `licenseKey` and `customSecret` are specified. |
| global.lowDataMode | bool | false | Reduces number of metrics sent in order to reduce costs |
| global.nodeSelector | object | `{}` | Sets pod's node selector |
| global.nrStaging | bool | false | Send the metrics to the staging backend. Requires a valid staging license key |
| global.podLabels | object | `{}` | Additional labels for chart pods |
| global.podSecurityContext | object | `{}` | Sets security context (at pod level) |
| global.priorityClassName | string | `""` | Sets pod's priorityClassName |
| global.privileged | bool | false | It has a different behavior in each integration (see [Further information](#values-managed-globally-3)), but all aim to send fewer metrics to the backend to try to save costs |
| global.proxy | string | `""` | Configures the integration to send all HTTP/HTTPS request through the proxy in that URL. The URL should have a standard format like `https://user:password@hostname:port` |
| global.serviceAccount.annotations | object | `{}` | Add these annotations to the service account we create |
| global.serviceAccount.create | string | `nil` | Configures if the service account should be created or not |
| global.serviceAccount.name | string | `nil` | Change the name of the service account. This is honored if you disable the creation of the service account in this chart, so you can use your own |
| global.tolerations | list | `[]` | Sets pod's tolerations to node taints |
| global.verboseLog | bool | false | Sets the debug logs to this integration or all integrations if it is set globally |
| k8s-agents-operator.enabled | bool | `false` | Install the [`k8s-agents-operator` chart](https://github.com/newrelic/k8s-agents-operator/tree/main/charts/k8s-agents-operator) |
| kube-state-metrics.enabled | bool | `false` | Install the [`kube-state-metrics` chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics) from the stable helm charts repository. This is mandatory if `infrastructure.enabled` is set to `true` and the user does not provide its own instance of KSM version >=1.8 and <=2.0. Note, kube-state-metrics v2+ disables labels/annotations metrics by default. You can enable the target labels/annotations metrics to be monitored by using the metricLabelsAllowlist/metricAnnotationsAllowList options described [here](https://github.com/prometheus-community/helm-charts/blob/159cd8e4fb89b8b107dcc100287504bb91bf30e0/charts/kube-state-metrics/values.yaml#L274) in your Kubernetes clusters. |
| newrelic-infra-operator.enabled | bool | `false` | Install the [`newrelic-infra-operator` chart](https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator) (Beta) |
| newrelic-infrastructure.enabled | bool | `true` | Install the [`newrelic-infrastructure` chart](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) |
| newrelic-k8s-metrics-adapter.enabled | bool | `false` | Install the [`newrelic-k8s-metrics-adapter` chart](https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter) (Beta) |
| newrelic-logging.enabled | bool | `false` | Install the [`newrelic-logging` chart](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) |
| newrelic-pixie.enabled | bool | `false` | Install the [`newrelic-pixie` chart](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie) |
| newrelic-prometheus-agent.enabled | bool | `false` | Install the [`newrelic-prometheus-agent` chart](https://github.com/newrelic/newrelic-prometheus-configurator/tree/main/charts/newrelic-prometheus-agent) |
| nri-kube-events.enabled | bool | `false` | Install the [`nri-kube-events` chart](https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events) |
| nri-metadata-injection.enabled | bool | `true` | Install the [`nri-metadata-injection` chart](https://github.com/newrelic/k8s-metadata-injection/tree/main/charts/nri-metadata-injection) |
| nri-prometheus.enabled | bool | `false` | Install the [`nri-prometheus` chart](https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus) |
| pixie-chart.enabled | bool | `false` | Install the [`pixie-chart` chart](https://docs.pixielabs.ai/installing-pixie/install-schemes/helm/#3.-deploy) |
## Maintainers
* [juanjjaramillo](https://github.com/juanjjaramillo)
* [csongnr](https://github.com/csongnr)
* [dbudziwojskiNR](https://github.com/dbudziwojskiNR)

View File

@ -0,0 +1,166 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
## Bundled charts
This chart does not deploy anything by itself but has many charts as dependencies. This allows you to easily install and upgrade the New Relic
Kubernetes Integration using only one chart.
In case you need more information about each component this chart installs, or you are an advanced user who wants to install each component separately,
here is a list of components that this chart installs and where you can find more information about them:
| Component | Installed by default? | Description |
|------------------------------|-----------------------|-------------|
| [newrelic-infrastructure](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) | Yes | Sends metrics about nodes, cluster objects (e.g. Deployments, Pods), and the control plane to New Relic. |
| [nri-metadata-injection](https://github.com/newrelic/k8s-metadata-injection/tree/main/charts/nri-metadata-injection) | Yes | Enriches New Relic-instrumented applications (APM) with Kubernetes information. |
| [kube-state-metrics](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics) | | Required for `newrelic-infrastructure` to gather cluster-level metrics. |
| [nri-kube-events](https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events) | | Reports Kubernetes events to New Relic. |
| [newrelic-infra-operator](https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator) | | (Beta) Used with Fargate or serverless environments to inject `newrelic-infrastructure` as a sidecar instead of the usual DaemonSet. |
| [newrelic-k8s-metrics-adapter](https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter) | | (Beta) Provides a source of data for Horizontal Pod Autoscalers (HPA) based on a NRQL query from New Relic. |
| [newrelic-logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) | | Sends logs for Kubernetes components and workloads running on the cluster to New Relic. |
| [nri-prometheus](https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus) | | Sends metrics from applications exposing Prometheus metrics to New Relic. |
| [newrelic-prometheus-configurator](https://github.com/newrelic/newrelic-prometheus-configurator/tree/master/charts/newrelic-prometheus-agent) | | Configures instances of Prometheus in Agent mode to send metrics to the New Relic Prometheus endpoint. |
| [newrelic-pixie](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie) | | Connects to the Pixie API and enables the New Relic plugin in Pixie. The plugin allows you to export data from Pixie to New Relic for long-term data retention. |
| [Pixie](https://docs.pixielabs.ai/installing-pixie/install-schemes/helm/#3.-deploy) | | Is an open source observability tool for Kubernetes applications that uses eBPF to automatically capture telemetry data without the need for manual instrumentation. |
| [k8s-agents-operator](https://github.com/newrelic/k8s-agents-operator/tree/main/charts/k8s-agents-operator) | | (Preview) Streamlines full-stack observability for Kubernetes environments by automating APM instrumentation alongside Kubernetes agent deployment. |
## Configure components
It is possible to configure settings for the individual charts this chart groups by specifying values for them under a key using the name of the chart,
as specified in [helm documentation](https://helm.sh/docs/chart_template_guide/subcharts_and_globals).
For example, by adding the following to the `values.yml` file:
```yaml
# Configuration settings for the newrelic-infrastructure chart
newrelic-infrastructure:
# Any key defined in the values.yml file for the newrelic-infrastructure chart can be configured here:
# https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml
verboseLog: false
resources:
limits:
memory: 512M
```
It is possible to override any entry of the [`newrelic-infrastructure`](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure)
chart, as defined in their [`values.yml` file](https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml).
The same approach can be followed to update any of the subcharts.
After making these changes to the `values.yml` file, or a custom values file, make sure to apply them using:
```
$ helm upgrade --reuse-values -f values.yaml [RELEASE] newrelic/nri-bundle
```
Where `[RELEASE]` is the name of the helm release, e.g. `newrelic-bundle`.
## Monitor on host integrations
If you wish to monitor services running on Kubernetes you can provide integrations
configuration under `integrations_config`, which will be passed down to the `newrelic-infrastructure` chart.
You just need to create a new entry where the "name" is the filename of the configuration file and the data is the content of
the integration configuration. The name must end in ".yaml", as this will be the
generated filename and the Infrastructure agent only looks for YAML files.
The data part is the actual integration configuration as described in the spec here:
https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180
In the following example you can see how to monitor a Redis integration with autodiscovery:
```yaml
newrelic-infrastructure:
integrations:
nri-redis-sampleapp:
discovery:
command:
exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250
match:
label.app: sampleapp
integrations:
- name: nri-redis
env:
# using the discovered IP as the hostname address
HOSTNAME: ${discovery.ip}
PORT: 6379
labels:
env: test
```
## Bring your own KSM
The New Relic Kubernetes Integration requires an instance of kube-state-metrics (KSM) to be running in the cluster, which this chart pulls as a dependency. If you are already running or want to run your own KSM instance, you will need to make some small adjustments as described below.
### Bring your own KSM
If you already have one KSM instance running, you can point `nri-kubernetes` to your instance:
```yaml
kube-state-metrics:
# Disable bundled KSM.
enabled: false
newrelic-infrastructure:
ksm:
config:
# Selector for your pre-installed KSM Service. You may need to adjust this to fit your existing installation.
selector: "app.kubernetes.io/name=kube-state-metrics"
# Alternatively, you can specify a fixed URL where KSM is available. Doing so will bypass autodiscovery.
#staticUrl: http://ksm.ksm.svc.cluster.local:8080/metrics
```
### <span id="ksm-different-version">Run KSM alongside a different version</span>
If you need to run a different instance of KSM in your cluster, you can still run a separate instance for the Kubernetes Integration to work as intended:
```yaml
kube-state-metrics:
# Enable bundled KSM.
enabled: true
prometheusScrape: false
customLabels:
# Label unique to this KSM instance.
newrelic.com/custom-ksm: "true"
newrelic-infrastructure:
ksm:
config:
# Use label above as a selector.
selector: "newrelic.com/custom-ksm=true"
```
For more information on supported KSM versions, visit the [requirements documentation](https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements#reqs).
## Values managed globally
Some of the subcharts implement the [New Relic common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which
means they honor a wide range of defaults and globals common to most New Relic Helm charts.
Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at
[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md).
At the time of writing this document, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this library and
honor global options as described below.
Note, the value table below is automatically generated from `values.yaml` by `helm-docs`. If you need to add new fields or update existing fields, please update the `values.yaml` and then run `helm-docs` to update this value table.
{{ template "chart.valuesSection" . }}
{{ if .Maintainers }}
## Maintainers
{{ range .Maintainers }}
{{- if .Name }}
{{- if .Url }}
* [{{ .Name }}]({{ .Url }})
{{- else }}
* {{ .Name }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,5 @@
# New Relic Kubernetes Integration
New Relic's Kubernetes integration gives you full observability into the health and performance of your environment, no matter whether you run Kubernetes on-premises or in the cloud. With our [cluster explorer](https://docs.newrelic.com/docs/integrations/kubernetes-integration/cluster-explorer/kubernetes-cluster-explorer), you can cut through layers of complexity to see how your cluster is performing, from the heights of the control plane down to applications running on a single pod.
You can see the power of the Kubernetes integration in the [cluster explorer](https://docs.newrelic.com/docs/integrations/kubernetes-integration/cluster-explorer/kubernetes-cluster-explorer), where the full picture of a cluster is made available on a single screen: nodes and pods are visualized according to their health and performance, with pending and alerting nodes in the innermost circles. [Predefined alert conditions](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/kubernetes-integration-predefined-alert-policy) help you troubleshoot issues right from the start. Clicking each node reveals its status and how each app is performing.

View File

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,6 @@
dependencies:
- name: common-library
repository: https://helm-charts.newrelic.com
version: 1.3.0
digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443
generated: "2024-10-25T18:35:38.878351812Z"

View File

@ -0,0 +1,20 @@
apiVersion: v2
appVersion: 0.19.0
dependencies:
- name: common-library
repository: https://helm-charts.newrelic.com
version: 1.3.0
description: A Helm chart for the Kubernetes Agents Operator
home: https://github.com/newrelic/k8s-agents-operator/blob/main/charts/k8s-agents-operator/README.md
maintainers:
- name: csongnr
url: https://github.com/csongnr
- name: dbudziwojskiNR
url: https://github.com/dbudziwojskiNR
- name: danielstokes
url: https://github.com/danielstokes
name: k8s-agents-operator
sources:
- https://github.com/newrelic/k8s-agents-operator
type: application
version: 0.19.0

View File

@ -0,0 +1,294 @@
# k8s-agents-operator
![Version: 0.19.0](https://img.shields.io/badge/Version-0.19.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.19.0](https://img.shields.io/badge/AppVersion-0.19.0-informational?style=flat-square)
A Helm chart for the Kubernetes Agents Operator
**Homepage:** <https://github.com/newrelic/k8s-agents-operator/blob/main/charts/k8s-agents-operator/README.md>
## Prerequisites
[Helm](https://helm.sh) must be installed to use the charts. Please refer to Helm's [documentation](https://helm.sh/docs) to get started.
## Installation
### Requirements
Add the `k8s-agents-operator` Helm chart repository:
```shell
helm repo add k8s-agents-operator https://newrelic.github.io/k8s-agents-operator
```
### Instrumentation
Install the [`k8s-agents-operator`](https://github.com/newrelic/k8s-agents-operator) Helm chart:
```shell
helm upgrade --install k8s-agents-operator k8s-agents-operator/k8s-agents-operator \
--namespace newrelic \
--create-namespace \
--values your-custom-values.yaml
```
### Monitored namespaces
For each namespace you want the operator to instrument, a secret will be replicated from the newrelic operator namespace.
Create an `Instrumentation` custom resource for each language, specifying which APM agent you want to use. All available APM
agent Docker images and corresponding tags are listed on Docker Hub:
* [.NET](https://hub.docker.com/repository/docker/newrelic/newrelic-dotnet-init/general)
* [Java](https://hub.docker.com/repository/docker/newrelic/newrelic-java-init/general)
* [Node](https://hub.docker.com/repository/docker/newrelic/newrelic-node-init/general)
* [Python](https://hub.docker.com/repository/docker/newrelic/newrelic-python-init/general)
* [Ruby](https://hub.docker.com/repository/docker/newrelic/newrelic-ruby-init/general)
For .NET
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-dotnet
spec:
agent:
language: dotnet
image: newrelic/newrelic-dotnet-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For Java
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-java
namespace: newrelic
spec:
agent:
language: java
image: newrelic/newrelic-java-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For NodeJS
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-nodejs
namespace: newrelic
spec:
agent:
language: nodejs
image: newrelic/newrelic-node-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For Python
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-python
namespace: newrelic
spec:
agent:
language: python
image: newrelic/newrelic-python-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For Ruby
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-ruby
namespace: newrelic
spec:
agent:
language: ruby
image: newrelic/newrelic-ruby-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For environment-specific configurations
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
agent:
env:
# Example New Relic agent supported environment variables
- name: NEW_RELIC_LABELS
value: "environment:auto-injection"
# Example setting the pod name based on the metadata
- name: NEW_RELIC_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# Example overriding the appName configuration
- name: NEW_RELIC_APP_NAME
value: "$(NEW_RELIC_LABELS)-$(NEW_RELIC_POD_NAME)"
```
Targeting everything in a specific namespace with a label
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
#agent: ...
namespaceLabelSelector:
matchExpressions:
- key: "app.newrelic.instrumentation"
operator: "In"
values: ["java"]
```
Targeting a pod with a specific label
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
# agent: ...
podLabelSelector:
matchExpressions:
- key: "app.newrelic.instrumentation"
operator: "In"
values: ["dotnet"]
```
Using a secret with a non-default name
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
# agent: ...
licenseKeySecret: the-name-of-the-custom-secret
```
In the example above, we show how you can configure the agent settings globally using environment variables. See each agent's configuration documentation for available configuration options:
* [Java](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/)
* [Node](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/installation-configuration/nodejs-agent-configuration/)
* [Python](https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/)
* [.NET](https://docs.newrelic.com/docs/apm/agents/net-agent/configuration/net-agent-configuration/)
* [Ruby](https://docs.newrelic.com/docs/apm/agents/ruby-agent/configuration/ruby-agent-configuration/)
### cert-manager
The K8s Agents Operator supports the use of [`cert-manager`](https://github.com/cert-manager/cert-manager) if preferred.
Install the [`cert-manager`](https://github.com/cert-manager/cert-manager) Helm chart:
```shell
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
```
In your `values.yaml` file, set `admissionWebhooks.autoGenerateCert.enabled: false` and `admissionWebhooks.certManager.enabled: true`. Then install the chart as normal.
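For reference, a minimal sketch of those two values (everything else keeps its defaults):
```yaml
admissionWebhooks:
  autoGenerateCert:
    # Disable the Helm-generated self-signed certificate...
    enabled: false
  certManager:
    # ...and let cert-manager issue the webhook certificate instead
    enabled: true
```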
## Security
This operator requires a privileged environment to run correctly. As with all components that run in a privileged environment, please exercise caution when granting access to the namespace (and other resources) that the K8s Agent Operator is deployed on.
## Available Chart Releases
To see the available charts:
```shell
helm search repo k8s-agents-operator
```
If you want to see a list of all available charts and releases, check [index.yaml](https://newrelic.github.io/k8s-agents-operator/index.yaml).
## Source Code
* <https://github.com/newrelic/k8s-agents-operator>
## Requirements
| Repository | Name | Version |
|------------|------|---------|
| https://helm-charts.newrelic.com | common-library | 1.3.0 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| admissionWebhooks | object | `{"autoGenerateCert":{"certPeriodDays":365,"enabled":true,"recreate":true},"caFile":"","certFile":"","certManager":{"enabled":false},"create":true,"keyFile":""}` | Admission webhooks make sure only requests with correctly formatted rules will get into the Operator |
| admissionWebhooks.autoGenerateCert.certPeriodDays | int | `365` | Cert validity period time in days. |
| admissionWebhooks.autoGenerateCert.enabled | bool | `true` | If true and certManager.enabled is false, Helm will automatically create a self-signed cert and secret for you. |
| admissionWebhooks.autoGenerateCert.recreate | bool | `true` | If set to true, new webhook key/certificate is generated on helm upgrade. |
| admissionWebhooks.caFile | string | `""` | Path to the CA cert. |
| admissionWebhooks.certFile | string | `""` | Path to your own PEM-encoded certificate. |
| admissionWebhooks.certManager.enabled | bool | `false` | If true and autoGenerateCert.enabled is false, cert-manager will create a self-signed cert and secret for you. |
| admissionWebhooks.keyFile | string | `""` | Path to your own PEM-encoded private key. |
| affinity | object | `{}` | Sets all pods' affinities. Can be configured also with `global.affinity` |
| containerSecurityContext | object | `{}` | Sets all security context (at container level). Can be configured also with `global.securityContext.container` |
| controllerManager.kubeRbacProxy.containerSecurityContext | object | `{}` | Sets security context (at container level) for kubeRbacProxy. Overrides `containerSecurityContext` and `global.containerSecurityContext` |
| controllerManager.kubeRbacProxy.image.repository | string | `"gcr.io/kubebuilder/kube-rbac-proxy"` | Sets the repository and image to use for kube-rbac-proxy. Please ensure you're using a trusted image. |
| controllerManager.kubeRbacProxy.image.version | string | `"sha256:771a9a173e033a3ad8b46f5c00a7036eaa88c8d8d1fbd89217325168998113ea"` | Sets the kube-rbac-proxy image version to retrieve. Could be a tag, e.g. "v0.16.0", or a SHA digest, e.g. "sha256:771a9a173e033a3ad8b46f5c00a7036eaa88c8d8d1fbd89217325168998113ea" |
| controllerManager.kubeRbacProxy.resources.limits.cpu | string | `"500m"` | |
| controllerManager.kubeRbacProxy.resources.limits.memory | string | `"128Mi"` | |
| controllerManager.kubeRbacProxy.resources.requests.cpu | string | `"5m"` | |
| controllerManager.kubeRbacProxy.resources.requests.memory | string | `"64Mi"` | |
| controllerManager.manager.containerSecurityContext | object | `{}` | Sets security context (at container level) for the manager. Overrides `containerSecurityContext` and `global.containerSecurityContext` |
| controllerManager.manager.image.pullPolicy | string | `nil` | |
| controllerManager.manager.image.repository | string | `"newrelic/k8s-agents-operator"` | Sets the repository and image to use for the manager. Please ensure you're using trusted New Relic images. |
| controllerManager.manager.image.version | string | `nil` | Sets the manager image version to retrieve. Could be a tag, e.g. "v0.17.0", or a SHA digest, e.g. "sha256:e2399e70e99ac370ca6a3c7e5affa9655da3b246d0ada77c40ed155b3726ee2e" |
| controllerManager.manager.leaderElection | object | `{"enabled":true}` | Enable leader election mechanism for protecting against split brain if multiple operator pods/replicas are started |
| controllerManager.manager.resources.requests.cpu | string | `"100m"` | |
| controllerManager.manager.resources.requests.memory | string | `"64Mi"` | |
| controllerManager.replicas | int | `1` | |
| dnsConfig | object | `{}` | Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` |
| kubernetesClusterDomain | string | `"cluster.local"` | |
| labels | object | `{}` | Additional labels for chart objects |
| licenseKey | string | `""` | Sets the license key to use. Can also be configured with `global.licenseKey` |
| metricsService.ports[0].name | string | `"https"` | |
| metricsService.ports[0].port | int | `8443` | |
| metricsService.ports[0].protocol | string | `"TCP"` | |
| metricsService.ports[0].targetPort | string | `"https"` | |
| metricsService.type | string | `"ClusterIP"` | |
| nodeSelector | object | `{}` | Sets all pods' node selector. Can be configured also with `global.nodeSelector` |
| podAnnotations | object | `{}` | Annotations to be added to the deployment. |
| podLabels | object | `{}` | Additional labels for chart pods |
| podSecurityContext | object | `{"fsGroup":65532,"runAsGroup":65532,"runAsNonRoot":true,"runAsUser":65532}` | SecurityContext holds pod-level security attributes and common container settings |
| priorityClassName | string | `""` | Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` |
| serviceAccount | object | See `values.yaml` | Settings controlling ServiceAccount creation |
| serviceAccount.create | bool | `true` | Specifies whether a ServiceAccount should be created |
| tolerations | list | `[]` | Sets all pods' tolerations to node taints. Can be configured also with `global.tolerations` |
| webhookService.ports[0].port | int | `443` | |
| webhookService.ports[0].protocol | string | `"TCP"` | |
| webhookService.ports[0].targetPort | int | `9443` | |
| webhookService.type | string | `"ClusterIP"` | |
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| csongnr | | <https://github.com/csongnr> |
| dbudziwojskiNR | | <https://github.com/dbudziwojskiNR> |
| danielstokes | | <https://github.com/danielstokes> |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)

View File

@ -0,0 +1,234 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.badgesSection" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
## Prerequisites
[Helm](https://helm.sh) must be installed to use the charts. Please refer to Helm's [documentation](https://helm.sh/docs) to get started.
## Installation
### Requirements
Add the `k8s-agents-operator` Helm chart repository:
```shell
helm repo add k8s-agents-operator https://newrelic.github.io/k8s-agents-operator
```
### Instrumentation
Install the [`k8s-agents-operator`](https://github.com/newrelic/k8s-agents-operator) Helm chart:
```shell
helm upgrade --install k8s-agents-operator k8s-agents-operator/k8s-agents-operator \
--namespace newrelic \
--create-namespace \
--values your-custom-values.yaml
```
### Monitored namespaces
For each namespace you want the operator to instrument, a secret will be replicated from the newrelic operator namespace.
Create an `Instrumentation` custom resource for each language, specifying which APM agent you want to use. All available APM
agent Docker images and corresponding tags are listed on Docker Hub:
* [.NET](https://hub.docker.com/repository/docker/newrelic/newrelic-dotnet-init/general)
* [Java](https://hub.docker.com/repository/docker/newrelic/newrelic-java-init/general)
* [Node](https://hub.docker.com/repository/docker/newrelic/newrelic-node-init/general)
* [Python](https://hub.docker.com/repository/docker/newrelic/newrelic-python-init/general)
* [Ruby](https://hub.docker.com/repository/docker/newrelic/newrelic-ruby-init/general)
For .NET
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-dotnet
spec:
agent:
language: dotnet
image: newrelic/newrelic-dotnet-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For Java
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-java
namespace: newrelic
spec:
agent:
language: java
image: newrelic/newrelic-java-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For NodeJS
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-nodejs
namespace: newrelic
spec:
agent:
language: nodejs
image: newrelic/newrelic-node-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For Python
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-python
namespace: newrelic
spec:
agent:
language: python
image: newrelic/newrelic-python-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For Ruby
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-ruby
namespace: newrelic
spec:
agent:
language: ruby
image: newrelic/newrelic-ruby-init:latest # Please ensure you're using a trusted New Relic image
# env: ...
```
For environment-specific configurations
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
agent:
env:
# Example New Relic agent supported environment variables
- name: NEW_RELIC_LABELS
value: "environment:auto-injection"
# Example setting the pod name based on the metadata
- name: NEW_RELIC_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# Example overriding the appName configuration
- name: NEW_RELIC_APP_NAME
value: "$(NEW_RELIC_LABELS)-$(NEW_RELIC_POD_NAME)"
```
Targeting everything in a specific namespace with a label
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
#agent: ...
namespaceLabelSelector:
matchExpressions:
- key: "app.newrelic.instrumentation"
operator: "In"
values: ["java"]
```
Targeting a pod with a specific label
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
# agent: ...
podLabelSelector:
matchExpressions:
- key: "app.newrelic.instrumentation"
operator: "In"
values: ["dotnet"]
```
Using a secret with a non-default name
```yaml
apiVersion: newrelic.com/v1alpha2
kind: Instrumentation
metadata:
name: newrelic-instrumentation-lang
namespace: newrelic
spec:
# agent: ...
licenseKeySecret: the-name-of-the-custom-secret
```
In the example above, we show how you can configure the agent settings globally using environment variables. See each agent's configuration documentation for available configuration options:
* [Java](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/)
* [Node](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/installation-configuration/nodejs-agent-configuration/)
* [Python](https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/)
* [.NET](https://docs.newrelic.com/docs/apm/agents/net-agent/configuration/net-agent-configuration/)
* [Ruby](https://docs.newrelic.com/docs/apm/agents/ruby-agent/configuration/ruby-agent-configuration/)
### cert-manager
The K8s Agents Operator supports the use of [`cert-manager`](https://github.com/cert-manager/cert-manager) if preferred.
Install the [`cert-manager`](https://github.com/cert-manager/cert-manager) Helm chart:
```shell
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
```
In your `values.yaml` file, set `admissionWebhooks.autoGenerateCert.enabled: false` and `admissionWebhooks.certManager.enabled: true`. Then install the chart as normal.
## Security
This operator requires a privileged environment to run correctly. As with all components that run in a privileged environment, please exercise caution when granting access to the namespace (and other resources) that the K8s Agent Operator is deployed on.
## Available Chart Releases
To see the available charts:
```shell
helm search repo k8s-agents-operator
```
If you want to see a list of all available charts and releases, check [index.yaml](https://newrelic.github.io/k8s-agents-operator/index.yaml).
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
{{ template "chart.maintainersSection" . }}
{{ template "helm-docs.versionFooter" . }}

View File

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,17 @@
apiVersion: v2
description: Provides helpers to provide consistency on all the charts
keywords:
- newrelic
- chart-library
maintainers:
- name: juanjjaramillo
url: https://github.com/juanjjaramillo
- name: csongnr
url: https://github.com/csongnr
- name: dbudziwojskiNR
url: https://github.com/dbudziwojskiNR
- name: kang-makes
url: https://github.com/kang-makes
name: common-library
type: library
version: 1.3.0

View File

@ -0,0 +1,747 @@
# Functions/templates documented for chart writers
Here is some rough documentation, organized by the file that contains each function, the function
name, and how to use it. We are not covering functions that start with `_` (e.g.
`newrelic.common.license._licenseKey`) because they are used internally by this library for
other helpers. Helm does not have the concept of "public" or "private" functions/templates so
this is a convention of ours.
## _naming.tpl
These functions are used to name objects.
### `newrelic.common.naming.name`
This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`.
It honors `.Values.nameOverride`.
Usage:
```mustache
{{ include "newrelic.common.naming.name" . }}
```
### `newrelic.common.naming.fullname`
This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create`.
It honors `.Values.fullnameOverride`.
Usage:
```mustache
{{ include "newrelic.common.naming.fullname" . }}
```
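As a quick illustration of both helpers (standard `helm create` semantics; the names below are hypothetical), a chart consumer can override either one from their values file:
```yaml
# Overrides what "newrelic.common.naming.name" returns
nameOverride: my-name
# Overrides what "newrelic.common.naming.fullname" returns
fullnameOverride: my-release-fullname
```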
### `newrelic.common.naming.chart`
This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`.
It is mostly useless for chart writers. It is used internally for templating the labels but there
is no reason to keep it "private".
Usage:
```mustache
{{ include "newrelic.common.naming.chart" . }}
```
### `newrelic.common.naming.truncateToDNS`
This is a useful template for trimming a string to 63 characters while ensuring it does not end with a dash (`-`).
We truncate at 63 characters because some Kubernetes name fields are limited to this (by the DNS naming spec).
Usage:
```mustache
{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }}
{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }}
{{- $truncatedName }}
{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}}
```
### `newrelic.common.naming.truncateToDNSWithSuffix`
This template function is the same as the one above, but instead of receiving a string you should give it a `dict`
with a `name` and a `suffix`. This function will join them with a dash (`-`) and trim the `name` so that the
result of `name-suffix` is no more than 63 characters.
Usage:
```mustache
{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }}
{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }}
{{- $truncatedName := include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }}
{{- $truncatedName }}
{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}}
```
## _labels.tpl
### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels`
These are functions that are used to label objects. They are configured by this `values.yaml`
```yaml
global:
podLabels: {} # included in all the pods of all the charts that implement this library
labels: {} # included in all the objects of all the charts that implement this library
podLabels: {} # included in all the pods of this chart
labels: {} # included in all the objects of this chart
```
Label maps are merged from global to local values.
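For instance, with the hypothetical values below, every object of this chart would carry both `team: platform` (from the global map) and `tier: agent` (from the local one):
```yaml
global:
  labels:
    team: platform
labels:
  tier: agent
```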
A chart writer should use them like this:
```mustache
metadata:
labels:
{{- include "newrelic.common.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "newrelic.common.labels.podLabels" . | nindent 8 }}
```
`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically.
## _priority-class-name.tpl
### `newrelic.common.priorityClassName`
Like almost everything in this library, it reads global and local variables:
```yaml
global:
priorityClassName: ""
priorityClassName: ""
```
Be careful: chart writers should leave this as an empty string (or any other Helm falsiness) for this
library to work properly. If a non-falsy `priorityClassName` is found in your values, the global
one is always ignored.
Usage (example in a pod spec):
```mustache
spec:
{{- with include "newrelic.common.priorityClassName" . }}
priorityClassName: {{ . }}
{{- end }}
```
## _hostnetwork.tpl
### `newrelic.common.hostNetwork`
Like almost everything in this library, it reads global and local variables:
```yaml
global:
hostNetwork: # Note that this is empty (nil)
hostNetwork: # Note that this is empty (nil)
```
Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a
`hostNetwork` is defined in your values, the global one is always ignored.
This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
Usage (example in a pod spec):
```mustache
spec:
{{- with include "newrelic.common.hostNetwork" . }}
hostNetwork: {{ . }}
{{- end }}
```
### `newrelic.common.hostNetwork.value`
This function is an abstraction of the one above, but it returns "true" or "false" directly.
Be careful when using this with an `if`, as Helm evaluates the string "false" as truthy.
Usage (example in a pod spec):
```mustache
spec:
hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }}
```
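To make the warning concrete, here is a sketch of the anti-pattern next to the safe pattern (the `if` branch always runs because `include` returns a non-empty string, either "true" or "false"):
```mustache
{{- /* WRONG: "false" is a non-empty string, so this condition is always truthy */}}
{{- if include "newrelic.common.hostNetwork.value" . }}
hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }}
{{- end }}

{{- /* RIGHT: the plain helper returns "" when disabled, which is falsy */}}
{{- with include "newrelic.common.hostNetwork" . }}
hostNetwork: {{ . }}
{{- end }}
```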
## _dnsconfig.tpl
### `newrelic.common.dnsConfig`
Like almost everything in this library, it reads global and local variables:
```yaml
global:
dnsConfig: {}
dnsConfig: {}
```
Be careful: chart writers should leave this empty (or any other Helm falsiness) for this
library to work properly. If a non-falsy `dnsConfig` is found in your values, the global
one is always ignored.
Usage (example in a pod spec):
```mustache
spec:
{{- with include "newrelic.common.dnsConfig" . }}
dnsConfig:
{{- . | nindent 4 }}
{{- end }}
```
## _images.tpl
These functions help us deal with how images are templated. They allow setting registries globally
(controlling where images are fetched from) while being flexible enough to fit different maps of images
and deployments with one or more images. This is the example of a complex `values.yaml` that
we are going to use throughout the documentation of these functions:
```yaml
global:
images:
registry: nexus-3-instance.internal.clients-domain.tld
jobImage:
registry: # defaults to "example.tld" when empty in these examples
repository: ingress-nginx/kube-webhook-certgen
tag: v1.1.1
pullPolicy: IfNotPresent
pullSecrets: []
images:
integration:
registry:
repository: newrelic/nri-kube-events
tag: 1.8.0
pullPolicy: IfNotPresent
agent:
registry:
repository: newrelic/k8s-events-forwarder
tag: 1.22.0
pullPolicy: IfNotPresent
pullSecrets: []
```
### `newrelic.common.images.image`
This will return a string with the image ready to be downloaded that includes the registry, the image and the tag.
`defaultRegistry` lets you keep the `registry` field empty in `values.yaml`, so you can override the image using
`global.images.registry` or your local `jobImage.registry`, while still falling back to a registry other than `docker.io`
(or the default registry that the client could have set in the CRI).
Usage:
```mustache
{{- /* For the integration */}}
{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }}
{{- /* For the agent */}}
{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }}
{{- /* For jobImage */}}
{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
```
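Under the example `values.yaml` above, and assuming the global registry applies whenever the local `registry` fields are empty, those three calls would render roughly as:
```
# integration
nexus-3-instance.internal.clients-domain.tld/newrelic/nri-kube-events:1.8.0
# agent
nexus-3-instance.internal.clients-domain.tld/newrelic/k8s-events-forwarder:1.22.0
# jobImage ("example.tld" is only the fallback when no registry is set anywhere)
nexus-3-instance.internal.clients-domain.tld/ingress-nginx/kube-webhook-certgen:v1.1.1
```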
### `newrelic.common.images.registry`
It returns the registry from the global or local values. You should avoid using this helper to create your image
URL and use `newrelic.common.images.image` instead, but it is available in case it is needed.
Usage:
```mustache
{{- /* For the integration */}}
{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }}
{{- /* For the agent */}}
{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }}
{{- /* For jobImage */}}
{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
```
### `newrelic.common.images.repository`
It returns the image repository from the values. You should avoid using this helper to create your image
URL and use `newrelic.common.images.image` instead, but it is available in case it is needed.
Usage:
```mustache
{{- /* For jobImage */}}
{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }}
{{- /* For the integration */}}
{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }}
{{- /* For the agent */}}
{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }}
```
### `newrelic.common.images.tag`
It returns the image's tag from the values. You should avoid using this helper to build your image
URL and use `newrelic.common.images.image` instead, but it is there in case it is needed.
Usage:
```mustache
{{- /* For jobImage */}}
{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }}
{{- /* For the integration */}}
{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }}
{{- /* For the agent */}}
{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }}
```
### `newrelic.common.images.renderPullSecrets`
It returns a flattened list that merges the pull secrets from the global configuration and the local one.
Usage:
```mustache
{{- /* For jobImage */}}
{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.jobImage.pullSecrets) "context" .) }}
{{- /* For the integration */}}
{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.images.pullSecrets) "context" .) }}
{{- /* For the agent */}}
{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.images.pullSecrets) "context" .) }}
```
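Note that the helper ranges twice over `pullSecrets` (see `_images.tpl` later in this diff), so each value must be wrapped in a `list`. A minimal sketch of rendering the merged list into a pod spec:
```mustache
spec:
  {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.images.pullSecrets) "context" .) }}
  imagePullSecrets:
    {{- . | nindent 4 }}
  {{- end }}
```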
## _serviceaccount.tpl
These functions are used to evaluate whether the service account should be created and with which name, and to add annotations to it.
The functions that the common library has implemented for service accounts are:
* `newrelic.common.serviceAccount.create`
* `newrelic.common.serviceAccount.name`
* `newrelic.common.serviceAccount.annotations`
Usage:
```mustache
{{- if include "newrelic.common.serviceAccount.create" . -}}
apiVersion: v1
kind: ServiceAccount
metadata:
{{- with (include "newrelic.common.serviceAccount.annotations" .) }}
annotations:
{{- . | nindent 4 }}
{{- end }}
labels:
{{- include "newrelic.common.labels" . | nindent 4 }}
name: {{ include "newrelic.common.serviceAccount.name" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
```
## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl
These three files are almost identical and follow the idiomatic pattern of `helm create`.
Each function also looks for a global value, like the other helpers do.
```yaml
global:
affinity: {}
nodeSelector: {}
tolerations: []
affinity: {}
nodeSelector: {}
tolerations: []
```
The values here are replaced instead of merged. If a value at the root level is found, the global one is ignored.
Usage (example in a pod spec):
```mustache
spec:
{{- with include "newrelic.common.nodeSelector" . }}
nodeSelector:
{{- . | nindent 4 }}
{{- end }}
{{- with include "newrelic.common.affinity" . }}
affinity:
{{- . | nindent 4 }}
{{- end }}
{{- with include "newrelic.common.tolerations" . }}
tolerations:
{{- . | nindent 4 }}
{{- end }}
```
## _agent-config.tpl
### `newrelic.common.agentConfig.defaults`
This returns YAML that the agent can use directly as its config, including other options from the values file like verbose mode,
custom attributes, FedRAMP, and so on.
Usage:
```mustache
apiVersion: v1
kind: ConfigMap
metadata:
labels:
{{- include "newrelic.common.labels" . | nindent 4 }}
name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }}
namespace: {{ .Release.Namespace }}
data:
newrelic-infra.yml: |-
# This is the configuration file for the infrastructure agent. See:
# https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/
{{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }}
```
## _cluster.tpl
### `newrelic.common.cluster`
Returns the cluster name.
Usage:
```mustache
{{ include "newrelic.common.cluster" . }}
```
## _custom-attributes.tpl
### `newrelic.common.customAttributes`
Returns custom attributes in YAML format.
Usage:
```mustache
apiVersion: v1
kind: ConfigMap
metadata:
name: example
data:
custom-attributes.yaml: |
{{- include "newrelic.common.customAttributes" . | nindent 4 }}
custom-attributes.json: |
{{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }}
```
## _fedramp.tpl
### `newrelic.common.fedramp.enabled`
Returns "true" if FedRAMP is enabled or an empty string if not. It can safely be used in conditionals, as an empty string is Helm falsiness.
Usage:
```mustache
{{ include "newrelic.common.fedramp.enabled" . }}
```
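For instance, this is how the agent-config defaults (shown later in this diff) consume it:
```mustache
{{- with include "newrelic.common.fedramp.enabled" . }}
fedramp: {{ . }}
{{- end }}
```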
### `newrelic.common.fedramp.enabled.value`
Returns true if FedRAMP is enabled or false if not. This is to have the value of FedRAMP ready to be templated.
Usage:
```mustache
{{ include "newrelic.common.fedramp.enabled.value" . }}
```
## _license.tpl
### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName`
Returns the secret name and the key inside the secret from which to read the license key.
The common library takes care of using a user-provided custom secret or creating a secret that contains the license key.
To create the secret, use `newrelic.common.license.secret`.
Usage:
```mustache
apiVersion: v1
kind: Pod
metadata:
name: example
spec:
containers:
- name: agent
env:
- name: "NRIA_LICENSE_KEY"
valueFrom:
secretKeyRef:
name: {{ include "newrelic.common.license.secretName" . }}
key: {{ include "newrelic.common.license.secretKeyName" . }}
```
## _license_secret.tpl
### `newrelic.common.license.secret`
This function templates the secret that is used by agents and integrations with the license key provided by the user. It
templates nothing (an empty string) if the user provides a custom pair of secret name and key.
This template also fails if the user has not provided any license key or custom secret, so chart writers do not have to
do any safety checks.
You just need a template with these two lines:
```mustache
{{- /* Common library will take care of creating the secret or not. */ -}}
{{- include "newrelic.common.license.secret" . -}}
```
## _insights.tpl
### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName`
Returns the secret name and the key inside the secret from which to read the insights key.
The common library takes care of using a user-provided custom secret or creating a secret that contains the insights key.
To create the secret, use `newrelic.common.insightsKey.secret`.
Usage:
```mustache
apiVersion: v1
kind: Pod
metadata:
name: statsd
spec:
containers:
- name: statsd
env:
- name: "INSIGHTS_KEY"
valueFrom:
secretKeyRef:
name: {{ include "newrelic.common.insightsKey.secretName" . }}
key: {{ include "newrelic.common.insightsKey.secretKeyName" . }}
```
## _insights_secret.tpl
### `newrelic.common.insightsKey.secret`
This function templates the secret that is used by agents and integrations with the insights key provided by the user. It
templates nothing (an empty string) if the user provides a custom pair of secret name and key.
This template also fails if the user has not provided any insights key or custom secret, so chart writers do not have to
do any safety checks.
You just need a template with these two lines:
```mustache
{{- /* Common library will take care of creating the secret or not. */ -}}
{{- include "newrelic.common.insightsKey.secret" . -}}
```
## _userkey.tpl
### `newrelic.common.userKey.secretName` and `newrelic.common.userKey.secretKeyName`
Returns the secret name and the key inside the secret from which to read a user key.
The common library takes care of using a user-provided custom secret or creating a secret that contains the user key.
To create the secret, use `newrelic.common.userKey.secret`.
Usage:
```mustache
apiVersion: v1
kind: Pod
metadata:
name: statsd
spec:
containers:
- name: statsd
env:
- name: "API_KEY"
valueFrom:
secretKeyRef:
name: {{ include "newrelic.common.userKey.secretName" . }}
key: {{ include "newrelic.common.userKey.secretKeyName" . }}
```
## _userkey_secret.tpl
### `newrelic.common.userKey.secret`
This function templates the secret that is used by agents and integrations with a user key provided by the user. It
templates nothing (an empty string) if the user provides a custom pair of secret name and key.
This template also fails if the user has not provided any user key or custom secret, so chart writers do not have to
do any safety checks.
You just need a template with these two lines:
```mustache
{{- /* Common library will take care of creating the secret or not. */ -}}
{{- include "newrelic.common.userKey.secret" . -}}
```
## _region.tpl
### `newrelic.common.region.validate`
Given a string, returns a normalized name for the region if it is valid.
This function does not need the context of the chart, only the value to be validated. The region returned
honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21),
so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local".
If the region provided does not match one of these four, the helper calls `fail` and aborts the templating.
Usage:
```mustache
{{ include "newrelic.common.region.validate" "us" }}
```
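Per the implementation shown later in this diff, the normalization renders like this:
```mustache
{{ include "newrelic.common.region.validate" "us" }}       {{- /* Renders "US" */}}
{{ include "newrelic.common.region.validate" "staging" }}  {{- /* Renders "Staging" */}}
{{ include "newrelic.common.region.validate" "wrong" }}    {{- /* Calls `fail` and aborts the templating */}}
```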
### `newrelic.common.region`
It reads global and local variables for `region`:
```yaml
global:
region: # Note that this can be empty (nil) or "" (empty string)
region: # Note that this can be empty (nil) or "" (empty string)
```
Be careful: chart writers should NOT SET ANY VALUE here for this library to work properly. If a
`region` is defined in your values, the global one is always ignored.
This function has a guard: if the license is set through a custom secret, it enforces that users
also specify a global or local `region` value, since the region cannot be derived from the license
key. To understand how the `region` value works, read the documentation of `newrelic.common.region.validate`.
The function computes the region as US, EU, or Staging based on the license key and the
`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by
the `region` value.
Usage:
```mustache
{{ include "newrelic.common.region" . }}
```
## _low-data-mode.tpl
### `newrelic.common.lowDataMode`
Like almost everything in this library, it reads global and local variables:
```yaml
global:
lowDataMode: # Note that this is empty (nil)
lowDataMode: # Note that this is empty (nil)
```
Be careful: chart writers should NOT SET ANY VALUE here for this library to work properly. If a
`lowDataMode` is defined in your values, the global one is always ignored.
This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
Usage:
```mustache
{{ include "newrelic.common.lowDataMode" . }}
```
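Since it returns "true" or an empty string, it can drive conditionals directly. A sketch with an illustrative `interval` setting:
```mustache
{{- if include "newrelic.common.lowDataMode" . }}
interval: 30s
{{- else }}
interval: 15s
{{- end }}
```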
## _privileged.tpl
### `newrelic.common.privileged`
Like almost everything in this library, it reads global and local variables:
```yaml
global:
privileged: # Note that this is empty (nil)
privileged: # Note that this is empty (nil)
```
Be careful: chart writers should NOT SET ANY VALUE here for this library to work properly. If a
`privileged` is defined in your values, the global one is always ignored.
Chart writers can override the common library's default by putting `privileged: true` directly in
the chart's `values.yaml`.
This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
Usage:
```mustache
{{ include "newrelic.common.privileged" . }}
```
### `newrelic.common.privileged.value`
Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated.
Usage:
```mustache
{{ include "newrelic.common.privileged.value" . }}
```
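A sketch of templating it straight into a container's security context:
```mustache
containers:
  - name: agent
    securityContext:
      privileged: {{ include "newrelic.common.privileged.value" . }}
```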
## _proxy.tpl
### `newrelic.common.proxy`
Returns the proxy URL configured by the user.
Usage:
```mustache
{{ include "newrelic.common.proxy" . }}
```
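A sketch of wiring it into the infrastructure agent through its `NRIA_PROXY` environment variable (quoting because the output is a URL):
```mustache
{{- with include "newrelic.common.proxy" . }}
env:
  - name: "NRIA_PROXY"
    value: {{ . | quote }}
{{- end }}
```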
## _security-context.tpl
Use these functions to share the security context among all charts. They are useful in clusters that enforce not running
as the root user (like OpenShift) or for users that have admission webhooks.
The functions are:
* `newrelic.common.securityContext.container`
* `newrelic.common.securityContext.pod`
Usage:
```mustache
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  {{- with include "newrelic.common.securityContext.pod" . }}
  securityContext:
    {{- . | nindent 4 }}
  {{- end }}
  containers:
    - name: example
      {{- with include "newrelic.common.securityContext.container" . }}
      securityContext:
        {{- . | nindent 8 }}
      {{- end }}
```
## _staging.tpl
### `newrelic.common.nrStaging`
Like almost everything in this library, it reads global and local variables:
```yaml
global:
nrStaging: # Note that this is empty (nil)
nrStaging: # Note that this is empty (nil)
```
Be careful: chart writers should NOT SET ANY VALUE here for this library to work properly. If a
`nrStaging` is defined in your values, the global one is always ignored.
This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
Usage:
```mustache
{{ include "newrelic.common.nrStaging" . }}
```
### `newrelic.common.nrStaging.value`
Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated.
Usage:
```mustache
{{ include "newrelic.common.nrStaging.value" . }}
```
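A sketch of templating it into an environment variable; the variable name here is only illustrative:
```mustache
env:
  - name: "STAGING"
    value: {{ include "newrelic.common.nrStaging.value" . | quote }}
```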
## _verbose-log.tpl
### `newrelic.common.verboseLog`
Like almost everything in this library, it reads global and local variables:
```yaml
global:
verboseLog: # Note that this is empty (nil)
verboseLog: # Note that this is empty (nil)
```
Be careful: chart writers should NOT SET ANY VALUE here for this library to work properly. If a
`verboseLog` is defined in your values, the global one is always ignored.
Usage:
```mustache
{{ include "newrelic.common.verboseLog" . }}
```
### `newrelic.common.verboseLog.valueAsBoolean`
Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean.
Usage:
```mustache
{{ include "newrelic.common.verboseLog.valueAsBoolean" . }}
```
### `newrelic.common.verboseLog.valueAsInt`
Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer.
Usage:
```mustache
{{ include "newrelic.common.verboseLog.valueAsInt" . }}
```
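A sketch templating both variants into environment variables; the variable names here are only illustrative:
```mustache
env:
  - name: "VERBOSE"
    value: {{ include "newrelic.common.verboseLog.valueAsInt" . | quote }}
  - name: "DEBUG"
    value: {{ include "newrelic.common.verboseLog.valueAsBoolean" . | quote }}
```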

View File

@ -0,0 +1,106 @@
# Helm Common library
The common library is a way to unify the UX across all the Helm charts that implement it.
New Relic's tooling suite is huge and growing, and this library allows setting things globally
as well as locally for a single chart.
## Documentation for chart writers
If you are writing a chart that is going to use this library you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all
the functions/templates that we have implemented, what they do and how to use them.
## Values managed globally
We want to have a seamless experience across all the charts, so we created this library to standardize the behaviour
of all of them. Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected.
An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the
control plane listening on `localhost` on the node.
For each chart that has a special behavior (or further information of the behavior) there is a "chart particularities" section
in its README.md that explains which is the expected behavior.
At the time of writing this, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this
library and honor global options as described in this document.
Here is a list of global options:
| Global keys | Local keys | Default | Merged[<sup>1</sup>](#values-managed-globally-1) | Description |
|-------------|------------|---------|--------------------------------------------------|-------------|
| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored |
| global.licenseKey | licenseKey | `""` | | The license key to use |
| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there |
| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this allows you to specify the key inside the secret where the license key is located |
| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods |
| global.labels | labels | `{}` | yes | Additional labels for chart objects |
| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName |
| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork |
| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig |
| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry where to get the images. Useful when there is an internal image cache/proxy |
| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Set secrets to be able to fetch images |
| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) |
| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) |
| global.affinity | affinity | `{}` | | Sets pod/node affinities |
| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector |
| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints |
| global.serviceAccount.create | serviceAccount.create | `true` | | Configures if the service account should be created or not |
| global.serviceAccount.name | serviceAccount.name | name of the release | | Change the name of the service account. This is honored if you disable the creation of the service account on this chart, so you can use your own. |
| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Add these annotations to the service account we create |
| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend |
| global.fedramp | fedramp | `false` | | Enables FedRAMP |
| global.lowDataMode | lowDataMode | `false` | | Reduces number of metrics sent in order to reduce costs |
| global.privileged | privileged | Depends on the chart | | In each integration it has different behavior. See [Further information](#values-managed-globally-3) |
| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS request through the proxy in that URL. The URL should have a standard format like `https://user:password@hostname:port` |
| global.nrStaging | nrStaging | `false` | | Send the metrics to the staging backend. Requires a valid staging license key |
| global.verboseLog | verboseLog | `false` | | Sets the debug/trace logs to this integration or all integrations if it is set globally |
### Further information
<a name="values-managed-globally-1"></a>
#### 1. Merged
Merged means that the values from global are not replaced by the local ones. Consider this example:
```yaml
global:
labels:
global: global
hostNetwork: true
nodeSelector:
global: global
labels:
local: local
nodeSelector:
local: local
hostNetwork: false
```
These values will template `hostNetwork` to `false`, a map of labels `{ "global": "global", "local": "local" }`, and a `nodeSelector` with
`{ "local": "local" }`.
As Helm by default merges all maps, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`)
when going from global to local values. This is the rationale behind it:
* `hostNetwork` is templated to `false` because it overrides the value defined globally.
* `labels` are merged because the user may want to label all the New Relic pods at once and label other solution pods differently for
clarity's sake.
* `nodeSelector` does not merge like `labels` because merging could make it harder to overwrite/delete a selector that comes from global, given
the logic that Helm follows when merging maps.
<a name="values-managed-globally-2"></a>
#### 2. Fine grain registries
Some charts have only one image while others can have two or more. The local path for the registry can change depending
on the chart itself.
As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the
images that you can change.
This should only be needed if you have an advanced setup that requires enough granularity to force a proxy/cache registry per integration.
<a name="values-managed-globally-3"></a>
#### 3. Privileged mode
By default, in the common library, the privileged mode is set to false. But most of the Helm charts require this to be true to fetch more
metrics, so you may see it defaulting to true in some charts. The consequences of the privileged mode differ from one chart to another, so each chart
that honors the privileged mode toggle should have a section in its README explaining the behavior with it enabled or disabled.

View File

@ -0,0 +1,10 @@
{{- /* Defines the Pod affinity */ -}}
{{- define "newrelic.common.affinity" -}}
{{- if .Values.affinity -}}
{{- toYaml .Values.affinity -}}
{{- else if .Values.global -}}
{{- if .Values.global.affinity -}}
{{- toYaml .Values.global.affinity -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,26 @@
{{/*
This helper should return the defaults that all agents should have
*/}}
{{- define "newrelic.common.agentConfig.defaults" -}}
{{- if include "newrelic.common.verboseLog" . }}
log:
level: trace
{{- end }}
{{- if (include "newrelic.common.nrStaging" . ) }}
staging: true
{{- end }}
{{- with include "newrelic.common.proxy" . }}
proxy: {{ . | quote }}
{{- end }}
{{- with include "newrelic.common.fedramp.enabled" . }}
fedramp: {{ . }}
{{- end }}
{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }}
custom_attributes:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- end -}}

View File

@ -0,0 +1,15 @@
{{/*
Return the cluster
*/}}
{{- define "newrelic.common.cluster" -}}
{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
{{- $global := index .Values "global" | default dict -}}
{{- if .Values.cluster -}}
{{- .Values.cluster -}}
{{- else if $global.cluster -}}
{{- $global.cluster -}}
{{- else -}}
{{ fail "There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required." }}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,17 @@
{{/*
This will render custom attributes as a YAML ready to be templated or be used with `fromYaml`.
*/}}
{{- define "newrelic.common.customAttributes" -}}
{{- $customAttributes := dict -}}
{{- $global := index .Values "global" | default dict -}}
{{- if $global.customAttributes -}}
{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}}
{{- end -}}
{{- if .Values.customAttributes -}}
{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}}
{{- end -}}
{{- toYaml $customAttributes -}}
{{- end -}}

View File

@ -0,0 +1,10 @@
{{- /* Defines the Pod dnsConfig */ -}}
{{- define "newrelic.common.dnsConfig" -}}
{{- if .Values.dnsConfig -}}
{{- toYaml .Values.dnsConfig -}}
{{- else if .Values.global -}}
{{- if .Values.global.dnsConfig -}}
{{- toYaml .Values.global.dnsConfig -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,25 @@
{{- /* Defines the fedRAMP flag */ -}}
{{- define "newrelic.common.fedramp.enabled" -}}
{{- if .Values.fedramp -}}
{{- if .Values.fedramp.enabled -}}
{{- .Values.fedramp.enabled -}}
{{- end -}}
{{- else if .Values.global -}}
{{- if .Values.global.fedramp -}}
{{- if .Values.global.fedramp.enabled -}}
{{- .Values.global.fedramp.enabled -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /* Return FedRAMP value directly ready to be templated */ -}}
{{- define "newrelic.common.fedramp.enabled.value" -}}
{{- if include "newrelic.common.fedramp.enabled" . -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,39 @@
{{- /*
Abstraction of the hostNetwork toggle.
This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`.
Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string)
*/ -}}
{{- define "newrelic.common.hostNetwork" -}}
{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
{{- $global := index .Values "global" | default dict -}}
{{- /*
`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs
We also want only to return when this is true, returning `false` here will template "false" (string) when doing
an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used
as an evaluation somewhere else.
*/ -}}
{{- if get .Values "hostNetwork" | kindIs "bool" -}}
{{- if .Values.hostNetwork -}}
{{- .Values.hostNetwork -}}
{{- end -}}
{{- else if get $global "hostNetwork" | kindIs "bool" -}}
{{- if $global.hostNetwork -}}
{{- $global.hostNetwork -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /*
Abstraction of the hostNetwork toggle.
This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly.
*/ -}}
{{- define "newrelic.common.hostNetwork.value" -}}
{{- if include "newrelic.common.hostNetwork" . -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,94 @@
{{- /*
Return the proper image name
{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }}
*/ -}}
{{- define "newrelic.common.images.image" -}}
{{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}}
{{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}}
{{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}}
{{- if $registryName -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}}
{{- else -}}
{{- printf "%s:%s" $repositoryName $tag | quote -}}
{{- end -}}
{{- end -}}
{{- /*
Return the proper image registry
{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }}
*/ -}}
{{- define "newrelic.common.images.registry" -}}
{{- $globalRegistry := "" -}}
{{- if .context.Values.global -}}
{{- if .context.Values.global.images -}}
{{- with .context.Values.global.images.registry -}}
{{- $globalRegistry = . -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- $localRegistry := "" -}}
{{- if .imageRoot.registry -}}
{{- $localRegistry = .imageRoot.registry -}}
{{- end -}}
{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}}
{{- if $registry -}}
{{- $registry -}}
{{- end -}}
{{- end -}}
{{- /*
Return the proper image repository
{{ include "newrelic.common.images.repository" .Values.path.to.the.image }}
*/ -}}
{{- define "newrelic.common.images.repository" -}}
{{- .repository -}}
{{- end -}}
{{- /*
Return the proper image tag
{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }}
*/ -}}
{{- define "newrelic.common.images.tag" -}}
{{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}}
{{- end -}}
{{- /*
Return the proper Image Pull Registry Secret Names evaluating values as templates
{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }}
*/ -}}
{{- define "newrelic.common.images.renderPullSecrets" -}}
{{- $flatlist := list }}
{{- if .context.Values.global -}}
{{- if .context.Values.global.images -}}
{{- if .context.Values.global.images.pullSecrets -}}
{{- range .context.Values.global.images.pullSecrets -}}
{{- $flatlist = append $flatlist . -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- range .pullSecrets -}}
{{- if not (empty .) -}}
{{- range . -}}
{{- $flatlist = append $flatlist . -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if $flatlist -}}
{{- toYaml $flatlist -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,56 @@
{{/*
Return the name of the secret holding the Insights Key.
*/}}
{{- define "newrelic.common.insightsKey.secretName" -}}
{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}}
{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}}
{{- end -}}
{{/*
Return the name key for the Insights Key inside the secret.
*/}}
{{- define "newrelic.common.insightsKey.secretKeyName" -}}
{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}}
{{- end -}}
{{/*
Return local insightsKey if set, global otherwise.
This helper is for internal use.
*/}}
{{- define "newrelic.common.insightsKey._licenseKey" -}}
{{- if .Values.insightsKey -}}
{{- .Values.insightsKey -}}
{{- else if .Values.global -}}
{{- if .Values.global.insightsKey -}}
{{- .Values.global.insightsKey -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return the name of the secret holding the Insights Key.
This helper is for internal use.
*/}}
{{- define "newrelic.common.insightsKey._customSecretName" -}}
{{- if .Values.customInsightsKeySecretName -}}
{{- .Values.customInsightsKeySecretName -}}
{{- else if .Values.global -}}
{{- if .Values.global.customInsightsKeySecretName -}}
{{- .Values.global.customInsightsKeySecretName -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return the name key for the Insights Key inside the secret.
This helper is for internal use.
*/}}
{{- define "newrelic.common.insightsKey._customSecretKey" -}}
{{- if .Values.customInsightsKeySecretKey -}}
{{- .Values.customInsightsKeySecretKey -}}
{{- else if .Values.global -}}
{{- if .Values.global.customInsightsKeySecretKey }}
{{- .Values.global.customInsightsKeySecretKey -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,21 @@
{{/*
Renders the insights key secret if user has not specified a custom secret.
*/}}
{{- define "newrelic.common.insightsKey.secret" }}
{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }}
{{- /* Fail if insightsKey is empty and required: */ -}}
{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }}
{{- fail "You must specify an insightsKey or a customInsightsKeySecretName containing it" }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "newrelic.common.insightsKey.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "newrelic.common.labels" . | nindent 4 }}
data:
{{ include "newrelic.common.insightsKey.secretKeyName" . }}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,54 @@
{{/*
This will render the labels that should be used in all the manifests used by the helm chart.
*/}}
{{- define "newrelic.common.labels" -}}
{{- $global := index .Values "global" | default dict -}}
{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}}
{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}}
{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}}
{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}}
{{- if .Chart.AppVersion -}}
{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}}
{{- end -}}
{{- $globalUserLabels := $global.labels | default dict -}}
{{- $localUserLabels := .Values.labels | default dict -}}
{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}}
{{- toYaml $labels -}}
{{- end -}}
{{/*
This will render the labels that should be used in deployments/daemonsets template pods as a selector.
*/}}
{{- define "newrelic.common.labels.selectorLabels" -}}
{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . ) -}}
{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}}
{{- $selectorLabels := mustMergeOverwrite $name $instance -}}
{{- toYaml $selectorLabels -}}
{{- end }}
{{/*
Pod labels
*/}}
{{- define "newrelic.common.labels.podLabels" -}}
{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}}
{{- $global := index .Values "global" | default dict -}}
{{- $globalPodLabels := $global.podLabels | default dict }}
{{- $localPodLabels := .Values.podLabels | default dict }}
{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}}
{{- toYaml $podLabels -}}
{{- end }}

View File

@ -0,0 +1,68 @@
{{/*
Return the name of the secret holding the License Key.
*/}}
{{- define "newrelic.common.license.secretName" -}}
{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}}
{{- include "newrelic.common.license._customSecretName" . | default $default -}}
{{- end -}}
{{/*
Return the name key for the License Key inside the secret.
*/}}
{{- define "newrelic.common.license.secretKeyName" -}}
{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}}
{{- end -}}
{{/*
Return local licenseKey if set, global otherwise.
This helper is for internal use.
*/}}
{{- define "newrelic.common.license._licenseKey" -}}
{{- if .Values.licenseKey -}}
{{- .Values.licenseKey -}}
{{- else if .Values.global -}}
{{- if .Values.global.licenseKey -}}
{{- .Values.global.licenseKey -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return the name of the secret holding the License Key.
This helper is for internal use.
*/}}
{{- define "newrelic.common.license._customSecretName" -}}
{{- if .Values.customSecretName -}}
{{- .Values.customSecretName -}}
{{- else if .Values.global -}}
{{- if .Values.global.customSecretName -}}
{{- .Values.global.customSecretName -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return the name key for the License Key inside the secret.
This helper is for internal use.
*/}}
{{- define "newrelic.common.license._customSecretKey" -}}
{{- if .Values.customSecretLicenseKey -}}
{{- .Values.customSecretLicenseKey -}}
{{- else if .Values.global -}}
{{- if .Values.global.customSecretLicenseKey }}
{{- .Values.global.customSecretLicenseKey -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return empty string (falsehood) or "true" if the user set a custom secret for the license.
This helper is for internal use.
*/}}
{{- define "newrelic.common.license._usesCustomSecret" -}}
{{- if or (include "newrelic.common.license._customSecretName" .) (include "newrelic.common.license._customSecretKey" .) -}}
true
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,21 @@
{{/*
Renders the license key secret if user has not specified a custom secret.
*/}}
{{- define "newrelic.common.license.secret" }}
{{- if not (include "newrelic.common.license._customSecretName" .) }}
{{- /* Fail if licenseKey is empty and required: */ -}}
{{- if not (include "newrelic.common.license._licenseKey" .) }}
{{- fail "You must specify a licenseKey or a customSecretName containing it" }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "newrelic.common.license.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "newrelic.common.labels" . | nindent 4 }}
data:
{{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,26 @@
{{- /*
Abstraction of the lowDataMode toggle.
This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`.
Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string)
*/ -}}
{{- define "newrelic.common.lowDataMode" -}}
{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}}
{{- if (get .Values "lowDataMode" | kindIs "bool") -}}
{{- if .Values.lowDataMode -}}
{{- /*
We want only to return when this is true, returning `false` here will template "false" (string) when doing
an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used
as an evaluation somewhere else.
*/ -}}
{{- .Values.lowDataMode -}}
{{- end -}}
{{- else -}}
{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
{{- $global := index .Values "global" | default dict -}}
{{- if get $global "lowDataMode" | kindIs "bool" -}}
{{- if $global.lowDataMode -}}
{{- $global.lowDataMode -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,73 @@
{{/*
This is a function to be called directly with a string, just to truncate strings to
63 chars because some Kubernetes name fields are limited to that.
*/}}
{{- define "newrelic.common.naming.truncateToDNS" -}}
{{- . | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- /*
Given a name and a suffix, returns a 'DNS valid' name which always includes the suffix, truncating the name if needed.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If the suffix is too long it gets truncated, but it always takes precedence over the name, so a 63-char suffix would suppress the name entirely.
Usage:
{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "<my-name>" "suffix" "my-suffix" ) }}
*/ -}}
{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}}
{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}}
{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}}
{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}}
{{- if $newName -}}
{{- printf "%s-%s" $newName $suffix -}}
{{- else -}}
{{ $suffix }}
{{- end -}}
{{- end -}}
{{/*
Expand the name of the chart.
Uses the Chart name by default if nameOverride is not set.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "newrelic.common.naming.name" -}}
{{- $name := .Values.nameOverride | default .Chart.Name -}}
{{- include "newrelic.common.naming.truncateToDNS" $name -}}
{{- end }}
{{/*
Create a default fully qualified app name.
By default the full name will be "<release_name>" if it already contains the chart name; if not,
it will be concatenated like "<release_name>-<chart_name>". This could change if fullnameOverride or
nameOverride are set.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "newrelic.common.naming.fullname" -}}
{{- $name := include "newrelic.common.naming.name" . -}}
{{- if .Values.fullnameOverride -}}
{{- $name = .Values.fullnameOverride -}}
{{- else if not (contains $name .Release.Name) -}}
{{- $name = printf "%s-%s" .Release.Name $name -}}
{{- end -}}
{{- include "newrelic.common.naming.truncateToDNS" $name -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
This function should not be used for naming objects. Use "common.naming.{name,fullname}" instead.
*/}}
{{- define "newrelic.common.naming.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end }}

View File

@ -0,0 +1,10 @@
{{- /* Defines the Pod nodeSelector */ -}}
{{- define "newrelic.common.nodeSelector" -}}
{{- if .Values.nodeSelector -}}
{{- toYaml .Values.nodeSelector -}}
{{- else if .Values.global -}}
{{- if .Values.global.nodeSelector -}}
{{- toYaml .Values.global.nodeSelector -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,10 @@
{{- /* Defines the pod priorityClassName */ -}}
{{- define "newrelic.common.priorityClassName" -}}
{{- if .Values.priorityClassName -}}
{{- .Values.priorityClassName -}}
{{- else if .Values.global -}}
{{- if .Values.global.priorityClassName -}}
{{- .Values.global.priorityClassName -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,28 @@
{{- /*
This is a helper that returns whether the chart should assume the user is fine deploying privileged pods.
*/ -}}
{{- define "newrelic.common.privileged" -}}
{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists. */ -}}
{{- $global := index .Values "global" | default dict -}}
{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}}
{{- if get .Values "privileged" | kindIs "bool" -}}
{{- if .Values.privileged -}}
{{- .Values.privileged -}}
{{- end -}}
{{- else if get $global "privileged" | kindIs "bool" -}}
{{- if $global.privileged -}}
{{- $global.privileged -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /* Return directly "true" or "false" based on the existence of "newrelic.common.privileged" */ -}}
{{- define "newrelic.common.privileged.value" -}}
{{- if include "newrelic.common.privileged" . -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,10 @@
{{- /* Defines the proxy */ -}}
{{- define "newrelic.common.proxy" -}}
{{- if .Values.proxy -}}
{{- .Values.proxy -}}
{{- else if .Values.global -}}
{{- if .Values.global.proxy -}}
{{- .Values.global.proxy -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,74 @@
{{/*
Return the region that is being used by the user
*/}}
{{- define "newrelic.common.region" -}}
{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}}
{{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}}
{{- end -}}
{{- /* Defaults */ -}}
{{- $region := "us" -}}
{{- if include "newrelic.common.nrStaging" . -}}
{{- $region = "staging" -}}
{{- else if include "newrelic.common.region._isEULicenseKey" . -}}
{{- $region = "eu" -}}
{{- end -}}
{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . | default $region ) -}}
{{- end -}}
{{/*
Returns the region from the values if valid. This only return the value from the `values.yaml`.
More intelligence should be used to compute the region.
Usage: `include "newrelic.common.region.validate" "us"`
*/}}
{{- define "newrelic.common.region.validate" -}}
{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}}
{{- $region := . | lower -}}
{{- if eq $region "us" -}}
US
{{- else if eq $region "eu" -}}
EU
{{- else if eq $region "staging" -}}
Staging
{{- else if eq $region "local" -}}
Local
{{- else -}}
{{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}}
{{- end -}}
{{- end -}}
{{/*
Returns the region from the values. This only return the value from the `values.yaml`.
More intelligence should be used to compute the region.
This helper is for internal use.
*/}}
{{- define "newrelic.common.region._fromValues" -}}
{{- if .Values.region -}}
{{- .Values.region -}}
{{- else if .Values.global -}}
{{- if .Values.global.region -}}
{{- .Values.global.region -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return empty string (falsehood) or "true" if the license is for EU region.
This helper is for internal use.
*/}}
{{- define "newrelic.common.region._isEULicenseKey" -}}
{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}}
{{- $license := include "newrelic.common.license._licenseKey" . -}}
{{- if hasPrefix "eu" $license -}}
true
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,23 @@
{{- /* Defines the container securityContext context */ -}}
{{- define "newrelic.common.securityContext.container" -}}
{{- $global := index .Values "global" | default dict -}}
{{- if .Values.containerSecurityContext -}}
{{- toYaml .Values.containerSecurityContext -}}
{{- else if $global.containerSecurityContext -}}
{{- toYaml $global.containerSecurityContext -}}
{{- end -}}
{{- end -}}
{{- /* Defines the pod securityContext context */ -}}
{{- define "newrelic.common.securityContext.pod" -}}
{{- $global := index .Values "global" | default dict -}}
{{- if .Values.podSecurityContext -}}
{{- toYaml .Values.podSecurityContext -}}
{{- else if $global.podSecurityContext -}}
{{- toYaml $global.podSecurityContext -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,90 @@
{{- /* Defines if the service account has to be created or not */ -}}
{{- define "newrelic.common.serviceAccount.create" -}}
{{- $valueFound := false -}}
{{- /* Look for a global creation of a service account */ -}}
{{- if get .Values "serviceAccount" | kindIs "map" -}}
{{- if (get .Values.serviceAccount "create" | kindIs "bool") -}}
{{- $valueFound = true -}}
{{- if .Values.serviceAccount.create -}}
{{- /*
We want only to return when this is true, returning `false` here will template "false" (string) when doing
an `(include "newrelic.common.serviceAccount.name" .)`, which is not an "empty string" so it is `true` if it is used
as an evaluation somewhere else.
*/ -}}
{{- .Values.serviceAccount.create -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /* Look for a local creation of a service account */ -}}
{{- if not $valueFound -}}
{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
{{- $global := index .Values "global" | default dict -}}
{{- if get $global "serviceAccount" | kindIs "map" -}}
{{- if get $global.serviceAccount "create" | kindIs "bool" -}}
{{- $valueFound = true -}}
{{- if $global.serviceAccount.create -}}
{{- $global.serviceAccount.create -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /* In case no serviceAccount value has been found, default to "true" */ -}}
{{- if not $valueFound -}}
true
{{- end -}}
{{- end -}}
{{- /* Defines the name of the service account */ -}}
{{- define "newrelic.common.serviceAccount.name" -}}
{{- $localServiceAccount := "" -}}
{{- if get .Values "serviceAccount" | kindIs "map" -}}
{{- if (get .Values.serviceAccount "name" | kindIs "string") -}}
{{- $localServiceAccount = .Values.serviceAccount.name -}}
{{- end -}}
{{- end -}}
{{- $globalServiceAccount := "" -}}
{{- $global := index .Values "global" | default dict -}}
{{- if get $global "serviceAccount" | kindIs "map" -}}
{{- if get $global.serviceAccount "name" | kindIs "string" -}}
{{- $globalServiceAccount = $global.serviceAccount.name -}}
{{- end -}}
{{- end -}}
{{- if (include "newrelic.common.serviceAccount.create" .) -}}
{{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}}
{{- else -}}
{{- $localServiceAccount | default $globalServiceAccount | default "default" -}}
{{- end -}}
{{- end -}}
{{- /* Merge the global and local annotations for the service account */ -}}
{{- define "newrelic.common.serviceAccount.annotations" -}}
{{- $localServiceAccount := dict -}}
{{- if get .Values "serviceAccount" | kindIs "map" -}}
{{- if get .Values.serviceAccount "annotations" -}}
{{- $localServiceAccount = .Values.serviceAccount.annotations -}}
{{- end -}}
{{- end -}}
{{- $globalServiceAccount := dict -}}
{{- $global := index .Values "global" | default dict -}}
{{- if get $global "serviceAccount" | kindIs "map" -}}
{{- if get $global.serviceAccount "annotations" -}}
{{- $globalServiceAccount = $global.serviceAccount.annotations -}}
{{- end -}}
{{- end -}}
{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}}
{{- if $merged -}}
{{- toYaml $merged -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,39 @@
{{- /*
Abstraction of the nrStaging toggle.
This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`.
Returns "true" if `nrStaging` is enabled, otherwise "" (empty string)
*/ -}}
{{- define "newrelic.common.nrStaging" -}}
{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}}
{{- if (get .Values "nrStaging" | kindIs "bool") -}}
{{- if .Values.nrStaging -}}
{{- /*
We want only to return when this is true, returning `false` here will template "false" (string) when doing
an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used
as an evaluation somewhere else.
*/ -}}
{{- .Values.nrStaging -}}
{{- end -}}
{{- else -}}
{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
{{- $global := index .Values "global" | default dict -}}
{{- if get $global "nrStaging" | kindIs "bool" -}}
{{- if $global.nrStaging -}}
{{- $global.nrStaging -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /*
Returns "true" of "false" directly instead of empty string (Helm falsiness) based on the exit of "newrelic.common.nrStaging"
*/ -}}
{{- define "newrelic.common.nrStaging.value" -}}
{{- if include "newrelic.common.nrStaging" . -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,10 @@
{{- /* Defines the Pod tolerations */ -}}
{{- define "newrelic.common.tolerations" -}}
{{- if .Values.tolerations -}}
{{- toYaml .Values.tolerations -}}
{{- else if .Values.global -}}
{{- if .Values.global.tolerations -}}
{{- toYaml .Values.global.tolerations -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,56 @@
{{/*
Return the name of the secret holding the API Key.
*/}}
{{- define "newrelic.common.userKey.secretName" -}}
{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}}
{{- include "newrelic.common.userKey._customSecretName" . | default $default -}}
{{- end -}}
{{/*
Return the name key for the API Key inside the secret.
*/}}
{{- define "newrelic.common.userKey.secretKeyName" -}}
{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}}
{{- end -}}
{{/*
Return local API Key if set, global otherwise.
This helper is for internal use.
*/}}
{{- define "newrelic.common.userKey._userKey" -}}
{{- if .Values.userKey -}}
{{- .Values.userKey -}}
{{- else if .Values.global -}}
{{- if .Values.global.userKey -}}
{{- .Values.global.userKey -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return the name of the secret holding the API Key.
This helper is for internal use.
*/}}
{{- define "newrelic.common.userKey._customSecretName" -}}
{{- if .Values.customUserKeySecretName -}}
{{- .Values.customUserKeySecretName -}}
{{- else if .Values.global -}}
{{- if .Values.global.customUserKeySecretName -}}
{{- .Values.global.customUserKeySecretName -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Return the name key for the API Key inside the secret.
This helper is for internal use.
*/}}
{{- define "newrelic.common.userKey._customSecretKey" -}}
{{- if .Values.customUserKeySecretKey -}}
{{- .Values.customUserKeySecretKey -}}
{{- else if .Values.global -}}
{{- if .Values.global.customUserKeySecretKey }}
{{- .Values.global.customUserKeySecretKey -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,21 @@
{{/*
Renders the user key secret if user has not specified a custom secret.
*/}}
{{- define "newrelic.common.userKey.secret" }}
{{- if not (include "newrelic.common.userKey._customSecretName" .) }}
{{- /* Fail if user key is empty and required: */ -}}
{{- if not (include "newrelic.common.userKey._userKey" .) }}
{{- fail "You must specify a userKey or a customUserKeySecretName containing it" }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "newrelic.common.userKey.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "newrelic.common.labels" . | nindent 4 }}
data:
{{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,54 @@
{{- /*
Abstraction of the verbose toggle.
This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`.
Returns "true" if `verbose` is enabled, otherwise "" (empty string)
*/ -}}
{{- define "newrelic.common.verboseLog" -}}
{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}}
{{- if (get .Values "verboseLog" | kindIs "bool") -}}
{{- if .Values.verboseLog -}}
{{- /*
We want only to return when this is true, returning `false` here will template "false" (string) when doing
an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used
as an evaluation somewhere else.
*/ -}}
{{- .Values.verboseLog -}}
{{- end -}}
{{- else -}}
{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
{{- $global := index .Values "global" | default dict -}}
{{- if get $global "verboseLog" | kindIs "bool" -}}
{{- if $global.verboseLog -}}
{{- $global.verboseLog -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /*
Abstraction of the verbose toggle.
This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly.
*/ -}}
{{- define "newrelic.common.verboseLog.valueAsBoolean" -}}
{{- if include "newrelic.common.verboseLog" . -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}
{{- /*
Abstraction of the verbose toggle.
This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly.
*/ -}}
{{- define "newrelic.common.verboseLog.valueAsInt" -}}
{{- if include "newrelic.common.verboseLog" . -}}
1
{{- else -}}
0
{{- end -}}
{{- end -}}

View File

@ -0,0 +1 @@
# values are not needed for the library chart, however this file is still needed for helm lint to work.

View File

@ -0,0 +1,36 @@
This project is currently in preview.
Issues and contributions should be reported to the project's GitHub.
{{- if (include "k8s-agents-operator.areValuesValid" .) }}
=====================================
********
****************
********** **********,
&&&**** ****/(((
&&&&&&& ((((((
&&&&&&&&&& ((((((
&&&&&&&& ((((((
&&&&& ((((((
&&&&& ((((((((
&&&&& .((((((((((
&&&&&((((((((
&&&(((,
Your deployment of the New Relic Agent Operator is complete.
You can check on the progress of this by running the following command:
kubectl get deployments -o wide -w --namespace {{ .Release.Namespace }} {{ include "newrelic.common.naming.fullname" . }}
WARNING: This deployment will be incomplete until you configure your Instrumentation custom resource definition.
=====================================
Please visit https://github.com/newrelic/k8s-agents-operator for instructions on how to create & configure the
Instrumentation custom resource definition required by the Operator.
{{- else }}
##############################################################################
#### ERROR: You did not set a license key. ####
##############################################################################
This deployment will be incomplete until you get your ingest license key from New Relic.
{{- end -}}

View File

@ -0,0 +1,25 @@
{{/*
Returns if the template should render, it checks if the required values are set.
*/}}
{{- define "k8s-agents-operator.areValuesValid" -}}
{{- $licenseKey := include "newrelic.common.license._licenseKey" . -}}
{{- and (or $licenseKey)}}
{{- end -}}
{{- define "k8s-agents-operator.manager.image" -}}
{{- $managerVersion := .Values.controllerManager.manager.image.version | default .Chart.AppVersion -}}
{{- if eq (substr 0 7 $managerVersion) "sha256:" -}}
{{- printf "%s@%s" .Values.controllerManager.manager.image.repository $managerVersion -}}
{{- else -}}
{{- printf "%s:%s" .Values.controllerManager.manager.image.repository $managerVersion -}}
{{- end -}}
{{- end -}}
{{- define "k8s-agents-operator.kubeRbacProxy.image" -}}
{{- $kubeRbacProxyVersion := .Values.controllerManager.kubeRbacProxy.image.version | default .Chart.AppVersion -}}
{{- if eq (substr 0 7 $kubeRbacProxyVersion) "sha256:" -}}
{{- printf "%s@%s" .Values.controllerManager.kubeRbacProxy.image.repository $kubeRbacProxyVersion -}}
{{- else -}}
{{- printf "%s:%s" .Values.controllerManager.kubeRbacProxy.image.repository $kubeRbacProxyVersion -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,52 @@
{{/* Controller manager service certificate's secret. */}}
{{- define "k8s-agents-operator.certificateSecret.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "controller-manager-service-cert") -}}
{{- end }}
{{- define "k8s-agents-operator.webhook.service.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "webhook-service") -}}
{{- end -}}
{{- define "k8s-agents-operator.webhook.mutating.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "mutation") -}}
{{- end -}}
{{- define "k8s-agents-operator.webhook.validating.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "validation") -}}
{{- end -}}
{{- define "k8s-agents-operator.cert-manager.issuer.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "selfsigned-issuer") -}}
{{- end -}}
{{- define "k8s-agents-operator.cert-manager.certificate.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "serving-cert") -}}
{{- end -}}
{{- define "k8s-agents-operator.rbac.proxy.role.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "proxy-role") -}}
{{- end -}}
{{- define "k8s-agents-operator.rbac.proxy.roleBinding.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "proxy-rolebinding") -}}
{{- end -}}
{{- define "k8s-agents-operator.rbac.manager.role.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "manager-role") -}}
{{- end -}}
{{- define "k8s-agents-operator.rbac.manager.roleBinding.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "manager-rolebinding") -}}
{{- end -}}
{{- define "k8s-agents-operator.rbac.leaderElection.role.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "leader-election-role") -}}
{{- end -}}
{{- define "k8s-agents-operator.rbac.leaderElection.roleBinding.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "leader-election-rolebinding") -}}
{{- end -}}
{{- define "k8s-agents-operator.rbac.metricsReader.role.name" -}}
{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "metrics-reader") -}}
{{- end -}}

Some files were not shown because too many files have changed in this diff