Migrate charts directory (vendors starting with I-L) (#1046)
parent 745670abe5
commit ed4002e003

The chart's `Chart.yaml`:

annotations:
  artifacthub.io/links: |
    - name: Instana website
      url: https://www.instana.com
    - name: Instana Helm charts
      url: https://github.com/instana/helm-charts
  catalog.cattle.io/certified: partner
  catalog.cattle.io/display-name: Instana Agent
  catalog.cattle.io/kube-version: '>=1.21-0'
  catalog.cattle.io/release-name: instana-agent
apiVersion: v2
appVersion: 1.251.0
description: Instana Agent for Kubernetes
home: https://www.instana.com/
icon: https://agents.instana.io/helm/stan-logo-2020.png
maintainers:
  - email: felix.marx@ibm.com
    name: FelixMarxIBM
  - email: henning.treu@ibm.com
    name: htreu
  - email: torsten.kohn@ibm.com
    name: tkohn
name: instana-agent
sources:
  - https://github.com/instana/instana-agent-docker
version: 1.2.60
# Instana

Instana is an [APM solution](https://www.instana.com/) built for microservices that enables IT Ops to build applications faster and deliver higher-quality services by automating monitoring, tracing and root-cause analysis.
This solution is optimized for [Kubernetes](https://www.instana.com/automatic-kubernetes-monitoring/).

This chart adds the Instana Agent to all schedulable nodes in your cluster via a privileged `DaemonSet` and accompanying resources like `ConfigMap`s, `Secret`s and RBAC settings.

## Prerequisites

* Kubernetes 1.21+ OR OpenShift 4.8+
* Helm 3

## Installation

To configure the installation you can either specify the options on the command line using the **--set** switch, or you can edit **values.yaml**.

First, create a namespace for the instana-agent:

```bash
kubectl create namespace instana-agent
```

To install the chart with the release name `instana-agent` and set the values on the command line, run:

```bash
$ helm install instana-agent --namespace instana-agent \
  --repo https://agents.instana.io/helm \
  --set agent.key=INSTANA_AGENT_KEY \
  --set agent.endpointHost=HOST \
  --set zone.name=ZONE_NAME \
  instana-agent
```

**OpenShift:** When targeting an OpenShift 4.x cluster, add `--set openshift=true`.

### Required Settings

#### Configuring the Instana Backend

In order to report the data it collects to the Instana backend for analysis, the Instana agent must know which backend to report to and which credentials, known as the "agent key", to use to authenticate.

As described in the [Install Using the Helm Chart](https://www.instana.com/docs/setup_and_manage/host_agent/on/kubernetes#install-using-the-helm-chart) documentation, you will find the right values for the following fields inside Instana itself:

* `agent.endpointHost`
* `agent.endpointPort`
* `agent.key`

_Note:_ You can find the options mentioned in the [configuration section below](#Configuration-Reference).

If your agents report into a self-managed Instana unit (also known as "on-prem"), you will also need to configure a "download key", which allows the agent to fetch its components from the Instana repository.
The download key is set via the following value:

* `agent.downloadKey`

#### Zone and Cluster

Instana needs to know how to name your Kubernetes cluster and, optionally, how to group your Instana agents in [Custom zones](https://www.instana.com/docs/setup_and_manage/host_agent/configuration/#custom-zones) using the following fields:

* `zone.name`
* `cluster.name`

Either `zone.name` or `cluster.name` is required.
If you omit `cluster.name`, the value of `zone.name` will be used as the cluster name as well.
If you omit `zone.name`, the host zone will be automatically determined by the availability zone information provided by the [supported cloud providers](https://www.instana.com/docs/setup_and_manage/cloud_service_agents).
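
Put together, a minimal `values.yaml` for these required settings might look like the following sketch (the key, zone and cluster names are placeholders, and the endpoint host is the US/ROW default from the table below):

```yaml
agent:
  key: INSTANA_AGENT_KEY             # placeholder: your agent key
  endpointHost: ingress-red-saas.instana.io
  endpointPort: 443
zone:
  name: my-zone                      # placeholder zone name
cluster:
  name: my-k8s-cluster               # placeholder cluster name
```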

## Uninstallation

To uninstall/delete the `instana-agent` release:

```bash
helm uninstall instana-agent -n instana-agent
```

## Configuration Reference

The following table lists the configurable parameters of the Instana chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| `agent.configuration_yaml` | Custom content for the agent configuration.yaml file | `nil` See [below](#Agent-Configuration) for more details |
| `agent.configuration.autoMountConfigEntries` | (Experimental, needs Helm 3.1+) Automatically look up the entries of the default `instana-agent` ConfigMap, and mount as agent configuration files in the `instana-agent` container under the `/opt/instana/agent/etc/instana` directory all ConfigMap entries with keys that match the `configuration-*.yaml` scheme. | `false` |
| `agent.configuration.hotreloadEnabled` | Enables hot-reload of a configuration.yaml upon changes in the `instana-agent` ConfigMap without requiring a restart of a pod | `false` |
| `agent.endpointHost` | Instana Agent backend endpoint host | `ingress-red-saas.instana.io` (US and ROW). If in Europe, please override with `ingress-blue-saas.instana.io` |
| `agent.endpointPort` | Instana Agent backend endpoint port | `443` |
| `agent.key` | Your Instana Agent key | `nil` You must provide your own key unless `agent.keysSecret` is specified |
| `agent.downloadKey` | Your Instana Download key | `nil` Usually not required |
| `agent.keysSecret` | As an alternative to specifying `agent.key` and, optionally, `agent.downloadKey`, you can instead specify the name of the secret in the namespace in which you install the Instana agent that carries the agent key and download key | `nil` Usually not required, see [Bring your own Keys secret](#bring-your-own-keys-secret) for more details |
| `agent.additionalBackends` | List of additional backends to report to; it must specify the `endpointHost` and `key` fields, and optionally `endpointPort` | `[]` Usually not required; see [Configuring Additional Backends](#configuring-additional-backends) for more info and examples |
| `agent.tls.secretName` | The name of the secret of type `kubernetes.io/tls` which contains the TLS-relevant data. If the name is provided, `agent.tls.certificate` and `agent.tls.key` will be ignored. | `nil` |
| `agent.tls.certificate` | The certificate data, base64-encoded, which will be used to create a new secret of type `kubernetes.io/tls`. | `nil` |
| `agent.tls.key` | The private key data, base64-encoded, which will be used to create a new secret of type `kubernetes.io/tls`. | `nil` |
| `agent.image.name` | The image name to pull | `instana/agent` |
| `agent.image.digest` | The image digest to pull; if specified, it causes `agent.image.tag` to be ignored | `nil` |
| `agent.image.tag` | The image tag to pull; this property is ignored if `agent.image.digest` is specified | `latest` |
| `agent.image.pullPolicy` | Image pull policy | `Always` |
| `agent.image.pullSecrets` | Image pull secrets; if not specified (default) _and_ `agent.image.name` starts with `containers.instana.io`, it will be automatically set to `[{ "name": "containers-instana-io" }]` to match the default secret created in this case. | `nil` |
| `agent.listenAddress` | List of addresses to listen on, or `*` for all interfaces | `nil` |
| `agent.mode` | Agent mode. Supported values are `APM`, `INFRASTRUCTURE`, `AWS` | `APM` |
| `agent.instanaMvnRepoUrl` | Override for the Maven repository URL when the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
| `agent.instanaMvnRepoFeaturesPath` | Override for the Maven repository features path when the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
| `agent.instanaMvnRepoSharedPath` | Override for the Maven repository shared path when the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
| `agent.updateStrategy.type` | [DaemonSet update strategy type](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/); valid values are `OnDelete` and `RollingUpdate` | `RollingUpdate` |
| `agent.updateStrategy.rollingUpdate.maxUnavailable` | How many agent pods can be updated at once; this value is ignored if `agent.updateStrategy.type` is different than `RollingUpdate` | `1` |
| `agent.pod.annotations` | Additional annotations to apply to the pod | `{}` |
| `agent.pod.labels` | Additional labels to apply to the Agent pod | `{}` |
| `agent.pod.priorityClassName` | Name of an _existing_ PriorityClass that should be set on the agent pods | `nil` |
| `agent.proxyHost` | Hostname/address of a proxy | `nil` |
| `agent.proxyPort` | Port of a proxy | `nil` |
| `agent.proxyProtocol` | Proxy protocol. Supported proxy types are `http` (for both HTTP and HTTPS proxies), `socks4`, `socks5`. | `nil` |
| `agent.proxyUser` | Username for proxy authentication | `nil` |
| `agent.proxyPassword` | Password for proxy authentication | `nil` |
| `agent.proxyUseDNS` | Boolean; whether the proxy also resolves DNS | `nil` |
| `agent.pod.limits.cpu` | Container CPU limit in CPU cores | `1.5` |
| `agent.pod.limits.memory` | Container memory limit in MiB | `768Mi` |
| `agent.pod.requests.cpu` | Container CPU request in CPU cores | `0.5` |
| `agent.pod.requests.memory` | Container memory request in MiB | `512Mi` |
| `agent.pod.tolerations` | Tolerations for pod assignment | `[]` |
| `agent.pod.affinity` | Affinity for pod assignment | `{}` |
| `agent.env` | Additional environment variables for the agent | `{}` |
| `agent.redactKubernetesSecrets` | Enable additional secrets redaction for selected Kubernetes resources | `nil` See [Kubernetes secrets](https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/#secrets) for more details. |
| `cluster.name` | Display name of the monitored cluster | Value of `zone.name` |
| `leaderElector.port` | Instana leader elector sidecar port | `42655` |
| `leaderElector.image.name` | The elector image name to pull | `instana/leader-elector` |
| `leaderElector.image.digest` | The image digest to pull; if specified, it causes `leaderElector.image.tag` to be ignored | `nil` |
| `leaderElector.image.tag` | The image tag to pull; this property is ignored if `leaderElector.image.digest` is specified | `latest` |
| `k8s_sensor.deployment.enabled` | Isolate the Kubernetes sensor (`k8sensor`) in its own deployment | `true` |
| `k8s_sensor.image.name` | The k8sensor image name to pull | `gcr.io/instana/k8sensor` |
| `k8s_sensor.image.digest` | The image digest to pull; if specified, it causes `k8s_sensor.image.tag` to be ignored | `nil` |
| `k8s_sensor.image.tag` | The image tag to pull; this property is ignored if `k8s_sensor.image.digest` is specified | `latest` |
| `k8s_sensor.deployment.pod.limits.cpu` | CPU limit for the `k8sensor` pods | `4` |
| `k8s_sensor.deployment.pod.limits.memory` | Memory limit for the `k8sensor` pods | `6144Mi` |
| `k8s_sensor.deployment.pod.requests.cpu` | CPU request for the `k8sensor` pods | `1.5` |
| `k8s_sensor.deployment.pod.requests.memory` | Memory request for the `k8sensor` pods | `1024Mi` |
| `podSecurityPolicy.enable` | Whether a PodSecurityPolicy should be authorized for the Instana Agent pods. Requires `rbac.create` to be `true` as well, and is only available until Kubernetes v1.25. | `false` See [PodSecurityPolicy](https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/#podsecuritypolicy) for more details. |
| `podSecurityPolicy.name` | Name of an _existing_ PodSecurityPolicy to authorize for the Instana Agent pods. If not provided and `podSecurityPolicy.enable` is `true`, a PodSecurityPolicy will be created for you. | `nil` |
| `rbac.create` | Whether RBAC resources should be created | `true` |
| `openshift` | Whether to install the Helm chart as needed in OpenShift; this setting implies `rbac.create=true` | `false` |
| `opentelemetry.grpc.enabled` | Whether to configure the agent to accept telemetry from OpenTelemetry applications via gRPC. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `false` |
| `opentelemetry.http.enabled` | Whether to configure the agent to accept telemetry from OpenTelemetry applications via HTTP. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `false` |
| `prometheus.remoteWrite.enabled` | Whether to configure the agent to accept metrics over its implementation of the `remote_write` Prometheus endpoint. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `false` |
| `service.create` | Whether to create a service that exposes the agents' Prometheus, OpenTelemetry and other APIs inside the cluster. Requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. The `ServiceInternalTrafficPolicy` feature gate needs to be enabled (default: enabled). | `true` |
| `serviceAccount.create` | Whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | Name of the ServiceAccount to use | `instana-agent` |
| `zone.name` | Zone that detected technologies will be assigned to | `nil` You must provide either `zone.name` or `cluster.name`, see [above](#Installation) for details |
| `zones` | Multi-zone daemonset configuration. | `nil` See [below](#multiple-zones) for details |

### Agent Modes

The agent can run in either `APM` or `INFRASTRUCTURE` mode.
The default is `APM`; to override it, set the following value:

* `agent.mode`

For more information on agent modes, refer to the [Host Agent Modes](https://www.instana.com/docs/setup_and_manage/host_agent#host-agent-modes) documentation.
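
For example, to run all agents in infrastructure-only mode, a `values.yaml` fragment such as this sketch could be used:

```yaml
agent:
  mode: INFRASTRUCTURE   # one of the supported values: APM, INFRASTRUCTURE, AWS
```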

### Agent Configuration

Besides the settings listed above, there are many more settings that can be applied to the agent via the so-called "Agent Configuration File", often also referred to as the `configuration.yaml` file.
An overview of the settings that can be applied is provided in the [Agent Configuration File](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#agent-configuration-file) documentation.
To configure the agent, you can either:

* edit the [config map](templates/agent-configmap.yaml), or
* provide the configuration via the `agent.configuration_yaml` parameter in [values.yaml](values.yaml)

This configuration will be used for all Instana Agents on all nodes. Visit the [agent configuration documentation](https://docs.instana.io/setup_and_manage/host_agent/#agent-configuration-file) for more details on configuration options.

_Note:_ This Helm Chart does not support configuring [Multiple Configuration Files](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#multiple-configuration-files).
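
As a sketch, embedding a custom `configuration.yaml` via `agent.configuration_yaml` could look like the following; the content of the literal block is an illustrative placeholder, not a recommended configuration — consult the Agent Configuration File documentation for the actual settings:

```yaml
agent:
  configuration_yaml: |
    # Contents of the agent's configuration.yaml go here, for example
    # sensor-specific settings as described in the Instana documentation.
```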

### Agent Pod Sizing

The `agent.pod.requests.cpu`, `agent.pod.requests.memory`, `agent.pod.limits.cpu` and `agent.pod.limits.memory` settings allow you to change the sizing of the `instana-agent` pods.
If you are using the [Kubernetes Sensor Deployment](#kubernetes-sensor-deployment) functionality, you may be able to reduce the default amount of resources, and especially memory, allocated to the Instana agents that monitor your applications.
Actual sizing depends very much on how many pods, containers and applications are monitored, and how many traces they generate, so we cannot really provide a rule of thumb.
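
For reference, the defaults listed in the configuration table above correspond to this `values.yaml` fragment, which can serve as a starting point for tuning:

```yaml
agent:
  pod:
    requests:
      cpu: 0.5
      memory: 512Mi
    limits:
      cpu: 1.5
      memory: 768Mi
```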

### Bring your own Keys secret

In case you have automation that creates secrets for you, it may not be desirable for this Helm chart to create a secret containing the `agent.key` and `agent.downloadKey`.
In this case, you can instead specify the name of an already-existing secret in the namespace in which you install the Instana agent that carries the agent key and download key.

The secret you specify _must_ have a field called `key`, which would contain the value you would otherwise set to `agent.key`, and _may_ contain a field called `downloadKey`, which would contain the value you would otherwise set to `agent.downloadKey`.
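
As a sketch, such a secret (the name `my-instana-keys` and the key values are placeholders) could look like the following, and would then be referenced with `--set agent.keysSecret=my-instana-keys`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-instana-keys        # placeholder; referenced via agent.keysSecret
  namespace: instana-agent
type: Opaque
stringData:
  key: INSTANA_AGENT_KEY           # required: the value you would set as agent.key
  downloadKey: INSTANA_DOWNLOAD_KEY  # optional: the value you would set as agent.downloadKey
```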

### Configuring Additional Configuration Files

[Multiple configuration files](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#multiple-configuration-files) is a capability of the Instana agent that allows for modularity in its configuration files.

The experimental `agent.configuration.autoMountConfigEntries` setting uses functionality available in Helm 3.1+ to automatically look up the entries of the default `instana-agent` ConfigMap, and mount as agent configuration files in the `instana-agent` container, under the `/opt/instana/agent/etc/instana` directory, all ConfigMap entries with keys that match the `configuration-*.yaml` scheme.

**IMPORTANT:** Needs Helm 3.1+, as it is built on the `lookup` function.
**IMPORTANT:** Adding keys to the ConfigMap requires a `helm upgrade` to take effect.
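
As an illustration (the entry name is hypothetical), an entry like the following added to the default `instana-agent` ConfigMap would, with the experimental flag enabled, be mounted as an additional configuration file because its key matches the `configuration-*.yaml` scheme:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: instana-agent
  namespace: instana-agent
data:
  configuration-custom.yaml: |
    # hypothetical extra agent configuration, mounted under
    # /opt/instana/agent/etc/instana when autoMountConfigEntries is true
```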

### Configuring Additional Backends

You may want to have your Instana agents report to multiple backends.
The first backend must be configured as shown in the [Configuring the Instana Backend](#configuring-the-instana-backend) section; every backend after the first is configured in the `agent.additionalBackends` list in the [values.yaml](values.yaml) as follows:

```yaml
agent:
  additionalBackends:
    # Second backend
    - endpointHost: my-instana.instana.io # endpoint host
      endpointPort: 443 # default is 443, so this line could be omitted
      key: ABCDEFG # agent key for this backend
    # Third backend
    - endpointHost: another-instana.instana.io # endpoint host
      endpointPort: 1444 # default is 443; set explicitly when different
      key: LMNOPQR # agent key for this backend
```

The snippet above configures the agent to report to two additional backends.

The same effect as the above can be accomplished on the command line:

```sh
$ helm install -n instana-agent instana-agent ... \
  --repo https://agents.instana.io/helm \
  --set 'agent.additionalBackends[0].endpointHost=my-instana.instana.io' \
  --set 'agent.additionalBackends[0].endpointPort=443' \
  --set 'agent.additionalBackends[0].key=ABCDEFG' \
  --set 'agent.additionalBackends[1].endpointHost=another-instana.instana.io' \
  --set 'agent.additionalBackends[1].endpointPort=1444' \
  --set 'agent.additionalBackends[1].key=LMNOPQR' \
  instana-agent
```

_Note:_ There is no hard limit on the number of backends an Instana agent can report to, although each additional backend comes at the cost of a slight increase in CPU and memory consumption.

### Configuring a Proxy between the Instana agents and the Instana backend

If your infrastructure uses a proxy, you should ensure that you set values for:

* `agent.proxyHost`
* `agent.proxyPort`
* `agent.proxyProtocol`
* `agent.proxyUser`
* `agent.proxyPassword`
* `agent.proxyUseDNS`
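
In `values.yaml` terms, a proxy configuration might be sketched as follows, using the `agent.proxy*` settings from the configuration reference above (host, port and credentials are placeholders):

```yaml
agent:
  proxyHost: proxy.example.com   # placeholder proxy address
  proxyPort: 3128                # placeholder proxy port
  proxyProtocol: http            # http (covers HTTPS proxies too), socks4 or socks5
  proxyUser: proxyuser           # only if the proxy requires authentication
  proxyPassword: secret          # only if the proxy requires authentication
```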

### Configuring which Networks the Instana Agent should listen on

If your infrastructure has multiple networks defined, you might need to allow the agent to listen on all addresses (typically by setting the value to `*`):

* `agent.listenAddress`
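
For example, to listen on all interfaces:

```yaml
agent:
  listenAddress: "*"
```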

### Setup TLS Encryption for Agent Endpoint

TLS encryption can be set up in one of two ways:
either an existing secret can be used, or a certificate and a private key can be provided during the installation.

#### Using existing secret

An existing secret of type `kubernetes.io/tls` can be used.
Only the `secretName` must be provided during the installation, with `--set 'agent.tls.secretName=<YOUR_SECRET_NAME>'`.
The files from the provided secret are then mounted into the agent.
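
As a sketch, such a secret (the name and the base64-encoded data are placeholders) follows the standard `kubernetes.io/tls` layout:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent-tls      # placeholder; referenced via agent.tls.secretName
  namespace: instana-agent
type: kubernetes.io/tls
data:
  tls.crt: <YOUR_CERTIFICATE_BASE64_ENCODED>
  tls.key: <YOUR_PRIVATE_KEY_BASE64_ENCODED>
```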

#### Provide certificate and private key

Alternatively, a certificate and a private key can be provided during the installation.
The certificate and private key must be base64-encoded.

To use this variant, execute `helm install` with the following additional parameters:

```
--set 'agent.tls.certificate=<YOUR_CERTIFICATE_BASE64_ENCODED>'
--set 'agent.tls.key=<YOUR_PRIVATE_KEY_BASE64_ENCODED>'
```

If `agent.tls.secretName` is set, then `agent.tls.certificate` and `agent.tls.key` are ignored.

### Development and debugging options

These options will rarely be used outside of development or debugging of the agent.

| Parameter | Description | Default |
| --- | --- | --- |
| `agent.host.repository` | Host path to mount as the agent Maven repository | `nil` |

### Kubernetes Sensor Deployment

The data about Kubernetes resources is collected by the Kubernetes sensor in the Instana agent.
With default configurations, only one Instana agent at any one time is capturing the bulk of Kubernetes data.
Which agent gets the task is coordinated by a leader-elector mechanism running inside the `leader-elector` container of the `instana-agent` pods.
However, on large Kubernetes clusters, the load on the one Instana agent that fetches the Kubernetes data can be substantial and, to some extent, has led to rather "generous" resource requests and limits for all the Instana agents across the cluster, as any one of them could become the leader at some point.

The Helm chart has a special mode, enabled by setting `k8s_sensor.deployment.enabled=true`, that schedules additional Instana agents running _only_ the Kubernetes sensor in a dedicated `k8sensor` Deployment inside the `instana-agent` namespace.
The pods containing agents that run only the Kubernetes sensor are called `k8sensor` pods.
When `k8s_sensor.deployment.enabled=true`, the `instana-agent` pods running inside the daemonset do _not_ contain the `leader-elector` container, which is instead scheduled inside the `k8sensor` pods.

The `instana-agent` and `k8sensor` pods share the same backend-related configurations (including [additional backends](#configuring-additional-backends)).

It is advised to use the `k8s_sensor.deployment.enabled=true` mode on clusters of more than 10 nodes, and in that case, you may be able to reduce the amount of resources assigned to the `instana-agent` pods, especially in terms of memory, using the [Agent Pod Sizing](#agent-pod-sizing) settings.
The `k8s_sensor.deployment.pod.requests.cpu`, `k8s_sensor.deployment.pod.requests.memory`, `k8s_sensor.deployment.pod.limits.cpu` and `k8s_sensor.deployment.pod.limits.memory` settings, on the other hand, allow you to change the sizing of the `k8sensor` pods.

#### Determine Special Mode Enabled

To determine whether the Kubernetes sensor is running in a dedicated `k8sensor` deployment, list the deployments in the `instana-agent` namespace:

```
kubectl get deployments -n instana-agent
```

If `k8sensor` shows up in the list, the special mode is enabled.

#### Upgrade Kubernetes Sensor

To upgrade the Kubernetes sensor to the latest version, perform a rolling restart of the `k8sensor` deployment using the following command:

```
kubectl rollout restart deployment k8sensor -n instana-agent
```

### Multiple Zones

You can define multiple zones, using affinities and tolerations to associate a dedicated daemonset with each (possibly tainted) node pool. Each zone has the following fields:

* `name` (required) - zone name.
* `mode` (optional) - Instana agent mode (e.g. `APM`, `INFRASTRUCTURE`, etc.).
* `affinity` (optional) - standard Kubernetes pod affinity list for the daemonset.
* `tolerations` (optional) - standard Kubernetes pod toleration list for the daemonset.

The following example creates two zones, `workers` and `api-server`:

```yaml
zones:
  - name: workers
    mode: APM
  - name: api-server
    mode: INFRASTRUCTURE
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/control-plane
                  operator: Exists
    tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
```

## Changelog

### 1.2.60

* Enable the `k8s_sensor` by default

### 1.2.59

* Introduce unique selectorLabels and commonLabels for the k8s-sensor deployment

### 1.2.58

* Default to `internalTrafficPolicy` instead of `topologyKeys` for rendering of static YAMLs

### 1.2.57

* Fix vulnerability in the leader-elector image

### 1.2.49

* Add zone name to label `io.instana/zone` in daemonset

### 1.2.48

* Set env var `INSTANA_KUBERNETES_REDACT_SECRETS` to `true` if `agent.redactKubernetesSecrets` is enabled.
* Use feature PSP flag in the k8sensor ClusterRole only when `podsecuritypolicy.enable` is `true`.

### 1.2.47

* Roll back the changes from version 1.2.46 to be compatible with the Agent Operator installation

### 1.2.46

* Use the k8sensor by default.
* The `kubernetes.deployment.enabled` setting overrides the `k8s_sensor.deployment.enabled` setting.
* Use feature PSP flag in the k8sensor ClusterRole only when `podsecuritypolicy.enable` is `true`.
* Fail if the customer specifies a proxy together with the k8sensor.
* Set env var `INSTANA_KUBERNETES_REDACT_SECRETS` to `true` if `agent.redactKubernetesSecrets` is enabled.

### 1.2.45

* Use the agent key secret in the k8sensor deployment.

### 1.2.44

* Add support for enabling hot-reload of `configuration.yaml` when the default `instana-agent` ConfigMap changes
* Enablement is done via the flag `--set agent.configuration.hotreloadEnabled=true`

### 1.2.43

* Bump leader-elector image to v0.5.16 (update dependencies)

### 1.2.42

* Add support for creating multiple zones within the same cluster using affinity and tolerations.

### 1.2.41

* Add additional permissions (HPA, ResourceQuotas, etc.) to the k8sensor ClusterRole.

### 1.2.40

* Mount all system mounts with `mountPropagation: HostToContainer`.

### 1.2.39

* Add `NO_PROXY` to the k8sensor deployment to prevent api-server requests from being routed to the proxy.

### 1.2.38

* Fix issue related to the EKS version format when enabling the OTel service.

### 1.2.37

* Fix issue where `cluster_zone` is used as `cluster_name` when `k8s_sensor.deployment.enabled=true`.
* Set `HTTPS_PROXY` in the k8s deployment when proxy information is set.

### 1.2.36

* Remove Service `topologyKeys`, which was removed in Kubernetes v1.22. Replaced by `internalTrafficPolicy`, which is available with Kubernetes v1.21+.

### 1.2.35

* Fix invalid backend port for the new Kubernetes sensor (k8sensor)

### 1.2.34

* Add support for the new Kubernetes sensor (k8sensor)
* The new Kubernetes sensor can be used via the flag `--set k8s_sensor.deployment.enabled=true`

### 1.2.33

* Bump leader-elector image to v0.5.15 (update dependencies)

### 1.2.32

* Add support for containerd monitoring on TKGI

### 1.2.31

* Bump leader-elector image to v0.5.14 (update dependencies)

### 1.2.30

* Pull the agent image from IBM Cloud Container Registry (icr.io/instana/agent). No code changes have been made.
* Bump leader-elector image to v0.5.13 and pull it from IBM Cloud Container Registry (icr.io/instana/leader-elector). No code changes have been made.

### 1.2.29

* Add an additional port to the Instana Agent `Service` definition, for the OpenTelemetry registered IANA port 4317.

### 1.2.28

* Fix deployment when `cluster.name` is not specified. This should be allowed according to the docs, but previously broke the Pod when starting up.

### 1.2.27

* Update leader-elector image to `0.5.10` to tone down logging and make it configurable

### 1.2.26

* Add TLS support. An existing secret of type `kubernetes.io/tls` can be used, or a certificate and a private key can be provided to create a new secret.
* Update leader-elector image version to 0.5.9 to support PPCle

### 1.2.25

* Add `agent.pod.labels` to add custom labels to the Instana Agent pods

### 1.2.24

* Bump leader-elector image to v0.5.8, which includes a health-check endpoint. Update the `livenessProbe` correspondingly.

### 1.2.23

* Bump leader-elector image to v0.5.7 to fix a potential Golang bug in the elector

### 1.2.22

* Fix templating scope when defining multiple backends

### 1.2.21

* Internal updates

### 1.2.20

* Upgrade leader-elector image to v0.5.6 to enable usage on s390x and arm64

### 1.2.18 / 1.2.19

* Internal change to the DaemonSet YAML generated from the Helm charts

### 1.2.17

* Update Pod Security Policies, as the `readOnly: true` setting appears not to be working for the mount points and actually causes the Agent deployment to fail when these policies are enforced in the cluster.

### 1.2.16

* Add configuration option for the `INSTANA_MVN_REPOSITORY_URL` setting on the Agent container.

### 1.2.15

* Internal pipeline changes. No significant changes to the Helm charts

### v1.2.14

* Update Agent container mounts. Make some read-only, as we don't need all mounts with read-write permissions. Additionally, add the mount for `/var/data`, which is needed in certain environments for the Agent to function properly.

### v1.2.13

* Update memory settings specifically for the Kubernetes sensor (Technical Preview)

### v1.2.11

* Simplify setup for using OpenTelemetry and the Prometheus `remote_write` endpoint using the `opentelemetry.enabled` and `prometheus.remoteWrite.enabled` settings, respectively.

### v1.2.9

* **Technical Preview:** Introduce a new mode of running the Kubernetes sensor using a dedicated deployment. See the [Kubernetes Sensor Deployment](#kubernetes-sensor-deployment) section for more information.

### v1.2.7

* Fix: Make the service opt-in, as it uses functionality (`topologyKeys`) that is available only in K8s 1.17+.

### v1.2.6

* Fix bug that might cause some OpenShift-specific resources to be created in other flavours of Kubernetes.

### v1.2.5

* Introduce the `instana-agent:instana-agent` Kubernetes service that allows you to talk to the Instana agent on the same node.

### v1.2.3

* Bug fix: Extend the built-in Pod Security Policy to cover the Docker socket mount for Tanzu Kubernetes Grid systems.

### v1.2.1

* Support OpenShift 4.x: just add `--set openshift=true` to the usual settings, and off you go :-)
* Restructure documentation for consistency and readability
* Deprecation: Helm 2 is no longer supported; the minimum Helm API version is now v2, which will make Helm 2 refuse to process the chart.

### v1.1.10
* Some linting of the whitespaces in the generated YAML
|
||||
|
||||
### v1.1.9
|
||||
|
||||
* Update the README to replace all references of `stable/instana-agent` with specifically setting the repo flag to `https://agents.instana.io/helm`.
|
||||
* Add support for TKGI and PKS systems, providing a workaround for the [unexpected Docker socket location](https://github.com/cloudfoundry-incubator/kubo-release/issues/329).
|
||||
|
||||
### v1.1.7
|
||||
|
||||
* Store the cluster name in a new `cluster-name` entry of the `instana-agent` ConfigMap rather than directly as the value of the `INSTANA_KUBERNETES_CLUSTER_NAME`, so that you can edit the cluster name in the ConfigMap in deployments like VMware Tanzu Kubernetes Grid in which, when installing the Instana agent over the [Instana tile](https://www.instana.com/docs/setup_and_manage/host_agent/on/vmware_tanzu), you do not have directly control to the configuration of the cluster name.
|
||||
If you edit the ConfigMap, you will need to delete the `instana-agent` pods for its new value to take effect.
|
||||
|
||||
### v1.1.6
|
||||
|
||||
* Allow to use user-specified memony measurement units in `agent.pod.requests.memory` and `agent.pod.limits.memory`.
|
||||
If the value set is numerical, the Chart will assume it to be expressed in `Mi` for backwards compatibility.
|
||||
* Exposed `agent.updateStrategy.type` and `agent.updateStrategy.rollingUpdate.maxUnavailable` settings.
|
||||
|
||||
### v1.1.5
|
||||
|
||||
Restore compatibility with Helm 2 that was broken in v1.1.4 by the usage of the `lookup` function, a function actually introduced only with Helm 3.1.
|
||||
Coincidentally, this has been an _excellent_ opportunity to introduce `helm lint` to our validation pipeline and end-to-end tests with Helm 2 ;-)
|
||||
|
||||
### v1.1.4
|
||||
|
||||
* Bring-your-own secret for agent keys: using the new `agent.keysSecret` setting, you can specify the name of the secret that contains the agent key and, optionally, the download key; refer to [Bring your own Keys secret](#bring-your-own-keys-secret) for more details.
|
||||
* Add support for affinities for the instana agent pod via the `agent.pod.affinity` setting.
|
||||
* Put some love into the ArtifactHub.io metadata; likely to add some more improvements related to this over time.
|
||||
|
||||
### v1.1.3
|
||||
|
||||
* No new features, just ironing some wrinkles out of our release automation.
|
||||
|
||||
### v1.1.2
|
||||
|
||||
* Improvement: Seamless support for Instana static agent images: When using an `agent.image.name` starting with `containers.instana.io`, automatically create a secret called `containers-instana-io` containing the `.dockerconfigjson` for `containers.instana.io`, using `_` as username and `agent.downloadKey` or, if missing, `agent.key` as password. If you want to control the creation of the image pull secret, or disable it, you can use `agent.image.pullSecrets`, passing to it the YAML to use for the `imagePullSecrets` field of the Daemonset spec, including an empty array `[]` to mount no pull secrets, no matter what.
|
||||
|
||||
### v1.1.1

* Fix: Recreate the `instana-agent` pods when there is a change in one of the following configurations, which are mapped to the chart-managed ConfigMap:

  * `agent.configuration_yaml`
  * `agent.additional_backends`

  The pod recreation is achieved by annotating the `instana-agent` Pod with a new `instana-configuration-hash` annotation that has, as value, the SHA-1 hash of the configurations used to populate the ConfigMap.
  This way, when the configuration changes, the respective change in the `instana-configuration-hash` annotation will cause the agent pods to be recreated.
  This technique has been described at [1] (or, at least, that is where we learned about it) and it is pretty cool :-)
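The hashing scheme behind the `instana-configuration-hash` annotation can be sketched outside of Helm. This is a minimal Python illustration, assuming (not guaranteed byte-for-byte identical to the chart's `cat`/`join`/`sha1sum` pipeline) that the configuration YAML and the comma-joined backend list are concatenated with a `;` separator before hashing:

```python
import hashlib

def configuration_hash(configuration_yaml, additional_backends):
    # Approximates: configuration_yaml | cat ";" | cat (join "," backends) | sha1sum
    # Helm's `cat` joins its arguments with a single space.
    combined = f"{configuration_yaml} ; {','.join(additional_backends)}"
    return hashlib.sha1(combined.encode()).hexdigest()

h1 = configuration_hash("com.instana.plugin.host:\n  enabled: true", ["backend-a"])
h2 = configuration_hash("com.instana.plugin.host:\n  enabled: false", ["backend-a"])
print(h1 != h2)  # any config change yields a new annotation value, forcing Pod recreation
```

Because the annotation value changes whenever the inputs change, the DaemonSet controller sees a modified Pod template and rolls the pods.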
### v1.1.0

* Improvement: The `instana-agent` Helm chart has a new home at `https://agents.instana.io/helm` and `https://github.com/instana/helm-charts/instana-agent`!
  This release is functionally equivalent to `1.0.34`, but we bumped the major to denote the new location ;-)

## References

[1] ["Using Kubernetes Helm to push ConfigMap changes to your Deployments", by Sander Knape; Mar 7, 2019](https://sanderknape.com/2019/03/kubernetes-helm-configmaps-changes-deployments/)
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "instana-agent.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "instana-agent.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "instana-agent.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
The name of the ServiceAccount used.
*/}}
{{- define "instana-agent.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "instana-agent.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

{{/*
The name of the PodSecurityPolicy used.
*/}}
{{- define "instana-agent.podSecurityPolicyName" -}}
{{- if .Values.podSecurityPolicy.enable -}}
{{ default (include "instana-agent.fullname" .) .Values.podSecurityPolicy.name }}
{{- end -}}
{{- end -}}

{{/*
Prints out the name of the secret to use to retrieve the agent key
*/}}
{{- define "instana-agent.keysSecretName" -}}
{{- if .Values.agent.keysSecret -}}
{{ .Values.agent.keysSecret }}
{{- else -}}
{{ template "instana-agent.fullname" . }}
{{- end -}}
{{- end -}}

{{/*
Add Helm metadata to resource labels.
*/}}
{{- define "instana-agent.commonLabels" -}}
app.kubernetes.io/name: {{ include "instana-agent.name" . }}
app.kubernetes.io/version: {{ .Chart.Version }}
{{- if not .Values.templating }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "instana-agent.chart" . }}
{{- end -}}
{{- end -}}

{{/*
Add Helm metadata to resource labels.
*/}}
{{- define "k8s-sensor.commonLabels" -}}
{{/* The following label is used to determine whether to disable the Kubernetes host sensor */}}
app: k8sensor
app.kubernetes.io/name: {{ include "instana-agent.name" . }}-k8s-sensor
app.kubernetes.io/version: {{ .Chart.Version }}
{{- if not .Values.templating }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "instana-agent.chart" . }}
{{- end -}}
{{- end -}}

{{/*
Add Helm metadata to selector labels specifically for deployments/daemonsets/statefulsets.
*/}}
{{- define "instana-agent.selectorLabels" -}}
app.kubernetes.io/name: {{ include "instana-agent.name" . }}
{{- if not .Values.templating }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- end -}}

{{/*
Add Helm metadata to selector labels specifically for deployments/daemonsets/statefulsets.
*/}}
{{- define "k8s-sensor.selectorLabels" -}}
app: k8sensor
app.kubernetes.io/name: {{ include "instana-agent.name" . }}-k8s-sensor
{{- if not .Values.templating }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- end -}}

{{/*
Generates the dockerconfig for the credentials to pull from containers.instana.io
*/}}
{{- define "imagePullSecretContainersInstanaIo" }}
{{- $registry := "containers.instana.io" }}
{{- $username := "_" }}
{{- $password := default .Values.agent.key .Values.agent.downloadKey }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" $registry (printf "%s:%s" $username $password | b64enc) | b64enc }}
{{- end }}
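The `imagePullSecretContainersInstanaIo` helper above builds a base64-encoded `.dockerconfigjson` payload. A small Python sketch of the same encoding (the exact JSON whitespace may differ slightly from the chart's `printf` output):

```python
import base64
import json

def image_pull_secret(agent_key, download_key=None):
    # Username is "_"; password is the download key, falling back to the agent key,
    # mirroring `default .Values.agent.key .Values.agent.downloadKey`.
    password = download_key or agent_key
    auth = base64.b64encode(f"_:{password}".encode()).decode()
    payload = json.dumps({"auths": {"containers.instana.io": {"auth": auth}}})
    # The whole JSON document is base64-encoded again for the Secret's data field.
    return base64.b64encode(payload.encode()).decode()
```

The resulting string is what ends up in the `containers-instana-io` Secret's `.dockerconfigjson` key.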
{{/*
Output limits or defaults
*/}}
{{- define "instana-agent.resources" -}}
{{- $memory := default "512Mi" .memory -}}
{{- $cpu := default 0.5 .cpu -}}
memory: "{{ dict "memory" $memory | include "ensureMemoryMeasurement" }}"
cpu: {{ $cpu }}
{{- end -}}

{{/*
Ensure a unit of memory measurement is added to the value
*/}}
{{- define "ensureMemoryMeasurement" }}
{{- $value := .memory }}
{{- if kindIs "string" $value }}
{{- print $value }}
{{- else }}
{{- print ($value | toString) "Mi" }}
{{- end }}
{{- end }}
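The `ensureMemoryMeasurement` helper's logic is simple enough to restate in Python as a sketch, assuming the same backwards-compatibility rule described in the v1.1.6 changelog entry:

```python
def ensure_memory_measurement(value):
    # Strings (e.g. "1Gi", "768Mi") already carry a unit and pass through unchanged;
    # bare numbers are assumed to be Mi for backwards compatibility.
    if isinstance(value, str):
        return value
    return f"{value}Mi"

print(ensure_memory_measurement("1Gi"))  # 1Gi
print(ensure_memory_measurement(512))    # 512Mi
```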
{{/*
Composes a container image reference from a dict containing a "name" field (required), plus "tag" and "digest" (both optional; if both are provided, "digest" has priority)
*/}}
{{- define "image" }}
{{- $name := .name }}
{{- $tag := .tag }}
{{- $digest := .digest }}
{{- if $digest }}
{{- printf "%s@%s" $name $digest }}
{{- else if $tag }}
{{- printf "%s:%s" $name $tag }}
{{- else }}
{{- print $name }}
{{- end }}
{{- end }}
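The precedence rules of the `image` template above can be sketched in Python (the image names in the example are illustrative, not taken from the chart's defaults):

```python
def image_ref(name, tag=None, digest=None):
    # Digest pinning wins over a tag; with neither, the bare name is returned
    # and the container runtime resolves its default tag.
    if digest:
        return f"{name}@{digest}"
    if tag:
        return f"{name}:{tag}"
    return name

print(image_ref("icr.io/instana/agent", tag="1.2.60"))  # icr.io/instana/agent:1.2.60
```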
{{- define "volumeMountsForConfigFileInConfigMap" }}
{{- $configMapName := (include "instana-agent.fullname" .) }}
{{- $configMapNameSpace := .Release.Namespace }}
{{- $configMap := tpl ( ( "{{ lookup \"v1\" \"ConfigMap\" \"map-namespace\" \"map-name\" | toYaml }}" | replace "map-namespace" $configMapNameSpace ) | replace "map-name" $configMapName ) . }}
{{- if $configMap }}
{{- $configMapObject := $configMap | fromYaml }}
{{- range $key, $val := $configMapObject.data }}
{{- if regexMatch "configuration-disable-kubernetes-sensor\\.yaml" $key }}
{{/* Nothing to do here, this is a special case we want to ignore */}}
{{- else if regexMatch "configuration-opentelemetry\\.yaml" $key }}
{{/* Nothing to do here, this is a special case we want to ignore */}}
{{- else if regexMatch "configuration-prometheus-remote-write\\.yaml" $key }}
{{/* Nothing to do here, this is a special case we want to ignore */}}
{{- else if regexMatch "configuration-.*\\.yaml" $key }}
- name: configuration
  subPath: {{ $key }}
  mountPath: /opt/instana/agent/etc/instana/{{ $key }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

{{- define "instana-agent.commonEnv" -}}
- name: INSTANA_AGENT_LEADER_ELECTOR_PORT
  value: {{ .Values.leaderElector.port | quote }}
{{- if .Values.zone.name }}
- name: INSTANA_ZONE
  value: {{ .Values.zone.name | quote }}
{{- end }}
{{- if .Values.cluster.name }}
- name: INSTANA_KUBERNETES_CLUSTER_NAME
  valueFrom:
    configMapKeyRef:
      name: {{ template "instana-agent.fullname" . }}
      key: cluster_name
{{- end }}
- name: INSTANA_AGENT_ENDPOINT
  value: {{ .Values.agent.endpointHost | quote }}
- name: INSTANA_AGENT_ENDPOINT_PORT
  value: {{ .Values.agent.endpointPort | quote }}
- name: INSTANA_AGENT_KEY
  valueFrom:
    secretKeyRef:
      name: {{ template "instana-agent.keysSecretName" . }}
      key: key
- name: INSTANA_DOWNLOAD_KEY
  valueFrom:
    secretKeyRef:
      name: {{ template "instana-agent.keysSecretName" . }}
      key: downloadKey
      optional: true
{{- if .Values.agent.instanaMvnRepoUrl }}
- name: INSTANA_MVN_REPOSITORY_URL
  value: {{ .Values.agent.instanaMvnRepoUrl | quote }}
{{- end }}
{{- if .Values.agent.instanaMvnRepoFeaturesPath }}
- name: INSTANA_MVN_REPOSITORY_FEATURES_PATH
  value: {{ .Values.agent.instanaMvnRepoFeaturesPath | quote }}
{{- end }}
{{- if .Values.agent.instanaMvnRepoSharedPath }}
- name: INSTANA_MVN_REPOSITORY_SHARED_PATH
  value: {{ .Values.agent.instanaMvnRepoSharedPath | quote }}
{{- end }}
{{- if .Values.agent.proxyHost }}
- name: INSTANA_AGENT_PROXY_HOST
  value: {{ .Values.agent.proxyHost | quote }}
{{- end }}
{{- if .Values.agent.proxyPort }}
- name: INSTANA_AGENT_PROXY_PORT
  value: {{ .Values.agent.proxyPort | quote }}
{{- end }}
{{- if .Values.agent.proxyProtocol }}
- name: INSTANA_AGENT_PROXY_PROTOCOL
  value: {{ .Values.agent.proxyProtocol | quote }}
{{- end }}
{{- if .Values.agent.proxyUser }}
- name: INSTANA_AGENT_PROXY_USER
  value: {{ .Values.agent.proxyUser | quote }}
{{- end }}
{{- if .Values.agent.proxyPassword }}
- name: INSTANA_AGENT_PROXY_PASSWORD
  value: {{ .Values.agent.proxyPassword | quote }}
{{- end }}
{{- if .Values.agent.proxyUseDNS }}
- name: INSTANA_AGENT_PROXY_USE_DNS
  value: {{ .Values.agent.proxyUseDNS | quote }}
{{- end }}
{{- if .Values.agent.listenAddress }}
- name: INSTANA_AGENT_HTTP_LISTEN
  value: {{ .Values.agent.listenAddress | quote }}
{{- end }}
{{- if .Values.agent.redactKubernetesSecrets }}
- name: INSTANA_KUBERNETES_REDACT_SECRETS
  value: {{ .Values.agent.redactKubernetesSecrets | quote }}
{{- end }}
- name: INSTANA_AGENT_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
{{- range $key, $value := .Values.agent.env }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
{{- end -}}
{{- define "instana-agent.commonVolumeMounts" -}}
{{- if .Values.agent.host.repository }}
- name: repo
  mountPath: /opt/instana/agent/data/repo
{{- end }}
{{- if .Values.agent.additionalBackends -}}
{{- range $index, $backend := .Values.agent.additionalBackends }}
{{- $backendIndex := add $index 2 }}
- name: additional-backend-{{ $backendIndex }}
  subPath: additional-backend-{{ $backendIndex }}
  mountPath: /opt/instana/agent/etc/instana/com.instana.agent.main.sender.Backend-{{ $backendIndex }}.cfg
{{- end -}}
{{- end -}}
{{- end -}}

{{- define "instana-agent.commonVolumes" -}}
- name: configuration
  configMap:
    name: {{ include "instana-agent.fullname" . }}
{{- if .Values.agent.host.repository }}
- name: repo
  hostPath:
    path: {{ .Values.agent.host.repository }}
{{- end }}
{{- if .Values.agent.additionalBackends }}
{{- range $index, $backend := .Values.agent.additionalBackends }}
{{ $backendIndex := add $index 2 -}}
- name: additional-backend-{{ $backendIndex }}
  configMap:
    name: {{ include "instana-agent.fullname" $ }}
{{- end }}
{{- end }}
{{- end -}}

{{- define "instana-agent.livenessProbe" -}}
httpGet:
  host: 127.0.0.1 # localhost because Pod has hostNetwork=true
  path: /status
  port: 42699
initialDelaySeconds: 300 # startupProbe isn't available before K8s 1.16
timeoutSeconds: 3
periodSeconds: 10
failureThreshold: 3
{{- end -}}
{{- define "leader-elector.container" -}}
- name: leader-elector
  image: {{ include "image" .Values.leaderElector.image | quote }}
  env:
    - name: INSTANA_AGENT_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  command:
    - "/busybox/sh"
    - "-c"
    - "sleep 12 && /app/server --election=instana --http=localhost:{{ .Values.leaderElector.port }} --id=$(INSTANA_AGENT_POD_NAME)"
  resources:
    requests:
      cpu: 0.1
      memory: "64Mi"
  livenessProbe:
    httpGet: # Leader elector /health endpoint expects version 0.5.8 minimum, otherwise always returns 200 OK
      host: 127.0.0.1 # localhost because Pod has hostNetwork=true
      path: /health
      port: {{ .Values.leaderElector.port }}
    initialDelaySeconds: 30
    timeoutSeconds: 3
    periodSeconds: 3
    failureThreshold: 3
  ports:
    - containerPort: {{ .Values.leaderElector.port }}
{{- end -}}

{{- define "instana-agent.tls-volume" -}}
- name: {{ include "instana-agent.fullname" . }}-tls
  secret:
    secretName: {{ .Values.agent.tls.secretName | default (printf "%s-tls" (include "instana-agent.fullname" .)) }}
    defaultMode: 0440
{{- end -}}

{{- define "instana-agent.tls-volumeMounts" -}}
- name: {{ include "instana-agent.fullname" . }}-tls
  mountPath: /opt/instana/agent/etc/certs
  readOnly: true
{{- end -}}

{{- define "k8sensor.commonEnv" -}}
{{- range $key, $value := .Values.agent.env }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
{{- end -}}

{{/* NOTE: These are nested templates, not functions. If they were formatted for readability they would not work the way */}}
{{/* we need them to, since all of the newlines and spaces would be included in the output. Helm is */}}
{{/* not fundamentally designed to do what we are doing here. */}}

{{- define "instana-agent.opentelemetry.grpc.isEnabled" -}}{{ if hasKey .Values "opentelemetry" }}{{ if hasKey .Values.opentelemetry "grpc" }}{{ if hasKey .Values.opentelemetry.grpc "enabled" }}{{ .Values.opentelemetry.grpc.enabled }}{{ else }}{{ true }}{{ end }}{{ else }}{{ if hasKey .Values.opentelemetry "enabled" }}{{ .Values.opentelemetry.enabled }}{{ else }}{{ false }}{{ end }}{{ end }}{{ else }}{{ false }}{{ end }}{{- end -}}

{{- define "instana-agent.opentelemetry.http.isEnabled" -}}{{ if hasKey .Values "opentelemetry" }}{{ if hasKey .Values.opentelemetry "http" }}{{ if hasKey .Values.opentelemetry.http "enabled" }}{{ .Values.opentelemetry.http.enabled }}{{ else }}{{ true }}{{ end }}{{ else }}{{ false }}{{ end }}{{ else }}{{ false }}{{ end }}{{- end -}}
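The deeply nested `hasKey` chains above are hard to read in one line, so here is the gRPC variant's decision logic restated as a Python sketch (an approximation for illustration; the template itself remains the source of truth):

```python
def otel_grpc_enabled(values):
    # No `opentelemetry` block at all: disabled.
    otel = values.get("opentelemetry")
    if not isinstance(otel, dict):
        return False
    # An explicit opentelemetry.grpc.enabled wins; a grpc block without the
    # flag means enabled by default.
    if "grpc" in otel:
        grpc = otel["grpc"] or {}
        return bool(grpc.get("enabled", True))
    # Otherwise fall back to the legacy top-level opentelemetry.enabled flag.
    return bool(otel.get("enabled", False))
```

The HTTP variant is the same shape, except that without an `http` block it falls straight through to `false` instead of consulting `opentelemetry.enabled`.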
{{- define "kubeVersion" -}}
{{- if (regexMatch "\\d+\\.\\d+\\.\\d+-(?:eks|gke).+" .Capabilities.KubeVersion.Version) -}}
{{- regexFind "\\d+\\.\\d+\\.\\d+" .Capabilities.KubeVersion.Version -}}
{{- else -}}
{{- printf .Capabilities.KubeVersion.Version }}
{{- end -}}
{{- end -}}
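The `kubeVersion` helper normalizes vendor-suffixed versions from managed clusters. A Python sketch of the same regex logic (using `re.search` to match Sprig's unanchored `regexMatch`; the sample version strings are illustrative):

```python
import re

def kube_version(version):
    # EKS/GKE report versions like "v1.24.7-eks-fb459a0"; strip the vendor
    # suffix down to plain major.minor.patch. Anything else passes through.
    if re.search(r"\d+\.\d+\.\d+-(?:eks|gke).+", version):
        return re.search(r"\d+\.\d+\.\d+", version).group(0)
    return version

print(kube_version("v1.24.7-eks-fb459a0"))  # 1.24.7
print(kube_version("v1.25.0"))              # v1.25.0
```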
{{- if or .Values.agent.key .Values.agent.keysSecret }}
{{- if and .Values.cluster.name .Values.zones }}
{{ $opentelemetryIsEnabled := (or (eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .)) (eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .)) )}}
{{- range $.Values.zones }}
{{- $fullname := printf "%s-%s" (include "instana-agent.fullname" $) .name -}}
{{- $tolerations := .tolerations -}}
{{- $affinity := .affinity -}}
{{- $mode := .mode -}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ $fullname }}
  namespace: {{ $.Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" $ | nindent 4 }}
    io.instana/zone: {{ .name }}
spec:
  selector:
    matchLabels:
      {{- include "instana-agent.selectorLabels" $ | nindent 6 }}
      io.instana/zone: {{ .name }}
  updateStrategy:
    type: {{ $.Values.agent.updateStrategy.type }}
    {{- if eq $.Values.agent.updateStrategy.type "RollingUpdate" }}
    rollingUpdate:
      maxUnavailable: {{ $.Values.agent.updateStrategy.rollingUpdate.maxUnavailable }}
    {{- end }}
  template:
    metadata:
      labels:
        io.instana/zone: {{ .name }}
        {{- if $.Values.agent.pod.labels }}
        {{- toYaml $.Values.agent.pod.labels | nindent 8 }}
        {{- end }}
        {{- include "instana-agent.commonLabels" $ | nindent 8 }}
        instana/agent-mode: {{ $.Values.agent.mode | default "APM" | quote }}
      annotations:
        {{- if $.Values.agent.pod.annotations }}
        {{- toYaml $.Values.agent.pod.annotations | nindent 8 }}
        {{- end }}
        # To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here
        # Unfortunately, we cannot use the lookup function to check on the values in the configmap, otherwise we break Helm < 3.2
        instana-configuration-hash: {{ $.Values.agent.configuration_yaml | cat ";" | cat ( join "," $.Values.agent.additionalBackends ) | sha1sum }}
    spec:
      serviceAccountName: {{ template "instana-agent.serviceAccountName" $ }}
      {{- if $.Values.agent.pod.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := $.Values.agent.pod.nodeSelector }}
        {{ $key }}: {{ $value | quote }}
        {{- end }}
      {{- end }}
      hostNetwork: true
      hostPID: true
      {{- if $.Values.agent.pod.priorityClassName }}
      priorityClassName: {{ $.Values.agent.pod.priorityClassName | quote }}
      {{- end }}
      dnsPolicy: ClusterFirstWithHostNet
      {{- if typeIs "[]interface {}" $.Values.agent.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml $.Values.agent.image.pullSecrets | nindent 8 }}
      {{- else if $.Values.agent.image.name | hasPrefix "containers.instana.io" }}
      imagePullSecrets:
        - name: containers-instana-io
      {{- end }}
      containers:
        - name: instana-agent
          image: {{ include "image" $.Values.agent.image | quote }}
          imagePullPolicy: {{ $.Values.agent.image.pullPolicy }}
          env:
            - name: INSTANA_ZONE
              value: {{ .name | quote }}
            {{- if $mode }}
            - name: INSTANA_AGENT_MODE
              value: {{ $mode | quote }}
            {{- end }}
            {{- include "instana-agent.commonEnv" $ | nindent 12 }}
          securityContext:
            privileged: true
          volumeMounts:
            - name: dev
              mountPath: /dev
              mountPropagation: HostToContainer
            - name: run
              mountPath: /run
              mountPropagation: HostToContainer
            - name: var-run
              mountPath: /var/run
              mountPropagation: HostToContainer
            {{- if not (or $.Values.openshift ($.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
            - name: var-run-kubo
              mountPath: /var/vcap/sys/run/docker
              mountPropagation: HostToContainer
            - name: var-run-containerd
              mountPath: /var/vcap/sys/run/containerd
              mountPropagation: HostToContainer
            - name: var-containerd-config
              mountPath: /var/vcap/jobs/containerd/config
              mountPropagation: HostToContainer
            {{- end }}
            - name: sys
              mountPath: /sys
              mountPropagation: HostToContainer
            - name: var-log
              mountPath: /var/log
              mountPropagation: HostToContainer
            - name: var-lib
              mountPath: /var/lib
              mountPropagation: HostToContainer
            - name: var-data
              mountPath: /var/data
              mountPropagation: HostToContainer
            - name: machine-id
              mountPath: /etc/machine-id
            - name: configuration
              {{- if $.Values.agent.configuration.hotreloadEnabled }}
              mountPath: /root/
              {{- else }}
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
              {{- end }}
            {{- if $.Values.agent.tls }}
            {{- if or $.Values.agent.tls.secretName (and $.Values.agent.tls.certificate $.Values.agent.tls.key) }}
            {{- include "instana-agent.tls-volumeMounts" $ | nindent 12 }}
            {{- end }}
            {{- end }}
            {{- include "instana-agent.commonVolumeMounts" $ | nindent 12 }}
            {{- if $.Values.agent.configuration.autoMountConfigEntries }}
            {{- include "volumeMountsForConfigFileInConfigMap" $ | nindent 12 }}
            {{- end }}
            {{- if or $.Values.kubernetes.deployment.enabled $.Values.k8s_sensor.deployment.enabled }}
            - name: configuration
              subPath: configuration-disable-kubernetes-sensor.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-disable-kubernetes-sensor.yaml
            {{- end }}
            {{- if $opentelemetryIsEnabled }}
            - name: configuration
              subPath: configuration-opentelemetry.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-opentelemetry.yaml
            {{- end }}
            {{- if $.Values.prometheus.remoteWrite.enabled }}
            - name: configuration
              subPath: configuration-prometheus-remote-write.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-prometheus-remote-write.yaml
            {{- end }}
          livenessProbe:
            {{- include "instana-agent.livenessProbe" $ | nindent 12 }}
          resources:
            requests:
              {{- include "instana-agent.resources" $.Values.agent.pod.requests | nindent 14 }}
            limits:
              {{- include "instana-agent.resources" $.Values.agent.pod.limits | nindent 14 }}
          ports:
            - containerPort: 42699
        {{- if and (not $.Values.kubernetes.deployment.enabled) (not $.Values.k8s_sensor.deployment.enabled) }}
        {{- include "leader-elector.container" $ | nindent 8 }}
        {{- end }}

      {{ if $tolerations -}}
      tolerations:
        {{- toYaml $tolerations | nindent 8 }}
      {{- end }}

      {{ if $affinity -}}
      affinity:
        {{- toYaml $affinity | nindent 8 }}
      {{- end }}
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: run
          hostPath:
            path: /run
        - name: var-run
          hostPath:
            path: /var/run
        {{- if not (or $.Values.openshift ($.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
        # Systems based on the kubo BOSH release (that is, VMware TKGI and older PKS) do not keep the Docker
        # socket in /var/run/docker.sock , but rather in /var/vcap/sys/run/docker/docker.sock .
        # The Agent images will check if there is a Docker socket here and, if so, adjust the symlinking before
        # starting the Agent. See https://github.com/cloudfoundry-incubator/kubo-release/issues/329
        - name: var-run-kubo
          hostPath:
            path: /var/vcap/sys/run/docker
        - name: var-run-containerd
          hostPath:
            path: /var/vcap/sys/run/containerd
        - name: var-containerd-config
          hostPath:
            path: /var/vcap/jobs/containerd/config
        {{- end }}
        - name: sys
          hostPath:
            path: /sys
        - name: var-log
          hostPath:
            path: /var/log
        - name: var-lib
          hostPath:
            path: /var/lib
        - name: var-data
          hostPath:
            path: /var/data
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        {{- if $.Values.agent.tls }}
        {{- if or $.Values.agent.tls.secretName (and $.Values.agent.tls.certificate $.Values.agent.tls.key) }}
        {{- include "instana-agent.tls-volume" . | nindent 8 }}
        {{- end }}
        {{- end }}
        {{- include "instana-agent.commonVolumes" $ | nindent 8 }}
{{ printf "\n" }}
{{ end }}
{{- end }}
{{- end }}
|
@ -0,0 +1,204 @@
# TODO: Combine into single template with agent-daemonset-with-zones.yaml
{{- if or .Values.agent.key .Values.agent.keysSecret }}
{{- if and (or .Values.zone.name .Values.cluster.name) (not .Values.zones) }}
{{- $fullname := include "instana-agent.fullname" . -}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ $fullname }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "instana-agent.selectorLabels" . | nindent 6 }}
  updateStrategy:
    type: {{ .Values.agent.updateStrategy.type }}
    {{- if eq .Values.agent.updateStrategy.type "RollingUpdate" }}
    rollingUpdate:
      maxUnavailable: {{ .Values.agent.updateStrategy.rollingUpdate.maxUnavailable }}
    {{- end }}
  template:
    metadata:
      labels:
        {{- if .Values.agent.pod.labels }}
        {{- toYaml .Values.agent.pod.labels | nindent 8 }}
        {{- end }}
        {{- include "instana-agent.commonLabels" . | nindent 8 }}
        instana/agent-mode: {{ .Values.agent.mode | default "APM" | quote }}
      annotations:
        {{- if .Values.agent.pod.annotations }}
        {{- toYaml .Values.agent.pod.annotations | nindent 8 }}
        {{- end }}
        # To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here
        # Unfortunately, we cannot use the lookup function to check on the values in the configmap, otherwise we break Helm < 3.2
        instana-configuration-hash: {{ .Values.agent.configuration_yaml | cat ";" | cat ( join "," .Values.agent.additionalBackends ) | sha1sum }}
    spec:
      serviceAccountName: {{ template "instana-agent.serviceAccountName" . }}
      {{- if .Values.agent.pod.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.agent.pod.nodeSelector }}
        {{ $key }}: {{ $value | quote }}
        {{- end }}
      {{- end }}
      hostNetwork: true
      hostPID: true
      {{- if .Values.agent.pod.priorityClassName }}
      priorityClassName: {{ .Values.agent.pod.priorityClassName | quote }}
      {{- end }}
      dnsPolicy: ClusterFirstWithHostNet
      {{- if typeIs "[]interface {}" .Values.agent.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.agent.image.pullSecrets | nindent 8 }}
      {{- else if .Values.agent.image.name | hasPrefix "containers.instana.io" }}
      imagePullSecrets:
        - name: containers-instana-io
      {{- end }}
      containers:
        - name: instana-agent
          image: {{ include "image" .Values.agent.image | quote }}
          imagePullPolicy: {{ .Values.agent.image.pullPolicy }}
          env:
            {{- if .Values.agent.mode }}
            - name: INSTANA_AGENT_MODE
              value: {{ .Values.agent.mode | quote }}
            {{- end }}
            {{- include "instana-agent.commonEnv" . | nindent 12 }}
          securityContext:
            privileged: true
          volumeMounts:
            - name: dev
              mountPath: /dev
              mountPropagation: HostToContainer
            - name: run
              mountPath: /run
              mountPropagation: HostToContainer
            - name: var-run
              mountPath: /var/run
              mountPropagation: HostToContainer
            {{- if not (or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
            - name: var-run-kubo
              mountPath: /var/vcap/sys/run/docker
              mountPropagation: HostToContainer
            - name: var-run-containerd
              mountPath: /var/vcap/sys/run/containerd
              mountPropagation: HostToContainer
            - name: var-containerd-config
              mountPath: /var/vcap/jobs/containerd/config
              mountPropagation: HostToContainer
            {{- end }}
            - name: sys
              mountPath: /sys
              mountPropagation: HostToContainer
            - name: var-log
              mountPath: /var/log
              mountPropagation: HostToContainer
            - name: var-lib
              mountPath: /var/lib
              mountPropagation: HostToContainer
            - name: var-data
              mountPath: /var/data
              mountPropagation: HostToContainer
            - name: machine-id
              mountPath: /etc/machine-id
            - name: configuration
              {{- if $.Values.agent.configuration.hotreloadEnabled }}
              mountPath: /root/
              {{- else }}
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
              {{- end }}
            {{- if .Values.agent.tls }}
            {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
            {{- include "instana-agent.tls-volumeMounts" . | nindent 12 }}
            {{- end }}
            {{- end }}
            {{- include "instana-agent.commonVolumeMounts" . | nindent 12 }}
            {{- if .Values.agent.configuration.autoMountConfigEntries }}
            {{- include "volumeMountsForConfigFileInConfigMap" . | nindent 12 }}
            {{- end }}
            {{- if or .Values.kubernetes.deployment.enabled .Values.k8s_sensor.deployment.enabled }}
            - name: configuration # TODO: These shouldn't have the same name
              subPath: configuration-disable-kubernetes-sensor.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-disable-kubernetes-sensor.yaml
            {{- end }}
            {{- if or (eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .)) (eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .)) }}
            - name: configuration
              subPath: configuration-opentelemetry.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-opentelemetry.yaml
            {{- end }}
            {{- if .Values.prometheus.remoteWrite.enabled }}
            - name: configuration
              subPath: configuration-prometheus-remote-write.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-prometheus-remote-write.yaml
            {{- end }}
          livenessProbe:
            {{- include "instana-agent.livenessProbe" . | nindent 12 }}
          resources:
            requests:
              {{- include "instana-agent.resources" .Values.agent.pod.requests | nindent 14 }}
            limits:
              {{- include "instana-agent.resources" .Values.agent.pod.limits | nindent 14 }}
          ports:
            - containerPort: 42699
        {{- if and (not .Values.kubernetes.deployment.enabled) (not .Values.k8s_sensor.deployment.enabled) }}
        {{- include "leader-elector.container" . | nindent 8 }}
        {{- end }}
      {{- if .Values.agent.pod.tolerations }}
      tolerations:
        {{- toYaml .Values.agent.pod.tolerations | nindent 8 }}
      {{- end }}
      {{- if .Values.agent.pod.affinity }}
      affinity:
        {{- toYaml .Values.agent.pod.affinity | nindent 8 }}
      {{- end }}
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: run
          hostPath:
            path: /run
        - name: var-run
          hostPath:
            path: /var/run
        {{- if not (or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
        # Systems based on the kubo BOSH release (that is, VMware TKGI and older PKS) do not keep the Docker
        # socket in /var/run/docker.sock , but rather in /var/vcap/sys/run/docker/docker.sock .
        # The Agent images will check if there is a Docker socket here and, if so, adjust the symlinking before
        # starting the Agent. See https://github.com/cloudfoundry-incubator/kubo-release/issues/329
        - name: var-run-kubo
          hostPath:
            path: /var/vcap/sys/run/docker
        - name: var-run-containerd
          hostPath:
            path: /var/vcap/sys/run/containerd
        - name: var-containerd-config
          hostPath:
            path: /var/vcap/jobs/containerd/config
        {{- end }}
        - name: sys
          hostPath:
            path: /sys
        - name: var-log
          hostPath:
            path: /var/log
        - name: var-lib
          hostPath:
            path: /var/lib
        - name: var-data
          hostPath:
            path: /var/data
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        {{- if .Values.agent.tls }}
        {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
        {{- include "instana-agent.tls-volume" . | nindent 8 }}
        {{- end }}
        {{- end }}
        {{- include "instana-agent.commonVolumes" . | nindent 8 }}
{{- end }}
{{- end }}
@ -0,0 +1,77 @@
{{- if or .Values.rbac.create (or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "instana-agent.fullname" . }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
rules:
- nonResourceURLs:
    - "/version"
    - "/healthz"
  verbs: ["get"]
  {{- if or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1") }}
  apiGroups: []
  resources: []
  {{- end }}
- apiGroups: ["batch"]
  resources:
    - "jobs"
    - "cronjobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
    - "deployments"
    - "replicasets"
    - "ingresses"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
    - "deployments"
    - "replicasets"
    - "daemonsets"
    - "statefulsets"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "namespaces"
    - "events"
    - "services"
    - "endpoints"
    - "nodes"
    - "pods"
    - "replicationcontrollers"
    - "componentstatuses"
    - "resourcequotas"
    - "persistentvolumes"
    - "persistentvolumeclaims"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "endpoints"
  verbs: ["create", "update", "patch"]
- apiGroups: ["networking.k8s.io"]
  resources:
    - "ingresses"
  verbs: ["get", "list", "watch"]
{{- if or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1") }}
- apiGroups: ["apps.openshift.io"]
  resources:
    - "deploymentconfigs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["security.openshift.io"]
  resourceNames: ["privileged"]
  resources: ["securitycontextconstraints"]
  verbs: ["use"]
{{- end -}}
{{- if .Values.podSecurityPolicy.enable }}
{{- if semverCompare "< 1.25.x" (include "kubeVersion" .) }}
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames:
    - {{ template "instana-agent.podSecurityPolicyName" . }}
{{- end }}
{{- end }}
{{- end }}
@ -0,0 +1,24 @@
{{- if .Values.service.create -}}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ template "instana-agent.fullname" . }}-headless
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  clusterIP: None
  selector:
    {{- include "instana-agent.selectorLabels" . | nindent 4 }}
  ports:
    # Prometheus remote_write, Trace Web SDK and other APIs
    - name: agent-apis
      protocol: TCP
      port: 42699
      targetPort: 42699
    - name: agent-socket
      protocol: TCP
      port: 42666
      targetPort: 42666
{{- end -}}
@ -0,0 +1,142 @@
{{- if .Values.k8s_sensor.deployment.enabled -}}
{{- if or .Values.agent.key .Values.agent.keysSecret -}}
{{- if or .Values.zone.name .Values.cluster.name -}}

{{- $user_name_password := "" -}}
{{ if .Values.agent.proxyUser }}
{{- $user_name_password = print .Values.agent.proxyUser ":" .Values.agent.proxyPassword "@" -}}
{{ end }}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8sensor
  namespace: {{ .Release.Namespace }}
  labels:
    app: k8sensor
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  replicas: {{ default "1" .Values.k8s_sensor.deployment.replicas }}
  selector:
    matchLabels:
      {{- include "k8s-sensor.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- if .Values.agent.pod.labels }}
        {{- toYaml .Values.agent.pod.labels | nindent 8 }}
        {{- end }}
        {{- include "k8s-sensor.commonLabels" . | nindent 8 }}
        instana/agent-mode: KUBERNETES
      annotations:
        {{- if .Values.agent.pod.annotations }}
        {{- toYaml .Values.agent.pod.annotations | nindent 8 }}
        {{- end }}
        # To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here
        # Unfortunately, we cannot use the lookup function to check on the values in the configmap, otherwise we break Helm < 3.2
        instana-configuration-hash: {{ cat ( join "," .Values.agent.additionalBackends ) | sha1sum }}
    spec:
      serviceAccountName: k8sensor
      {{- if .Values.k8s_sensor.deployment.pod.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.k8s_sensor.deployment.pod.nodeSelector }}
        {{ $key }}: {{ $value | quote }}
        {{- end }}
      {{- end }}
      {{- if .Values.k8s_sensor.deployment.pod.priorityClassName }}
      priorityClassName: {{ .Values.k8s_sensor.deployment.pod.priorityClassName | quote }}
      {{- end }}
      {{- if typeIs "[]interface {}" .Values.agent.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.agent.image.pullSecrets | nindent 8 }}
      {{- else if .Values.agent.image.name | hasPrefix "containers.instana.io" }}
      imagePullSecrets:
        - name: containers-instana-io
      {{- end }}
      containers:
        - name: instana-agent
          image: {{ include "image" .Values.k8s_sensor.image | quote }}
          imagePullPolicy: {{ .Values.k8s_sensor.image.pullPolicy }}
          env:
            - name: AGENT_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ template "instana-agent.keysSecretName" . }}
                  key: key
            - name: BACKEND
              valueFrom:
                configMapKeyRef:
                  name: k8sensor
                  key: backend
            - name: BACKEND_URL
              value: "https://$(BACKEND)"
            - name: AGENT_ZONE
              value: {{ empty .Values.cluster.name | ternary .Values.zone.name .Values.cluster.name }}
            - name: POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            {{- if not (empty .Values.agent.proxyHost) }}
            - name: HTTPS_PROXY
              value: "http://{{ $user_name_password }}{{ .Values.agent.proxyHost }}:{{ .Values.agent.proxyPort }}"
            - name: NO_PROXY
              value: "kubernetes.default.svc"
            {{- end }}
            {{- if .Values.agent.redactKubernetesSecrets }}
            - name: INSTANA_KUBERNETES_REDACT_SECRETS
              value: {{ .Values.agent.redactKubernetesSecrets | quote }}
            {{- end }}
            {{- if .Values.agent.configuration_yaml }}
            - name: CONFIG_PATH
              value: /root
            {{- end }}
            {{- include "k8sensor.commonEnv" . | nindent 12 }}
          volumeMounts:
            - name: configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
          resources:
            requests:
              {{- include "instana-agent.resources" .Values.k8s_sensor.deployment.pod.requests | nindent 14 }}
            limits:
              {{- include "instana-agent.resources" .Values.k8s_sensor.deployment.pod.limits | nindent 14 }}
          ports:
            - containerPort: 42699
      volumes:
        - name: configuration
          configMap:
            name: {{ include "instana-agent.fullname" . }}
      {{- if .Values.k8s_sensor.deployment.pod.tolerations }}
      tolerations:
        {{- toYaml .Values.k8s_sensor.deployment.pod.tolerations | nindent 8 }}
      {{- end }}
      affinity:
        podAntiAffinity:
          # Soft anti-affinity policy: try not to schedule multiple kubernetes-sensor pods on the same node.
          # If the policy were set to "requiredDuringSchedulingIgnoredDuringExecution" and the cluster had
          # fewer nodes than the number of desired replicas, `helm install/upgrade --wait` would not return.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: instana/agent-mode
                      operator: In
                      values: [ KUBERNETES ]
                topologyKey: "kubernetes.io/hostname"
{{- end -}}
{{- end -}}
{{- end -}}
@ -0,0 +1,133 @@
{{- if .Values.k8s_sensor.deployment.enabled -}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8sensor
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
rules:
- nonResourceURLs:
    - /version
    - /healthz
  verbs:
    - get
- apiGroups:
    - extensions
  resources:
    - deployments
    - replicasets
    - ingresses
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - ""
  resources:
    - configmaps
    - events
    - services
    - endpoints
    - namespaces
    - nodes
    - pods
    - replicationcontrollers
    - resourcequotas
    - persistentvolumes
    - persistentvolumeclaims
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - apps
  resources:
    - daemonsets
    - deployments
    - replicasets
    - statefulsets
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - batch
  resources:
    - cronjobs
    - jobs
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - networking.k8s.io
  resources:
    - ingresses
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - ""
  resources:
    - pods/log
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - autoscaling/v1
  resources:
    - horizontalpodautoscalers
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - autoscaling/v2
  resources:
    - horizontalpodautoscalers
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - apps.openshift.io
  resources:
    - deploymentconfigs
  verbs:
    - get
    - list
    - watch
- apiGroups:
    - security.openshift.io
  resourceNames:
    - privileged
  resources:
    - securitycontextconstraints
  verbs:
    - use
{{ if .Values.podSecurityPolicy.enable }}
- apiGroups:
    - policy
  resourceNames:
    - k8sensor
  resources:
    - podsecuritypolicies
  verbs:
    - use
{{ end }}
{{- end }}
@ -0,0 +1,118 @@
{{- if and .Values.kubernetes.deployment.enabled (not .Values.k8s_sensor.deployment.enabled) -}}
{{- if or .Values.agent.key .Values.agent.keysSecret -}}
{{- if or .Values.zone.name .Values.cluster.name -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-sensor
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  replicas: {{ default "1" .Values.kubernetes.deployment.replicas }}
  selector:
    matchLabels:
      {{- include "instana-agent.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- if .Values.agent.pod.labels }}
        {{- toYaml .Values.agent.pod.labels | nindent 8 }}
        {{- end }}
        {{- include "instana-agent.commonLabels" . | nindent 8 }}
        instana/agent-mode: KUBERNETES
      annotations:
        {{- if .Values.agent.pod.annotations }}
        {{- toYaml .Values.agent.pod.annotations | nindent 8 }}
        {{- end }}
        # To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here
        # Unfortunately, we cannot use the lookup function to check on the values in the configmap, otherwise we break Helm < 3.2
        instana-configuration-hash: {{ cat ( join "," .Values.agent.additionalBackends ) | sha1sum }}
    spec:
      serviceAccountName: {{ template "instana-agent.serviceAccountName" . }}
      {{- if .Values.kubernetes.deployment.pod.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.kubernetes.deployment.pod.nodeSelector }}
        {{ $key }}: {{ $value | quote }}
        {{- end }}
      {{- end }}
      {{- if .Values.kubernetes.deployment.pod.priorityClassName }}
      priorityClassName: {{ .Values.kubernetes.deployment.pod.priorityClassName | quote }}
      {{- end }}
      {{- if typeIs "[]interface {}" .Values.agent.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.agent.image.pullSecrets | nindent 8 }}
      {{- else if .Values.agent.image.name | hasPrefix "containers.instana.io" }}
      imagePullSecrets:
        - name: containers-instana-io
      {{- end }}
      containers:
        - name: instana-agent
          image: {{ include "image" .Values.agent.image | quote }}
          imagePullPolicy: {{ .Values.agent.image.pullPolicy }}
          securityContext:
            privileged: true
          env:
            - name: INSTANA_AGENT_MODE
              value: KUBERNETES
            {{- include "instana-agent.commonEnv" . | nindent 12 }}
          volumeMounts:
            {{- include "instana-agent.commonVolumeMounts" . | nindent 12 }}
            - name: kubernetes-sensor-configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
            {{- if .Values.agent.tls }}
            {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
            {{- include "instana-agent.tls-volumeMounts" . | nindent 12 }}
            {{- end }}
            {{- end }}
          resources:
            requests:
              {{- include "instana-agent.resources" .Values.kubernetes.deployment.pod.requests | nindent 14 }}
            limits:
              {{- include "instana-agent.resources" .Values.kubernetes.deployment.pod.limits | nindent 14 }}
          ports:
            - containerPort: 42699
        - name: leader-elector
          image: {{ include "image" .Values.leaderElector.image | quote }}
          env:
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command:
            - "/busybox/sh"
            - "-c"
            - "sleep 12 && /app/server --election=instana --http=localhost:{{ .Values.leaderElector.port }} --id=$(INSTANA_AGENT_POD_NAME)"
          resources:
            requests:
              cpu: 0.1
              memory: "64Mi"
          ports:
            - containerPort: {{ .Values.leaderElector.port }}
      {{- if .Values.kubernetes.deployment.pod.tolerations }}
      tolerations:
        {{- toYaml .Values.kubernetes.deployment.pod.tolerations | nindent 8 }}
      {{- end }}
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchExpressions:
              - key: instana/agent-mode
                operator: In
                values: [ KUBERNETES ]
      volumes:
        {{- include "instana-agent.commonVolumes" . | nindent 8 }}
        - name: kubernetes-sensor-configuration
          configMap:
            name: kubernetes-sensor
        {{- if .Values.agent.tls }}
        {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
        {{- include "instana-agent.tls-volume" . | nindent 8 }}
        {{- end }}
        {{- end }}
{{- end -}}
{{- end -}}
{{- end -}}
@ -0,0 +1,10 @@
{{- if .Values.serviceAccount.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "instana-agent.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
{{- end }}
@ -0,0 +1,293 @@
# name is the value which will be used as the base resource name for various resources associated with the agent.
# name: instana-agent

agent:
  # agent.mode is used to set the agent mode and it can be APM, INFRASTRUCTURE or AWS.
  # mode: APM

  # agent.key is the secret token which your agent uses to authenticate to Instana's servers.
  key: null
  # agent.downloadKey is the key, sometimes known as the "sales key", that allows you to download
  # software from Instana.
  # downloadKey: null

  # Rather than specifying the agent key and optionally the download key, you can "bring your
  # own secret", creating it in the namespace in which you install the `instana-agent` and
  # specifying its name in the `keysSecret` field. The secret you create must contain
  # a field called `key` and optionally one called `downloadKey`, which contain, respectively,
  # the values you'd otherwise set in `.agent.key` and `agent.downloadKey`.
  # keysSecret: null
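  # As an illustration, such a secret could be created with a command along these
  # lines (the secret name "instana-agent-keys" here is hypothetical):
  #   kubectl create secret generic instana-agent-keys --namespace instana-agent \
  #     --from-literal=key=<agent_key> --from-literal=downloadKey=<download_key>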

  # agent.listenAddress is the IP address the agent HTTP server will listen to.
  # listenAddress: "*"

  # agent.endpointHost is the hostname of the Instana server your agents will connect to.
  endpointHost: ingress-red-saas.instana.io
  # agent.endpointPort is the port number (as a String) of the Instana server your agents will connect to.
  endpointPort: 443

  # These are additional backends the Instana agent will report to besides
  # the one configured via the `agent.endpointHost`, `agent.endpointPort` and `agent.key` settings.
  additionalBackends: []
  # - endpointHost: ingress.instana.io
  #   endpointPort: 443
  #   key: <agent_key>

  # TLS for end-to-end encryption between the Instana agent and clients accessing the agent.
  # The Instana agent does not yet allow enforcing TLS encryption.
  # TLS is only enabled on a connection when requested by the client.
  tls:
    # In order to enable TLS, a secret of type kubernetes.io/tls must be specified.
    # secretName is the name of the secret that has the relevant files.
    # secretName: null
    # Otherwise, the certificate and the private key must be provided as base64 encoded.
    # certificate: null
    # key: null
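    # As an illustration (the secret name and file paths are hypothetical), a matching
    # secret of type kubernetes.io/tls could be created with:
    #   kubectl create secret tls instana-agent-tls --namespace instana-agent \
    #     --cert=path/to/tls.crt --key=path/to/tls.key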
||||
image:
|
||||
# agent.image.name is the name of the container image of the Instana agent.
|
||||
name: icr.io/instana/agent
|
||||
# agent.image.digest is the digest (a.k.a. Image ID) of the agent container image; if specified, it has priority over agent.image.tag, which will be ignored.
|
||||
#digest:
|
||||
# agent.image.tag is the tag name of the agent container image; if agent.image.digest is specified, this property is ignored.
|
||||
tag: latest
|
||||
# agent.image.pullPolicy specifies when to pull the image container.
|
||||
pullPolicy: Always
|
||||
# agent.image.pullSecrets allows you to override the default pull secret that is created when agent.image.name starts with "containers.instana.io"
|
||||
# Setting agent.image.pullSecrets prevents the creation of the default "containers-instana-io" secret.
|
||||
# pullSecrets:
|
||||
# - name: my_awesome_secret_instead
|
||||
# If you want no imagePullSecrets to be specified in the agent pod, you can just pass an empty array to agent.image.pullSecrets
|
||||
# pullSecrets: []
|
||||
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
||||
rollingUpdate:
|
||||
maxUnavailable: 1
|
||||
|
||||
pod:
|
||||
# agent.pod.annotations are additional annotations to be added to the agent pods.
|
||||
annotations: {}
|
||||
|
||||
# agent.pod.labels are additional labels to be added to the agent pods.
|
||||
labels: {}
|
||||
|
||||
# agent.pod.tolerations are tolerations to influence agent pod assignment.
|
||||
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
|
||||
tolerations: []
|
||||
|
||||
# agent.pod.affinity are affinities to influence agent pod assignment.
|
||||
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
|
||||
affinity: {}
|
||||
|
||||
# agent.pod.priorityClassName is the name of an existing PriorityClass that should be set on the agent pods
|
||||
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
|
||||
priorityClassName: null
|
||||
|
||||
# agent.pod.requests and agent.pod.limits adjusts the resource assignments for the DaemonSet agent
|
||||
# regardless of the kubernetes.deployment.enabled setting
|
||||
requests:
|
||||
# agent.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
|
||||
memory: 512Mi
|
||||
# agent.pod.requests.cpu are the requested CPU units allocation for the agent pods.
|
||||
cpu: 0.5
|
||||
limits:
|
||||
# agent.pod.limits.memory set the memory allocation limits in MiB for the agent pods.
|
||||
memory: 768Mi
|
||||
# agent.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
|
||||
cpu: 1.5
|
||||
|
||||
# agent.proxyHost sets the INSTANA_AGENT_PROXY_HOST environment variable.
|
||||
# proxyHost: null
|
||||
# agent.proxyPort sets the INSTANA_AGENT_PROXY_PORT environment variable.
|
||||
# proxyPort: 80
|
||||
# agent.proxyProtocol sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable.
|
||||
# proxyProtocol: HTTP
|
||||
# agent.proxyUser sets the INSTANA_AGENT_PROXY_USER environment variable.
|
||||
# proxyUser: null
|
||||
# agent.proxyPassword sets the INSTANA_AGENT_PROXY_PASSWORD environment variable.
|
||||
# proxyPassword: null
|
||||
# agent.proxyUseDNS sets the INSTANA_AGENT_PROXY_USE_DNS environment variable.
|
||||
# proxyUseDNS: false
|
||||
|
||||
# use this to set additional environment variables for the instana agent
|
||||
# for example:
|
||||
# env:
|
||||
# INSTANA_AGENT_TAGS: dev
|
||||
env: {}
|
||||
|
||||
configuration:
|
||||
# When setting this to true, the Helm chart will automatically look up the entries
|
||||
# of the default instana-agent ConfigMap, and mount as agent configuration files
|
||||
# under /opt/instana/agent/etc/instana all entries with keys that match the
|
||||
# 'configuration-*.yaml' scheme
|
||||
#
|
||||
# IMPORTANT: Needs Helm 3.1+ as it is built on the `lookup` function
|
||||
# IMPORTANT: Editing the ConfigMap adding keys requires a `helm upgrade` to take effect
|
||||
autoMountConfigEntries: false
|
||||
|
||||
# When setting this to true, the updates of the default instana-agent ConfigMap
|
||||
# will be reflected in the pod without requiring a pod restart
|
||||
hotreloadEnabled: false
|
||||
|
||||
configuration_yaml: |
|
||||
# Manual a-priori configuration. Configuration will be only used when the sensor
|
||||
# is actually installed by the agent.
|
||||
# The commented out example values represent example configuration and are not
|
||||
# necessarily defaults. Defaults are usually 'absent' or mentioned separately.
|
||||
# Changes are hot reloaded unless otherwise mentioned.
|
||||
|
||||
# It is possible to create files called 'configuration-abc.yaml' which are
|
||||
# merged with this file in file system order. So 'configuration-cde.yaml' comes
|
||||
# after 'configuration-abc.yaml'. Only nested structures are merged, values are
|
||||
# overwritten by subsequent configurations.
|
||||
|
||||
# Secrets
|
||||
# To filter sensitive data from collection by the agent, all sensors respect
|
||||
# the following secrets configuration. If a key collected by a sensor matches
|
||||
# an entry from the list, the value is redacted.
|
||||
#com.instana.secrets:
|
||||
# matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
|
||||
# list:
|
||||
# - 'key'
|
||||
# - 'password'
|
||||
# - 'secret'
|
||||
|
||||
# Host
|
||||
#com.instana.plugin.host:
|
||||
# tags:
|
||||
# - 'dev'
|
||||
# - 'app1'
|
||||
|
||||
# Hardware & Zone
|
||||
#com.instana.plugin.generic.hardware:
|
||||
# enabled: true # disabled by default
|
||||
# availability-zone: 'zone'
|
||||
|
||||
# agent.redactKubernetesSecrets sets the INSTANA_KUBERNETES_REDACT_SECRETS environment variable.
|
||||
# redactKubernetesSecrets: null
|
||||
|
||||
# agent.host.repository sets a host path to be mounted as the agent maven repository (for debugging or development purposes)
|
||||
host:
|
||||
repository: null
|
||||
|
||||
cluster:
|
||||
# cluster.name represents the name that will be assigned to this cluster in Instana
|
||||
name: null
|
||||
|
||||
leaderElector:
|
||||
image:
|
||||
# leaderElector.image.name is the name of the container image of the leader elector.
|
||||
name: icr.io/instana/leader-elector
|
||||
# leaderElector.image.digest is the digest (a.k.a. Image ID) of the leader elector container image; if specified, it has priority over leaderElector.image.digest, which will be ignored.
|
||||
#digest:
|
||||
# leaderElector.image.tag is the tag name of the agent container image; if leaderElector.image.digest is specified, this property is ignored.
|
||||
tag: 0.5.18
|
||||
port: 42655
|
||||
|
||||
# openshift specifies whether the cluster role should include openshift permissions and other tweaks to the YAML.
|
||||
# The chart will try to auto-detect if the cluster is OpenShift, so you will likely not even need to set this explicitly.
|
||||
# openshift: true
|
||||
|
||||
rbac:
|
||||
# Specifies whether RBAC resources should be created
|
||||
create: true
|
||||
|
||||
service:
|
||||
# Specifies whether to create the instana-agent service to expose within the cluster the Prometheus remote-write, OpenTelemetry gRPC endpoint and other APIs
|
||||
# Note: Requires Kubernetes 1.17+, as it uses topologyKeys
|
||||
create: true
|
||||
|
||||
#opentelemetry:
|
||||
# enabled: false # legacy setting, will only enable grpc, defaults to false
|
||||
# grpc:
|
||||
# enabled: false # takes precedence over legacy settings above, defaults to true if "grpc:" is present
|
||||
# http:
|
||||
# enabled: false # allows to enable http endpoints, defaults to true if "http:" is present
|
||||
|
||||
prometheus:
|
||||
remoteWrite:
|
||||
enabled: false # If true, it will also apply `service.create=true`
|
||||
|
||||
serviceAccount:
|
||||
# Specifies whether a ServiceAccount should be created
|
||||
create: true
|
||||
# The name of the ServiceAccount to use.
|
||||
# If not set and `create` is true, a name is generated using the fullname template
|
||||
# name: instana-agent
|
||||
|
||||
podSecurityPolicy:
|
||||
# Specifies whether a PodSecurityPolicy should be authorized for the Instana Agent pods.
|
||||
# Requires `rbac.create` to be `true` as well and K8s version below v1.25.
|
||||
enable: false
|
||||
# The name of an existing PodSecurityPolicy you would like to authorize for the Instana Agent pods.
|
||||
# If not set and `enable` is true, a PodSecurityPolicy will be created with a name generated using the fullname template.
|
||||
name: null
|
||||
|
||||
zone:
|
||||
# zone.name is the custom zone that detected technologies will be assigned to
|
||||
name: null
|
||||
|
||||
k8s_sensor:
|
||||
image:
|
||||
# k8s_sensor.image.name is the name of the container image of the Kubernetes sensor.
|
||||
name: icr.io/instana/k8sensor
|
||||
# k8s_sensor.image.digest is the digest (a.k.a. Image ID) of the k8sensor container image; if specified, it has priority over k8s_sensor.image.tag, which will be ignored.
|
||||
#digest:
|
||||
# k8s_sensor.image.tag is the tag name of the k8sensor container image; if k8s_sensor.image.digest is specified, this property is ignored.
|
||||
tag: latest
|
||||
# k8s_sensor.image.pullPolicy specifies when to pull the container image.
|
||||
pullPolicy: Always
|
||||
deployment:
|
||||
# Specifies whether or not to enable the Deployment and turn off the Kubernetes sensor in the DaemonSet
|
||||
enabled: true
|
||||
# Use three replicas by default to ensure high availability.
|
||||
replicas: 3
|
||||
# k8s_sensor.deployment.pod adjusts the resource assignments for the agent independently of the DaemonSet agent when k8s_sensor.deployment.enabled=true
|
||||
pod:
|
||||
requests:
|
||||
# k8s_sensor.deployment.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
|
||||
memory: 128Mi
|
||||
# k8s_sensor.deployment.pod.requests.cpu are the requested CPU units allocation for the agent pods.
|
||||
cpu: 10m
|
||||
limits:
|
||||
# k8s_sensor.deployment.pod.limits.memory set the memory allocation limits in MiB for the agent pods.
|
||||
memory: 1536Mi
|
||||
# k8s_sensor.deployment.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
|
||||
cpu: 500m
|
||||
|
||||
kubernetes:
|
||||
# Configures use of a Deployment for the Kubernetes sensor rather than as a potential member of the DaemonSet. Only takes effect if k8s_sensor.deployment.enabled=false
|
||||
deployment:
|
||||
# Specifies whether or not to enable the Deployment and turn off the Kubernetes sensor in the DaemonSet
|
||||
enabled: false
|
||||
# Use a single replica by default; the impact is generally low, and large clusters raise a host of other concerns that need to be addressed separately.
|
||||
replicas: 1
|
||||
|
||||
# kubernetes.deployment.pod adjusts the resource assignments for the agent independently of the DaemonSet agent when kubernetes.deployment.enabled=true
|
||||
pod:
|
||||
requests:
|
||||
# kubernetes.deployment.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
|
||||
memory: 1024Mi
|
||||
# kubernetes.deployment.pod.requests.cpu are the requested CPU units allocation for the agent pods.
|
||||
cpu: 720m
|
||||
limits:
|
||||
# kubernetes.deployment.pod.limits.memory set the memory allocation limits in MiB for the agent pods.
|
||||
memory: 3072Mi
|
||||
# kubernetes.deployment.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
|
||||
cpu: 4
|
||||
|
||||
# zones:
|
||||
# # Configure use of zones to use tolerations as the basis to associate a specific daemonset per tainted node pool
|
||||
# - name: pool-01
|
||||
# tolerations:
|
||||
# - key: "pool"
|
||||
# operator: "Equal"
|
||||
# value: "pool-01"
|
||||
# effect: "NoExecute"
|
||||
# - name: pool-02
|
||||
# tolerations:
|
||||
# - key: "pool"
|
||||
# operator: "Equal"
|
||||
# value: "pool-02"
|
||||
# effect: "NoExecute"
|
|
@ -0,0 +1,23 @@
|
|||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
# OWNERS file for helm
|
||||
OWNERS
|
|
@ -0,0 +1,26 @@
|
|||
annotations:
|
||||
artifacthub.io/links: |
|
||||
- name: Instana website
|
||||
url: https://www.instana.com
|
||||
- name: Instana Helm charts
|
||||
url: https://github.com/instana/helm-charts
|
||||
catalog.cattle.io/certified: partner
|
||||
catalog.cattle.io/display-name: Instana Agent
|
||||
catalog.cattle.io/kube-version: '>=1.21-0'
|
||||
catalog.cattle.io/release-name: instana-agent
|
||||
apiVersion: v2
|
||||
appVersion: 1.252.0
|
||||
description: Instana Agent for Kubernetes
|
||||
home: https://www.instana.com/
|
||||
icon: https://agents.instana.io/helm/stan-logo-2020.png
|
||||
maintainers:
|
||||
- email: felix.marx@ibm.com
|
||||
name: FelixMarxIBM
|
||||
- email: henning.treu@ibm.com
|
||||
name: htreu
|
||||
- email: torsten.kohn@ibm.com
|
||||
name: tkohn
|
||||
name: instana-agent
|
||||
sources:
|
||||
- https://github.com/instana/instana-agent-docker
|
||||
version: 1.2.61
|
|
@ -0,0 +1,54 @@
|
|||
# Kubernetes Deployment Mode (tech preview)
|
||||
|
||||
Instana has always endeavored to make the experience of using Instana as seamless as possible, from auto-instrumentation to one-liner installs. To date, this wasn’t the case for our customers with Kubernetes clusters containing more than 1,000 entities. The Kubernetes sensor as a deployment is one of many steps we’re taking to improve the experience of operating Instana in Kubernetes. This is a tech preview; however, we have a high degree of confidence that it will work well in your production workloads. The fundamental change moves the Kubernetes sensor from the DaemonSet responsible for monitoring your hosts and processes into its own dedicated Deployment, where it does not contend for resources with other sensors. An overview of this deployment is below:
|
||||
|
||||
![kubernetes.deployment.enabled=true](kubernetes.deployment.enabled.png)
|
||||
|
||||
This change provides a few primary benefits including:
|
||||
|
||||
* Lower load on the Kubernetes api-server as it eliminates per-node pod monitoring.
|
||||
* Lower load on the Kubernetes api-server as it reduces the endpoint watch to two leader elector sidecars.
|
||||
* Lower memory and CPU requests in the DaemonSet as it is no longer responsible for monitoring Kubernetes.
|
||||
* Elimination of the leader elector sidecar in the DaemonSet as it is only required for the Kubernetes sensor.
|
||||
* Better performance of the Kubernetes sensor as it is isolated from other sensors and does not contend for CPU and memory.
|
||||
* Better scaling behaviour, as you can adjust the memory and CPU requirements to monitor your clusters without overprovisioning resources cluster-wide.
|
||||
|
||||
The primary drawbacks of this model in the tech preview include:
|
||||
|
||||
* Reduced control and observability of the Kubernetes-specific agents in the Agent dashboard.
|
||||
* Some unnecessary features are still enabled in the Kubernetes sensor (e.g. trace sinks, and host monitoring).
|
||||
|
||||
Some limitations remain unchanged from the previous sensor:
|
||||
|
||||
* Clusters with a high number of entities (e.g. pods, deployments, etc.) are likely to have non-deterministic behaviour due to limitations we impose on message sizes. This is unlikely to be experienced in clusters with fewer than 500 hosts.
|
||||
* The ServiceAccount is shared between both the DaemonSet and Deployment meaning no change in the security posture. We plan to add an additional service account to limit access to the api-server to only the Kubernetes sensor Deployment.
|
||||
|
||||
## Installation
|
||||
|
||||
For clusters with minimal controls, you can install the tech preview with the following Helm install command:
|
||||
|
||||
```
|
||||
helm install instana-agent \
|
||||
--repo https://agents.instana.io/helm \
|
||||
--namespace instana-agent \
|
||||
--create-namespace \
|
||||
--set agent.key=${AGENT_KEY} \
|
||||
--set agent.endpointHost=${BACKEND_URL} \
|
||||
--set agent.endpointPort=443 \
|
||||
--set cluster.name=${CLUSTER_NAME} \
|
||||
--set zone.name=${ZONE_NAME} \
|
||||
--set kubernetes.deployment.enabled=true \
|
||||
instana-agent
|
||||
```
|
||||
|
||||
If your cluster employs Pod Security Policies you will need the following additional flag:
|
||||
|
||||
```
|
||||
--set podSecurityPolicy.enable=true
|
||||
```
|
||||
|
||||
If you are deploying into an OpenShift 4.x cluster you will need the following additional flag:
|
||||
|
||||
```
|
||||
--set openshift=true
|
||||
```
|
|
@ -0,0 +1,600 @@
|
|||
# Instana
|
||||
|
||||
Instana is an [APM solution](https://www.instana.com/) built for microservices that enables IT Ops to build applications faster and deliver higher quality services by automating monitoring, tracing and root cause analysis.
|
||||
This solution is optimized for [Kubernetes](https://www.instana.com/automatic-kubernetes-monitoring/).
|
||||
|
||||
This chart adds the Instana Agent to all schedulable nodes in your cluster via a privileged `DaemonSet` and accompanying resources like `ConfigMap`s, `Secret`s and RBAC settings.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* Kubernetes 1.21+ OR OpenShift 4.8+
|
||||
* Helm 3
|
||||
|
||||
## Installation
|
||||
|
||||
To configure the installation you can either specify the options on the command line using the **--set** switch, or you can edit **values.yaml**.
|
||||
|
||||
First, create a namespace for the instana-agent:
|
||||
|
||||
```bash
|
||||
kubectl create namespace instana-agent
|
||||
```
|
||||
|
||||
To install the chart with the release name `instana-agent` and set the values on the command line run:
|
||||
|
||||
```bash
|
||||
$ helm install instana-agent --namespace instana-agent \
|
||||
--repo https://agents.instana.io/helm \
|
||||
--set agent.key=INSTANA_AGENT_KEY \
|
||||
--set agent.endpointHost=HOST \
|
||||
--set zone.name=ZONE_NAME \
|
||||
instana-agent
|
||||
```
|
||||
|
||||
**OpenShift:** When targeting an OpenShift 4.x cluster, add `--set openshift=true`.
|
||||
|
||||
### Required Settings
|
||||
|
||||
#### Configuring the Instana Backend
|
||||
|
||||
In order to report the data it collects to the Instana backend for analysis, the Instana agent must know which backend to report to, and which credentials to use to authenticate, known as "agent key".
|
||||
|
||||
As described by the [Install Using the Helm Chart](https://www.instana.com/docs/setup_and_manage/host_agent/on/kubernetes#install-using-the-helm-chart) documentation, you will find the right values for the following fields inside Instana itself:
|
||||
|
||||
* `agent.endpointHost`
|
||||
* `agent.endpointPort`
|
||||
* `agent.key`
|
||||
|
||||
_Note:_ You can find the options mentioned in the [configuration section below](#Configuration-Reference).
|
||||
|
||||
If your agents report into a self-managed Instana unit (also known as "on-prem"), you will also need to configure a "download key", which allows the agent to fetch its components from the Instana repository.
|
||||
The download key is set via the following value:
|
||||
|
||||
* `agent.downloadKey`
|
||||
|
||||
#### Zone and Cluster
|
||||
|
||||
Instana needs to know how to name your Kubernetes cluster and, optionally, how to group your Instana agents in [Custom zones](https://www.instana.com/docs/setup_and_manage/host_agent/configuration/#custom-zones) using the following fields:
|
||||
|
||||
* `zone.name`
|
||||
* `cluster.name`
|
||||
|
||||
Either `zone.name` or `cluster.name` is required.
|
||||
If you omit `cluster.name`, the value of `zone.name` will be used as cluster name as well.
|
||||
If you omit `zone.name`, the host zone will be automatically determined by the availability zone information provided by the [supported Cloud providers](https://www.instana.com/docs/setup_and_manage/cloud_service_agents).
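
For example, both names can be set in `values.yaml` as sketched below; the zone and cluster names are placeholders for illustration:

```yaml
# Placeholder names for illustration only.
zone:
  name: us-east-1        # custom zone the agents are grouped under
cluster:
  name: my-k8s-cluster   # display name of this cluster in Instana
```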
|
||||
|
||||
## Uninstallation
|
||||
|
||||
To uninstall/delete the `instana-agent` release:
|
||||
|
||||
```bash
|
||||
helm del instana-agent -n instana-agent
|
||||
```
|
||||
|
||||
## Configuration Reference
|
||||
|
||||
The following table lists the configurable parameters of the Instana chart and their default values.
|
||||
|
||||
| Parameter | Description | Default |
|
||||
| --------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `agent.configuration_yaml` | Custom content for the agent configuration.yaml file | `nil` See [below](#Agent-Configuration) for more details |
|
||||
| `agent.configuration.autoMountConfigEntries` | (Experimental, needs Helm 3.1+) Automatically look up the entries of the default `instana-agent` ConfigMap, and mount as agent configuration files in the `instana-agent` container under the `/opt/instana/agent/etc/instana` directory all ConfigMap entries with keys that match the `configuration-*.yaml` scheme. | `false` |
|
||||
| `agent.configuration.hotreloadEnabled` | Enables hot-reload of a configuration.yaml upon changes in the `instana-agent` ConfigMap without requiring a restart of a pod | `false` |
|
||||
| `agent.endpointHost` | Instana Agent backend endpoint host | `ingress-red-saas.instana.io` (US and ROW). If in Europe, please override with `ingress-blue-saas.instana.io` |
|
||||
| `agent.endpointPort` | Instana Agent backend endpoint port | `443` |
|
||||
| `agent.key` | Your Instana Agent key | `nil` You must provide your own key unless `agent.keysSecret` is specified |
|
||||
| `agent.downloadKey` | Your Instana Download key | `nil` Usually not required |
|
||||
| `agent.keysSecret` | As an alternative to specifying `agent.key` and, optionally, `agent.downloadKey`, you can instead specify the name of the secret in the namespace in which you install the Instana agent that carries the agent key and download key | `nil` Usually not required, see [Bring your own Keys secret](#bring-your-own-keys-secret) for more details |
|
||||
| `agent.additionalBackends` | List of additional backends to report to; it must specify the `endpointHost` and `key` fields, and optionally `endpointPort` | `[]` Usually not required; see [Configuring Additional Backends](#configuring-additional-backends) for more info and examples |
|
||||
| `agent.tls.secretName` | The name of the secret of type `kubernetes.io/tls` which contains the TLS relevant data. If the name is provided, `agent.tls.certificate` and `agent.tls.key` will be ignored. | `nil` |
|
||||
| `agent.tls.certificate` | The certificate data encoded as base64. Which will be used to create a new secret of type `kubernetes.io/tls`. | `nil` |
|
||||
| `agent.tls.key` | The private key data encoded as base64. Which will be used to create a new secret of type `kubernetes.io/tls`. | `nil` |
|
||||
| `agent.image.name` | The image name to pull | `instana/agent` |
|
||||
| `agent.image.digest` | The image digest to pull; if specified, it causes `agent.image.tag` to be ignored | `nil` |
|
||||
| `agent.image.tag` | The image tag to pull; this property is ignored if `agent.image.digest` is specified | `latest` |
|
||||
| `agent.image.pullPolicy` | Image pull policy | `Always` |
|
||||
| `agent.image.pullSecrets` | Image pull secrets; if not specified (default) _and_ `agent.image.name` starts with `containers.instana.io`, it will be automatically set to `[{ "name": "containers-instana-io" }]` to match the default secret created in this case. | `nil` |
|
||||
| `agent.listenAddress` | List of addresses to listen on, or "*" for all interfaces | `nil` |
|
||||
| `agent.mode` | Agent mode. Supported values are `APM`, `INFRASTRUCTURE`, `AWS` | `APM` |
|
||||
| `agent.instanaMvnRepoUrl` | Override for the Maven repository URL when the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
|
||||
| `agent.instanaMvnRepoFeaturesPath`                  | Override for the Maven repository features path when the Agent needs to connect to a locally provided Maven repository 'proxy'                                                                                                                                                                                           | `nil` Usually not required                                                                                                               |
|
||||
| `agent.instanaMvnRepoSharedPath` | Override for the Maven repository shared path when the Agent needs to connect to a locally provided Maven repository 'proxy' | `nil` Usually not required |
|
||||
| `agent.updateStrategy.type` | [DaemonSet update strategy type](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/); valid values are `OnDelete` and `RollingUpdate` | `RollingUpdate` |
|
||||
| `agent.updateStrategy.rollingUpdate.maxUnavailable` | How many agent pods can be updated at once; this value is ignored if `agent.updateStrategy.type` is different than `RollingUpdate` | `1` |
|
||||
| `agent.pod.annotations` | Additional annotations to apply to the pod | `{}` |
|
||||
| `agent.pod.labels` | Additional labels to apply to the Agent pod | `{}` |
|
||||
| `agent.pod.priorityClassName` | Name of an _existing_ PriorityClass that should be set on the agent pods | `nil` |
|
||||
| `agent.proxyHost` | Hostname/address of a proxy | `nil` |
|
||||
| `agent.proxyPort` | Port of a proxy | `nil` |
|
||||
| `agent.proxyProtocol` | Proxy protocol. Supported proxy types are `http` (for both HTTP and HTTPS proxies), `socks4`, `socks5`. | `nil` |
|
||||
| `agent.proxyUser` | Username of the proxy auth | `nil` |
|
||||
| `agent.proxyPassword` | Password of the proxy auth | `nil` |
|
||||
| `agent.proxyUseDNS` | Boolean if proxy also does DNS | `nil` |
|
||||
| `agent.pod.limits.cpu` | Container cpu limits in cpu cores | `1.5` |
|
||||
| `agent.pod.limits.memory` | Container memory limits in MiB | `768Mi` |
|
||||
| `agent.pod.requests.cpu` | Container cpu requests in cpu cores | `0.5` |
|
||||
| `agent.pod.requests.memory` | Container memory requests in MiB | `512Mi` |
|
||||
| `agent.pod.tolerations` | Tolerations for pod assignment | `[]` |
|
||||
| `agent.pod.affinity` | Affinity for pod assignment | `{}` |
|
||||
| `agent.env` | Additional environment variables for the agent | `{}` |
|
||||
| `agent.redactKubernetesSecrets` | Enable additional secrets redaction for selected Kubernetes resources | `nil` See [Kubernetes secrets](https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/#secrets) for more details. |
|
||||
| `cluster.name` | Display name of the monitored cluster | Value of `zone.name` |
|
||||
| `leaderElector.port` | Instana leader elector sidecar port | `42655` |
|
||||
| `leaderElector.image.name`                          | The elector image name to pull. _Note: leader-elector is deprecated and will no longer be updated._                                                                                                                                                                                                                      | `icr.io/instana/leader-elector`                                                                                                          |
|
||||
| `leaderElector.image.digest` | The image digest to pull; if specified, it causes `leaderElector.image.tag` to be ignored. _Note: leader-elector is deprecated and will no longer be updated._ | `nil` |
|
||||
| `leaderElector.image.tag` | The image tag to pull; this property is ignored if `leaderElector.image.digest` is specified. _Note: leader-elector is deprecated and will no longer be updated._ | `latest` |
|
||||
| `k8s_sensor.deployment.enabled` | Isolate k8sensor with a deployment | `true` |
|
||||
| `k8s_sensor.image.name`                             | The k8sensor image name to pull                                                                                                                                                                                                                                                                                          | `icr.io/instana/k8sensor`                                                                                                                |
|
||||
| `k8s_sensor.image.digest` | The image digest to pull; if specified, it causes `k8s_sensor.image.tag` to be ignored | `nil` |
|
||||
| `k8s_sensor.image.tag` | The image tag to pull; this property is ignored if `k8s_sensor.image.digest` is specified | `latest` |
|
||||
| `k8s_sensor.deployment.pod.limits.cpu`              | CPU limit for the `k8sensor` pods                                                                                                                                                                                                                                                                                        | `4`                                                                                                                                      |
|
||||
| `k8s_sensor.deployment.pod.limits.memory`           | Memory limit for the `k8sensor` pods                                                                                                                                                                                                                                                                                     | `6144Mi`                                                                                                                                 |
|
||||
| `k8s_sensor.deployment.pod.requests.cpu`            | CPU request for the `k8sensor` pods                                                                                                                                                                                                                                                                                      | `1.5`                                                                                                                                    |
|
||||
| `k8s_sensor.deployment.pod.requests.memory`         | Memory request for the `k8sensor` pods                                                                                                                                                                                                                                                                                   | `1024Mi`                                                                                                                                 |
|
||||
| `podSecurityPolicy.enable` | Whether a PodSecurityPolicy should be authorized for the Instana Agent pods. Requires `rbac.create` to be `true` as well and it is available until Kubernetes version v1.25. | `false` See [PodSecurityPolicy](https://docs.instana.io/setup_and_manage/host_agent/on/kubernetes/#podsecuritypolicy) for more details. |
|
||||
| `podSecurityPolicy.name` | Name of an _existing_ PodSecurityPolicy to authorize for the Instana Agent pods. If not provided and `podSecurityPolicy.enable` is `true`, a PodSecurityPolicy will be created for you. | `nil` |
|
||||
| `rbac.create` | Whether RBAC resources should be created | `true` |
|
||||
| `openshift` | Whether to install the Helm chart as needed in OpenShift; this setting implies `rbac.create=true` | `false` |
|
||||
| `opentelemetry.grpc.enabled` | Whether to configure the agent to accept telemetry from OpenTelemetry applications via gRPC. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `false` |
|
||||
| `opentelemetry.http.enabled` | Whether to configure the agent to accept telemetry from OpenTelemetry applications via HTTP. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `false` |
|
||||
| `prometheus.remoteWrite.enabled` | Whether to configure the agent to accept metrics over its implementation of the `remote_write` Prometheus endpoint. This option also implies `service.create=true`, and requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. | `false` |
|
||||
| `service.create` | Whether to create a service that exposes the agents' Prometheus, OpenTelemetry and other APIs inside the cluster. Requires Kubernetes 1.21+, as it relies on `internalTrafficPolicy`. The `ServiceInternalTrafficPolicy` feature gate needs to be enabled (default: enabled). | `true` |
|
||||
| `serviceAccount.create` | Whether a ServiceAccount should be created | `true` |
|
||||
| `serviceAccount.name` | Name of the ServiceAccount to use | `instana-agent` |
|
||||
| `zone.name` | Zone that detected technologies will be assigned to | `nil` You must provide either `zone.name` or `cluster.name`, see [above](#Installation) for details |
|
||||
| `zones` | Multi-zone daemonset configuration. | `nil` see [below](#multiple-zones) for details |
|
||||
|
||||
### Agent Modes
|
||||
|
||||
The agent can run in either `APM` or `INFRASTRUCTURE` mode.
|
||||
The default is `APM`; to override it, set the following value:
|
||||
|
||||
* `agent.mode`
|
||||
|
||||
For more information on agent modes, refer to the [Host Agent Modes](https://www.instana.com/docs/setup_and_manage/host_agent#host-agent-modes) documentation.
|
||||
|
||||
### Agent Configuration
|
||||
|
||||
Besides the settings listed above, there are many more settings that can be applied to the agent via the so-called "Agent Configuration File", often also referred to as `configuration.yaml` file.
|
||||
An overview of the settings that can be applied is provided in the [Agent Configuration File](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#agent-configuration-file) documentation.
|
||||
To configure the agent, you can either:
|
||||
|
||||
* edit the [config map](templates/agent-configmap.yaml), or
|
||||
* provide the configuration via the `agent.configuration_yaml` parameter in [values.yaml](values.yaml)
|
||||
|
||||
This configuration will be used for all Instana Agents on all nodes. Visit the [agent configuration documentation](https://docs.instana.io/setup_and_manage/host_agent/#agent-configuration-file) for more details on configuration options.
|
||||
|
||||
_Note:_ This Helm Chart does not support configuring [Multiple Configuration Files](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#multiple-configuration-files).
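
As a sketch, a custom `configuration.yaml` can be passed inline via `agent.configuration_yaml` in `values.yaml`; the host tags below are placeholder values taken from the commented defaults:

```yaml
agent:
  configuration_yaml: |
    # Contents are passed verbatim to the agent as its configuration.yaml
    com.instana.plugin.host:
      tags:
        - 'dev'
        - 'app1'
```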
|
||||
|
||||
### Agent Pod Sizing
|
||||
|
||||
The `agent.pod.requests.cpu`, `agent.pod.requests.memory`, `agent.pod.limits.cpu` and `agent.pod.limits.memory` settings allow you to change the sizing of the `instana-agent` pods.
|
||||
If you are using the [Kubernetes Sensor Deployment](#kubernetes-sensor-deployment) functionality, you may be able to reduce the default amount of resources, and especially memory, allocated to the Instana agents that monitor your applications.
|
||||
Actual sizing depends very much on how many pods, containers and applications are monitored, and how many traces they generate, so we cannot really provide a rule of thumb for the sizing.
|
||||
|
||||
### Bring your own Keys secret
|
||||
|
||||
In case you have automation that creates secrets for you, it may not be desirable for this Helm chart to create a secret containing the `agent.key` and `agent.downloadKey`.
|
||||
In this case, you can instead specify the name of an already-existing secret, in the namespace in which you install the Instana agent, that carries the agent key and download key.
|
||||
|
||||
The secret you specify _must_ have a field called `key`, which contains the value you would otherwise set to `agent.key`, and _may_ contain a field called `downloadKey`, which contains the value you would otherwise set to `agent.downloadKey`.
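
A sketch of such a secret, assuming the chart is installed into the `instana-agent` namespace; the secret name and key values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent-keys   # referenced via --set agent.keysSecret=instana-agent-keys
  namespace: instana-agent
type: Opaque
stringData:
  key: YOUR_AGENT_KEY              # required: used in place of agent.key
  downloadKey: YOUR_DOWNLOAD_KEY   # optional: used in place of agent.downloadKey
```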
|
||||
|
||||
### Configuring Additional Configuration Files
|
||||
|
||||
[Multiple configuration files](https://www.instana.com/docs/setup_and_manage/host_agent/configuration#multiple-configuration-files) is a capability of the Instana agent that allows for modularity in its configurations files.
|
||||
|
||||
The experimental `agent.configuration.autoMountConfigEntries` setting uses functionality available in Helm 3.1+ to automatically look up the entries of the default `instana-agent` ConfigMap, and mounts all ConfigMap entries with keys that match the `configuration-*.yaml` scheme as agent configuration files in the `instana-agent` container under the `/opt/instana/agent/etc/instana` directory.
|
||||
|
||||
**IMPORTANT:** Needs Helm 3.1+ as it is built on the `lookup` function
|
||||
**IMPORTANT:** Editing the ConfigMap to add keys requires a `helm upgrade` to take effect
|
||||
|
||||
### Configuring Additional Backends
|
||||
|
||||
You may want to have your Instana agents report to multiple backends.
|
||||
The first backend must be configured as shown in the [Configuring the Instana Backend](#configuring-the-instana-backend) section; every backend after the first is configured in the `agent.additionalBackends` list in the [values.yaml](values.yaml) as follows:
|
||||
|
||||
```yaml
|
||||
agent:
|
||||
additionalBackends:
|
||||
# Second backend
|
||||
- endpointHost: my-instana.instana.io # endpoint host; e.g., my-instana.instana.io
|
||||
endpointPort: 443 # default is 443, so this line could be omitted
|
||||
key: ABCDEFG # agent key for this backend
|
||||
# Third backend
|
||||
- endpointHost: another-instana.instana.io # endpoint host; e.g., my-instana.instana.io
|
||||
endpointPort: 1444 # default is 443, so this line could be omitted
|
||||
key: LMNOPQR # agent key for this backend
|
||||
```
|
||||
|
||||
The snippet above configures the agent to report to two additional backends.
|
||||
The same effect as the above can be accomplished on the command line via:
|
||||
|
||||
```sh
|
||||
$ helm install -n instana-agent instana-agent ... \
|
||||
--repo https://agents.instana.io/helm \
|
||||
--set 'agent.additionalBackends[0].endpointHost=my-instana.instana.io' \
|
||||
--set 'agent.additionalBackends[0].endpointPort=443' \
|
||||
--set 'agent.additionalBackends[0].key=ABCDEFG' \
|
||||
--set 'agent.additionalBackends[1].endpointHost=another-instana.instana.io' \
|
||||
--set 'agent.additionalBackends[1].endpointPort=1444' \
|
||||
--set 'agent.additionalBackends[1].key=LMNOPQR' \
|
||||
instana-agent
|
||||
```
|
||||
|
||||
_Note:_ There is no hard limitation on the number of backends an Instana agent can report to, although each comes at the cost of a slight increase in CPU and memory consumption.
|
||||
|
||||
### Configuring a Proxy between the Instana agents and the Instana backend
|
||||
|
||||
If your infrastructure uses a proxy, you should ensure that you set values for:
|
||||
|
||||
* `agent.proxyHost`
|
||||
* `agent.proxyPort`
|
||||
* `agent.proxyProtocol`
|
||||
* `agent.proxyUser`
|
||||
* `agent.proxyPassword`
|
||||
* `agent.proxyUseDNS`
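
For illustration, an authenticated HTTP proxy might be configured in `values.yaml` as follows; the host, port, and credentials are placeholders:

```yaml
agent:
  proxyHost: proxy.example.com   # placeholder proxy address
  proxyPort: 3128
  proxyProtocol: http            # http covers both HTTP and HTTPS proxies
  proxyUser: proxyuser           # omit user/password for unauthenticated proxies
  proxyPassword: changeme
```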
|
||||
|
||||
### Configuring which Networks the Instana Agent should listen on
|
||||
|
||||
If your infrastructure has multiple networks defined, you might need to allow the agent to listen on all addresses (typically with value set to `*`):
|
||||
|
||||
* `agent.listenAddress`
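
For example, to have the agent listen on all interfaces:

```yaml
agent:
  listenAddress: '*'   # '*' listens on all interfaces
```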
|
||||
|
||||
### Setup TLS Encryption for Agent Endpoint
|
||||
|
||||
TLS encryption can be set up in one of two ways.
|
||||
Either an existing secret can be used, or a certificate and a private key can be provided at installation time.
|
||||
|
||||
#### Using existing secret
|
||||
|
||||
An existing secret of type `kubernetes.io/tls` can be used.
|
||||
Only the `secretName` must be provided during the installation with `--set 'agent.tls.secretName=<YOUR_SECRET_NAME>'`.
|
||||
The files from the provided secret are then mounted into the agent.
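
Such a secret of type `kubernetes.io/tls` could look like the following sketch; the secret name is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent-tls   # pass via --set 'agent.tls.secretName=instana-agent-tls'
  namespace: instana-agent
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```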
|
||||
|
||||
#### Provide certificate and private key
|
||||
|
||||
Alternatively, a certificate and a private key can be provided during the installation.
|
||||
The certificate and private key must be base64 encoded.
|
||||
|
||||
To use this variant, execute `helm install` with the following additional parameters:
|
||||
|
||||
```
|
||||
--set 'agent.tls.certificate=<YOUR_CERTIFICATE_BASE64_ENCODED>'
|
||||
--set 'agent.tls.key=<YOUR_PRIVATE_KEY_BASE64_ENCODED>'
|
||||
```
|
||||
|
||||
If `agent.tls.secretName` is set, then `agent.tls.certificate` and `agent.tls.key` are ignored.
|
||||
|
||||
### Development and debugging options

These options will be rarely used outside of development or debugging of the agent.

| Parameter | Description | Default |
| ----------------------- | ------------------------------------------------ | ------- |
| `agent.host.repository` | Host path to mount as the agent maven repository | `nil` |

### Kubernetes Sensor Deployment

_Note: The leader-elector and Kubernetes sensor are deprecated and will no longer be updated. Use the k8s_sensor instead._

The data about Kubernetes resources is collected by the Kubernetes sensor in the Instana agent.
With default configurations, only one Instana agent at any one time is capturing the bulk of Kubernetes data.
Which agent gets the task is coordinated by a leader-elector mechanism running inside the `leader-elector` container of the `instana-agent` pods.
However, on large Kubernetes clusters, the load on the one Instana agent that fetches the Kubernetes data can be substantial and, to some extent, has led to rather "generous" resource requests and limits for all the Instana agents across the cluster, as any one of them could become the leader at some point.

The Helm chart has a special mode, enabled by setting `k8s_sensor.deployment.enabled=true`, that schedules additional Instana agents running _only_ the Kubernetes sensor in a dedicated `k8sensor` Deployment inside the `instana-agent` namespace.
The pods containing agents that run only the Kubernetes sensor are called `k8sensor` pods.
When `k8s_sensor.deployment.enabled=true`, the `instana-agent` pods running inside the daemonset do _not_ contain the `leader-elector` container, which is instead scheduled inside the `k8sensor` pods.

The `instana-agent` and `k8sensor` pods share the same backend-related configurations (including [additional backends](#configuring-additional-backends)).

It is advised to use the `k8s_sensor.deployment.enabled=true` mode on clusters of more than 10 nodes; in that case, you may be able to reduce the amount of resources assigned to the `instana-agent` pods, especially in terms of memory, using the [Agent Pod Sizing](#agent-pod-sizing) settings.
The `k8s_sensor.deployment.pod.requests.cpu`, `k8s_sensor.deployment.pod.requests.memory`, `k8s_sensor.deployment.pod.limits.cpu` and `k8s_sensor.deployment.pod.limits.memory` settings, on the other hand, allow you to change the sizing of the `k8sensor` pods.

#### Determine Special Mode Enabled

To determine whether the Kubernetes sensor is running in a dedicated `k8sensor` deployment, list the deployments in the `instana-agent` namespace:

```
kubectl get deployments -n instana-agent
```

If `k8sensor` shows up in the list, the special mode is enabled.

#### Upgrade Kubernetes Sensor

To upgrade the Kubernetes sensor to the latest version, perform a rolling restart of the `k8sensor` deployment using the following command:

```
kubectl rollout restart deployment k8sensor -n instana-agent
```

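For instance, a `values.yaml` fragment enabling the dedicated deployment and tuning the `k8sensor` pod sizing could look as follows (the resource values are illustrative, not chart defaults):

```yaml
k8s_sensor:
  deployment:
    enabled: true
    pod:
      requests:
        cpu: "0.5"
        memory: "128Mi"
      limits:
        cpu: "1"
        memory: "512Mi"
```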
### Multiple Zones

You can list zones to use affinities and tolerations as the basis to associate a specific daemonset per tainted node pool. Each zone has the following data:

* `name` (required) - zone name.
* `mode` (optional) - Instana agent mode (e.g. APM, INFRASTRUCTURE, etc.).
* `affinity` (optional) - standard Kubernetes pod affinity list for the daemonset.
* `tolerations` (optional) - standard Kubernetes pod toleration list for the daemonset.

The following example creates two zones, an api-server zone and a worker zone:

```yaml
zones:
  - name: workers
    mode: APM
  - name: api-server
    mode: INFRASTRUCTURE
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/control-plane
                  operator: Exists
    tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
```

## Changelog

### 1.2.61
* Increase timeout and initialDelay for the Agent container
* Add OTLP ports to headless service

### 1.2.60
* Enable the k8s_sensor by default

### 1.2.59
* Introduce unique selectorLabels and commonLabels for k8s-sensor deployment

### 1.2.58
* Default to `internalTrafficPolicy` instead of `topologyKeys` for rendering of static YAMLs

### 1.2.57
* Fix vulnerability in the leader-elector image

### 1.2.49
* Add zone name to label `io.instana/zone` in daemonset

### 1.2.48
* Set env var `INSTANA_KUBERNETES_REDACT_SECRETS` to `true` if `agent.redactKubernetesSecrets` is enabled.
* Use feature PSP flag in k8sensor ClusterRole only when `podsecuritypolicy.enable` is true.

### 1.2.47
* Roll back the changes from version 1.2.46 to be compatible with the Agent Operator installation

### 1.2.46
* Use k8sensor by default.
* The `kubernetes.deployment.enabled` setting overrides the `k8s_sensor.deployment.enabled` setting.
* Use feature PSP flag in k8sensor ClusterRole only when `podsecuritypolicy.enable` is true.
* Fail if a customer specifies a proxy together with the k8sensor.
* Set env var `INSTANA_KUBERNETES_REDACT_SECRETS` to `true` if `agent.redactKubernetesSecrets` is enabled.

### 1.2.45
* Use agent key secret in k8sensor deployment.

### 1.2.44
* Add support for enabling the hot-reload of `configuration.yaml` when the default `instana-agent` ConfigMap changes
* Enablement is done via the flag `--set agent.configuration.hotreloadEnabled=true`

### 1.2.43
* Bump leader-elector image to v0.5.16 (update dependencies)

### 1.2.42
* Add support for creating multiple zones within the same cluster using affinity and tolerations.

### 1.2.41
* Add additional permissions (HPA, ResourceQuotas, etc.) to the k8sensor ClusterRole.

### 1.2.40
* Mount all system mounts with `mountPropagation: HostToContainer`.

### 1.2.39
* Add `NO_PROXY` to the k8sensor deployment to prevent api-server requests from being routed to the proxy.

### 1.2.38
* Fix issue related to the EKS version format when enabling the OTel service.

### 1.2.37
* Fix issue where `cluster_zone` is used as `cluster_name` when `k8s_sensor.deployment.enabled=true`.
* Set `HTTPS_PROXY` in the k8s deployment when proxy information is set.

### 1.2.36
* Remove Service `topologyKeys`, which was removed in Kubernetes v1.22. Replaced by `internalTrafficPolicy`, which is available with Kubernetes v1.21+.

### 1.2.35
* Fix invalid backend port for the new Kubernetes sensor (k8sensor)

### 1.2.34
* Add support for the new Kubernetes sensor (k8sensor)
* The new Kubernetes sensor can be used via the flag `--set k8s_sensor.deployment.enabled=true`

### 1.2.33
* Bump leader-elector image to v0.5.15 (update dependencies)

### 1.2.32
* Add support for containerd monitoring on TKGI

### 1.2.31
* Bump leader-elector image to v0.5.14 (update dependencies)

### 1.2.30
* Pull agent image from IBM Cloud Container Registry (icr.io/instana/agent). No code changes have been made.
* Bump leader-elector image to v0.5.13 and pull from IBM Cloud Container Registry (icr.io/instana/leader-elector). No code changes have been made.

### 1.2.29
* Add an additional port to the Instana Agent `Service` definition for the OpenTelemetry registered IANA port 4317.

### 1.2.28
* Fix deployment when `cluster.name` is not specified. This should be allowed according to the docs but previously broke the Pod when starting up.

### 1.2.27
* Update leader elector image to `0.5.10` to tone down logging and make it configurable

### 1.2.26
* Add TLS support. An existing secret of type `kubernetes.io/tls` can be used. Alternatively, provide a certificate and a private key, which creates a new secret.
* Update leader elector image version to 0.5.9 to support PPC64le

### 1.2.25
* Add `agent.pod.labels` to add custom labels to the Instana Agent pods

### 1.2.24
* Bump leader-elector image to v0.5.8, which includes a health-check endpoint. Update the `livenessProbe` correspondingly.

### 1.2.23
* Bump leader-elector image to v0.5.7 to fix a potential Golang bug in the elector

### 1.2.22
* Fix templating scope when defining multiple backends

### 1.2.21
* Internal updates

### 1.2.20
* Upgrade leader-elector image to v0.5.6 to enable usage on s390x and arm64

### 1.2.18 / 1.2.19
* Internal change on generated DaemonSet YAML from the Helm charts

### 1.2.17
* Update Pod Security Policies, as the `readOnly: true` appears not to be working for the mount points and actually causes the Agent deployment to fail when these policies are enforced in the cluster.

### 1.2.16
* Add configuration option for the `INSTANA_MVN_REPOSITORY_URL` setting on the Agent container.

### 1.2.15
* Internal pipeline changes. No significant changes to the Helm charts

### v1.2.14
* Update Agent container mounts. Make some read-only, as we don't need all mounts with read-write permissions. Additionally, add the mount for `/var/data`, which is needed in certain environments for the Agent to function properly.

### v1.2.13
* Update memory settings specifically for the Kubernetes sensor (Technical Preview)

### v1.2.11
* Simplify setup for using OpenTelemetry and the Prometheus `remote_write` endpoint using the `opentelemetry.enabled` and `prometheus.remoteWrite.enabled` settings, respectively.

### v1.2.9
* **Technical Preview:** Introduce a new mode of running the Kubernetes sensor using a dedicated deployment. See the [Kubernetes Sensor Deployment](#kubernetes-sensor-deployment) section for more information.

### v1.2.7
* Fix: Make the service opt-in, as it uses functionality (`topologyKeys`) that is available only in K8S 1.17+.

### v1.2.6
* Fix bug that might cause some OpenShift-specific resources to be created in other flavours of Kubernetes.

### v1.2.5
* Introduce the `instana-agent:instana-agent` Kubernetes service that allows you to talk to the Instana agent on the same node.

### v1.2.3
* Bug fix: Extend the built-in Pod Security Policy to cover the Docker socket mount for Tanzu Kubernetes Grid systems.

### v1.2.1
* Support OpenShift 4.x: just add `--set openshift=true` to the usual settings, and off you go :-)
* Restructure documentation for consistency and readability
* Deprecation: Helm 2 is no longer supported; the minimum Helm API version is now v2, which will make Helm 2 refuse to process the chart.

### v1.1.10
* Some linting of the whitespaces in the generated YAML

### v1.1.9
* Update the README to replace all references of `stable/instana-agent` with specifically setting the repo flag to `https://agents.instana.io/helm`.
* Add support for TKGI and PKS systems, providing a workaround for the [unexpected Docker socket location](https://github.com/cloudfoundry-incubator/kubo-release/issues/329).

### v1.1.7
* Store the cluster name in a new `cluster-name` entry of the `instana-agent` ConfigMap rather than directly as the value of `INSTANA_KUBERNETES_CLUSTER_NAME`, so that you can edit the cluster name in the ConfigMap in deployments like VMware Tanzu Kubernetes Grid in which, when installing the Instana agent over the [Instana tile](https://www.instana.com/docs/setup_and_manage/host_agent/on/vmware_tanzu), you do not have direct control over the configuration of the cluster name.
  If you edit the ConfigMap, you will need to delete the `instana-agent` pods for the new value to take effect.

### v1.1.6
* Allow user-specified memory measurement units in `agent.pod.requests.memory` and `agent.pod.limits.memory`.
  If the value set is numerical, the chart will assume it to be expressed in `Mi` for backwards compatibility.
* Expose the `agent.updateStrategy.type` and `agent.updateStrategy.rollingUpdate.maxUnavailable` settings.

### v1.1.5
Restore compatibility with Helm 2 that was broken in v1.1.4 by the usage of the `lookup` function, a function actually introduced only with Helm 3.1.
Coincidentally, this has been an _excellent_ opportunity to introduce `helm lint` to our validation pipeline and end-to-end tests with Helm 2 ;-)

### v1.1.4
* Bring-your-own secret for agent keys: using the new `agent.keysSecret` setting, you can specify the name of the secret that contains the agent key and, optionally, the download key; refer to [Bring your own Keys secret](#bring-your-own-keys-secret) for more details.
* Add support for affinities for the Instana agent pod via the `agent.pod.affinity` setting.
* Put some love into the ArtifactHub.io metadata; likely to add some more improvements related to this over time.

### v1.1.3
* No new features, just ironing some wrinkles out of our release automation.

### v1.1.2
* Improvement: Seamless support for Instana static agent images: when using an `agent.image.name` starting with `containers.instana.io`, automatically create a secret called `containers-instana-io` containing the `.dockerconfigjson` for `containers.instana.io`, using `_` as username and `agent.downloadKey` or, if missing, `agent.key` as password. If you want to control the creation of the image pull secret, or disable it, you can use `agent.image.pullSecrets`, passing to it the YAML to use for the `imagePullSecrets` field of the DaemonSet spec, including an empty array `[]` to mount no pull secrets, no matter what.

### v1.1.1
* Fix: Recreate the `instana-agent` pods when there is a change in one of the following configurations, which are mapped to the chart-managed ConfigMap:

  * `agent.configuration_yaml`
  * `agent.additional_backends`

  The pod recreation is achieved by annotating the `instana-agent` Pod with a new `instana-configuration-hash` annotation whose value is the SHA-1 hash of the configurations used to populate the ConfigMap.
  This way, when the configuration changes, the respective change in the `instana-configuration-hash` annotation will cause the agent pods to be recreated.
  This technique has been described at [1] (or, at least, that is where we learned about it) and it is pretty cool :-)

### v1.1.0
* Improvement: The `instana-agent` Helm chart has a new home at `https://agents.instana.io/helm` and `https://github.com/instana/helm-charts/instana-agent`!
  This release is functionally equivalent to `1.0.34`, but we bumped the major to denote the new location ;-)

## References

[1] ["Using Kubernetes Helm to push ConfigMap changes to your Deployments", by Sander Knape; Mar 7, 2019](https://sanderknape.com/2019/03/kubernetes-helm-configmaps-changes-deployments/)
@ -0,0 +1,5 @@
# Instana

Instana is an [APM solution](https://www.instana.com/) built for microservices that enables IT Ops to build applications faster and deliver higher quality services by automating monitoring, tracing and root cause analysis. This solution is optimized for [Rancher](https://www.instana.com/rancher/).

This chart adds the Instana Agent to all schedulable nodes in your cluster via a `DaemonSet`.
Binary file not shown.
After Width: | Height: | Size: 70 KiB

@ -0,0 +1,20 @@
from diagrams import Cluster, Diagram
from diagrams.k8s.compute import Deploy, DaemonSet, Pod
from diagrams.k8s.podconfig import ConfigMap

with Diagram("kubernetes.deployment.enabled", show=True, direction="LR"):
    ds = None
    deploy = None
    with Cluster("Namespace\ninstana-agent"):
        with Cluster("Deployment\nkubernetes-sensor"):
            deploy = Pod("2 Replicas\nKubernetes Sensor")

        with Cluster("DaemonSet\ninstana-agent"):
            ds = Pod('Per Node\nHost & APM')

        cm = ConfigMap("instana-agent")
        dcm = ConfigMap("instana-agent-deployment")

        cm >> deploy
        cm >> ds
        dcm >> deploy

@ -0,0 +1,236 @@
questions:
# Basic agent configuration
- variable: agent.key
  label: agent.key
  description: "Your Instana Agent key is the secret token which your agent uses to authenticate to Instana's servers"
  type: string
  required: true
  group: "Agent Configuration"
- variable: agent.endpointHost
  label: agent.endpointHost
  description: "The hostname of the Instana server your agents will connect to. Defaults to ingress-red-saas.instana.io for US and ROW. If in Europe, please use ingress-blue-saas.instana.io"
  type: string
  required: true
  default: "ingress-red-saas.instana.io"
  group: "Agent Configuration"
- variable: zone.name
  label: zone.name
  description: "Custom zone that detected technologies will be assigned to"
  type: string
  required: true
  group: "Agent Configuration"
# Advanced agent configuration
- variable: advancedAgentConfiguration
  description: "Show advanced configuration for the Instana Agent"
  label: Show advanced configuration
  type: boolean
  default: false
  show_subquestion_if: true
  group: "Advanced Agent Configuration"
  subquestions:
  - variable: agent.configuration_yaml
    label: agent.configuration_yaml (Optional)
    description: "Custom content for the agent configuration.yaml file in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.downloadKey
    label: agent.downloadKey (Optional)
    description: "Your Instana download key"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.endpointPort
    label: agent.endpointPort
    description: "The Agent backend port number (as a string) of the Instana server your agents will connect to"
    type: string
    required: true
    default: "443"
    group: "Advanced Agent Configuration"
  - variable: agent.image.name
    label: agent.image.name
    description: "The name of the Instana Agent container image"
    type: string
    required: true
    default: "instana/agent"
    group: "Advanced Agent Configuration"
  - variable: agent.image.tag
    label: agent.image.tag
    description: "The tag name of the Instana Agent container image"
    type: string
    required: true
    default: "latest"
    group: "Advanced Agent Configuration"
  - variable: agent.image.pullPolicy
    label: agent.image.pullPolicy
    description: "Specifies when to pull the Instana Agent image container"
    type: string
    required: true
    default: "Always"
    group: "Advanced Agent Configuration"
  - variable: agent.listenAddress
    label: agent.listenAddress (Optional)
    description: "The IP address the agent HTTP server will listen to, or '*' for all interfaces"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.mode
    label: agent.mode (Optional)
    description: "Agent mode. Possible options are: APM, INFRASTRUCTURE or AWS"
    type: enum
    options:
    - "APM"
    - "INFRASTRUCTURE"
    - "AWS"
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.annotations
    label: agent.pod.annotations (Optional)
    description: "Additional annotations to be added to the agent pods in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.limits.cpu
    label: agent.pod.limits.cpu
    description: "CPU units allocation limits for the agent pods"
    type: string
    required: true
    default: "1.5"
    group: "Advanced Agent Configuration"
  - variable: agent.pod.limits.memory
    label: agent.pod.limits.memory
    description: "Memory allocation limits in MiB for the agent pods"
    type: int
    required: true
    default: 512
    group: "Advanced Agent Configuration"
  - variable: agent.pod.proxyHost
    label: agent.pod.proxyHost (Optional)
    description: "Hostname/address of a proxy. Sets the INSTANA_AGENT_PROXY_HOST environment variable"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.proxyPort
    label: agent.pod.proxyPort (Optional)
    description: "Port of a proxy. Sets the INSTANA_AGENT_PROXY_PORT environment variable"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.proxyProtocol
    label: agent.pod.proxyProtocol (Optional)
    description: "Proxy protocol. Sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable. Supported proxy types are http, socks4, socks5"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.proxyUser
    label: agent.pod.proxyUser (Optional)
    description: "Username of the proxy auth. Sets the INSTANA_AGENT_PROXY_USER environment variable"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.proxyPassword
    label: agent.pod.proxyPassword (Optional)
    description: "Password of the proxy auth. Sets the INSTANA_AGENT_PROXY_PASSWORD environment variable"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.proxyUseDNS
    label: agent.pod.proxyUseDNS (Optional)
    description: "Boolean if proxy also does DNS. Sets the INSTANA_AGENT_PROXY_USE_DNS environment variable"
    type: enum
    options:
    - "true"
    - "false"
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.pod.requests.cpu
    label: agent.pod.requests.cpu
    description: "Requested CPU units allocation for the agent pods"
    type: string
    required: true
    default: "0.5"
    group: "Advanced Agent Configuration"
  - variable: agent.pod.requests.memory
    label: agent.pod.requests.memory
    description: "Requested memory allocation in MiB for the agent pods"
    type: int
    required: true
    default: 512
    group: "Advanced Agent Configuration"
  - variable: agent.pod.tolerations
    label: agent.pod.tolerations (Optional)
    description: "Tolerations to influence agent pod assignment in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: agent.redactKubernetesSecrets
    label: agent.redactKubernetesSecrets (Optional)
    description: "Enable additional secrets redaction for selected Kubernetes resources"
    type: boolean
    required: false
    default: false
    group: "Advanced Agent Configuration"
  - variable: cluster.name
    label: cluster.name (Optional)
    description: "The name that will be assigned to this cluster in Instana. See the 'Installing the Chart' section in the 'Detailed Descriptions' tab for more details"
    type: string
    required: false
    group: "Advanced Agent Configuration"
  - variable: leaderElector.image.name
    label: leaderElector.image.name
    description: "The name of the leader elector container image"
    type: string
    required: true
    default: "instana/leader-elector"
    group: "Advanced Agent Configuration"
  - variable: leaderElector.image.tag
    label: leaderElector.image.tag
    description: "The tag name of the leader elector container image"
    type: string
    required: true
    default: "0.5.4"
    group: "Advanced Agent Configuration"
  - variable: leaderElector.port
    label: leaderElector.port
    description: "The port on which the leader elector sidecar is exposed"
    type: int
    required: true
    default: 42655
    group: "Advanced Agent Configuration"
- variable: podSecurityPolicy.enable
  label: podSecurityPolicy.enable (Optional)
  description: "Specifies whether a PodSecurityPolicy should be authorized for the Instana Agent pods. Requires `rbac.create` to also be `true`"
  type: boolean
  show_if: "rbac.create=true"
  required: false
  default: false
  group: "Pod Security Policy Configuration"
- variable: podSecurityPolicy.name
  label: podSecurityPolicy.name (Optional)
  description: "The name of an existing PodSecurityPolicy you would like to authorize for the Instana Agent pods. If not set and `podSecurityPolicy.enable` is `true`, a PodSecurityPolicy will be created with a name generated using the fullname template"
  type: string
  show_if: "rbac.create=true&&podSecurityPolicy.enable=true"
  required: false
  group: "Pod Security Policy Configuration"
- variable: rbac.create
  label: rbac.create
  description: "Specifies whether RBAC resources should be created"
  type: boolean
  required: true
  default: true
  group: "RBAC Configuration"
- variable: serviceAccount.create
  label: serviceAccount.create
  description: "Specifies whether a ServiceAccount should be created"
  type: boolean
  required: true
  default: true
  show_subquestion_if: true
  group: "RBAC Configuration"
  subquestions:
  - variable: serviceAccount.name
    label: Name of the ServiceAccount (Optional)
    description: "The name of the ServiceAccount to use. If not set and `serviceAccount.create` is true, a name is generated using the fullname template."
    type: string
    required: false
    group: "RBAC Configuration"

@ -0,0 +1,71 @@
{{- if (and (not (or .Values.agent.key .Values.agent.keysSecret )) (and (not .Values.zone.name) (not .Values.cluster.name))) }}
##############################################################################
#### ERROR: You did not specify your secret agent key. ####
#### ERROR: You also did not specify a zone or name for this cluster. ####
##############################################################################

This agent deployment will be incomplete until you set your agent key and zone or name for this cluster:

    helm upgrade {{ .Release.Name }} --reuse-values \
      --repo https://agents.instana.io/helm \
      --set agent.key=$(YOUR_SECRET_AGENT_KEY) \
      --set zone.name=$(YOUR_ZONE_NAME) instana-agent

Alternatively, you may specify a cluster name and the zone will be detected from availability zone information on the host:

    helm upgrade {{ .Release.Name }} --reuse-values \
      --repo https://agents.instana.io/helm \
      --set agent.key=$(YOUR_SECRET_AGENT_KEY) \
      --set cluster.name=$(YOUR_CLUSTER_NAME) instana-agent

- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.
- YOUR_ZONE_NAME should be the zone that detected technologies will be assigned to.
- YOUR_CLUSTER_NAME should be the custom name of your cluster.

At least one of zone.name or cluster.name is required. This cluster will be reported with the name of the zone unless you specify a cluster name.

{{- else if (and (not .Values.zone.name) (not .Values.cluster.name)) }}
##############################################################################
#### ERROR: You did not specify a zone or name for this cluster. ####
##############################################################################

This agent deployment will be incomplete until you set a zone for this cluster:

    helm upgrade {{ .Release.Name }} --reuse-values \
      --repo https://agents.instana.io/helm \
      --set zone.name=$(YOUR_ZONE_NAME) instana-agent

Alternatively, you may specify a cluster name and the zone will be detected from availability zone information on the host:

    helm upgrade {{ .Release.Name }} --reuse-values \
      --repo https://agents.instana.io/helm \
      --set cluster.name=$(YOUR_CLUSTER_NAME) instana-agent

- YOUR_ZONE_NAME should be the zone that detected technologies will be assigned to.
- YOUR_CLUSTER_NAME should be the custom name of your cluster.

At least one of zone.name or cluster.name is required. This cluster will be reported with the name of the zone unless you specify a cluster name.

{{- else if not (or .Values.agent.key .Values.agent.keysSecret )}}
##############################################################################
#### ERROR: You did not specify your secret agent key. ####
##############################################################################

This agent deployment will be incomplete until you set your agent key:

    helm upgrade {{ .Release.Name }} --reuse-values \
      --repo https://agents.instana.io/helm \
      --set agent.key=$(YOUR_SECRET_AGENT_KEY) instana-agent

- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.

{{- else -}}
It may take a few moments for the agents to fully deploy. You can see what agents are running by listing resources in the {{ .Release.Namespace }} namespace:

    kubectl get all -n {{ .Release.Namespace }}

You can get the logs for all of the agents with `kubectl logs`:

    kubectl logs -l app.kubernetes.io/name={{ .Release.Name }} -n {{ .Release.Namespace }} -c instana-agent

{{- end }}

@ -0,0 +1,381 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "instana-agent.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "instana-agent.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "instana-agent.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
The name of the ServiceAccount used.
*/}}
{{- define "instana-agent.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "instana-agent.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

{{/*
The name of the PodSecurityPolicy used.
*/}}
{{- define "instana-agent.podSecurityPolicyName" -}}
{{- if .Values.podSecurityPolicy.enable -}}
{{ default (include "instana-agent.fullname" .) .Values.podSecurityPolicy.name }}
{{- end -}}
{{- end -}}

{{/*
Prints out the name of the secret to use to retrieve the agent key
*/}}
{{- define "instana-agent.keysSecretName" -}}
{{- if .Values.agent.keysSecret -}}
{{ .Values.agent.keysSecret }}
{{- else -}}
{{ template "instana-agent.fullname" . }}
{{- end -}}
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Add Helm metadata to resource labels.
|
||||
*/}}
|
||||
{{- define "instana-agent.commonLabels" -}}
|
||||
app.kubernetes.io/name: {{ include "instana-agent.name" . }}
|
||||
app.kubernetes.io/version: {{ .Chart.Version }}
|
||||
{{- if not .Values.templating }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
helm.sh/chart: {{ include "instana-agent.chart" . }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Add Helm metadata to resource labels.
|
||||
*/}}
|
||||
{{- define "k8s-sensor.commonLabels" -}}
|
||||
{{/* Following label is used to determine whether to disable the Kubernetes host sensor */}}
|
||||
app: k8sensor
|
||||
app.kubernetes.io/name: {{ include "instana-agent.name" . }}-k8s-sensor
|
||||
app.kubernetes.io/version: {{ .Chart.Version }}
|
||||
{{- if not .Values.templating }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
helm.sh/chart: {{ include "instana-agent.chart" . }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Add Helm metadata to selector labels specifically for deployments/daemonsets/statefulsets.
|
||||
*/}}
|
||||
{{- define "instana-agent.selectorLabels" -}}
|
||||
app.kubernetes.io/name: {{ include "instana-agent.name" . }}
|
||||
{{- if not .Values.templating }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Add Helm metadata to selector labels specifically for deployments/daemonsets/statefulsets.
|
||||
*/}}
|
||||
{{- define "k8s-sensor.selectorLabels" -}}
|
||||
app: k8sensor
|
||||
app.kubernetes.io/name: {{ include "instana-agent.name" . }}-k8s-sensor
|
||||
{{- if not .Values.templating }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Generates the dockerconfig for the credentials to pull from containers.instana.io
|
||||
*/}}
|
||||
{{- define "imagePullSecretContainersInstanaIo" }}
|
||||
{{- $registry := "containers.instana.io" }}
|
||||
{{- $username := "_" }}
|
||||
{{- $password := default .Values.agent.key .Values.agent.downloadKey }}
|
||||
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" $registry (printf "%s:%s" $username $password | b64enc) | b64enc }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Output limits or defaults
|
||||
*/}}
|
||||
{{- define "instana-agent.resources" -}}
|
||||
{{- $memory := default "512Mi" .memory -}}
|
||||
{{- $cpu := default 0.5 .cpu -}}
|
||||
memory: "{{ dict "memory" $memory | include "ensureMemoryMeasurement" }}"
|
||||
cpu: {{ $cpu }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Ensure a unit of memory measurement is added to the value
|
||||
*/}}
|
||||
{{- define "ensureMemoryMeasurement" }}
|
||||
{{- $value := .memory }}
|
||||
{{- if kindIs "string" $value }}
|
||||
{{- print $value }}
|
||||
{{- else }}
|
||||
{{- print ($value | toString) "Mi" }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Composes a container image from a dict containing a "name" field (required), "tag" and "digest" (both optional, if both provided, "digest" has priority)
|
||||
*/}}
|
||||
{{- define "image" }}
|
||||
{{- $name := .name }}
|
||||
{{- $tag := .tag }}
|
||||
{{- $digest := .digest }}
|
||||
{{- if $digest }}
|
||||
{{- printf "%s@%s" $name $digest }}
|
||||
{{- else if $tag }}
|
||||
{{- printf "%s:%s" $name $tag }}
|
||||
{{- else }}
|
||||
{{- print $name }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- define "volumeMountsForConfigFileInConfigMap" }}
|
||||
{{- $configMapName := (include "instana-agent.fullname" .) }}
|
||||
{{- $configMapNameSpace := .Release.Namespace }}
|
||||
{{- $configMap := tpl ( ( "{{ lookup \"v1\" \"ConfigMap\" \"map-namespace\" \"map-name\" | toYaml }}" | replace "map-namespace" $configMapNameSpace ) | replace "map-name" $configMapName ) . }}
|
||||
{{- if $configMap }}
|
||||
{{- $configMapObject := $configMap | fromYaml }}
|
||||
{{- range $key, $val := $configMapObject.data }}
|
||||
{{- if regexMatch "configuration-disable-kubernetes-sensor\\.yaml" $key }}
|
||||
{{/* Nothing to do here, this is a special case we want to ignore */}}
|
||||
{{- else if regexMatch "configuration-opentelemetry\\.yaml" $key }}
|
||||
{{/* Nothing to do here, this is a special case we want to ignore */}}
|
||||
{{- else if regexMatch "configuration-prometheus-remote-write\\.yaml" $key }}
|
||||
{{/* Nothing to do here, this is a special case we want to ignore */}}
|
||||
{{- else if regexMatch "configuration-.*\\.yaml" $key }}
|
||||
- name: configuration
|
||||
subPath: {{ $key }}
|
||||
mountPath: /opt/instana/agent/etc/instana/{{ $key }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
|
||||
{{- define "instana-agent.commonEnv" -}}
|
||||
- name: INSTANA_AGENT_LEADER_ELECTOR_PORT
|
||||
value: {{ .Values.leaderElector.port | quote }}
|
||||
{{- if .Values.zone.name }}
|
||||
- name: INSTANA_ZONE
|
||||
value: {{ .Values.zone.name | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.cluster.name }}
|
||||
- name: INSTANA_KUBERNETES_CLUSTER_NAME
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: {{ template "instana-agent.fullname" . }}
|
||||
key: cluster_name
|
||||
{{- end }}
|
||||
- name: INSTANA_AGENT_ENDPOINT
|
||||
value: {{ .Values.agent.endpointHost | quote }}
|
||||
- name: INSTANA_AGENT_ENDPOINT_PORT
|
||||
value: {{ .Values.agent.endpointPort | quote }}
|
||||
- name: INSTANA_AGENT_KEY
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ template "instana-agent.keysSecretName" . }}
|
||||
key: key
|
||||
- name: INSTANA_DOWNLOAD_KEY
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ template "instana-agent.keysSecretName" . }}
|
||||
key: downloadKey
|
||||
optional: true
|
||||
{{- if .Values.agent.instanaMvnRepoUrl }}
|
||||
- name: INSTANA_MVN_REPOSITORY_URL
|
||||
value: {{ .Values.agent.instanaMvnRepoUrl | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.instanaMvnRepoFeaturesPath }}
|
||||
- name: INSTANA_MVN_REPOSITORY_FEATURES_PATH
|
||||
value: {{ .Values.agent.instanaMvnRepoFeaturesPath | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.instanaMvnRepoSharedPath }}
|
||||
- name: INSTANA_MVN_REPOSITORY_SHARED_PATH
|
||||
value: {{ .Values.agent.instanaMvnRepoSharedPath | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.proxyHost }}
|
||||
- name: INSTANA_AGENT_PROXY_HOST
|
||||
value: {{ .Values.agent.proxyHost | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.proxyPort }}
|
||||
- name: INSTANA_AGENT_PROXY_PORT
|
||||
value: {{ .Values.agent.proxyPort | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.proxyProtocol }}
|
||||
- name: INSTANA_AGENT_PROXY_PROTOCOL
|
||||
value: {{ .Values.agent.proxyProtocol | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.proxyUser }}
|
||||
- name: INSTANA_AGENT_PROXY_USER
|
||||
value: {{ .Values.agent.proxyUser | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.proxyPassword }}
|
||||
- name: INSTANA_AGENT_PROXY_PASSWORD
|
||||
value: {{ .Values.agent.proxyPassword | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.proxyUseDNS }}
|
||||
- name: INSTANA_AGENT_PROXY_USE_DNS
|
||||
value: {{ .Values.agent.proxyUseDNS | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.listenAddress }}
|
||||
- name: INSTANA_AGENT_HTTP_LISTEN
|
||||
value: {{ .Values.agent.listenAddress | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.redactKubernetesSecrets }}
|
||||
- name: INSTANA_KUBERNETES_REDACT_SECRETS
|
||||
value: {{ .Values.agent.redactKubernetesSecrets | quote }}
|
||||
{{- end }}
|
||||
- name: INSTANA_AGENT_POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_IP
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: status.podIP
|
||||
{{- range $key, $value := .Values.agent.env }}
|
||||
- name: {{ $key }}
|
||||
value: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "instana-agent.commonVolumeMounts" -}}
|
||||
{{- if .Values.agent.host.repository }}
|
||||
- name: repo
|
||||
mountPath: /opt/instana/agent/data/repo
|
||||
{{- end }}
|
||||
{{- if .Values.agent.additionalBackends -}}
|
||||
{{- range $index,$backend := .Values.agent.additionalBackends }}
|
||||
{{- $backendIndex := add $index 2 }}
|
||||
- name: additional-backend-{{$backendIndex}}
|
||||
subPath: additional-backend-{{$backendIndex}}
|
||||
mountPath: /opt/instana/agent/etc/instana/com.instana.agent.main.sender.Backend-{{$backendIndex}}.cfg
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "instana-agent.commonVolumes" -}}
|
||||
- name: configuration
|
||||
configMap:
|
||||
name: {{ include "instana-agent.fullname" . }}
|
||||
{{- if .Values.agent.host.repository }}
|
||||
- name: repo
|
||||
hostPath:
|
||||
path: {{ .Values.agent.host.repository }}
|
||||
{{- end }}
|
||||
{{- if .Values.agent.additionalBackends }}
|
||||
{{- range $index,$backend := .Values.agent.additionalBackends }}
|
||||
{{ $backendIndex := add $index 2 -}}
|
||||
- name: additional-backend-{{$backendIndex}}
|
||||
configMap:
|
||||
name: {{ include "instana-agent.fullname" $ }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "instana-agent.livenessProbe" -}}
|
||||
httpGet:
|
||||
host: 127.0.0.1 # localhost because Pod has hostNetwork=true
|
||||
path: /status
|
||||
port: 42699
|
||||
initialDelaySeconds: 600 # startupProbe isn't available before K8s 1.16
|
||||
timeoutSeconds: 5
|
||||
periodSeconds: 10
|
||||
failureThreshold: 3
|
||||
{{- end -}}
|
||||
|
||||
{{- define "leader-elector.container" -}}
|
||||
- name: leader-elector
|
||||
image: {{ include "image" .Values.leaderElector.image | quote }}
|
||||
env:
|
||||
- name: INSTANA_AGENT_POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
command:
|
||||
- "/busybox/sh"
|
||||
- "-c"
|
||||
- "sleep 12 && /app/server --election=instana --http=localhost:{{ .Values.leaderElector.port }} --id=$(INSTANA_AGENT_POD_NAME)"
|
||||
resources:
|
||||
requests:
|
||||
cpu: 0.1
|
||||
memory: "64Mi"
|
||||
livenessProbe:
|
||||
httpGet: # The leader elector /health endpoint requires version 0.5.8 or later; earlier versions always return 200 OK
|
||||
host: 127.0.0.1 # localhost because Pod has hostNetwork=true
|
||||
path: /health
|
||||
port: {{ .Values.leaderElector.port }}
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 3
|
||||
periodSeconds: 3
|
||||
failureThreshold: 3
|
||||
ports:
|
||||
- containerPort: {{ .Values.leaderElector.port }}
|
||||
{{- end -}}
|
||||
|
||||
{{- define "instana-agent.tls-volume" -}}
|
||||
- name: {{ include "instana-agent.fullname" . }}-tls
|
||||
secret:
|
||||
secretName: {{ .Values.agent.tls.secretName | default (printf "%s-tls" (include "instana-agent.fullname" .)) }}
|
||||
defaultMode: 0440
|
||||
{{- end -}}
|
||||
|
||||
{{- define "instana-agent.tls-volumeMounts" -}}
|
||||
- name: {{ include "instana-agent.fullname" . }}-tls
|
||||
mountPath: /opt/instana/agent/etc/certs
|
||||
readOnly: true
|
||||
{{- end -}}
|
||||
|
||||
|
||||
{{- define "k8sensor.commonEnv" -}}
|
||||
{{- range $key, $value := .Values.agent.env }}
|
||||
- name: {{ $key }}
|
||||
value: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/* NOTE: These are nested templates, not functions. If this were formatted for readability, */}}
|
||||
{{/* it would not work as needed, since all of the newlines and spaces would be included in */}}
|
||||
{{/* the output. Helm is not fundamentally designed for what we are doing here. */}}
|
||||
|
||||
{{- define "instana-agent.opentelemetry.grpc.isEnabled" -}}{{ if hasKey .Values "opentelemetry" }}{{ if hasKey .Values.opentelemetry "grpc" }}{{ if hasKey .Values.opentelemetry.grpc "enabled" }}{{ .Values.opentelemetry.grpc.enabled }}{{ else }}{{ true }}{{ end }}{{ else }}{{ if hasKey .Values.opentelemetry "enabled" }}{{ .Values.opentelemetry.enabled }}{{ else }}{{ false }}{{ end }}{{ end }}{{ else }}{{ false }}{{ end }}{{- end -}}
|
||||
|
||||
{{- define "instana-agent.opentelemetry.http.isEnabled" -}}{{ if hasKey .Values "opentelemetry" }}{{ if hasKey .Values.opentelemetry "http" }}{{ if hasKey .Values.opentelemetry.http "enabled" }}{{ .Values.opentelemetry.http.enabled }}{{ else }}{{ true }}{{ end }}{{ else }}{{ false }}{{ end }}{{ else }}{{ false }}{{ end }}{{- end -}}
|
||||
|
||||
{{- define "kubeVersion" -}}
|
||||
{{- if (regexMatch "\\d+\\.\\d+\\.\\d+-(?:eks|gke).+" .Capabilities.KubeVersion.Version) -}}
|
||||
{{- regexFind "\\d+\\.\\d+\\.\\d+" .Capabilities.KubeVersion.Version -}}
|
||||
{{- else -}}
|
||||
{{- print .Capabilities.KubeVersion.Version }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
|
@ -0,0 +1,63 @@
|
|||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ template "instana-agent.fullname" . }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
labels:
|
||||
{{- include "instana-agent.commonLabels" . | nindent 4 }}
|
||||
data:
|
||||
{{- if .Values.cluster.name }}
|
||||
cluster_name: {{ .Values.cluster.name | quote }}
|
||||
{{- end }}
|
||||
configuration.yaml: |
|
||||
|
||||
{{- if .Values.agent.configuration_yaml }}
|
||||
{{ .Values.agent.configuration_yaml | nindent 4 }}
|
||||
{{- end }}
|
||||
|
||||
{{ if or (eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .)) (eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .)) }}
|
||||
configuration-opentelemetry.yaml: |
|
||||
com.instana.plugin.opentelemetry: {{ toYaml .Values.opentelemetry | nindent 6 }}
|
||||
{{ end }}
|
||||
|
||||
{{- if .Values.prometheus.remoteWrite.enabled }}
|
||||
configuration-prometheus-remote-write.yaml: |
|
||||
com.instana.plugin.prometheus:
|
||||
remote_write:
|
||||
enabled: true
|
||||
{{- end }}
|
||||
|
||||
{{- if or .Values.kubernetes.deployment.enabled .Values.k8s_sensor.deployment.enabled }}
|
||||
configuration-disable-kubernetes-sensor.yaml: |
|
||||
com.instana.plugin.kubernetes:
|
||||
enabled: false
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.agent.additionalBackends }}
|
||||
{{- $proxyHost := .Values.agent.proxyHost }}
|
||||
{{- $proxyPort := .Values.agent.proxyPort }}
|
||||
{{- $proxyUser := .Values.agent.proxyUser }}
|
||||
{{- $proxyPassword := .Values.agent.proxyPassword }}
|
||||
{{- $proxyUseDNS := .Values.agent.proxyUseDNS }}
|
||||
{{- range $index,$backend := .Values.agent.additionalBackends }}
|
||||
{{ $backendIndex := add $index 2 -}}
|
||||
additional-backend-{{$backendIndex}}: |
|
||||
host={{ .endpointHost }}
|
||||
port={{ default 443 .endpointPort }}
|
||||
key={{ .key }}
|
||||
protocol=HTTP/2
|
||||
{{- if $proxyHost }}
|
||||
proxy.type=HTTP
|
||||
proxy.host={{ $proxyHost }}
|
||||
proxy.port={{ $proxyPort }}
|
||||
{{- if $proxyUser }}
|
||||
proxy.user={{ $proxyUser }}
|
||||
proxy.password={{ $proxyPassword }}
|
||||
{{- end }}
|
||||
{{- if $proxyUseDNS }}
|
||||
proxyUseDNS=true
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,217 @@
|
|||
{{- if or .Values.agent.key .Values.agent.keysSecret }}
|
||||
{{- if and .Values.cluster.name .Values.zones }}
|
||||
{{ $opentelemetryIsEnabled := (or (eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .)) (eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .)) )}}
|
||||
{{- range $.Values.zones }}
|
||||
{{- $fullname := printf "%s-%s" (include "instana-agent.fullname" $) .name -}}
|
||||
{{- $tolerations := .tolerations -}}
|
||||
{{- $affinity := .affinity -}}
|
||||
{{- $mode := .mode -}}
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: {{ $fullname }}
|
||||
namespace: {{ $.Release.Namespace }}
|
||||
labels:
|
||||
{{- include "instana-agent.commonLabels" $ | nindent 4 }}
|
||||
io.instana/zone: {{.name}}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "instana-agent.selectorLabels" $ | nindent 6 }}
|
||||
io.instana/zone: {{.name}}
|
||||
updateStrategy:
|
||||
type: {{ $.Values.agent.updateStrategy.type }}
|
||||
{{- if eq $.Values.agent.updateStrategy.type "RollingUpdate" }}
|
||||
rollingUpdate:
|
||||
maxUnavailable: {{ $.Values.agent.updateStrategy.rollingUpdate.maxUnavailable }}
|
||||
{{- end }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
io.instana/zone: {{.name}}
|
||||
{{- if $.Values.agent.pod.labels }}
|
||||
{{- toYaml $.Values.agent.pod.labels | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- include "instana-agent.commonLabels" $ | nindent 8 }}
|
||||
instana/agent-mode: {{ $.Values.agent.mode | default "APM" | quote }}
|
||||
annotations:
|
||||
{{- if $.Values.agent.pod.annotations }}
|
||||
{{- toYaml $.Values.agent.pod.annotations | nindent 8 }}
|
||||
{{- end }}
|
||||
# To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here
|
||||
# Unfortunately, we cannot use the lookup function to check on the values in the configmap, otherwise we break Helm < 3.2
|
||||
instana-configuration-hash: {{ $.Values.agent.configuration_yaml | cat ";" | cat ( join "," $.Values.agent.additionalBackends ) | sha1sum }}
|
||||
spec:
|
||||
serviceAccountName: {{ template "instana-agent.serviceAccountName" $ }}
|
||||
{{- if $.Values.agent.pod.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{- range $key, $value := $.Values.agent.pod.nodeSelector }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
hostNetwork: true
|
||||
hostPID: true
|
||||
{{- if $.Values.agent.pod.priorityClassName }}
|
||||
priorityClassName: {{ $.Values.agent.pod.priorityClassName | quote }}
|
||||
{{- end }}
|
||||
dnsPolicy: ClusterFirstWithHostNet
|
||||
{{- if typeIs "[]interface {}" $.Values.agent.image.pullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- toYaml $.Values.agent.image.pullSecrets | nindent 8 }}
|
||||
{{- else if $.Values.agent.image.name | hasPrefix "containers.instana.io" }}
|
||||
imagePullSecrets:
|
||||
- name: containers-instana-io
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: instana-agent
|
||||
image: {{ include "image" $.Values.agent.image | quote }}
|
||||
imagePullPolicy: {{ $.Values.agent.image.pullPolicy }}
|
||||
env:
|
||||
- name: INSTANA_ZONE
|
||||
value: {{ .name | quote }}
|
||||
{{- if $mode }}
|
||||
- name: INSTANA_AGENT_MODE
|
||||
value: {{ $mode | quote }}
|
||||
{{- end }}
|
||||
{{- include "instana-agent.commonEnv" $ | nindent 12 }}
|
||||
securityContext:
|
||||
privileged: true
|
||||
volumeMounts:
|
||||
- name: dev
|
||||
mountPath: /dev
|
||||
mountPropagation: HostToContainer
|
||||
- name: run
|
||||
mountPath: /run
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-run
|
||||
mountPath: /var/run
|
||||
mountPropagation: HostToContainer
|
||||
{{- if not (or $.Values.openshift ($.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
|
||||
- name: var-run-kubo
|
||||
mountPath: /var/vcap/sys/run/docker
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-run-containerd
|
||||
mountPath: /var/vcap/sys/run/containerd
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-containerd-config
|
||||
mountPath: /var/vcap/jobs/containerd/config
|
||||
mountPropagation: HostToContainer
|
||||
{{- end }}
|
||||
- name: sys
|
||||
mountPath: /sys
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-log
|
||||
mountPath: /var/log
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-lib
|
||||
mountPath: /var/lib
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-data
|
||||
mountPath: /var/data
|
||||
mountPropagation: HostToContainer
|
||||
- name: machine-id
|
||||
mountPath: /etc/machine-id
|
||||
- name: configuration
|
||||
{{- if $.Values.agent.configuration.hotreloadEnabled }}
|
||||
mountPath: /root/
|
||||
{{- else }}
|
||||
subPath: configuration.yaml
|
||||
mountPath: /root/configuration.yaml
|
||||
{{- end }}
|
||||
{{- if $.Values.agent.tls }}
|
||||
{{- if or $.Values.agent.tls.secretName (and $.Values.agent.tls.certificate $.Values.agent.tls.key) }}
|
||||
{{- include "instana-agent.tls-volumeMounts" $ | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- include "instana-agent.commonVolumeMounts" $ | nindent 12 }}
|
||||
{{- if $.Values.agent.configuration.autoMountConfigEntries }}
|
||||
{{- include "volumeMountsForConfigFileInConfigMap" $ | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- if or $.Values.kubernetes.deployment.enabled $.Values.k8s_sensor.deployment.enabled }}
|
||||
- name: configuration
|
||||
subPath: configuration-disable-kubernetes-sensor.yaml
|
||||
mountPath: /opt/instana/agent/etc/instana/configuration-disable-kubernetes-sensor.yaml
|
||||
{{- end }}
|
||||
{{- if $opentelemetryIsEnabled }}
|
||||
- name: configuration
|
||||
subPath: configuration-opentelemetry.yaml
|
||||
mountPath: /opt/instana/agent/etc/instana/configuration-opentelemetry.yaml
|
||||
{{- end }}
|
||||
{{- if $.Values.prometheus.remoteWrite.enabled }}
|
||||
- name: configuration
|
||||
subPath: configuration-prometheus-remote-write.yaml
|
||||
mountPath: /opt/instana/agent/etc/instana/configuration-prometheus-remote-write.yaml
|
||||
{{- end }}
|
||||
livenessProbe:
|
||||
{{- include "instana-agent.livenessProbe" $ | nindent 12 }}
|
||||
resources:
|
||||
requests:
|
||||
{{- include "instana-agent.resources" $.Values.agent.pod.requests | nindent 14 }}
|
||||
limits:
|
||||
{{- include "instana-agent.resources" $.Values.agent.pod.limits | nindent 14 }}
|
||||
ports:
|
||||
- containerPort: 42699
|
||||
{{- if and (not $.Values.kubernetes.deployment.enabled) (not $.Values.k8s_sensor.deployment.enabled) }}
|
||||
{{- include "leader-elector.container" $ | nindent 8 }}
|
||||
{{- end }}
|
||||
|
||||
{{ if $tolerations -}}
|
||||
tolerations:
|
||||
{{- toYaml $tolerations | nindent 8 }}
|
||||
{{- end }}
|
||||
|
||||
{{ if $affinity -}}
|
||||
affinity:
|
||||
{{- toYaml $affinity | nindent 8 }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: dev
|
||||
hostPath:
|
||||
path: /dev
|
||||
- name: run
|
||||
hostPath:
|
||||
path: /run
|
||||
- name: var-run
|
||||
hostPath:
|
||||
path: /var/run
|
||||
{{- if not (or $.Values.openshift ($.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
|
||||
# Systems based on the kubo BOSH release (that is, VMware TKGI and older PKS) do not keep the Docker
|
||||
# socket in /var/run/docker.sock , but rather in /var/vcap/sys/run/docker/docker.sock .
|
||||
# The Agent images will check if there is a Docker socket here and, if so, adjust the symlinking before
|
||||
# starting the Agent. See https://github.com/cloudfoundry-incubator/kubo-release/issues/329
|
||||
- name: var-run-kubo
|
||||
hostPath:
|
||||
path: /var/vcap/sys/run/docker
|
||||
- name: var-run-containerd
|
||||
hostPath:
|
||||
path: /var/vcap/sys/run/containerd
|
||||
- name: var-containerd-config
|
||||
hostPath:
|
||||
path: /var/vcap/jobs/containerd/config
|
||||
{{- end }}
|
||||
- name: sys
|
||||
hostPath:
|
||||
path: /sys
|
||||
- name: var-log
|
||||
hostPath:
|
||||
path: /var/log
|
||||
- name: var-lib
|
||||
hostPath:
|
||||
path: /var/lib
|
||||
- name: var-data
|
||||
hostPath:
|
||||
path: /var/data
|
||||
- name: machine-id
|
||||
hostPath:
|
||||
path: /etc/machine-id
|
||||
{{- if $.Values.agent.tls }}
|
||||
{{- if or $.Values.agent.tls.secretName (and $.Values.agent.tls.certificate $.Values.agent.tls.key) }}
|
||||
{{- include "instana-agent.tls-volume" . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- include "instana-agent.commonVolumes" $ | nindent 8 }}
|
||||
{{ printf "\n" }}
|
||||
{{ end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
|
@ -0,0 +1,204 @@
|
|||
# TODO: Combine into single template with agent-daemonset-with-zones.yaml
|
||||
{{- if or .Values.agent.key .Values.agent.keysSecret }}
|
||||
{{- if and (or .Values.zone.name .Values.cluster.name) (not .Values.zones) }}
|
||||
{{- $fullname := include "instana-agent.fullname" . -}}
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: {{ $fullname }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
labels:
|
||||
{{- include "instana-agent.commonLabels" . | nindent 4 }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "instana-agent.selectorLabels" . | nindent 6 }}
|
||||
updateStrategy:
|
||||
type: {{ .Values.agent.updateStrategy.type }}
|
||||
{{- if eq .Values.agent.updateStrategy.type "RollingUpdate" }}
|
||||
rollingUpdate:
|
||||
maxUnavailable: {{ .Values.agent.updateStrategy.rollingUpdate.maxUnavailable }}
|
||||
{{- end }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
{{- if .Values.agent.pod.labels }}
|
||||
{{- toYaml .Values.agent.pod.labels | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- include "instana-agent.commonLabels" . | nindent 8 }}
|
||||
instana/agent-mode: {{ .Values.agent.mode | default "APM" | quote }}
|
||||
annotations:
|
||||
{{- if .Values.agent.pod.annotations }}
|
||||
{{- toYaml .Values.agent.pod.annotations | nindent 8 }}
|
||||
{{- end }}
|
||||
# To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here
|
||||
# Unfortunately, we cannot use the lookup function to check on the values in the configmap, otherwise we break Helm < 3.2
|
||||
instana-configuration-hash: {{ .Values.agent.configuration_yaml | cat ";" | cat ( join "," .Values.agent.additionalBackends ) | sha1sum }}
|
||||
spec:
|
||||
serviceAccountName: {{ template "instana-agent.serviceAccountName" . }}
|
||||
{{- if .Values.agent.pod.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{- range $key, $value := .Values.agent.pod.nodeSelector }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
hostNetwork: true
|
||||
hostPID: true
|
||||
{{- if .Values.agent.pod.priorityClassName }}
|
||||
priorityClassName: {{ .Values.agent.pod.priorityClassName | quote }}
|
||||
{{- end }}
|
||||
dnsPolicy: ClusterFirstWithHostNet
|
||||
{{- if typeIs "[]interface {}" .Values.agent.image.pullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- toYaml .Values.agent.image.pullSecrets | nindent 8 }}
|
||||
{{- else if .Values.agent.image.name | hasPrefix "containers.instana.io" }}
|
||||
imagePullSecrets:
|
||||
- name: containers-instana-io
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: instana-agent
|
||||
image: {{ include "image" .Values.agent.image | quote }}
|
||||
imagePullPolicy: {{ .Values.agent.image.pullPolicy }}
|
||||
env:
|
||||
{{- if .Values.agent.mode }}
|
||||
- name: INSTANA_AGENT_MODE
|
||||
value: {{ .Values.agent.mode | quote }}
|
||||
{{- end }}
|
||||
{{- include "instana-agent.commonEnv" . | nindent 12 }}
|
||||
securityContext:
|
||||
privileged: true
|
||||
volumeMounts:
|
||||
- name: dev
|
||||
mountPath: /dev
|
||||
mountPropagation: HostToContainer
|
||||
- name: run
|
||||
mountPath: /run
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-run
|
||||
mountPath: /var/run
|
||||
mountPropagation: HostToContainer
|
||||
{{- if not (or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
|
||||
- name: var-run-kubo
|
||||
mountPath: /var/vcap/sys/run/docker
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-run-containerd
|
||||
mountPath: /var/vcap/sys/run/containerd
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-containerd-config
|
||||
mountPath: /var/vcap/jobs/containerd/config
|
||||
mountPropagation: HostToContainer
|
||||
{{- end }}
|
||||
- name: sys
|
||||
mountPath: /sys
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-log
|
||||
mountPath: /var/log
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-lib
|
||||
mountPath: /var/lib
|
||||
mountPropagation: HostToContainer
|
||||
- name: var-data
              mountPath: /var/data
              mountPropagation: HostToContainer
            - name: machine-id
              mountPath: /etc/machine-id
            - name: configuration
            {{- if $.Values.agent.configuration.hotreloadEnabled }}
              mountPath: /root/
            {{- else }}
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
            {{- end }}
            {{- if .Values.agent.tls }}
            {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
            {{- include "instana-agent.tls-volumeMounts" . | nindent 12 }}
            {{- end }}
            {{- end }}
            {{- include "instana-agent.commonVolumeMounts" . | nindent 12 }}
            {{- if .Values.agent.configuration.autoMountConfigEntries }}
            {{- include "volumeMountsForConfigFileInConfigMap" . | nindent 12 }}
            {{- end }}
            {{- if or .Values.kubernetes.deployment.enabled .Values.k8s_sensor.deployment.enabled }}
            - name: configuration # TODO: These shouldn't have the same name
              subPath: configuration-disable-kubernetes-sensor.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-disable-kubernetes-sensor.yaml
            {{- end }}
            {{- if or (eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .)) (eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .)) }}
            - name: configuration
              subPath: configuration-opentelemetry.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-opentelemetry.yaml
            {{- end }}
            {{- if .Values.prometheus.remoteWrite.enabled }}
            - name: configuration
              subPath: configuration-prometheus-remote-write.yaml
              mountPath: /opt/instana/agent/etc/instana/configuration-prometheus-remote-write.yaml
            {{- end }}
          livenessProbe:
            {{- include "instana-agent.livenessProbe" . | nindent 12 }}
          resources:
            requests:
              {{- include "instana-agent.resources" .Values.agent.pod.requests | nindent 14 }}
            limits:
              {{- include "instana-agent.resources" .Values.agent.pod.limits | nindent 14 }}
          ports:
            - containerPort: 42699
        {{- if and (not .Values.kubernetes.deployment.enabled) (not .Values.k8s_sensor.deployment.enabled) }}
        {{- include "leader-elector.container" . | nindent 8 }}
        {{- end }}
      {{- if .Values.agent.pod.tolerations }}
      tolerations:
        {{- toYaml .Values.agent.pod.tolerations | nindent 8 }}
      {{- end }}
      {{- if .Values.agent.pod.affinity }}
      affinity:
        {{- toYaml .Values.agent.pod.affinity | nindent 8 }}
      {{- end }}
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: run
          hostPath:
            path: /run
        - name: var-run
          hostPath:
            path: /var/run
        {{- if not (or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
        # Systems based on the kubo BOSH release (that is, VMware TKGI and older PKS) do not keep the Docker
        # socket in /var/run/docker.sock, but rather in /var/vcap/sys/run/docker/docker.sock.
        # The Agent images will check if there is a Docker socket here and, if so, adjust the symlinking before
        # starting the Agent. See https://github.com/cloudfoundry-incubator/kubo-release/issues/329
        - name: var-run-kubo
          hostPath:
            path: /var/vcap/sys/run/docker
        - name: var-run-containerd
          hostPath:
            path: /var/vcap/sys/run/containerd
        - name: var-containerd-config
          hostPath:
            path: /var/vcap/jobs/containerd/config
        {{- end }}
        - name: sys
          hostPath:
            path: /sys
        - name: var-log
          hostPath:
            path: /var/log
        - name: var-lib
          hostPath:
            path: /var/lib
        - name: var-data
          hostPath:
            path: /var/data
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        {{- if .Values.agent.tls }}
        {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
        {{- include "instana-agent.tls-volume" . | nindent 8 }}
        {{- end }}
        {{- end }}
        {{- include "instana-agent.commonVolumes" . | nindent 8 }}
{{- end }}
{{- end }}
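The `agent.pod.tolerations` and `agent.pod.affinity` values referenced above are passed verbatim into the DaemonSet's pod spec via `toYaml`. As an illustration (the taint key shown is only an example, not something the chart sets), a values file like the following would let the agent schedule onto tainted control-plane nodes:

```yaml
agent:
  pod:
    tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```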
@ -0,0 +1,77 @@
{{- if or .Values.rbac.create (or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "instana-agent.fullname" . }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
rules:
- nonResourceURLs:
    - "/version"
    - "/healthz"
  verbs: ["get"]
  {{- if or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1") }}
  apiGroups: []
  resources: []
  {{- end }}
- apiGroups: ["batch"]
  resources:
    - "jobs"
    - "cronjobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
    - "deployments"
    - "replicasets"
    - "ingresses"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
    - "deployments"
    - "replicasets"
    - "daemonsets"
    - "statefulsets"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "namespaces"
    - "events"
    - "services"
    - "endpoints"
    - "nodes"
    - "pods"
    - "replicationcontrollers"
    - "componentstatuses"
    - "resourcequotas"
    - "persistentvolumes"
    - "persistentvolumeclaims"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "endpoints"
  verbs: ["create", "update", "patch"]
- apiGroups: ["networking.k8s.io"]
  resources:
    - "ingresses"
  verbs: ["get", "list", "watch"]
{{- if or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1") }}
- apiGroups: ["apps.openshift.io"]
  resources:
    - "deploymentconfigs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["security.openshift.io"]
  resourceNames: ["privileged"]
  resources: ["securitycontextconstraints"]
  verbs: ["use"]
{{- end -}}
{{- if .Values.podSecurityPolicy.enable }}
{{- if semverCompare "< 1.25.x" (include "kubeVersion" .) }}
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames:
    - {{ template "instana-agent.podSecurityPolicyName" . }}
{{- end }}
{{- end }}
{{- end }}
@ -0,0 +1,17 @@
{{- if or .Values.rbac.create (or .Values.openshift (.Capabilities.APIVersions.Has "apps.openshift.io/v1")) }}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "instana-agent.fullname" . }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
  name: {{ template "instana-agent.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ template "instana-agent.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
{{- end }}
@ -0,0 +1,43 @@
{{- if .Values.service.create -}}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ template "instana-agent.fullname" . }}-headless
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  clusterIP: None
  selector:
    {{- include "instana-agent.selectorLabels" . | nindent 4 }}
  ports:
    # Prometheus remote_write, Trace Web SDK and other APIs
    - name: agent-apis
      protocol: TCP
      port: 42699
      targetPort: 42699
    - name: agent-socket
      protocol: TCP
      port: 42666
      targetPort: 42666
    {{ if eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .) }}
    # OpenTelemetry original default port
    - name: opentelemetry
      protocol: TCP
      port: 55680
      targetPort: 55680
    # OpenTelemetry port as registered and reserved by IANA
    - name: opentelemetry-iana
      protocol: TCP
      port: 4317
      targetPort: 4317
    {{- end -}}
    {{ if eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .) }}
    # OpenTelemetry HTTP port
    - name: opentelemetry-http
      protocol: TCP
      port: 4318
      targetPort: 4318
    {{- end -}}
{{- end -}}
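Because the headless service name is derived from the chart fullname with a `-headless` suffix, clients in the cluster can address agent APIs via its DNS name. A hypothetical client configuration, assuming the fullname resolves to `instana-agent` and the release lives in the `instana-agent` namespace (both names are assumptions here, not chart guarantees):

```yaml
env:
  - name: INSTANA_AGENT_HOST
    value: instana-agent-headless.instana-agent.svc
  - name: INSTANA_AGENT_PORT
    value: "42699"
```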
@ -0,0 +1,12 @@
{{- if .Values.k8s_sensor.deployment.enabled -}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8sensor
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
data:
  backend: {{ printf "%s:%v" .Values.agent.endpointHost .Values.agent.endpointPort }}
{{- end }}
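With the chart defaults (`agent.endpointHost: ingress-red-saas.instana.io`, `agent.endpointPort: 443`), the `printf "%s:%v"` above renders the ConfigMap roughly as:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8sensor
data:
  backend: ingress-red-saas.instana.io:443
```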
@ -0,0 +1,142 @@
{{- if .Values.k8s_sensor.deployment.enabled -}}
{{- if or .Values.agent.key .Values.agent.keysSecret -}}
{{- if or .Values.zone.name .Values.cluster.name -}}

{{- $user_name_password := "" -}}
{{ if .Values.agent.proxyUser }}
{{- $user_name_password = print .Values.agent.proxyUser ":" .Values.agent.proxyPass "@" -}}
{{ end }}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8sensor
  namespace: {{ .Release.Namespace }}
  labels:
    app: k8sensor
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  replicas: {{ default "1" .Values.k8s_sensor.deployment.replicas }}
  selector:
    matchLabels:
      {{- include "k8s-sensor.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- if .Values.agent.pod.labels }}
        {{- toYaml .Values.agent.pod.labels | nindent 8 }}
        {{- end }}
        {{- include "k8s-sensor.commonLabels" . | nindent 8 }}
        instana/agent-mode: KUBERNETES
      annotations:
        {{- if .Values.agent.pod.annotations }}
        {{- toYaml .Values.agent.pod.annotations | nindent 8 }}
        {{- end }}
        # To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here.
        # Unfortunately, we cannot use the lookup function to check the values in the configmap, as that would break Helm < 3.2.
        instana-configuration-hash: {{ cat ( join "," .Values.agent.additionalBackends ) | sha1sum }}
    spec:
      serviceAccountName: k8sensor
      {{- if .Values.k8s_sensor.deployment.pod.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.k8s_sensor.deployment.pod.nodeSelector }}
        {{ $key }}: {{ $value | quote }}
        {{- end }}
      {{- end }}
      {{- if .Values.k8s_sensor.deployment.pod.priorityClassName }}
      priorityClassName: {{ .Values.k8s_sensor.deployment.pod.priorityClassName | quote }}
      {{- end }}
      {{- if typeIs "[]interface {}" .Values.agent.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.agent.image.pullSecrets | nindent 8 }}
      {{- else if .Values.agent.image.name | hasPrefix "containers.instana.io" }}
      imagePullSecrets:
        - name: containers-instana-io
      {{- end }}
      containers:
        - name: instana-agent
          image: {{ include "image" .Values.k8s_sensor.image | quote }}
          imagePullPolicy: {{ .Values.k8s_sensor.image.pullPolicy }}
          env:
            - name: AGENT_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ template "instana-agent.keysSecretName" . }}
                  key: key
            - name: BACKEND
              valueFrom:
                configMapKeyRef:
                  name: k8sensor
                  key: backend
            - name: BACKEND_URL
              value: "https://$(BACKEND)"
            - name: AGENT_ZONE
              value: {{ empty .Values.cluster.name | ternary .Values.zone.name .Values.cluster.name }}
            - name: POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            {{- if not (empty .Values.agent.proxyHost) }}
            - name: HTTPS_PROXY
              value: "http://{{ $user_name_password }}{{ .Values.agent.proxyHost }}:{{ .Values.agent.proxyPort }}"
            - name: NO_PROXY
              value: "kubernetes.default.svc"
            {{- end }}
            {{- if .Values.agent.redactKubernetesSecrets }}
            - name: INSTANA_KUBERNETES_REDACT_SECRETS
              value: {{ .Values.agent.redactKubernetesSecrets | quote }}
            {{- end }}
            {{- if .Values.agent.configuration_yaml }}
            - name: CONFIG_PATH
              value: /root
            {{- end }}
            {{- include "k8sensor.commonEnv" . | nindent 12 }}
          volumeMounts:
            - name: configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
          resources:
            requests:
              {{- include "instana-agent.resources" .Values.k8s_sensor.deployment.pod.requests | nindent 14 }}
            limits:
              {{- include "instana-agent.resources" .Values.k8s_sensor.deployment.pod.limits | nindent 14 }}
          ports:
            - containerPort: 42699
      volumes:
        - name: configuration
          configMap:
            name: {{ include "instana-agent.fullname" . }}
      {{- if .Values.k8s_sensor.deployment.pod.tolerations }}
      tolerations:
        {{- toYaml .Values.k8s_sensor.deployment.pod.tolerations | nindent 8 }}
      {{- end }}
      affinity:
        podAntiAffinity:
          # Soft anti-affinity policy: try not to schedule multiple kubernetes-sensor pods on the same node.
          # If the policy were "requiredDuringSchedulingIgnoredDuringExecution" and the cluster had
          # fewer nodes than the number of desired replicas, `helm install/upgrade --wait` would not return.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: instana/agent-mode
                      operator: In
                      values: [ KUBERNETES ]
                topologyKey: "kubernetes.io/hostname"
{{- end -}}
{{- end -}}
{{- end -}}
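A sketch of how the proxy templating above composes: with hypothetical values `agent.proxyUser: jdoe`, `agent.proxyPass: s3cret`, `agent.proxyHost: proxy.example.com` and `agent.proxyPort: 3128` (all placeholders), the `$user_name_password` helper expands to `jdoe:s3cret@`, so the k8sensor container would receive roughly:

```yaml
env:
  - name: HTTPS_PROXY
    value: "http://jdoe:s3cret@proxy.example.com:3128"
  - name: NO_PROXY
    value: "kubernetes.default.svc"
```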
@ -0,0 +1,27 @@
{{- if and .Values.k8s_sensor.deployment.enabled .Values.podSecurityPolicy.enable -}}
---
kind: PodSecurityPolicy
apiVersion: policy/v1beta1
metadata:
  name: k8sensor
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - secret
    - projected
    - hostPath
  runAsUser:
    rule: "RunAsAny"
  seLinux:
    rule: "RunAsAny"
  supplementalGroups:
    rule: "RunAsAny"
  fsGroup:
    rule: "RunAsAny"
{{- end }}
@ -0,0 +1,133 @@
{{- if .Values.k8s_sensor.deployment.enabled -}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8sensor
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
rules:
  - nonResourceURLs:
      - /version
      - /healthz
    verbs:
      - get
  - apiGroups:
      - extensions
    resources:
      - deployments
      - replicasets
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
      - events
      - services
      - endpoints
      - namespaces
      - nodes
      - pods
      - replicationcontrollers
      - resourcequotas
      - persistentvolumes
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - pods/log
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling/v1
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling/v2
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps.openshift.io
    resources:
      - deploymentconfigs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - security.openshift.io
    resourceNames:
      - privileged
    resources:
      - securitycontextconstraints
    verbs:
      - use
  {{ if .Values.podSecurityPolicy.enable }}
  - apiGroups:
      - policy
    resourceNames:
      - k8sensor
    resources:
      - podsecuritypolicies
    verbs:
      - use
  {{ end }}
{{- end }}
@ -0,0 +1,17 @@
{{- if .Values.k8s_sensor.deployment.enabled -}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8sensor
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
roleRef:
  kind: ClusterRole
  name: k8sensor
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: k8sensor
    namespace: {{ .Release.Namespace }}
{{- end }}
@ -0,0 +1,10 @@
{{- if .Values.k8s_sensor.deployment.enabled -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8sensor
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
{{- end }}
@ -0,0 +1,19 @@
{{- if and .Values.kubernetes.deployment.enabled (not .Values.k8s_sensor.deployment.enabled) -}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-sensor
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
data:
  # TODO: We should get rid of this and imply the ring-fencing iff the agent mode is "KUBERNETES"
  configuration.yaml: |
    com.instana.plugin.kubernetes:
      enabled: true

    com.instana.kubernetes:
      leader:
        isRingFenced: true
{{- end }}
@ -0,0 +1,118 @@
{{- if and .Values.kubernetes.deployment.enabled (not .Values.k8s_sensor.deployment.enabled) -}}
{{- if or .Values.agent.key .Values.agent.keysSecret -}}
{{- if or .Values.zone.name .Values.cluster.name -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-sensor
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  replicas: {{ default "1" .Values.kubernetes.deployment.replicas }}
  selector:
    matchLabels:
      {{- include "instana-agent.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- if .Values.agent.pod.labels }}
        {{- toYaml .Values.agent.pod.labels | nindent 8 }}
        {{- end }}
        {{- include "instana-agent.commonLabels" . | nindent 8 }}
        instana/agent-mode: KUBERNETES
      annotations:
        {{- if .Values.agent.pod.annotations }}
        {{- toYaml .Values.agent.pod.annotations | nindent 8 }}
        {{- end }}
        # To ensure that changes to agent.configuration_yaml or agent.additional_backends trigger a Pod recreation, we keep a SHA here.
        # Unfortunately, we cannot use the lookup function to check the values in the configmap, as that would break Helm < 3.2.
        instana-configuration-hash: {{ cat ( join "," .Values.agent.additionalBackends ) | sha1sum }}
    spec:
      serviceAccountName: {{ template "instana-agent.serviceAccountName" . }}
      {{- if .Values.kubernetes.deployment.pod.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.kubernetes.deployment.pod.nodeSelector }}
        {{ $key }}: {{ $value | quote }}
        {{- end }}
      {{- end }}
      {{- if .Values.kubernetes.deployment.pod.priorityClassName }}
      priorityClassName: {{ .Values.kubernetes.deployment.pod.priorityClassName | quote }}
      {{- end }}
      {{- if typeIs "[]interface {}" .Values.agent.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.agent.image.pullSecrets | nindent 8 }}
      {{- else if .Values.agent.image.name | hasPrefix "containers.instana.io" }}
      imagePullSecrets:
        - name: containers-instana-io
      {{- end }}
      containers:
        - name: instana-agent
          image: {{ include "image" .Values.agent.image | quote }}
          imagePullPolicy: {{ .Values.agent.image.pullPolicy }}
          securityContext:
            privileged: true
          env:
            - name: INSTANA_AGENT_MODE
              value: KUBERNETES
            {{- include "instana-agent.commonEnv" . | nindent 12 }}
          volumeMounts:
            {{- include "instana-agent.commonVolumeMounts" . | nindent 12 }}
            - name: kubernetes-sensor-configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
            {{- if .Values.agent.tls }}
            {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
            {{- include "instana-agent.tls-volumeMounts" . | nindent 12 }}
            {{- end }}
            {{- end }}
          resources:
            requests:
              {{- include "instana-agent.resources" .Values.kubernetes.deployment.pod.requests | nindent 14 }}
            limits:
              {{- include "instana-agent.resources" .Values.kubernetes.deployment.pod.limits | nindent 14 }}
          ports:
            - containerPort: 42699
        - name: leader-elector
          image: {{ include "image" .Values.leaderElector.image | quote }}
          env:
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command:
            - "/busybox/sh"
            - "-c"
            - "sleep 12 && /app/server --election=instana --http=localhost:{{ .Values.leaderElector.port }} --id=$(INSTANA_AGENT_POD_NAME)"
          resources:
            requests:
              cpu: 0.1
              memory: "64Mi"
          ports:
            - containerPort: {{ .Values.leaderElector.port }}
      {{- if .Values.kubernetes.deployment.pod.tolerations }}
      tolerations:
        {{- toYaml .Values.kubernetes.deployment.pod.tolerations | nindent 8 }}
      {{- end }}
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchExpressions:
              - key: instana/agent-mode
                operator: In
                values: [ KUBERNETES ]
      volumes:
        {{- include "instana-agent.commonVolumes" . | nindent 8 }}
        - name: kubernetes-sensor-configuration
          configMap:
            name: kubernetes-sensor
        {{- if .Values.agent.tls }}
        {{- if or .Values.agent.tls.secretName (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
        {{- include "instana-agent.tls-volume" . | nindent 8 }}
        {{- end }}
        {{- end }}
{{- end -}}
{{- end -}}
{{- end -}}
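This legacy kubernetes-sensor Deployment only renders when `kubernetes.deployment.enabled` is true and the newer k8sensor Deployment is disabled, and it additionally requires an agent key and a zone or cluster name. A minimal values sketch (the key and zone name are placeholders):

```yaml
agent:
  key: <agent_key>
zone:
  name: my-zone
kubernetes:
  deployment:
    enabled: true
    replicas: 2
k8s_sensor:
  deployment:
    enabled: false
```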
@ -0,0 +1,9 @@
{{- if .Values.templating }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
{{- end }}
@ -0,0 +1,65 @@
{{- if .Values.rbac.create }}
{{- if (and .Values.podSecurityPolicy.enable (not .Values.podSecurityPolicy.name)) }}
{{- if semverCompare "< 1.25.x" (include "kubeVersion" .) }}
---
kind: PodSecurityPolicy
apiVersion: policy/v1beta1
metadata:
  name: {{ template "instana-agent.podSecurityPolicyName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  privileged: true
  allowPrivilegeEscalation: true
  volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - secret
    - projected
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/dev"
      readOnly: false
    - pathPrefix: "/run"
      readOnly: false
    - pathPrefix: "/var/run"
      readOnly: false
    - pathPrefix: "/var/vcap/sys/run/docker"
      readOnly: false
    - pathPrefix: "/var/vcap/sys/run/containerd"
      readOnly: false
    - pathPrefix: "/var/vcap/jobs/containerd/config"
      readOnly: false
    - pathPrefix: "/sys"
      readOnly: false
    - pathPrefix: "/var/log"
      readOnly: false
    - pathPrefix: "/var/lib"
      readOnly: false
    - pathPrefix: "/var/data"
      readOnly: false
    - pathPrefix: "/etc/machine-id"
      readOnly: false
    {{- if .Values.agent.host.repository }}
    - pathPrefix: {{ .Values.agent.host.repository }}
      readOnly: false
    {{- end }}
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  hostPID: true
  runAsUser:
    rule: "RunAsAny"
  seLinux:
    rule: "RunAsAny"
  supplementalGroups:
    rule: "RunAsAny"
  fsGroup:
    rule: "RunAsAny"
{{- end }}
{{- end }}
{{- end }}
@ -0,0 +1,55 @@
{{- if not (typeIs "[]interface {}" .Values.agent.image.pullSecrets) }}
{{- if .Values.agent.image.name | hasPrefix "containers.instana.io" }}
---
apiVersion: v1
kind: Secret
metadata:
  name: containers-instana-io
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecretContainersInstanaIo" . }}
{{- end -}}
{{- end -}}
{{- if not .Values.agent.keysSecret }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "instana-agent.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
type: Opaque
data:
  {{- if .Values.templating }}
  key: {{ .Values.agent.key }}
  downloadKey: {{ default "''" .Values.agent.downloadKey }}
  {{- else }}
  {{- if .Values.agent.key }}
  key: {{ .Values.agent.key | b64enc | quote }}
  {{- end }}
  {{- if .Values.agent.downloadKey }}
  downloadKey: {{ .Values.agent.downloadKey | b64enc | quote }}
  {{- end }}
  {{- end }}
{{- end }}

{{- if .Values.agent.tls }}
{{- if and (not .Values.agent.tls.secretName) (and .Values.agent.tls.certificate .Values.agent.tls.key) }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "instana-agent.fullname" . }}-tls
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.agent.tls.certificate }}
  tls.key: {{ .Values.agent.tls.key }}
{{- end }}
{{- end }}
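To "bring your own secret" instead of letting this template create one, you can pre-create a Secret carrying a `key` field (and optionally `downloadKey`) with base64-encoded values, and reference it via `agent.keysSecret`. A sketch with placeholder names and key material:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-instana-keys
  namespace: instana-agent
type: Opaque
data:
  key: <base64-encoded agent key>
  downloadKey: <base64-encoded download key>
```

The chart would then be installed with something like `--set agent.keysSecret=my-instana-keys`, which suppresses rendering of the Secret above.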
@ -0,0 +1,45 @@
{{- if or .Values.service.create (eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .)) (eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .)) .Values.prometheus.remoteWrite.enabled -}}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ template "instana-agent.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
  selector:
    {{- include "instana-agent.selectorLabels" . | nindent 4 }}
  ports:
    # Prometheus remote_write, Trace Web SDK and other APIs
    - name: agent-apis
      protocol: TCP
      port: 42699
      targetPort: 42699
    {{ if eq "true" (include "instana-agent.opentelemetry.grpc.isEnabled" .) }}
    # OpenTelemetry original default port
    - name: opentelemetry
      protocol: TCP
      port: 55680
      targetPort: 55680
    # OpenTelemetry port as registered and reserved by IANA
    - name: opentelemetry-iana
      protocol: TCP
      port: 4317
      targetPort: 4317
    {{- end -}}
    {{ if eq "true" (include "instana-agent.opentelemetry.http.isEnabled" .) }}
    # OpenTelemetry HTTP port
    - name: opentelemetry-http
      protocol: TCP
      port: 4318
      targetPort: 4318
    {{- end -}}
  {{- if semverCompare "< 1.22.x" (include "kubeVersion" .) }}
  # Since we run the agents as a DaemonSet, we assume every node has this Service available:
  topologyKeys:
    - "kubernetes.io/hostname"
  {{- else }}
  internalTrafficPolicy: Local
  {{- end -}}
{{- end -}}
@ -0,0 +1,10 @@
{{- if .Values.serviceAccount.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "instana-agent.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "instana-agent.commonLabels" . | nindent 4 }}
{{- end }}
@ -0,0 +1,293 @@
|
|||
# name is the value which will be used as the base resource name for various resources associated with the agent.
|
||||
# name: instana-agent
|
||||
|
||||
agent:
|
||||
# agent.mode is used to set agent mode and it can be APM, INFRASTRUCTURE or AWS
|
||||
# mode: APM
|
||||
|
||||
# agent.key is the secret token which your agent uses to authenticate to Instana's servers.
|
||||
key: null
|
||||
# agent.downloadKey is key, sometimes known ass "sales key", that allows you to download,
|
||||
# software from Instana.
|
||||
# downloadKey: null
|
||||
|
||||
# Rather than specifying the agent key and optionally the download key, you can "bring your
|
||||
# own secret" creating it in the namespace in which you install the `instana-agent` and
|
||||
# specify its name in the `keysSecret` field. The secret you create must contains
|
||||
# a field called `key` and optionally one called `downloadKey`, which contain, respectively,
|
||||
# the values you'd otherwise set in `.agent.key` and `agent.downloadKey`.
|
||||
# keysSecret: null
|
||||
|
||||
# agent.listenAddress is the IP address the agent HTTP server will listen to.
|
||||
# listenAddress: "*"
|
||||
|
||||
# agent.endpointHost is the hostname of the Instana server your agents will connect to.
|
||||
endpointHost: ingress-red-saas.instana.io
|
||||
# agent.endpointPort is the port number (as a String) of the Instana server your agents will connect to.
|
||||
endpointPort: 443
|
||||
|
||||
# These are additional backends the Instana agent will report to besides
|
||||
# the one configured via the `agent.endpointHost`, `agent.endpointPort` and `agent.key` setting
|
||||
additionalBackends: []
|
||||
# - endpointHost: ingress.instana.io
|
||||
# endpointPort: 443
|
||||
# key: <agent_key>
|
||||
|
||||
# TLS for end-to-end encryption between Instana agent and clients accessing the agent.
|
||||
# The Instana agent does not yet allow enforcing TLS encryption.
|
||||
# TLS is only enabled on a connection when requested by the client.
|
||||
tls:
|
||||
# In order to enable TLS, a secret of type kubernetes.io/tls must be specified.
|
||||
# secretName is the name of the secret that has the relevant files.
|
||||
# secretName: null
|
||||
# Otherwise, the certificate and the private key must be provided as base64 encoded.
|
||||
# certificate: null
|
||||
# key: null

image:
  # agent.image.name is the name of the container image of the Instana agent.
  name: icr.io/instana/agent
  # agent.image.digest is the digest (a.k.a. Image ID) of the agent container image; if specified, it has priority over agent.image.tag, which will be ignored.
  # digest:
  # agent.image.tag is the tag name of the agent container image; if agent.image.digest is specified, this property is ignored.
  tag: latest
  # agent.image.pullPolicy specifies when to pull the container image.
  pullPolicy: Always
  # agent.image.pullSecrets allows you to override the default pull secret that is created when agent.image.name starts with "containers.instana.io".
  # Setting agent.image.pullSecrets prevents the creation of the default "containers-instana-io" secret.
  # pullSecrets:
  #   - name: my_awesome_secret_instead
  # If you want no imagePullSecrets to be specified in the agent pod, you can just pass an empty array to agent.image.pullSecrets.
  # pullSecrets: []

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1

pod:
  # agent.pod.annotations are additional annotations to be added to the agent pods.
  annotations: {}

  # agent.pod.labels are additional labels to be added to the agent pods.
  labels: {}

  # agent.pod.tolerations are tolerations to influence agent pod assignment.
  # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []
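  # For example, to let agent pods schedule onto tainted control-plane nodes
  # (an illustrative toleration, not a chart default):
  # tolerations:
  #   - key: node-role.kubernetes.io/control-plane
  #     operator: Exists
  #     effect: NoSchedule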

  # agent.pod.affinity are affinities to influence agent pod assignment.
  # https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
  affinity: {}

  # agent.pod.priorityClassName is the name of an existing PriorityClass that should be set on the agent pods.
  # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  priorityClassName: null

  # agent.pod.requests and agent.pod.limits adjust the resource assignments for the DaemonSet agent,
  # regardless of the kubernetes.deployment.enabled setting.
  requests:
    # agent.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
    memory: 512Mi
    # agent.pod.requests.cpu are the requested CPU units allocation for the agent pods.
    cpu: 0.5
  limits:
    # agent.pod.limits.memory sets the memory allocation limits in MiB for the agent pods.
    memory: 768Mi
    # agent.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
    cpu: 1.5

# agent.proxyHost sets the INSTANA_AGENT_PROXY_HOST environment variable.
# proxyHost: null
# agent.proxyPort sets the INSTANA_AGENT_PROXY_PORT environment variable.
# proxyPort: 80
# agent.proxyProtocol sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable.
# proxyProtocol: HTTP
# agent.proxyUser sets the INSTANA_AGENT_PROXY_USER environment variable.
# proxyUser: null
# agent.proxyPassword sets the INSTANA_AGENT_PROXY_PASSWORD environment variable.
# proxyPassword: null
# agent.proxyUseDNS sets the INSTANA_AGENT_PROXY_USE_DNS environment variable.
# proxyUseDNS: false
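# For example, to route agent traffic through a corporate HTTP proxy
# (hypothetical host and port, shown for illustration only):
# proxyHost: proxy.example.com
# proxyPort: 3128
# proxyProtocol: HTTP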

# use this to set additional environment variables for the instana agent,
# for example:
# env:
#   INSTANA_AGENT_TAGS: dev
env: {}

configuration:
  # When setting this to true, the Helm chart will automatically look up the entries
  # of the default instana-agent ConfigMap, and mount as agent configuration files
  # under /opt/instana/agent/etc/instana all entries with keys that match the
  # 'configuration-*.yaml' scheme.
  #
  # IMPORTANT: Needs Helm 3.1+ as it is built on the `lookup` function.
  # IMPORTANT: Editing the ConfigMap to add keys requires a `helm upgrade` to take effect.
  autoMountConfigEntries: false

  # When setting this to true, updates of the default instana-agent ConfigMap
  # will be reflected in the pod without requiring a pod restart.
  hotreloadEnabled: false

configuration_yaml: |
  # Manual a-priori configuration. Configuration will only be used when the sensor
  # is actually installed by the agent.
  # The commented-out example values represent example configuration and are not
  # necessarily defaults. Defaults are usually 'absent' or mentioned separately.
  # Changes are hot reloaded unless otherwise mentioned.

  # It is possible to create files called 'configuration-abc.yaml' which are
  # merged with this file in file system order. So 'configuration-cde.yaml' comes
  # after 'configuration-abc.yaml'. Only nested structures are merged; values are
  # overwritten by subsequent configurations.

  # Secrets
  # To filter sensitive data from collection by the agent, all sensors respect
  # the following secrets configuration. If a key collected by a sensor matches
  # an entry from the list, the value is redacted.
  #com.instana.secrets:
  #  matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
  #  list:
  #    - 'key'
  #    - 'password'
  #    - 'secret'

  # Host
  #com.instana.plugin.host:
  #  tags:
  #    - 'dev'
  #    - 'app1'

  # Hardware & Zone
  #com.instana.plugin.generic.hardware:
  #  enabled: true # disabled by default
  #  availability-zone: 'zone'

# agent.redactKubernetesSecrets sets the INSTANA_KUBERNETES_REDACT_SECRETS environment variable.
# redactKubernetesSecrets: null

# agent.host.repository sets a host path to be mounted as the agent Maven repository (for debugging or development purposes).
host:
  repository: null

cluster:
  # cluster.name represents the name that will be assigned to this cluster in Instana.
  name: null

leaderElector:
  image:
    # leaderElector.image.name is the name of the container image of the leader elector.
    name: icr.io/instana/leader-elector
    # leaderElector.image.digest is the digest (a.k.a. Image ID) of the leader elector container image; if specified, it has priority over leaderElector.image.tag, which will be ignored.
    # digest:
    # leaderElector.image.tag is the tag name of the leader elector container image; if leaderElector.image.digest is specified, this property is ignored.
    tag: 0.5.18
  port: 42655

# openshift specifies whether the cluster role should include OpenShift permissions and other tweaks to the YAML.
# The chart will try to auto-detect if the cluster is OpenShift, so you will likely not even need to set this explicitly.
# openshift: true

rbac:
  # Specifies whether RBAC resources should be created.
  create: true

service:
  # Specifies whether to create the instana-agent service to expose within the cluster the Prometheus remote-write, OpenTelemetry gRPC endpoint and other APIs.
  # Note: Requires Kubernetes 1.17+, as it uses topologyKeys.
  create: true

#opentelemetry:
#  enabled: false # legacy setting, will only enable grpc, defaults to false
#  grpc:
#    enabled: false # takes precedence over legacy settings above, defaults to true if "grpc:" is present
#  http:
#    enabled: false # allows to enable http endpoints, defaults to true if "http:" is present

prometheus:
  remoteWrite:
    enabled: false # If true, it will also apply `service.create=true`

serviceAccount:
  # Specifies whether a ServiceAccount should be created.
  create: true
  # The name of the ServiceAccount to use.
  # If not set and `create` is true, a name is generated using the fullname template.
  # name: instana-agent

podSecurityPolicy:
  # Specifies whether a PodSecurityPolicy should be authorized for the Instana Agent pods.
  # Requires `rbac.create` to be `true` as well and a K8s version below v1.25.
  enable: false
  # The name of an existing PodSecurityPolicy you would like to authorize for the Instana Agent pods.
  # If not set and `enable` is true, a PodSecurityPolicy will be created with a name generated using the fullname template.
  name: null

zone:
  # zone.name is the custom zone that detected technologies will be assigned to.
  name: null

k8s_sensor:
  image:
    # k8s_sensor.image.name is the name of the container image of the Instana Kubernetes sensor.
    name: icr.io/instana/k8sensor
    # k8s_sensor.image.digest is the digest (a.k.a. Image ID) of the sensor container image; if specified, it has priority over k8s_sensor.image.tag, which will be ignored.
    # digest:
    # k8s_sensor.image.tag is the tag name of the sensor container image; if k8s_sensor.image.digest is specified, this property is ignored.
    tag: latest
    # k8s_sensor.image.pullPolicy specifies when to pull the container image.
    pullPolicy: Always
  deployment:
    # Specifies whether or not to enable the Deployment and turn off the Kubernetes sensor in the DaemonSet.
    enabled: true
    # Use three replicas to ensure HA by default.
    replicas: 3
    # k8s_sensor.deployment.pod adjusts the resource assignments for the sensor independently of the DaemonSet agent when k8s_sensor.deployment.enabled=true.
    pod:
      requests:
        # k8s_sensor.deployment.pod.requests.memory is the requested memory allocation in MiB for the sensor pods.
        memory: 128Mi
        # k8s_sensor.deployment.pod.requests.cpu are the requested CPU units allocation for the sensor pods.
        cpu: 10m
      limits:
        # k8s_sensor.deployment.pod.limits.memory sets the memory allocation limits in MiB for the sensor pods.
        memory: 1536Mi
        # k8s_sensor.deployment.pod.limits.cpu sets the CPU units allocation limits for the sensor pods.
        cpu: 500m

kubernetes:
  # Configures use of a Deployment for the Kubernetes sensor rather than as a potential member of the DaemonSet. Only honored if k8s_sensor.deployment.enabled=false.
  deployment:
    # Specifies whether or not to enable the Deployment and turn off the Kubernetes sensor in the DaemonSet.
    enabled: false
    # Use a single replica; the impact will generally be low, and large clusters raise a host of other concerns that need to be addressed separately.
    replicas: 1

    # kubernetes.deployment.pod adjusts the resource assignments for the sensor independently of the DaemonSet agent when kubernetes.deployment.enabled=true.
    pod:
      requests:
        # kubernetes.deployment.pod.requests.memory is the requested memory allocation in MiB for the sensor pods.
        memory: 1024Mi
        # kubernetes.deployment.pod.requests.cpu are the requested CPU units allocation for the sensor pods.
        cpu: 720m
      limits:
        # kubernetes.deployment.pod.limits.memory sets the memory allocation limits in MiB for the sensor pods.
        memory: 3072Mi
        # kubernetes.deployment.pod.limits.cpu sets the CPU units allocation limits for the sensor pods.
        cpu: 4

# zones:
#   # Configure use of zones to use tolerations as the basis to associate a specific DaemonSet per tainted node pool.
#   - name: pool-01
#     tolerations:
#       - key: "pool"
#         operator: "Equal"
#         value: "pool-01"
#         effect: "NoExecute"
#   - name: pool-02
#     tolerations:
#       - key: "pool"
#         operator: "Equal"
#         value: "pool-02"
#         effect: "NoExecute"

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# OWNERS file for helm
OWNERS