Remove old versions of istio not being used

pull/3165/head
Venkata Krishna Rohit Sakala 2023-10-23 13:12:31 -07:00
parent a66ccb8f1f
commit ab4be07ab3
94 changed files with 0 additions and 3797 deletions


@ -1,24 +0,0 @@
annotations:
catalog.cattle.io/certified: rancher
catalog.cattle.io/display-name: Istio
catalog.cattle.io/kube-version: '>= 1.23.0-0 < 1.27.0-0'
catalog.cattle.io/namespace: istio-system
catalog.cattle.io/os: linux
catalog.cattle.io/permits-os: linux,windows
catalog.cattle.io/rancher-version: '>= 2.8.0-0 < 2.9.0-0'
catalog.cattle.io/release-name: rancher-istio
catalog.cattle.io/requests-cpu: 710m
catalog.cattle.io/requests-memory: 2314Mi
catalog.cattle.io/type: cluster-tool
catalog.cattle.io/ui-component: istio
catalog.cattle.io/upstream-version: 1.17.2
apiVersion: v1
appVersion: 1.17.2
description: A basic Istio setup that installs with the istioctl. Refer to https://istio.io/latest/
for details.
icon: https://charts.rancher.io/assets/logos/istio.svg
keywords:
- networking
- infrastructure
name: rancher-istio
version: 1.17.2


@ -1,79 +0,0 @@
# Rancher-Istio Chart
Our [Istio](https://istio.io/) installer wraps the istioctl binary commands in a handy helm chart, including an overlay file option to allow complex customization.
See the app-readme for known issues and deprecations.
## Installation Requirements
#### Chart Dependencies
- rancher-monitoring chart or other Prometheus installation
#### Install
To install the rancher-istio chart with helm, use the following command:
```
helm install rancher-istio <location/of/the/rancher-istio/chart> --create-namespace -n istio-system
```
#### Uninstall
To ensure rancher-istio uninstalls correctly, you must uninstall rancher-istio prior to uninstalling its chart dependencies (see Chart Dependencies above for the list). This is because all definitions need to be available in order to properly build the rancher-istio objects for removal.
**If you remove dependent CRD charts prior to removing rancher-istio, you may encounter the following error:**
`Error: uninstallation completed with 1 error(s): unable to build kubernetes objects for delete: unable to recognize "": no matches for kind "MonitoringDashboard" in version "monitoring.kiali.io/v1alpha1"`
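As a sketch of the safe removal order (release names and namespaces here are illustrative, assuming the install command above and a rancher-monitoring dependency):

```shell
# Remove rancher-istio first, while the dependency charts' CRDs are still present
helm uninstall rancher-istio -n istio-system

# Only then remove the dependency charts
helm uninstall rancher-monitoring -n cattle-monitoring-system
```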
## Addons
The addons that are included with rancher-istio are:
- Kiali
- Jaeger
Each addon has additional customization and dependencies required for it to work as expected. Use the values.yaml to customize or to enable/disable each addon.
### Kiali Addon
Kiali allows you to view and manage your istio-based service mesh through an easy-to-use dashboard.
#### Kiali Dependencies
##### rancher-monitoring chart or other Prometheus installation
This dependency installs the CRDs required for installing Kiali. Since Kiali is bundled in with Istio in this chart, if you do not have these dependencies installed, your Istio installation will fail. If you do not plan on using Kiali, set `kiali.enabled=false` when installing Istio for a successful installation.
#### Prometheus Configuration for Kiali
> **Note:** The following configuration options assume you have installed the dependencies for Kiali. Please ensure you have Prometheus in your cluster before proceeding.
The Rancher Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which means all namespaces will be scraped by Prometheus by default. This ensures you can view traffic, metrics and graphs for resources deployed in other namespaces.
To limit scraping to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true` and add one of the following configurations to ensure you can continue to view traffic, metrics and graphs for your deployed resources.
1. Add a Service Monitor or Pod Monitor in the namespace with the targets you want to scrape.
1. Add an additionalScrapeConfig to your rancher-monitoring instance to scrape all targets in all namespaces.
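As an illustration of the first option, a minimal ServiceMonitor for an application in its own namespace might look like the following (all names and the metrics port are placeholders you would adapt to your workload):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor        # placeholder name
  namespace: my-app-namespace # the namespace whose targets you want scraped
spec:
  selector:
    matchLabels:
      app: my-app             # must match the labels on your Service
  endpoints:
    - port: http-metrics      # named Service port that exposes metrics
      interval: 15s
```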
#### Kiali External Services
The external services that can be configured in Kiali are: Prometheus, Grafana and Tracing.
##### Prometheus
The `kiali.external_services.prometheus` url is set in the values.yaml:
```
http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc:{{ prometheus.service.port }}
```
The url depends on the default values for `nameOverride`, `namespaceOverride`, and `prometheus.service.port` being set in your rancher-monitoring or other monitoring instance.
##### Grafana
The `kiali.external_services.grafana` url is set in the values.yaml:
```
http://{{ .Values.nameOverride }}-grafana.{{ .Values.namespaceOverride }}.svc:{{ grafana.service.port }}
```
The url depends on the default values for `nameOverride`, `namespaceOverride`, and `grafana.service.port` being set in your rancher-monitoring or other monitoring instance.
##### Tracing
The `kiali.external_services.tracing` url and `.Values.tracing.contextPath` is set in the rancher-istio values.yaml:
```
http://tracing.{{ .Values.namespaceOverride }}.svc:{{ .Values.service.externalPort }}/{{ .Values.tracing.contextPath }}
```
The url depends on the default values for `namespaceOverride` and `.Values.service.externalPort` being set in your rancher-tracing or other tracing instance.
## Jaeger Addon
Jaeger allows you to trace and monitor distributed microservices.
> **Note:** This addon is using the all-in-one Jaeger installation which is not qualified for production. Use the [Jaeger Tracing](https://www.jaegertracing.io/docs/1.21/getting-started/) documentation to determine which installation you will need for your production needs.


@ -1,65 +0,0 @@
# Rancher Istio
Our [Istio](https://istio.io/) installer wraps the istioctl binary commands in a handy helm chart, including an overlay file option to allow complex customization. It also includes:
* **[Kiali](https://kiali.io/)**: Used for graphing traffic flow throughout the mesh
* **[Jaeger](https://www.jaegertracing.io/)**: A quick-start, all-in-one installation used for tracing distributed systems. This is not production qualified; please refer to the Jaeger documentation to determine which installation you may need instead.
For more information on how to use the feature, refer to our [docs](https://rancher.com/docs/rancher/v2.x/en/istio/v2.5/).
## Upgrading to Kubernetes v1.25+
Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.
As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `global.cattle.psp.enabled` set to `false` if it has been previously set to `true`.
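A minimal sketch of that in-place upgrade, reusing the release name and namespace from the install example (the chart location is whatever you installed from):

```shell
helm upgrade rancher-istio <location/of/the/rancher-istio/chart> \
  -n istio-system \
  --set global.cattle.psp.enabled=false
```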
> **Note:**
> In this chart release, any previous field that was associated with any PSP resources has been removed in favor of a single global field: `global.cattle.psp.enabled`.
> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, please consult the Rancher docs on how to clean up your Helm release secrets.
Upon setting `global.cattle.psp.enabled` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.
As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Rancher docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
## Warnings
- Upgrading across more than two minor versions (e.g., 1.6.x to 1.9.x) in one step is not officially tested or recommended. See [Istio upgrade docs](https://istio.io/latest/docs/setup/upgrade/) for more details.
## Known Issues
#### Airgapped Environments
**A temporary fix has been added to this chart to allow upgrades to succeed in an airgapped environment. See [this issue](https://github.com/rancher/rancher/issues/30842) for details.** We are still advocating for an upstream fix in Istio to formally resolve this issue. The root cause is that the Istio Operator upgrade command reaches out to an external repo on upgrades, and the external repo is not configurable. We are tracking the fix for this issue [here](https://github.com/rancher/rancher/issues/33402).
#### Installing Istio with the CNI component enabled on RHEL 8.4 SELinux-enabled clusters
To install Istio with CNI enabled, e.g. when the cluster has a default PSP set to "restricted", on a cluster whose nodes run RHEL 8.4 with SELinux enabled, run the following command on each cluster node before creating the cluster.
`mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni`
See [this issue](https://github.com/rancher/rancher/issues/33291) for details.
## Installing Istio with distroless images
Istio `102.2.0+up1.17.2` uses distroless images for `istio-proxyv2`, `istio-install-cni` and `istio-pilot`. Distroless images don't have common debugging tools like `bash`, `curl`, etc. If you wish to troubleshoot Istio, you can switch to regular images by updating the `values.yaml` file.
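For example, switching the sidecar proxy back to the regular (non-distroless) image could look like this in your values override (the repository and tags shown mirror this chart's defaults):

```yaml
global:
  proxy:
    repository: rancher/mirrored-istio-proxyv2
    tag: 1.17.2   # regular image with bash/curl, instead of 1.17.2-distroless
```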
## Deprecations
#### v1alpha1 security policies
As of 1.6, Istio removed support for the `v1alpha1` security policy resource and replaced the API with `v1beta1` authorization policies: https://istio.io/latest/docs/reference/config/security/authorization-policy/
If you are currently running rancher-istio <= 1.7.x, you need to migrate any existing `v1alpha1` security policies to `v1beta1` authorization policies prior to upgrading to the next minor version.
> **Note:** If you attempt to upgrade prior to migrating your policy resources, you might see errors similar to:
```
Error: found 6 CRD of unsupported v1alpha1 security policy
```
```
Error: found 1 unsupported v1alpha1 security policy
```
```
Control Plane - policy pod - istio-policy - version: x.x.x does not match the target version x.x.x
```
Continue with the migration steps below before retrying the upgrade process.
#### Migrating Resources:
Migration steps can be found in this [istio blog post](https://istio.io/latest/blog/2021/migrate-alpha-policy/ "istio blog post").
You can also use these [quick steps](https://github.com/rancher/rancher/issues/34699#issuecomment-921995917 "quick steps") to determine if you need to follow the more extensive migration steps.
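As an illustration of the target API only (not a substitute for the migration steps above), a simple `v1beta1` AuthorizationPolicy allowing traffic from a given service account might look like this; all names, namespaces, and the principal are placeholders:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-from-frontend   # placeholder
  namespace: my-app-namespace # placeholder
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            # service account identity of the allowed client (placeholder)
            principals: ["cluster.local/ns/my-app-namespace/sa/frontend"]
```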


@ -1,135 +0,0 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
components:
base:
enabled: {{ .Values.base.enabled }}
cni:
enabled: {{ .Values.cni.enabled }}
k8s:
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
egressGateways:
- enabled: {{ .Values.egressGateways.enabled }}
name: istio-egressgateway
k8s:
{{- if .Values.egressGateways.hpaSpec }}
hpaSpec: {{ toYaml .Values.egressGateways.hpaSpec | nindent 12 }}
{{- end }}
{{- if .Values.egressGateways.podDisruptionBudget }}
podDisruptionBudget: {{ toYaml .Values.egressGateways.podDisruptionBudget | nindent 12 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
ingressGateways:
- enabled: {{ .Values.ingressGateways.enabled }}
name: istio-ingressgateway
k8s:
{{- if .Values.ingressGateways.hpaSpec }}
hpaSpec: {{ toYaml .Values.ingressGateways.hpaSpec | nindent 12 }}
{{- end }}
{{- if .Values.ingressGateways.podDisruptionBudget }}
podDisruptionBudget: {{ toYaml .Values.ingressGateways.podDisruptionBudget | nindent 12 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
service:
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 8080
nodePort: 31380
- name: https
port: 443
targetPort: 8443
nodePort: 31390
- name: tcp
port: 31400
targetPort: 31400
nodePort: 31400
- name: tls
port: 15443
targetPort: 15443
istiodRemote:
enabled: {{ .Values.istiodRemote.enabled }}
pilot:
enabled: {{ .Values.pilot.enabled }}
k8s:
{{- if .Values.pilot.hpaSpec }}
hpaSpec: {{ toYaml .Values.pilot.hpaSpec | nindent 12 }}
{{- end }}
{{- if .Values.pilot.podDisruptionBudget }}
podDisruptionBudget: {{ toYaml .Values.pilot.podDisruptionBudget | nindent 12 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
hub: {{ .Values.systemDefaultRegistry | default "docker.io" }}
profile: default
tag: {{ .Values.tag }}
revision: {{ .Values.revision }}
meshConfig:
defaultConfig:
proxyMetadata:
{{- if .Values.dns.enabled }}
ISTIO_META_DNS_CAPTURE: "true"
{{- end }}
values:
gateways:
istio-egressgateway:
name: istio-egressgateway
type: {{ .Values.egressGateways.type }}
istio-ingressgateway:
name: istio-ingressgateway
type: {{ .Values.ingressGateways.type }}
global:
istioNamespace: {{ template "istio.namespace" . }}
proxy:
image: {{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}
proxy_init:
image: {{ template "system_default_registry" . }}{{ .Values.global.proxy_init.repository }}:{{ .Values.global.proxy_init.tag }}
{{- if .Values.global.defaultPodDisruptionBudget.enabled }}
defaultPodDisruptionBudget:
enabled: {{ .Values.global.defaultPodDisruptionBudget.enabled }}
{{- end }}
{{- if .Values.pilot.enabled }}
pilot:
image: {{ template "system_default_registry" . }}{{ .Values.pilot.repository }}:{{ .Values.pilot.tag }}
{{- end }}
telemetry:
enabled: {{ .Values.telemetry.enabled }}
v2:
enabled: {{ .Values.telemetry.v2.enabled }}
{{- if .Values.cni.enabled }}
cni:
image: {{ template "system_default_registry" . }}{{ .Values.cni.repository }}:{{ .Values.cni.tag }}
excludeNamespaces:
{{- toYaml .Values.cni.excludeNamespaces | nindent 8 }}
logLevel: {{ .Values.cni.logLevel }}
{{- end }}


@ -1,7 +0,0 @@
dependencies:
- condition: kiali.enabled
name: kiali
repository: file://./charts/kiali
- condition: tracing.enabled
name: tracing
repository: file://./charts/tracing


@ -1,37 +0,0 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
components:
ingressGateways:
- enabled: true
name: ilb-gateway
namespace: user-ingressgateway-ns
k8s:
resources:
requests:
cpu: 200m
service:
ports:
- name: tcp-citadel-grpc-tls
port: 8060
targetPort: 8060
- name: tcp-dns
port: 5353
serviceAnnotations:
cloud.google.com/load-balancer-type: internal
- enabled: true
name: other-gateway
namespace: cattle-istio-system
k8s:
resources:
requests:
cpu: 200m
service:
ports:
- name: tcp-citadel-grpc-tls
port: 8060
targetPort: 8060
- name: tcp-dns
port: 5353
serviceAnnotations:
cloud.google.com/load-balancer-type: internal


@ -1,27 +0,0 @@
{{/* Ensure namespace is set the same everywhere */}}
{{- define "istio.namespace" -}}
{{- .Release.Namespace | default "istio-system" -}}
{{- end -}}
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
{{/*
Windows cluster will add default taint for linux nodes,
add below linux tolerations to workloads could be scheduled to those linux nodes
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
kubernetes.io/os: linux
{{- end -}}


@ -1,43 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-admin: "true"
name: istio-admin
namespace: {{ template "istio.namespace" . }}
rules:
- apiGroups:
- config.istio.io
resources:
- adapters
- attributemanifests
- handlers
- httpapispecbindings
- httpapispecs
- instances
- quotaspecbindings
- quotaspecs
- rules
- templates
verbs: ["get", "watch", "list"]
- apiGroups:
- networking.istio.io
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
- workloadentries
verbs:
- '*'
- apiGroups:
- security.istio.io
resources:
- authorizationpolicies
- peerauthentications
- requestauthentications
verbs:
- '*'


@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-installer-base
namespace: {{ template "istio.namespace" . }}
data:
{{ tpl (.Files.Glob "configs/*").AsConfig . | indent 2 }}


@ -1,134 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-installer
rules:
# istio groups
- apiGroups:
- extensions.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- authentication.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- config.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- install.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- networking.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- rbac.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- security.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- telemetry.istio.io
resources:
- '*'
verbs:
- '*'
# k8s groups
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- '*'
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions.apiextensions.k8s.io
- customresourcedefinitions
verbs:
- '*'
- apiGroups:
- apps
- extensions
resources:
- daemonsets
- deployments
- deployments/finalizers
- ingresses
- replicasets
- statefulsets
verbs:
- '*'
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- '*'
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- get
- create
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- '*'
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- clusterroles
- roles
- rolebindings
verbs:
- '*'
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- events
- namespaces
- pods
- pods/exec
- persistentvolumeclaims
- secrets
- services
- serviceaccounts
verbs:
- '*'
{{- if and .Values.global.cattle.psp.enabled }}
- apiGroups:
- policy
resourceNames:
- istio-installer
resources:
- podsecuritypolicies
verbs:
- use
{{- end }}


@ -1,12 +0,0 @@
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: istio-installer
subjects:
- kind: ServiceAccount
name: istio-installer
namespace: {{ template "istio.namespace" . }}
roleRef:
kind: ClusterRole
name: istio-installer
apiGroup: rbac.authorization.k8s.io


@ -1,43 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
namespace: {{ template "istio.namespace" . }}
name: istio-edit
rules:
- apiGroups:
- config.istio.io
resources:
- adapters
- attributemanifests
- handlers
- httpapispecbindings
- httpapispecs
- instances
- quotaspecbindings
- quotaspecs
- rules
- templates
verbs: ["get", "watch", "list"]
- apiGroups:
- networking.istio.io
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
- workloadentries
verbs:
- '*'
- apiGroups:
- security.istio.io
resources:
- authorizationpolicies
- peerauthentications
- requestauthentications
verbs:
- '*'


@ -1,51 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp-istio-cni
namespace: {{ template "istio.namespace" . }}
spec:
allowPrivilegeEscalation: true
fsGroup:
rule: RunAsAny
hostNetwork: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- emptyDir
- hostPath
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: psp-istio-cni
namespace: {{ template "istio.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: psp-istio-cni
subjects:
- kind: ServiceAccount
name: istio-cni
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: psp-istio-cni
namespace: {{ template "istio.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- psp-istio-cni
resources:
- podsecuritypolicies
verbs:
- use
{{- end }}


@ -1,66 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: istioctl-installer
namespace: {{ template "istio.namespace" . }}
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
backoffLimit: 1
template:
spec:
{{- if .Values.installer.releaseMirror.enabled }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "github.com"
{{- end }}
containers:
- name: istioctl-installer
image: {{ template "system_default_registry" . }}{{ .Values.installer.repository }}:{{ .Values.installer.tag }}
env:
- name: RELEASE_NAME
value: {{ .Release.Name }}
- name: ISTIO_NAMESPACE
value: {{ template "istio.namespace" . }}
- name: FORCE_INSTALL
value: {{ .Values.forceInstall | default "false" | quote }}
- name: RELEASE_MIRROR_ENABLED
value: {{ .Values.installer.releaseMirror.enabled | quote }}
- name: SECONDS_SLEEP
value: {{ .Values.installer.debug.secondsSleep | quote}}
command: ["/bin/sh","-c"]
args: ["/usr/local/app/scripts/run.sh"]
volumeMounts:
- name: config-volume
mountPath: /app/istio-base.yaml
subPath: istio-base.yaml
{{- if .Values.overlayFile }}
- name: overlay-volume
mountPath: /app/overlay-config.yaml
subPath: overlay-config.yaml
{{- end }}
volumes:
- name: config-volume
configMap:
name: istio-installer-base
{{- if .Values.overlayFile }}
- name: overlay-volume
configMap:
name: istio-installer-overlay
{{- end }}
serviceAccountName: istio-installer
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
securityContext:
runAsUser: 499
runAsGroup: 487
restartPolicy: Never


@ -1,30 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: istio-installer
namespace: {{ template "istio.namespace" . }}
spec:
privileged: false
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: false
volumes:
- 'configMap'
- 'secret'
{{- end }}


@ -1,81 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: istio-psp
namespace: {{ template "istio.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: istio-psp
subjects:
- kind: ServiceAccount
name: istio-egressgateway-service-account
- kind: ServiceAccount
name: istio-ingressgateway-service-account
- kind: ServiceAccount
name: istio-mixer-service-account
- kind: ServiceAccount
name: istio-operator-authproxy
- kind: ServiceAccount
name: istiod-service-account
- kind: ServiceAccount
name: istio-sidecar-injector-service-account
- kind: ServiceAccount
name: istiocoredns-service-account
- kind: ServiceAccount
name: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: istio-psp
namespace: {{ template "istio.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- istio-psp
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: istio-psp
namespace: {{ template "istio.namespace" . }}
spec:
allowPrivilegeEscalation: false
forbiddenSysctls:
- '*'
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
runAsGroup:
rule: MustRunAs
ranges:
- min: 1
max: 65535
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end }}


@ -1,53 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: istioctl-uninstaller
namespace: {{ template "istio.namespace" . }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
spec:
containers:
- name: istioctl-uninstaller
image: {{ template "system_default_registry" . }}{{ .Values.installer.repository }}:{{ .Values.installer.tag }}
env:
- name: RELEASE_NAME
value: {{ .Release.Name }}
- name: ISTIO_NAMESPACE
value: {{ template "istio.namespace" . }}
command: ["/bin/sh","-c"]
args: ["/usr/local/app/scripts/uninstall_istio_system.sh"]
volumeMounts:
- name: config-volume
mountPath: /app/istio-base.yaml
subPath: istio-base.yaml
{{- if .Values.overlayFile }}
- name: overlay-volume
mountPath: /app/overlay-config.yaml
subPath: overlay-config.yaml
{{ end }}
volumes:
- name: config-volume
configMap:
name: istio-installer-base
{{- if .Values.overlayFile }}
- name: overlay-volume
configMap:
name: istio-installer-overlay
{{ end }}
serviceAccountName: istio-installer
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
securityContext:
runAsUser: 101
runAsGroup: 101
restartPolicy: OnFailure


@ -1,9 +0,0 @@
{{- if .Values.overlayFile }}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-installer-overlay
namespace: {{ template "istio.namespace" . }}
data:
overlay-config.yaml: {{ toYaml .Values.overlayFile | indent 2 }}
{{- end }}


@ -1,51 +0,0 @@
{{- if .Values.kiali.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: envoy-stats-monitor
namespace: {{ template "istio.namespace" . }}
labels:
monitoring: istio-proxies
spec:
selector:
matchExpressions:
- {key: istio-prometheus-ignore, operator: DoesNotExist}
namespaceSelector:
any: true
jobLabel: envoy-stats
endpoints:
- path: /stats/prometheus
targetPort: 15090
interval: 15s
relabelings:
- sourceLabels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- action: labeldrop
regex: "__meta_kubernetes_pod_label_(.+)"
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: namespace
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: pod_name
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: istio-component-monitor
namespace: {{ template "istio.namespace" . }}
labels:
monitoring: istio-components
spec:
jobLabel: istio
targetLabels: [app]
selector:
matchExpressions:
- {key: istio, operator: In, values: [pilot]}
namespaceSelector:
any: true
endpoints:
- port: http-monitoring
interval: 15s
{{- end -}}


@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-installer
namespace: {{ template "istio.namespace" . }}


@ -1,7 +0,0 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.global.cattle.psp.enabled }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}


@ -1,41 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
namespace: {{ template "istio.namespace" . }}
name: istio-view
rules:
- apiGroups:
- config.istio.io
resources:
- adapters
- attributemanifests
- handlers
- httpapispecbindings
- httpapispecs
- instances
- quotaspecbindings
- quotaspecs
- rules
- templates
verbs: ["get", "watch", "list"]
- apiGroups:
- networking.istio.io
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
- workloadentries
verbs: ["get", "watch", "list"]
- apiGroups:
- security.istio.io
resources:
- authorizationpolicies
- peerauthentications
- requestauthentications
verbs: ["get", "watch", "list"]


@ -1,116 +0,0 @@
overlayFile: ""
tag: 1.17.2
##Setting forceInstall: true will remove the check for istio version < 1.6.x and will not analyze your install cluster prior to install
forceInstall: false
installer:
repository: rancher/istio-installer
tag: 1.17.2-rancher1
##releaseMirror are configurations for istio upgrades.
##Setting releaseMirror.enabled: true will cause istio to use the images bundled into rancher/istio-installer to perform an upgrade - this is ideal
##for airgap setups. Setting releaseMirror.enabled to false means istio will call externally to github to fetch the required assets.
releaseMirror:
enabled: false
##Set the secondsSleep to run a sleep command `sleep <secondsSleep>s` to allow time to exec into istio-installer pod for debugging
debug:
secondsSleep: 0
##Native support for dns added in 1.8
dns:
enabled: false
base:
enabled: true
cni:
enabled: false
repository: rancher/mirrored-istio-install-cni
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.17.2
tag: 1.17.2-distroless
logLevel: info
excludeNamespaces:
- istio-system
- kube-system
egressGateways:
enabled: false
type: NodePort
hpaSpec: {}
podDisruptionBudget: {}
ingressGateways:
enabled: true
type: NodePort
hpaSpec: {}
podDisruptionBudget: {}
istiodRemote:
enabled: false
pilot:
enabled: true
repository: rancher/mirrored-istio-pilot
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.17.2
tag: 1.17.2-distroless
hpaSpec: {}
podDisruptionBudget: {}
telemetry:
enabled: true
v2:
enabled: true
global:
cattle:
systemDefaultRegistry: ""
psp:
enabled: false
proxy:
repository: rancher/mirrored-istio-proxyv2
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.17.2
tag: 1.17.2-distroless
proxy_init:
repository: rancher/mirrored-istio-proxyv2
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.17.2
tag: 1.17.2-distroless
defaultPodDisruptionBudget:
enabled: true
# Kiali subchart from rancher-kiali-server
kiali:
enabled: true
auth:
strategy: anonymous
deployment:
ingress_enabled: false
external_services:
prometheus:
custom_metrics_url: "http://rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090"
url: "http://rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090"
tracing:
in_cluster_url: "http://tracing.istio-system.svc:16686/jaeger"
use_grpc: false
grafana:
in_cluster_url: "http://rancher-monitoring-grafana.cattle-monitoring-system.svc:80"
url: "http://rancher-monitoring-grafana.cattle-monitoring-system.svc:80"
tracing:
enabled: false
contextPath: "/jaeger"
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## List of node taints to tolerate (requires Kubernetes >= 1.6)
tolerations: []


@ -1,2 +0,0 @@
workingDir: ""
url: packages/rancher-istio/1.17/rancher-kiali-server


@ -1,2 +0,0 @@
workingDir: ""
url: packages/rancher-istio/1.17/rancher-tracing


@ -1,3 +0,0 @@
url: local
version: 103.0.0+up1.17.2
doNotRelease: true


@ -1,67 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "kiali-server.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "kiali-server.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: kiali
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "kiali-server.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- policy
resourceNames:
- {{ include "kiali-server.fullname" . }}-psp
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "kiali-server.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
spec:
allowPrivilegeEscalation: false
forbiddenSysctls:
- '*'
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
runAsGroup:
rule: MustRunAs
ranges:
- min: 1
max: 65535
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end }}


@ -1,7 +0,0 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.global.cattle.psp.enabled }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}


@ -1,12 +0,0 @@
{{- if .Values.web_root_override }}
apiVersion: v1
kind: ConfigMap
metadata:
name: kiali-console
namespace: {{ .Release.Namespace }}
labels:
{{- include "kiali-server.labels" . | nindent 4 }}
data:
env.js: |
window.WEB_ROOT='/k8s/clusters/{{ .Values.global.cattle.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/http:kiali:20001/proxy/kiali';
{{- end }}


@ -1,31 +0,0 @@
--- charts-original/Chart.yaml
+++ charts/Chart.yaml
@@ -1,17 +1,26 @@
+annotations:
+ catalog.cattle.io/hidden: "true"
+ catalog.cattle.io/os: linux
+ catalog.cattle.io/requires-gvr: monitoring.coreos.com.prometheus/v1
+ catalog.rancher.io/namespace: cattle-istio-system
+ catalog.rancher.io/release-name: rancher-kiali-server
apiVersion: v2
appVersion: v1.66.0
description: Kiali is an open source project for service mesh observability, refer
- to https://www.kiali.io for details.
+ to https://www.kiali.io for details. This is installed as sub-chart with customized
+ values in Rancher's Istio.
home: https://github.com/kiali/kiali
icon: https://raw.githubusercontent.com/kiali/kiali.io/master/themes/kiali/static/img/kiali_logo_masthead.png
keywords:
- istio
- kiali
+- networking
+- infrastructure
maintainers:
- email: kiali-users@googlegroups.com
name: Kiali
url: https://kiali.io
-name: kiali-server
+name: rancher-kiali-server
sources:
- https://github.com/kiali/kiali
- https://github.com/kiali/kiali-operator


@ -1,49 +0,0 @@
--- charts-original/templates/_helpers.tpl
+++ charts/templates/_helpers.tpl
@@ -50,8 +50,15 @@
Selector labels
*/}}
{{- define "kiali-server.selectorLabels" -}}
+{{- $releaseName := .Release.Name -}}
+{{- $fullName := include "kiali-server.fullname" . -}}
+{{- $deployment := (lookup "apps/v1" "Deployment" .Release.Namespace $fullName) -}}
app.kubernetes.io/name: kiali
-app.kubernetes.io/instance: {{ include "kiali-server.fullname" . }}
+{{- if (and .Release.IsUpgrade $deployment)}}
+app.kubernetes.io/instance: {{ (get (($deployment).metadata.labels) "app.kubernetes.io/instance") | default $fullName }}
+{{- else }}
+app.kubernetes.io/instance: {{ $fullName }}
+{{- end }}
{{- end }}
{{/*
@@ -172,6 +179,29 @@
{{- end }}
{{- end }}
+{{- define "system_default_registry" -}}
+{{- if .Values.global.cattle.systemDefaultRegistry -}}
+{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
+{{- else -}}
+{{- "" -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Windows cluster will add default taint for linux nodes,
+add below linux tolerations to workloads could be scheduled to those linux nodes
+*/}}
+{{- define "linux-node-tolerations" -}}
+- key: "cattle.io/os"
+ value: "linux"
+ effect: "NoSchedule"
+ operator: "Equal"
+{{- end -}}
+
+{{- define "linux-node-selector" -}}
+kubernetes.io/os: linux
+{{- end -}}
+
{{/*
Autodetect remote cluster secrets if enabled - looks for secrets in the same namespace where Kiali is installed.
Returns a JSON dict whose keys are the cluster names and values are the cluster secret data.


@ -1,59 +0,0 @@
--- charts-original/templates/deployment.yaml
+++ charts/templates/deployment.yaml
@@ -53,7 +53,7 @@
{{- toYaml .Values.deployment.host_aliases | nindent 6 }}
{{- end }}
containers:
- - image: "{{ .Values.deployment.image_name }}{{ if .Values.deployment.image_digest }}@{{ .Values.deployment.image_digest }}{{ end }}:{{ .Values.deployment.image_version }}"
+ - image: "{{ template "system_default_registry" . }}{{ .Values.deployment.repository }}{{ if .Values.deployment.image_digest }}@{{ .Values.deployment.image_digest }}{{ end }}:{{ .Values.deployment.tag }}"
imagePullPolicy: {{ .Values.deployment.image_pull_policy | default "Always" }}
name: {{ include "kiali-server.fullname" . }}
command:
@@ -115,6 +115,11 @@
- name: LOG_SAMPLER_RATE
value: "{{ .Values.deployment.logger.sampler_rate }}"
volumeMounts:
+ {{- if .Values.web_root_override }}
+ - name: kiali-console
+ subPath: env.js
+ mountPath: /opt/kiali/console/env.js
+ {{- end }}
- name: {{ include "kiali-server.fullname" . }}-configuration
mountPath: "/kiali-configuration"
- name: {{ include "kiali-server.fullname" . }}-cert
@@ -140,6 +145,14 @@
{{- toYaml .Values.deployment.resources | nindent 10 }}
{{- end }}
volumes:
+ {{- if .Values.web_root_override }}
+ - name: kiali-console
+ configMap:
+ name: kiali-console
+ items:
+ - key: env.js
+ path: env.js
+ {{- end }}
- name: {{ include "kiali-server.fullname" . }}-configuration
configMap:
name: {{ include "kiali-server.fullname" . }}
@@ -194,12 +207,12 @@
{{- toYaml .Values.deployment.affinity.pod_anti | nindent 10 }}
{{- end }}
{{- end }}
- {{- if .Values.deployment.tolerations }}
- tolerations:
- {{- toYaml .Values.deployment.tolerations | nindent 8 }}
- {{- end }}
- {{- if .Values.deployment.node_selector }}
- nodeSelector:
- {{- toYaml .Values.deployment.node_selector | nindent 8 }}
- {{- end }}
+ tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
+{{- if .Values.deployment.tolerations }}
+{{ toYaml .Values.deployment.tolerations | indent 8 }}
+{{- end }}
+ nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
+{{- if .Values.deployment.node_selector }}
+{{ toYaml .Values.deployment.node_selector | indent 8 }}
+{{- end }}
...


@ -1,40 +0,0 @@
--- charts-original/values.yaml
+++ charts/values.yaml
@@ -13,6 +13,9 @@
# do this, a PR would be welcome.
kiali_route_url: ""
+# rancher specific override that allows proxy access to kiali url
+web_root_override: true
+
#
# Settings that mimic the Kiali CR which are placed in the ConfigMap.
# Note that only those values used by the Helm Chart will be here.
@@ -42,10 +45,10 @@
api_version: "autoscaling/v2"
spec: {}
image_digest: "" # use "sha256" if image_version is a sha256 hash (do NOT prefix this value with a "@")
- image_name: quay.io/kiali/kiali
+ repository: rancher/mirrored-kiali-kiali
image_pull_policy: "Always"
image_pull_secrets: []
- image_version: v1.66.0 # version like "v1.39" (see: https://quay.io/repository/kiali/kiali?tab=tags) or a digest hash
+ tag: v1.66.0 # version like "v1.66" (see: https://quay.io/repository/kiali/kiali?tab=tags) or a digest hash
ingress:
additional_labels: {}
class_name: "nginx"
@@ -110,3 +113,13 @@
metrics_enabled: true
metrics_port: 9090
web_root: ""
+
+# Common settings used among istio subcharts.
+global:
+ # Specify rancher clusterId of external tracing config
+ # https://github.com/istio/istio.io/issues/4146#issuecomment-493543032
+ cattle:
+ systemDefaultRegistry: ""
+ clusterId:
+ psp:
+ enabled: false
\ No newline at end of file


@ -1,3 +0,0 @@
url: https://kiali.org/helm-charts/kiali-server-1.66.0.tgz
version: 103.0.0
doNotRelease: true


@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@ -1,12 +0,0 @@
annotations:
catalog.cattle.io/hidden: "true"
catalog.cattle.io/os: linux
catalog.rancher.io/certified: rancher
catalog.rancher.io/namespace: istio-system
catalog.rancher.io/release-name: rancher-tracing
apiVersion: v1
appVersion: 1.43.0
description: A quick start Jaeger Tracing installation using the all-in-one demo.
This is not production qualified. Refer to https://www.jaegertracing.io/ for details.
name: rancher-tracing
version: 1.43.0


@ -1,5 +0,0 @@
# Jaeger
A Rancher chart based on the Jaeger all-in-one quick installation option. This chart will allow you to trace and monitor distributed microservices.
> **Note:** This is the basic all-in-one Jaeger installation, which is not qualified for production. Use the [Jaeger Tracing](https://www.jaegertracing.io) documentation to determine which installation you need for production.


@ -1,92 +0,0 @@
{{/* affinity - https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ */}}
{{- define "nodeAffinity" }}
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "nodeAffinityRequiredDuringScheduling" . }}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "nodeAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- define "nodeAffinityRequiredDuringScheduling" }}
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
{{- range $key, $val := .Values.global.arch }}
{{- if gt ($val | int) 0 }}
- {{ $key | quote }}
{{- end }}
{{- end }}
{{- $nodeSelector := default .Values.global.defaultNodeSelector .Values.nodeSelector -}}
{{- range $key, $val := $nodeSelector }}
- key: {{ $key }}
operator: In
values:
- {{ $val | quote }}
{{- end }}
{{- end }}
{{- define "nodeAffinityPreferredDuringScheduling" }}
{{- range $key, $val := .Values.global.arch }}
{{- if gt ($val | int) 0 }}
- weight: {{ $val | int }}
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- {{ $key | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- define "podAntiAffinity" }}
{{- if or .Values.podAntiAffinityLabelSelector .Values.podAntiAffinityTermLabelSelector}}
podAntiAffinity:
{{- if .Values.podAntiAffinityLabelSelector }}
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "podAntiAffinityRequiredDuringScheduling" . }}
{{- end }}
{{- if or .Values.podAntiAffinityTermLabelSelector}}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "podAntiAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- end }}
{{- end }}
{{- define "podAntiAffinityRequiredDuringScheduling" }}
{{- range $index, $item := .Values.podAntiAffinityLabelSelector }}
- labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v | quote }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
{{- end }}
{{- end }}
{{- define "podAntiAffinityPreferredDuringScheduling" }}
{{- range $index, $item := .Values.podAntiAffinityTermLabelSelector }}
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v | quote }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
weight: 100
{{- end }}
{{- end }}


@ -1,47 +0,0 @@
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
{{/*
Expand the name of the chart.
*/}}
{{- define "tracing.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "tracing.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Windows clusters add a default taint for Linux nodes;
add the Linux tolerations below so that workloads can be scheduled to those Linux nodes
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
kubernetes.io/os: linux
{{- end -}}


@ -1,94 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
matchLabels:
app: {{ .Values.provider }}
template:
metadata:
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
prometheus.io/scrape: "true"
prometheus.io/port: "14269"
{{- if .Values.jaeger.podAnnotations }}
{{ toYaml .Values.jaeger.podAnnotations | indent 8 }}
{{- end }}
spec:
containers:
- name: jaeger
image: "{{ template "system_default_registry" . }}{{ .Values.jaeger.repository }}:{{ .Values.jaeger.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
env:
{{- if eq .Values.jaeger.spanStorageType "badger" }}
- name: BADGER_EPHEMERAL
value: "false"
- name: SPAN_STORAGE_TYPE
value: "badger"
- name: BADGER_DIRECTORY_VALUE
value: "/badger/data"
- name: BADGER_DIRECTORY_KEY
value: "/badger/key"
{{- end }}
- name: COLLECTOR_ZIPKIN_HOST_PORT
value: "9411"
- name: MEMORY_MAX_TRACES
value: "{{ .Values.jaeger.memory.max_traces }}"
- name: QUERY_BASE_PATH
value: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} /{{ .Values.provider }} {{ end }}
livenessProbe:
httpGet:
path: /
port: 14269
readinessProbe:
httpGet:
path: /
port: 14269
{{- if eq .Values.jaeger.spanStorageType "badger" }}
volumeMounts:
- name: data
mountPath: /badger
{{- end }}
resources:
{{- if .Values.jaeger.resources }}
{{ toYaml .Values.jaeger.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 12 }}
{{- end }}
affinity:
{{- include "nodeAffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.global.cattle.psp.enabled }}
securityContext:
runAsNonRoot: true
runAsUser: 1000
{{- end }}
serviceAccountName: {{ include "tracing.fullname" . }}
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if eq .Values.jaeger.spanStorageType "badger" }}
volumes:
- name: data
{{- if .Values.jaeger.persistentVolumeClaim.enabled }}
persistentVolumeClaim:
claimName: istio-jaeger-pvc
{{- else }}
emptyDir: {}
{{- end }}
{{- end }}


@ -1,76 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "tracing.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ include "tracing.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups:
- policy
resourceNames:
- {{ include "tracing.fullname" . }}
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
allowPrivilegeEscalation: false
forbiddenSysctls:
- '*'
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
runAsGroup:
rule: MustRunAs
ranges:
- min: 1
max: 65535
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- emptyDir
- secret
- persistentVolumeClaim
{{- end }}


@ -1,16 +0,0 @@
{{- if .Values.jaeger.persistentVolumeClaim.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: istio-jaeger-pvc
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
spec:
storageClassName: {{ .Values.jaeger.storageClassName }}
accessModes:
- {{ .Values.jaeger.accessMode }}
resources:
requests:
storage: {{.Values.jaeger.persistentVolumeClaim.storage }}
{{- end }}


@ -1,63 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: tracing
namespace: {{ .Release.Namespace }}
annotations:
{{- range $key, $val := .Values.service.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
type: {{ .Values.service.type }}
ports:
- name: {{ .Values.service.name }}
port: {{ .Values.service.externalPort }}
protocol: TCP
targetPort: 16686
selector:
app: {{ .Values.provider }}
---
# Jaeger implements the Zipkin API. To support swapping out the tracing backend, we use a Service named Zipkin.
apiVersion: v1
kind: Service
metadata:
name: zipkin
namespace: {{ .Release.Namespace }}
labels:
name: zipkin
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
ports:
- name: {{ .Values.service.name }}
port: {{ .Values.zipkin.queryPort }}
targetPort: {{ .Values.zipkin.queryPort }}
selector:
app: {{ .Values.provider }}
---
apiVersion: v1
kind: Service
metadata:
name: jaeger-collector
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
type: ClusterIP
ports:
- name: jaeger-collector-http
port: 14268
targetPort: 14268
protocol: TCP
- name: jaeger-collector-grpc
port: 14250
targetPort: 14250
protocol: TCP
selector:
app: {{ .Values.provider }}


@ -1,9 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}


@ -1,7 +0,0 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.global.cattle.psp.enabled }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}


@ -1,52 +0,0 @@
provider: jaeger
contextPath: ""
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## List of node taints to tolerate (requires Kubernetes >= 1.6)
tolerations: []
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
nameOverride: ""
fullnameOverride: ""
global:
cattle:
systemDefaultRegistry: ""
psp:
enabled: false
defaultResources: {}
imagePullPolicy: IfNotPresent
imagePullSecrets: []
arch:
amd64: 2
s390x: 2
ppc64le: 2
defaultNodeSelector:
kubernetes.io/os: linux
rbac:
pspEnabled: false
jaeger:
repository: rancher/mirrored-jaegertracing-all-in-one
tag: 1.43.0
# spanStorageType can be "memory" or "badger" for the all-in-one image
spanStorageType: badger
resources:
requests:
cpu: 10m
persistentVolumeClaim:
enabled: false
storage: 5Gi
storageClassName: ""
accessMode: ReadWriteMany
memory:
max_traces: 50000
zipkin:
queryPort: 9411
service:
annotations: {}
name: http-query
type: ClusterIP
externalPort: 16686


@ -1,3 +0,0 @@
url: local
version: 103.0.0
doNotRelease: true


@ -1,24 +0,0 @@
annotations:
catalog.cattle.io/certified: rancher
catalog.cattle.io/display-name: Istio
catalog.cattle.io/kube-version: '>= 1.23.0-0 < 1.28.0-0'
catalog.cattle.io/namespace: istio-system
catalog.cattle.io/os: linux
catalog.cattle.io/permits-os: linux,windows
catalog.cattle.io/rancher-version: '>= 2.8.0-0 < 2.9.0-0'
catalog.cattle.io/release-name: rancher-istio
catalog.cattle.io/requests-cpu: 710m
catalog.cattle.io/requests-memory: 2314Mi
catalog.cattle.io/type: cluster-tool
catalog.cattle.io/ui-component: istio
catalog.cattle.io/upstream-version: 1.18.2
apiVersion: v1
appVersion: 1.18.2
description: A basic Istio setup that installs with the istioctl. Refer to https://istio.io/latest/
for details.
icon: https://charts.rancher.io/assets/logos/istio.svg
keywords:
- networking
- infrastructure
name: rancher-istio
version: 1.18.2


@ -1,79 +0,0 @@
# Rancher-Istio Chart
Our [Istio](https://istio.io/) installer wraps the istioctl binary commands in a handy helm chart, including an overlay file option to allow complex customization.
See the app-readme for known issues and deprecations.
## Installation Requirements
#### Chart Dependencies
- rancher-monitoring chart or other Prometheus installation
#### Install
To install the rancher-istio chart with helm, use the following command:
```
helm install rancher-istio <location/of/the/rancher-istio/chart> --create-namespace -n istio-system
```
#### Uninstall
To ensure rancher-istio uninstalls correctly, you must uninstall rancher-istio prior to uninstalling chart dependencies (see chart dependencies for list of dependencies). This is because all definitions need to be available in order to properly build the rancher-istio objects for removal.
**If you remove dependent CRD charts prior to removing rancher-istio, you may encounter the following error:**
`Error: uninstallation completed with 1 error(s): unable to build kubernetes objects for delete: unable to recognize "": no matches for kind "MonitoringDashboard" in version "monitoring.kiali.io/v1alpha1"`
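Assuming the chart was installed into the `istio-system` namespace as shown above, a typical removal order might look like the following (release names and namespaces are illustrative; use the ones from your own installation):
```
# remove rancher-istio first, while its CRD dependencies are still present
helm uninstall rancher-istio -n istio-system
# only then remove the monitoring chart that provides those CRDs
helm uninstall rancher-monitoring -n cattle-monitoring-system
```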
## Addons
The addons that are included with rancher-istio are:
- Kiali
- Jaeger
Each addon has additional customization and dependencies required for them to work as expected. Use the values.yaml to customize or to enable/disable each addon.
### Kiali Addon
Kiali allows you to view and manage your Istio-based service mesh through an easy-to-use dashboard.
#### Kiali Dependencies
##### rancher-monitoring chart or other Prometheus installation
This dependency installs the required CRDs for installing Kiali. Since Kiali is bundled with Istio in this chart, your Istio installation will fail if these dependencies are not installed. If you do not plan on using Kiali, set `kiali.enabled=false` when installing Istio for a successful installation.
#### Prometheus Configuration for Kiali
> **Note:** The following configuration options assume you have installed the dependencies for Kiali. Please ensure you have Prometheus in your cluster before proceeding.
The Rancher Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which means all namespaces will be scraped by Prometheus by default. This ensures you can view traffic, metrics and graphs for resources deployed in other namespaces.
To limit scraping to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true` and add one of the following configurations to ensure you can continue to view traffic, metrics and graphs for your deployed resources.
1. Add a Service Monitor or Pod Monitor in the namespace with the targets you want to scrape.
1. Add an additionalScrapeConfig to your rancher-monitoring instance to scrape all targets in all namespaces.
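For option 1, a minimal ServiceMonitor sketch might look like the following (the names, namespace, labels, and port are illustrative; adjust the selector to match your own workload):
```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-metrics        # illustrative name
  namespace: my-app           # namespace containing the targets you want to scrape
spec:
  selector:
    matchLabels:
      app: my-app             # must match the labels on your Service
  endpoints:
    - port: metrics           # named Service port exposing Prometheus metrics
```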
#### Kiali External Services
The external services that can be configured in Kiali are: Prometheus, Grafana and Tracing.
##### Prometheus
The `kiali.external_services.prometheus` url is set in the values.yaml:
```
http://{{ .Values.nameOverride }}-prometheus.{{ .Values.namespaceOverride }}.svc:{{ prometheus.service.port }}
```
The url depends on the default values for `nameOverride`, `namespaceOverride`, and `prometheus.service.port` being set in your rancher-monitoring or other monitoring instance.
##### Grafana
The `kiali.external_services.grafana` url is set in the values.yaml:
```
http://{{ .Values.nameOverride }}-grafana.{{ .Values.namespaceOverride }}.svc:{{ grafana.service.port }}
```
The url depends on the default values for `nameOverride`, `namespaceOverride`, and `grafana.service.port` being set in your rancher-monitoring or other monitoring instance.
##### Tracing
The `kiali.external_services.tracing` url and `.Values.tracing.contextPath` is set in the rancher-istio values.yaml:
```
http://tracing.{{ .Values.namespaceOverride }}.svc:{{ .Values.service.externalPort }}/{{ .Values.tracing.contextPath }}
```
The url depends on the default values for `namespaceOverride` and `.Values.service.externalPort` being set in your rancher-tracing or other tracing instance.
## Jaeger Addon
Jaeger allows you to trace and monitor distributed microservices.
> **Note:** This addon is using the all-in-one Jaeger installation which is not qualified for production. Use the [Jaeger Tracing](https://www.jaegertracing.io/docs/1.21/getting-started/) documentation to determine which installation you will need for your production needs.


@ -1,65 +0,0 @@
# Rancher Istio
Our [Istio](https://istio.io/) installer wraps the istioctl binary commands in a handy helm chart, including an overlay file option to allow complex customization. It also includes:
* **[Kiali](https://kiali.io/)**: Used for graphing traffic flow throughout the mesh
* **[Jaeger](https://www.jaegertracing.io/)**: A quick start, all-in-one installation used for tracing distributed systems. This is not production qualified; please refer to the Jaeger documentation to determine which installation you need instead.
For more information on how to use the feature, refer to our [docs](https://rancher.com/docs/rancher/v2.x/en/istio/v2.5/).
## Upgrading to Kubernetes v1.25+
Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.
As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `global.cattle.psp.enabled` set to `false` if it has been previously set to `true`.
> **Note:**
> In this chart release, any previous field that was associated with any PSP resources has been removed in favor of a single global field: `global.cattle.psp.enabled`.
> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, please consult the Rancher docs on how to clean up your Helm release secrets.
Upon setting `global.cattle.psp.enabled` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.
As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Rancher docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
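Using the install command from the chart README, an in-place upgrade that disables PSPs might look like this (the release name and namespace are assumptions based on the install example):
```
helm upgrade rancher-istio <location/of/the/rancher-istio/chart> \
  -n istio-system \
  --set global.cattle.psp.enabled=false
```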
## Warnings
- Upgrading across more than two minor versions (e.g., 1.6.x to 1.9.x) in one step is not officially tested or recommended. See [Istio upgrade docs](https://istio.io/latest/docs/setup/upgrade/) for more details.
## Known Issues
#### Airgapped Environments
**A temporary fix has been added to this chart to allow upgrades to succeed in an airgapped environment. See [this issue](https://github.com/rancher/rancher/issues/30842) for details.** We are still advocating for an upstream fix in Istio to formally resolve this issue. The root cause is that the Istio Operator upgrade command reaches out to an external repo on upgrades, and that external repo is not configurable. We are tracking the fix for this issue [here](https://github.com/rancher/rancher/issues/33402).
#### Installing Istio with the CNI component enabled on an SELinux-enabled RHEL 8.4 cluster
To install Istio with CNI enabled (e.g., when the cluster has a default PSP set to "restricted") on a cluster whose nodes run RHEL 8.4 with SELinux enabled, run the following command on each node before creating the cluster.
`mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni`
See [this issue](https://github.com/rancher/rancher/issues/33291) for details.
## Installing Istio with distroless images
Istio `103.1.0+up1.18.2` uses distroless images for `istio-proxyv2`, `istio-install-cni` and `istio-pilot`. Distroless images don't have common debugging tools like `bash`, `curl`, etc. If you wish to troubleshoot Istio, you can switch to regular images by updating the `values.yaml` file.
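For example, a values override that switches the pilot image back to the regular (non-distroless) tag could look like the following sketch; the same pattern applies to the `cni`, `proxy`, and `proxy_init` sections:
```
pilot:
  enabled: true
  repository: rancher/mirrored-istio-pilot
  tag: 1.18.2   # regular image, replacing the default 1.18.2-distroless
```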
## Deprecations
#### v1alpha1 security policies
As of 1.6, Istio removed support for the `v1alpha1` security policy resource and replaced the API with `v1beta1` authorization policies. https://istio.io/latest/docs/reference/config/security/authorization-policy/
If you are currently running rancher-istio <= 1.7.x, you need to migrate any existing `v1alpha1` security policies to `v1beta1` authorization policies prior to upgrading to the next minor version.
> **Note:** If you attempt to upgrade prior to migrating your policy resources, you might see errors similar to:
```
Error: found 6 CRD of unsupported v1alpha1 security policy
```
```
Error: found 1 unsupported v1alpha1 security policy
```
```
Control Plane - policy pod - istio-policy - version: x.x.x does not match the target version x.x.x
```
Continue with the migration steps below before retrying the upgrade process.
#### Migrating Resources:
Migration steps can be found in this [istio blog post](https://istio.io/latest/blog/2021/migrate-alpha-policy/ "istio blog post").
You can also use these [quick steps](https://github.com/rancher/rancher/issues/34699#issuecomment-921995917 "quick steps") to determine if you need to follow the more extensive migration steps.
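As a hypothetical illustration of the target API (not taken from this chart), a migrated `v1beta1` AuthorizationPolicy restricting access to a workload could look like the following; the namespace, workload labels, and service account shown are made-up placeholders:

```
# Hypothetical example: only allow requests to the "reviews" workload
# from the "bookinfo-productpage" service account.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-productpage
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/bookinfo/sa/bookinfo-productpage"]
```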


@ -1,135 +0,0 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
components:
base:
enabled: {{ .Values.base.enabled }}
cni:
enabled: {{ .Values.cni.enabled }}
k8s:
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
egressGateways:
- enabled: {{ .Values.egressGateways.enabled }}
name: istio-egressgateway
k8s:
{{- if .Values.egressGateways.hpaSpec }}
hpaSpec: {{ toYaml .Values.egressGateways.hpaSpec | nindent 12 }}
{{- end }}
{{- if .Values.egressGateways.podDisruptionBudget }}
podDisruptionBudget: {{ toYaml .Values.egressGateways.podDisruptionBudget | nindent 12 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
ingressGateways:
- enabled: {{ .Values.ingressGateways.enabled }}
name: istio-ingressgateway
k8s:
{{- if .Values.ingressGateways.hpaSpec }}
hpaSpec: {{ toYaml .Values.ingressGateways.hpaSpec | nindent 12 }}
{{- end }}
{{- if .Values.ingressGateways.podDisruptionBudget }}
podDisruptionBudget: {{ toYaml .Values.ingressGateways.podDisruptionBudget | nindent 12 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
service:
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 8080
nodePort: 31380
- name: https
port: 443
targetPort: 8443
nodePort: 31390
- name: tcp
port: 31400
targetPort: 31400
nodePort: 31400
- name: tls
port: 15443
targetPort: 15443
istiodRemote:
enabled: {{ .Values.istiodRemote.enabled }}
pilot:
enabled: {{ .Values.pilot.enabled }}
k8s:
{{- if .Values.pilot.hpaSpec }}
hpaSpec: {{ toYaml .Values.pilot.hpaSpec | nindent 12 }}
{{- end }}
{{- if .Values.pilot.podDisruptionBudget }}
podDisruptionBudget: {{ toYaml .Values.pilot.podDisruptionBudget | nindent 12 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 12 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 12 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 12 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 12 }}
{{- end }}
hub: {{ .Values.systemDefaultRegistry | default "docker.io" }}
profile: default
tag: {{ .Values.tag }}
revision: {{ .Values.revision }}
meshConfig:
defaultConfig:
proxyMetadata:
{{- if .Values.dns.enabled }}
ISTIO_META_DNS_CAPTURE: "true"
{{- end }}
values:
gateways:
istio-egressgateway:
name: istio-egressgateway
type: {{ .Values.egressGateways.type }}
istio-ingressgateway:
name: istio-ingressgateway
type: {{ .Values.ingressGateways.type }}
global:
istioNamespace: {{ template "istio.namespace" . }}
proxy:
image: {{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}
proxy_init:
image: {{ template "system_default_registry" . }}{{ .Values.global.proxy_init.repository }}:{{ .Values.global.proxy_init.tag }}
{{- if .Values.global.defaultPodDisruptionBudget.enabled }}
defaultPodDisruptionBudget:
enabled: {{ .Values.global.defaultPodDisruptionBudget.enabled }}
{{- end }}
{{- if .Values.pilot.enabled }}
pilot:
image: {{ template "system_default_registry" . }}{{ .Values.pilot.repository }}:{{ .Values.pilot.tag }}
{{- end }}
telemetry:
enabled: {{ .Values.telemetry.enabled }}
v2:
enabled: {{ .Values.telemetry.v2.enabled }}
{{- if .Values.cni.enabled }}
cni:
image: {{ template "system_default_registry" . }}{{ .Values.cni.repository }}:{{ .Values.cni.tag }}
excludeNamespaces:
{{- toYaml .Values.cni.excludeNamespaces | nindent 8 }}
logLevel: {{ .Values.cni.logLevel }}
{{- end }}


@ -1,7 +0,0 @@
dependencies:
- condition: kiali.enabled
name: kiali
repository: file://./charts/kiali
- condition: tracing.enabled
name: tracing
repository: file://./charts/tracing


@ -1,37 +0,0 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
components:
ingressGateways:
- enabled: true
name: ilb-gateway
namespace: user-ingressgateway-ns
k8s:
resources:
requests:
cpu: 200m
service:
ports:
- name: tcp-citadel-grpc-tls
port: 8060
targetPort: 8060
- name: tcp-dns
port: 5353
serviceAnnotations:
cloud.google.com/load-balancer-type: internal
- enabled: true
name: other-gateway
namespace: cattle-istio-system
k8s:
resources:
requests:
cpu: 200m
service:
ports:
- name: tcp-citadel-grpc-tls
port: 8060
targetPort: 8060
- name: tcp-dns
port: 5353
serviceAnnotations:
cloud.google.com/load-balancer-type: internal


@ -1,27 +0,0 @@
{{/* Ensure namespace is set the same everywhere */}}
{{- define "istio.namespace" -}}
{{- .Release.Namespace | default "istio-system" -}}
{{- end -}}
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
{{/*
Windows cluster will add default taint for linux nodes,
add below linux tolerations to workloads could be scheduled to those linux nodes
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
kubernetes.io/os: linux
{{- end -}}


@ -1,43 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-admin: "true"
name: istio-admin
namespace: {{ template "istio.namespace" . }}
rules:
- apiGroups:
- config.istio.io
resources:
- adapters
- attributemanifests
- handlers
- httpapispecbindings
- httpapispecs
- instances
- quotaspecbindings
- quotaspecs
- rules
- templates
verbs: ["get", "watch", "list"]
- apiGroups:
- networking.istio.io
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
- workloadentries
verbs:
- '*'
- apiGroups:
- security.istio.io
resources:
- authorizationpolicies
- peerauthentications
- requestauthentications
verbs:
- '*'


@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-installer-base
namespace: {{ template "istio.namespace" . }}
data:
{{ tpl (.Files.Glob "configs/*").AsConfig . | indent 2 }}


@ -1,134 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-installer
rules:
# istio groups
- apiGroups:
- extensions.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- authentication.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- config.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- install.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- networking.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- rbac.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- security.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- telemetry.istio.io
resources:
- '*'
verbs:
- '*'
# k8s groups
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- '*'
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions.apiextensions.k8s.io
- customresourcedefinitions
verbs:
- '*'
- apiGroups:
- apps
- extensions
resources:
- daemonsets
- deployments
- deployments/finalizers
- ingresses
- replicasets
- statefulsets
verbs:
- '*'
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- '*'
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- get
- create
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- '*'
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- clusterroles
- roles
- rolebindings
verbs:
- '*'
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- events
- namespaces
- pods
- pods/exec
- persistentvolumeclaims
- secrets
- services
- serviceaccounts
verbs:
- '*'
{{- if and .Values.global.cattle.psp.enabled }}
- apiGroups:
- policy
resourceNames:
- istio-installer
resources:
- podsecuritypolicies
verbs:
- use
{{- end }}


@ -1,12 +0,0 @@
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: istio-installer
subjects:
- kind: ServiceAccount
name: istio-installer
namespace: {{ template "istio.namespace" . }}
roleRef:
kind: ClusterRole
name: istio-installer
apiGroup: rbac.authorization.k8s.io


@ -1,43 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
namespace: {{ template "istio.namespace" . }}
name: istio-edit
rules:
- apiGroups:
- config.istio.io
resources:
- adapters
- attributemanifests
- handlers
- httpapispecbindings
- httpapispecs
- instances
- quotaspecbindings
- quotaspecs
- rules
- templates
verbs: ["get", "watch", "list"]
- apiGroups:
- networking.istio.io
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
- workloadentries
verbs:
- '*'
- apiGroups:
- security.istio.io
resources:
- authorizationpolicies
- peerauthentications
- requestauthentications
verbs:
- '*'


@ -1,51 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp-istio-cni
namespace: {{ template "istio.namespace" . }}
spec:
allowPrivilegeEscalation: true
fsGroup:
rule: RunAsAny
hostNetwork: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- emptyDir
- hostPath
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: psp-istio-cni
namespace: {{ template "istio.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: psp-istio-cni
subjects:
- kind: ServiceAccount
name: istio-cni
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: psp-istio-cni
namespace: {{ template "istio.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- psp-istio-cni
resources:
- podsecuritypolicies
verbs:
- use
{{- end }}


@ -1,66 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: istioctl-installer
namespace: {{ template "istio.namespace" . }}
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
backoffLimit: 1
template:
spec:
{{- if .Values.installer.releaseMirror.enabled }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "github.com"
{{- end }}
containers:
- name: istioctl-installer
image: {{ template "system_default_registry" . }}{{ .Values.installer.repository }}:{{ .Values.installer.tag }}
env:
- name: RELEASE_NAME
value: {{ .Release.Name }}
- name: ISTIO_NAMESPACE
value: {{ template "istio.namespace" . }}
- name: FORCE_INSTALL
value: {{ .Values.forceInstall | default "false" | quote }}
- name: RELEASE_MIRROR_ENABLED
value: {{ .Values.installer.releaseMirror.enabled | quote }}
- name: SECONDS_SLEEP
value: {{ .Values.installer.debug.secondsSleep | quote}}
command: ["/bin/sh","-c"]
args: ["/usr/local/app/scripts/run.sh"]
volumeMounts:
- name: config-volume
mountPath: /app/istio-base.yaml
subPath: istio-base.yaml
{{- if .Values.overlayFile }}
- name: overlay-volume
mountPath: /app/overlay-config.yaml
subPath: overlay-config.yaml
{{- end }}
volumes:
- name: config-volume
configMap:
name: istio-installer-base
{{- if .Values.overlayFile }}
- name: overlay-volume
configMap:
name: istio-installer-overlay
{{- end }}
serviceAccountName: istio-installer
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
securityContext:
runAsUser: 499
runAsGroup: 487
restartPolicy: Never


@ -1,30 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: istio-installer
namespace: {{ template "istio.namespace" . }}
spec:
privileged: false
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: false
volumes:
- 'configMap'
- 'secret'
{{- end }}


@ -1,81 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: istio-psp
namespace: {{ template "istio.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: istio-psp
subjects:
- kind: ServiceAccount
name: istio-egressgateway-service-account
- kind: ServiceAccount
name: istio-ingressgateway-service-account
- kind: ServiceAccount
name: istio-mixer-service-account
- kind: ServiceAccount
name: istio-operator-authproxy
- kind: ServiceAccount
name: istiod-service-account
- kind: ServiceAccount
name: istio-sidecar-injector-service-account
- kind: ServiceAccount
name: istiocoredns-service-account
- kind: ServiceAccount
name: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: istio-psp
namespace: {{ template "istio.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- istio-psp
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: istio-psp
namespace: {{ template "istio.namespace" . }}
spec:
allowPrivilegeEscalation: false
forbiddenSysctls:
- '*'
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
runAsGroup:
rule: MustRunAs
ranges:
- min: 1
max: 65535
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end }}

View File

@ -1,53 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: istioctl-uninstaller
namespace: {{ template "istio.namespace" . }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
spec:
containers:
- name: istioctl-uninstaller
image: {{ template "system_default_registry" . }}{{ .Values.installer.repository }}:{{ .Values.installer.tag }}
env:
- name: RELEASE_NAME
value: {{ .Release.Name }}
- name: ISTIO_NAMESPACE
value: {{ template "istio.namespace" . }}
command: ["/bin/sh","-c"]
args: ["/usr/local/app/scripts/uninstall_istio_system.sh"]
volumeMounts:
- name: config-volume
mountPath: /app/istio-base.yaml
subPath: istio-base.yaml
{{- if .Values.overlayFile }}
- name: overlay-volume
mountPath: /app/overlay-config.yaml
subPath: overlay-config.yaml
{{ end }}
volumes:
- name: config-volume
configMap:
name: istio-installer-base
{{- if .Values.overlayFile }}
- name: overlay-volume
configMap:
name: istio-installer-overlay
{{ end }}
serviceAccountName: istio-installer
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
securityContext:
runAsUser: 101
runAsGroup: 101
restartPolicy: OnFailure


@ -1,9 +0,0 @@
{{- if .Values.overlayFile }}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-installer-overlay
namespace: {{ template "istio.namespace" . }}
data:
overlay-config.yaml: {{ toYaml .Values.overlayFile | indent 2 }}
{{- end }}


@ -1,51 +0,0 @@
{{- if .Values.kiali.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: envoy-stats-monitor
namespace: {{ template "istio.namespace" . }}
labels:
monitoring: istio-proxies
spec:
selector:
matchExpressions:
- {key: istio-prometheus-ignore, operator: DoesNotExist}
namespaceSelector:
any: true
jobLabel: envoy-stats
endpoints:
- path: /stats/prometheus
targetPort: 15090
interval: 15s
relabelings:
- sourceLabels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- action: labeldrop
regex: "__meta_kubernetes_pod_label_(.+)"
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: namespace
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: pod_name
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: istio-component-monitor
namespace: {{ template "istio.namespace" . }}
labels:
monitoring: istio-components
spec:
jobLabel: istio
targetLabels: [app]
selector:
matchExpressions:
- {key: istio, operator: In, values: [pilot]}
namespaceSelector:
any: true
endpoints:
- port: http-monitoring
interval: 15s
{{- end -}}


@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-installer
namespace: {{ template "istio.namespace" . }}


@ -1,7 +0,0 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.global.cattle.psp.enabled }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}


@ -1,41 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
namespace: {{ template "istio.namespace" . }}
name: istio-view
rules:
- apiGroups:
- config.istio.io
resources:
- adapters
- attributemanifests
- handlers
- httpapispecbindings
- httpapispecs
- instances
- quotaspecbindings
- quotaspecs
- rules
- templates
verbs: ["get", "watch", "list"]
- apiGroups:
- networking.istio.io
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
- workloadentries
verbs: ["get", "watch", "list"]
- apiGroups:
- security.istio.io
resources:
- authorizationpolicies
- peerauthentications
- requestauthentications
verbs: ["get", "watch", "list"]


@ -1,119 +0,0 @@
overlayFile: ""
tag: 1.18.2
##Setting forceInstall: true will remove the check for istio version < 1.6.x and will not analyze your install cluster prior to install
forceInstall: false
installer:
repository: rancher/istio-installer
tag: 1.18.2-rancher1
##releaseMirror are configurations for istio upgrades.
##Setting releaseMirror.enabled: true will cause istio to use bundled in images from rancher/istio-installer to perfom an upgrade - this is ideal
##for airgap setups. Setting releaseMirror.enabled to false means istio will call externally to github to fetch the required assets.
releaseMirror:
enabled: false
##Set the secondsSleep to run a sleep command `sleep <secondsSleep>s` to allow time to exec into istio-installer pod for debugging
debug:
secondsSleep: 0
##Native support for dns added in 1.8
dns:
enabled: false
base:
enabled: true
cni:
enabled: false
repository: rancher/mirrored-istio-install-cni
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.18.2
tag: 1.18.2-distroless
logLevel: info
excludeNamespaces:
- istio-system
- kube-system
egressGateways:
enabled: false
type: NodePort
hpaSpec: {}
podDisruptionBudget: {}
ingressGateways:
enabled: true
type: NodePort
hpaSpec: {}
podDisruptionBudget: {}
istiodRemote:
enabled: false
pilot:
enabled: true
repository: rancher/mirrored-istio-pilot
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.18.2
tag: 1.18.2-distroless
hpaSpec: {}
podDisruptionBudget: {}
telemetry:
enabled: true
v2:
enabled: true
global:
cattle:
systemDefaultRegistry: ""
psp:
enabled: false
proxy:
repository: rancher/mirrored-istio-proxyv2
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.18.2
tag: 1.18.2-distroless
proxy_init:
repository: rancher/mirrored-istio-proxyv2
# If you wish to troubleshoot Istio, you can switch to regular images by uncommenting the following tag and deleting
# the distroless tag:
# tag: 1.18.2
tag: 1.18.2-distroless
defaultPodDisruptionBudget:
enabled: true
# Kiali subchart from rancher-kiali-server
kiali:
enabled: true
# If you wish to change the authentication you can check the options in the Kiali documentation https://kiali.io/docs/configuration/authentication/
auth:
strategy: anonymous
server:
web_root: /
deployment:
ingress_enabled: false
external_services:
prometheus:
custom_metrics_url: "http://rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090"
url: "http://rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090"
tracing:
in_cluster_url: "http://tracing.istio-system.svc:16686/jaeger"
use_grpc: false
grafana:
in_cluster_url: "http://rancher-monitoring-grafana.cattle-monitoring-system.svc:80"
url: "http://rancher-monitoring-grafana.cattle-monitoring-system.svc:80"
tracing:
enabled: false
contextPath: "/jaeger"
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## List of node taints to tolerate (requires Kubernetes >= 1.6)
tolerations: []


@ -1,2 +0,0 @@
workingDir: ""
url: packages/rancher-istio/1.18/rancher-kiali-server


@ -1,2 +0,0 @@
workingDir: ""
url: packages/rancher-istio/1.18/rancher-tracing


@ -1,2 +0,0 @@
url: local
version: 103.1.0+up1.18.2


@ -1,67 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "kiali-server.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "kiali-server.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: kiali
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "kiali-server.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- policy
resourceNames:
- {{ include "kiali-server.fullname" . }}-psp
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "kiali-server.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
spec:
allowPrivilegeEscalation: false
forbiddenSysctls:
- '*'
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
runAsGroup:
rule: MustRunAs
ranges:
- min: 1
max: 65535
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end }}


@ -1,7 +0,0 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.global.cattle.psp.enabled }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}


@ -1,12 +0,0 @@
{{- if .Values.web_root_override }}
apiVersion: v1
kind: ConfigMap
metadata:
name: kiali-console
namespace: {{ .Release.Namespace }}
labels:
{{- include "kiali-server.labels" . | nindent 4 }}
data:
env.js: |
window.WEB_ROOT='/k8s/clusters/{{ .Values.global.cattle.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/http:kiali:20001/proxy/kiali';
{{- end }}


@ -1,31 +0,0 @@
--- charts-original/Chart.yaml
+++ charts/Chart.yaml
@@ -1,17 +1,26 @@
+annotations:
+ catalog.cattle.io/hidden: "true"
+ catalog.cattle.io/os: linux
+ catalog.cattle.io/requires-gvr: monitoring.coreos.com.prometheus/v1
+ catalog.rancher.io/namespace: cattle-istio-system
+ catalog.rancher.io/release-name: rancher-kiali-server
apiVersion: v2
appVersion: v1.67.0
description: Kiali is an open source project for service mesh observability, refer
- to https://www.kiali.io for details.
+ to https://www.kiali.io for details. This is installed as sub-chart with customized
+ values in Rancher's Istio.
home: https://github.com/kiali/kiali
icon: https://raw.githubusercontent.com/kiali/kiali.io/master/themes/kiali/static/img/kiali_logo_masthead.png
keywords:
- istio
- kiali
+- networking
+- infrastructure
maintainers:
- email: kiali-users@googlegroups.com
name: Kiali
url: https://kiali.io
-name: kiali-server
+name: rancher-kiali-server
sources:
- https://github.com/kiali/kiali
- https://github.com/kiali/kiali-operator

View File

@ -1,49 +0,0 @@
--- charts-original/templates/_helpers.tpl
+++ charts/templates/_helpers.tpl
@@ -50,8 +50,15 @@
Selector labels
*/}}
{{- define "kiali-server.selectorLabels" -}}
+{{- $releaseName := .Release.Name -}}
+{{- $fullName := include "kiali-server.fullname" . -}}
+{{- $deployment := (lookup "apps/v1" "Deployment" .Release.Namespace $fullName) -}}
app.kubernetes.io/name: kiali
-app.kubernetes.io/instance: {{ include "kiali-server.fullname" . }}
+{{- if (and .Release.IsUpgrade $deployment)}}
+app.kubernetes.io/instance: {{ (get (($deployment).metadata.labels) "app.kubernetes.io/instance") | default $fullName }}
+{{- else }}
+app.kubernetes.io/instance: {{ $fullName }}
+{{- end }}
{{- end }}
{{/*
@@ -172,6 +179,29 @@
{{- end }}
{{- end }}
+{{- define "system_default_registry" -}}
+{{- if .Values.global.cattle.systemDefaultRegistry -}}
+{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
+{{- else -}}
+{{- "" -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Windows cluster will add default taint for linux nodes,
+add below linux tolerations to workloads could be scheduled to those linux nodes
+*/}}
+{{- define "linux-node-tolerations" -}}
+- key: "cattle.io/os"
+ value: "linux"
+ effect: "NoSchedule"
+ operator: "Equal"
+{{- end -}}
+
+{{- define "linux-node-selector" -}}
+kubernetes.io/os: linux
+{{- end -}}
+
{{/*
Autodetect remote cluster secrets if enabled - looks for secrets in the same namespace where Kiali is installed.
Returns a JSON dict whose keys are the cluster names and values are the cluster secret data.

View File

@ -1,59 +0,0 @@
--- charts-original/templates/deployment.yaml
+++ charts/templates/deployment.yaml
@@ -53,7 +53,7 @@
{{- toYaml .Values.deployment.host_aliases | nindent 6 }}
{{- end }}
containers:
- - image: "{{ .Values.deployment.image_name }}{{ if .Values.deployment.image_digest }}@{{ .Values.deployment.image_digest }}{{ end }}:{{ .Values.deployment.image_version }}"
+ - image: "{{ template "system_default_registry" . }}{{ .Values.deployment.repository }}{{ if .Values.deployment.image_digest }}@{{ .Values.deployment.image_digest }}{{ end }}:{{ .Values.deployment.tag }}"
imagePullPolicy: {{ .Values.deployment.image_pull_policy | default "Always" }}
name: {{ include "kiali-server.fullname" . }}
command:
@@ -115,6 +115,11 @@
- name: LOG_SAMPLER_RATE
value: "{{ .Values.deployment.logger.sampler_rate }}"
volumeMounts:
+ {{- if .Values.web_root_override }}
+ - name: kiali-console
+ subPath: env.js
+ mountPath: /opt/kiali/console/env.js
+ {{- end }}
- name: {{ include "kiali-server.fullname" . }}-configuration
mountPath: "/kiali-configuration"
- name: {{ include "kiali-server.fullname" . }}-cert
@@ -140,6 +145,14 @@
{{- toYaml .Values.deployment.resources | nindent 10 }}
{{- end }}
volumes:
+ {{- if .Values.web_root_override }}
+ - name: kiali-console
+ configMap:
+ name: kiali-console
+ items:
+ - key: env.js
+ path: env.js
+ {{- end }}
- name: {{ include "kiali-server.fullname" . }}-configuration
configMap:
name: {{ include "kiali-server.fullname" . }}
@@ -194,12 +207,12 @@
{{- toYaml .Values.deployment.affinity.pod_anti | nindent 10 }}
{{- end }}
{{- end }}
- {{- if .Values.deployment.tolerations }}
- tolerations:
- {{- toYaml .Values.deployment.tolerations | nindent 8 }}
- {{- end }}
- {{- if .Values.deployment.node_selector }}
- nodeSelector:
- {{- toYaml .Values.deployment.node_selector | nindent 8 }}
- {{- end }}
+ tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
+{{- if .Values.deployment.tolerations }}
+{{ toYaml .Values.deployment.tolerations | indent 8 }}
+{{- end }}
+ nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
+{{- if .Values.deployment.node_selector }}
+{{ toYaml .Values.deployment.node_selector | indent 8 }}
+{{- end }}
...


@ -1,40 +0,0 @@
--- charts-original/values.yaml
+++ charts/values.yaml
@@ -13,6 +13,9 @@
# do this, a PR would be welcome.
kiali_route_url: ""
+# rancher specific override that allows proxy access to kiali url
+web_root_override: true
+
#
# Settings that mimic the Kiali CR which are placed in the ConfigMap.
# Note that only those values used by the Helm Chart will be here.
@@ -42,10 +45,10 @@
api_version: "autoscaling/v2"
spec: {}
image_digest: "" # use "sha256" if image_version is a sha256 hash (do NOT prefix this value with a "@")
- image_name: quay.io/kiali/kiali
+ repository: rancher/mirrored-kiali-kiali
image_pull_policy: "Always"
image_pull_secrets: []
- image_version: v1.67.0 # version like "v1.39" (see: https://quay.io/repository/kiali/kiali?tab=tags) or a digest hash
+ tag: v1.67.0 # version like "v1.67" (see: https://quay.io/repository/kiali/kiali?tab=tags) or a digest hash
ingress:
additional_labels: {}
class_name: "nginx"
@@ -110,3 +113,13 @@
metrics_enabled: true
metrics_port: 9090
web_root: ""
+
+# Common settings used among istio subcharts.
+global:
+ # Specify rancher clusterId of external tracing config
+ # https://github.com/istio/istio.io/issues/4146#issuecomment-493543032
+ cattle:
+ systemDefaultRegistry: ""
+ clusterId:
+ psp:
+ enabled: false
\ No newline at end of file


@ -1,3 +0,0 @@
url: https://kiali.org/helm-charts/kiali-server-1.67.0.tgz
version: 103.1.0
doNotRelease: true


@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@ -1,12 +0,0 @@
annotations:
catalog.cattle.io/hidden: "true"
catalog.cattle.io/os: linux
catalog.rancher.io/certified: rancher
catalog.rancher.io/namespace: istio-system
catalog.rancher.io/release-name: rancher-tracing
apiVersion: v1
appVersion: 1.47.0
description: A quick start Jaeger Tracing installation using the all-in-one demo.
This is not production qualified. Refer to https://www.jaegertracing.io/ for details.
name: rancher-tracing
version: 1.47.0


@ -1,5 +0,0 @@
# Jaeger
A Rancher chart based on the Jaeger all-in-one quick installation option. This chart will allow you to trace and monitor distributed microservices.
> **Note:** The basic all-in-one Jaeger installation which is not qualified for production. Use the [Jaeger Tracing](https://www.jaegertracing.io) documentation to determine which installation you will need for your production needs.


@ -1,92 +0,0 @@
{{/* affinity - https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ */}}
{{- define "nodeAffinity" }}
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "nodeAffinityRequiredDuringScheduling" . }}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "nodeAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- define "nodeAffinityRequiredDuringScheduling" }}
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
{{- range $key, $val := .Values.global.arch }}
{{- if gt ($val | int) 0 }}
- {{ $key | quote }}
{{- end }}
{{- end }}
{{- $nodeSelector := default .Values.global.defaultNodeSelector .Values.nodeSelector -}}
{{- range $key, $val := $nodeSelector }}
- key: {{ $key }}
operator: In
values:
- {{ $val | quote }}
{{- end }}
{{- end }}
{{- define "nodeAffinityPreferredDuringScheduling" }}
{{- range $key, $val := .Values.global.arch }}
{{- if gt ($val | int) 0 }}
- weight: {{ $val | int }}
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- {{ $key | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- define "podAntiAffinity" }}
{{- if or .Values.podAntiAffinityLabelSelector .Values.podAntiAffinityTermLabelSelector}}
podAntiAffinity:
{{- if .Values.podAntiAffinityLabelSelector }}
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "podAntiAffinityRequiredDuringScheduling" . }}
{{- end }}
{{- if or .Values.podAntiAffinityTermLabelSelector}}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "podAntiAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- end }}
{{- end }}
{{- define "podAntiAffinityRequiredDuringScheduling" }}
{{- range $index, $item := .Values.podAntiAffinityLabelSelector }}
- labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v | quote }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
{{- end }}
{{- end }}
{{- define "podAntiAffinityPreferredDuringScheduling" }}
{{- range $index, $item := .Values.podAntiAffinityTermLabelSelector }}
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v | quote }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
weight: 100
{{- end }}
{{- end }}


@@ -1,47 +0,0 @@
{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
{{/*
Expand the name of the chart.
*/}}
{{- define "tracing.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "tracing.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Windows clusters add a default taint to Linux nodes;
add the Linux tolerations below so that workloads can be scheduled onto those Linux nodes.
*/}}
{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}
{{- define "linux-node-selector" -}}
kubernetes.io/os: linux
{{- end -}}


@@ -1,94 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
matchLabels:
app: {{ .Values.provider }}
template:
metadata:
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
prometheus.io/scrape: "true"
prometheus.io/port: "14269"
{{- if .Values.jaeger.podAnnotations }}
{{ toYaml .Values.jaeger.podAnnotations | indent 8 }}
{{- end }}
spec:
containers:
- name: jaeger
image: "{{ template "system_default_registry" . }}{{ .Values.jaeger.repository }}:{{ .Values.jaeger.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
env:
{{- if eq .Values.jaeger.spanStorageType "badger" }}
- name: BADGER_EPHEMERAL
value: "false"
- name: SPAN_STORAGE_TYPE
value: "badger"
- name: BADGER_DIRECTORY_VALUE
value: "/badger/data"
- name: BADGER_DIRECTORY_KEY
value: "/badger/key"
{{- end }}
- name: COLLECTOR_ZIPKIN_HOST_PORT
value: "9411"
- name: MEMORY_MAX_TRACES
value: "{{ .Values.jaeger.memory.max_traces }}"
- name: QUERY_BASE_PATH
value: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} /{{ .Values.provider }} {{ end }}
livenessProbe:
httpGet:
path: /
port: 14269
readinessProbe:
httpGet:
path: /
port: 14269
{{- if eq .Values.jaeger.spanStorageType "badger" }}
volumeMounts:
- name: data
mountPath: /badger
{{- end }}
resources:
{{- if .Values.jaeger.resources }}
{{ toYaml .Values.jaeger.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 12 }}
{{- end }}
affinity:
{{- include "nodeAffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.global.cattle.psp.enabled }}
securityContext:
runAsNonRoot: true
runAsUser: 1000
{{- end }}
serviceAccountName: {{ include "tracing.fullname" . }}
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if eq .Values.jaeger.spanStorageType "badger" }}
volumes:
- name: data
{{- if .Values.jaeger.persistentVolumeClaim.enabled }}
persistentVolumeClaim:
claimName: istio-jaeger-pvc
{{- else }}
emptyDir: {}
{{- end }}
{{- end }}


@@ -1,76 +0,0 @@
{{- if .Values.global.cattle.psp.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "tracing.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ include "tracing.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups:
- policy
resourceNames:
- {{ include "tracing.fullname" . }}
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
allowPrivilegeEscalation: false
forbiddenSysctls:
- '*'
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
runAsGroup:
rule: MustRunAs
ranges:
- min: 1
max: 65535
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- emptyDir
- secret
- persistentVolumeClaim
{{- end }}


@@ -1,16 +0,0 @@
{{- if .Values.jaeger.persistentVolumeClaim.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: istio-jaeger-pvc
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
spec:
storageClassName: {{ .Values.jaeger.storageClassName }}
accessModes:
- {{ .Values.jaeger.accessMode }}
resources:
requests:
storage: {{ .Values.jaeger.persistentVolumeClaim.storage }}
{{- end }}


@@ -1,63 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: tracing
namespace: {{ .Release.Namespace }}
annotations:
{{- range $key, $val := .Values.service.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
type: {{ .Values.service.type }}
ports:
- name: {{ .Values.service.name }}
port: {{ .Values.service.externalPort }}
protocol: TCP
targetPort: 16686
selector:
app: {{ .Values.provider }}
---
# Jaeger implements the Zipkin API. To support swapping out the tracing backend, we expose it through a Service named zipkin.
apiVersion: v1
kind: Service
metadata:
name: zipkin
namespace: {{ .Release.Namespace }}
labels:
name: zipkin
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
ports:
- name: {{ .Values.service.name }}
port: {{ .Values.zipkin.queryPort }}
targetPort: {{ .Values.zipkin.queryPort }}
selector:
app: {{ .Values.provider }}
---
apiVersion: v1
kind: Service
metadata:
name: jaeger-collector
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
type: ClusterIP
ports:
- name: jaeger-collector-http
port: 14268
targetPort: 14268
protocol: TCP
- name: jaeger-collector-grpc
port: 14250
targetPort: 14250
protocol: TCP
selector:
app: {{ .Values.provider }}


@@ -1,9 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "tracing.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.provider }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}


@@ -1,7 +0,0 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.global.cattle.psp.enabled }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}


@@ -1,53 +0,0 @@
provider: jaeger
contextPath: ""
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## List of node taints to tolerate (requires Kubernetes >= 1.6)
tolerations: []
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
nameOverride: ""
fullnameOverride: ""
global:
cattle:
systemDefaultRegistry: ""
psp:
enabled: false
defaultResources: {}
imagePullPolicy: IfNotPresent
imagePullSecrets: []
arch:
arm64: 2
amd64: 2
s390x: 2
ppc64le: 2
defaultNodeSelector:
kubernetes.io/os: linux
rbac:
pspEnabled: false
jaeger:
repository: rancher/mirrored-jaegertracing-all-in-one
tag: 1.47.0
# spanStorageType can be "memory" or "badger" for the all-in-one image
spanStorageType: badger
resources:
requests:
cpu: 10m
persistentVolumeClaim:
enabled: false
storage: 5Gi
storageClassName: ""
accessMode: ReadWriteMany
memory:
max_traces: 50000
zipkin:
queryPort: 9411
service:
annotations: {}
name: http-query
type: ClusterIP
externalPort: 16686


@@ -1,3 +0,0 @@
url: local
version: 103.1.0
doNotRelease: true