Merge pull request #53 from aiyengar2/migrate-live

[Live] Migrate to charts-build-scripts
pull/60/head
Jacob Blain Christen 2021-03-01 17:26:00 -07:00 committed by GitHub
commit 1277f5bc80
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
372 changed files with 11304 additions and 461 deletions

.gitignore vendored Normal file

@ -0,0 +1,2 @@
bin
*.DS_Store

Makefile Normal file

@ -0,0 +1,10 @@
pull-scripts:
./scripts/pull-scripts
TARGETS := prepare patch charts clean sync validate rebase docs
$(TARGETS):
@ls ./bin/charts-build-scripts 1>/dev/null 2>/dev/null || ./scripts/pull-scripts
./bin/charts-build-scripts $@
.PHONY: $(TARGETS)
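
A rough sketch of day-to-day usage, following the pattern rule above (each listed target is forwarded to the downloaded binary, which is pulled first if `./bin/charts-build-scripts` is missing):

```console
$ make pull-scripts   # fetch ./bin/charts-build-scripts via ./scripts/pull-scripts
$ make sync           # effectively runs: ./bin/charts-build-scripts sync
$ make validate       # effectively runs: ./bin/charts-build-scripts validate
```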


@ -1,7 +1,62 @@
-# Asset branch
-This branch is auto-generated from main-source branch, please open PRs to main-source.
-[asset](./assets) Folder contains all the helm chart artifacts.
-[charts](./charts) Folder contains all the helm chart content of the latest version for browsing purpose.

## Live Branch
This branch contains generated assets that have been officially released on rke2-charts.rancher.io.
The following directory structure is expected:
```text
assets/
  <package>/
    <chart>-<packageVersion>.tgz
    ...
charts/
  <package>
    <chart>
      <packageVersion>
        # Unarchived Helm chart
```
### Configuration
This repository branch contains a `configuration.yaml` file that is used to specify how it interacts with other repository branches.
#### Sync
This branch syncs with the generated assets from the following branches:
- main-source at https://github.com/rancher/rke2-charts.git (only latest assets)
To release a new version of a chart, please open the relevant PRs to one of these branches.
Once merged, the push to these branches should trigger a sync workflow.
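
For illustration, a live-branch `configuration.yaml` expressing the sync relationship above might look roughly like the sketch below; the exact field names are an assumption about the charts-build-scripts schema, not taken from this commit:

```yaml
# Hypothetical sketch; field names are assumed, not copied from this commit.
template: live
sync:
- url: https://github.com/rancher/rke2-charts.git
  branch: main-source
  dropReleaseCandidates: true # only pick up the latest released (non-RC) assets
```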
### Cutting a Release
In the Live branch, cutting a release requires you to run the `make sync` command.
This command will automatically pull in the latest charts / resources merged into the branches you sync with (as indicated in this branch's `configuration.yaml`) and will fail if any of those branches try to modify already released assets.
If the `make sync` command fails, you might have to manually make changes to the contents of the Staging branch to resolve any issues.
Once the `make sync` command succeeds, the logs will itemize the releaseCandidateVersions picked out from the Staging branch; the command makes exactly two changes:
1. It updates the `Chart.yaml` version for each chart to drop the `-rcXX` suffix.
2. It updates the `Chart.yaml` annotations for each chart to drop the `-rcXX` suffix, but only for certain special annotations (note: currently, the only special annotation we track is `catalog.cattle.io/auto-install`).
After the sync succeeds, ensure the following is true (a quick spot-check is sketched after this list):
- The `assets/` and `charts/` directories each contain only a single file: `README.md`
- The `released/assets/` directory has a `.tgz` file for each releaseCandidateVersion of a chart that was created during this release.
- The `index.yaml` and `released/assets/index.yaml` files are identical, and the `index.yaml` diff shows only two types of changes: a timestamp update or a modification of an existing URL from `assets/*` to `released/assets/*`.
No other changes are expected.
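
Assuming a standard git checkout of this branch, a quick spot-check of that state could look like:

```console
$ ls assets/ charts/                          # each should list only README.md
$ ls released/assets/*.tgz                    # one archive per released chart version
$ diff index.yaml released/assets/index.yaml  # no output expected (identical files)
```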
### Makefile
#### Basic Commands
`make pull-scripts`: Pulls in the version of `charts-build-scripts` indicated in `scripts`.
`make sync`: Syncs the assets in your current repository with the merged contents of all of the repository branches indicated in your `configuration.yaml`.
`make validate`: Validates your current repository branch against all the repository branches indicated in your `configuration.yaml`.
`make docs`: Pulls in the latest docs, scripts, etc. from the `charts-build-scripts` repository.

_config.yml Normal file

@ -0,0 +1 @@
exclude: [charts]

assets/README.md Normal file

@ -0,0 +1,3 @@
## Assets
This folder contains Helm chart archives that are served from rke2-charts.rancher.io.


@ -1,381 +0,0 @@
apiVersion: v1
entries:
rke2-canal:
- apiVersion: v1
appVersion: v3.13.3
created: "2021-02-24T21:41:48.737080031Z"
description: Install Canal Network Plugin.
digest: 4b6ac74aec73a70d12186701660c1f221fdbcb582571029a6c8fbc2738065742
home: https://www.projectcalico.org/
keywords:
- canal
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-canal
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-canal/rke2-canal-v3.13.300-build20210223.tgz
version: v3.13.300-build20210223
- apiVersion: v1
appVersion: v3.13.3
created: "2021-02-19T16:11:27.472930693Z"
description: Install Canal Network Plugin.
digest: 2396b0aca28a6d4a373a251b02e4efa12bbfedf29e37e45904b860176d0c80f8
home: https://www.projectcalico.org/
keywords:
- canal
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-canal
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-canal/rke2-canal-v3.13.3.tgz
version: v3.13.3
rke2-coredns:
- apiVersion: v1
appVersion: 1.7.1
created: "2021-01-08T18:12:00.296423364Z"
description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services
digest: 335099356a98589e09f1bb940913b0ed6abb8d2c4db91720f87d1cf7697a5cf7
home: https://coredns.io
icon: https://coredns.io/images/CoreDNS_Colour_Horizontal.png
keywords:
- coredns
- dns
- kubedns
name: rke2-coredns
sources:
- https://github.com/coredns/coredns
urls:
- assets/rke2-coredns/rke2-coredns-1.13.800.tgz
version: 1.13.800
- apiVersion: v1
appVersion: 1.6.9
created: "2021-01-22T21:35:45.403680219Z"
description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services
digest: be60a62ec184cf6ca7b0ed917e6962e8a2578fa1eeef6a835e82d2b7709933d5
home: https://coredns.io
icon: https://coredns.io/images/CoreDNS_Colour_Horizontal.png
keywords:
- coredns
- dns
- kubedns
maintainers:
- email: hello@acale.ph
name: Acaleph
- email: shashidhara.huawei@gmail.com
name: shashidharatd
- email: andor44@gmail.com
name: andor44
- email: manuel@rueg.eu
name: mrueg
name: rke2-coredns
sources:
- https://github.com/coredns/coredns
urls:
- assets/rke2-coredns/rke2-coredns-1.10.101.tgz
version: 1.10.101
- apiVersion: v1
appVersion: 1.6.9
created: "2021-02-24T21:41:48.738290233Z"
description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services
digest: 869cb592cac545f579b6de6b35de82de4904566fd91826bc16546fddc48fe1c4
home: https://coredns.io
icon: https://coredns.io/images/CoreDNS_Colour_Horizontal.png
keywords:
- coredns
- dns
- kubedns
maintainers:
- email: hello@acale.ph
name: Acaleph
- email: shashidhara.huawei@gmail.com
name: shashidharatd
- email: andor44@gmail.com
name: andor44
- email: manuel@rueg.eu
name: mrueg
name: rke2-coredns
sources:
- https://github.com/coredns/coredns
urls:
- assets/rke2-coredns/rke2-coredns-1.10.101-build2021022301.tgz
version: 1.10.101-build2021022301
rke2-ingress-nginx:
- apiVersion: v1
appVersion: 0.35.0
created: "2021-02-24T21:42:02.60663315Z"
description: Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer
digest: 2480ed0be9032f8f839913e12f0528128a15483ced57c851baed605156532782
home: https://github.com/kubernetes/ingress-nginx
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
keywords:
- ingress
- nginx
kubeVersion: '>=1.16.0-0'
maintainers:
- name: ChiefAlexander
name: rke2-ingress-nginx
sources:
- https://github.com/kubernetes/ingress-nginx
urls:
- assets/rke2-ingress-nginx/rke2-ingress-nginx-3.3.000.tgz
version: 3.3.000
- apiVersion: v1
appVersion: 0.30.0
created: "2021-02-19T16:11:27.47593126Z"
description: An nginx Ingress controller that uses ConfigMap to store the nginx configuration.
digest: 768ce303918a97a2d0f9a333f4eb0f2ebb3b7f54b849e83c6bdd52f8b513af9b
home: https://github.com/kubernetes/ingress-nginx
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
keywords:
- ingress
- nginx
kubeVersion: '>=1.10.0-0'
maintainers:
- name: ChiefAlexander
- email: Trevor.G.Wood@gmail.com
name: taharah
name: rke2-ingress-nginx
sources:
- https://github.com/kubernetes/ingress-nginx
urls:
- assets/rke2-ingress-nginx/rke2-ingress-nginx-1.36.300.tgz
version: 1.36.300
rke2-kube-proxy:
- apiVersion: v1
appVersion: v1.20.2
created: "2021-01-25T23:01:11.589999085Z"
description: Install Kube Proxy.
digest: 68f08c49c302bfe23e9c6f8074a21a6a3e0c90fdb16f5e6fb32a5a3ee3f7c717
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.20.2.tgz
version: v1.20.2
- apiVersion: v1
appVersion: v1.19.8
created: "2021-02-24T21:41:48.739048333Z"
description: Install Kube Proxy.
digest: f2bace51d33062e3ac713ebbedd48dd4df56c821dfa52da9fdf71891d601bcde
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.19.8.tgz
version: v1.19.8
- apiVersion: v1
appVersion: v1.19.7
created: "2021-01-22T21:35:45.405178128Z"
description: Install Kube Proxy.
digest: def9baa9bc5c12267d3575a03a2e5f2eccc907a6058202ed09a6cd39967790ca
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.19.7.tgz
version: v1.19.7
- apiVersion: v1
appVersion: v1.19.5
created: "2020-12-17T19:20:49.383692056Z"
description: Install Kube Proxy.
digest: f74f820857b79601f3b8e498e701297d71f3b37bbf94dc3ae96dfcca50fb80df
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.19.5.tgz
version: v1.19.5
- apiVersion: v1
appVersion: v1.18.16
created: "2021-02-19T17:03:49.957724823Z"
description: Install Kube Proxy.
digest: a57acde11e30a9a15330ffec38686b605325b145f21935e79843b28652d46a21
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.16.tgz
version: v1.18.16
- apiVersion: v1
appVersion: v1.18.15
created: "2021-01-14T18:05:30.822746229Z"
description: Install Kube Proxy.
digest: 3a6429d05a3d22e3959ceac27db15f922f1033553e8e6b5da2eb7cd18ed9309f
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.15.tgz
version: v1.18.15
- apiVersion: v1
appVersion: v1.18.13
created: "2020-12-10T22:07:42.184767459Z"
description: Install Kube Proxy.
digest: 15d192f5016b8573d2c6f17ab55fa6f14fa1352fcdef2c391a6a477b199867ec
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.13.tgz
version: v1.18.13
- apiVersion: v1
appVersion: v1.18.12
created: "2020-12-07T21:17:34.244857883Z"
description: Install Kube Proxy.
digest: e1da2b245da23aaa526cb94c04ed48cd3e730b848c0d33e420dcfd5b15374f5e
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.12.tgz
version: v1.18.12
- apiVersion: v1
appVersion: v1.18.10
created: "2020-10-15T22:21:23.252729387Z"
description: Install Kube Proxy.
digest: 1ae84231365f19d82a4ea7c6b069ce90308147ba77bef072290ef7464ff1694e
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.10.tgz
version: v1.18.10
- apiVersion: v1
appVersion: v1.18.9
created: "2020-10-14T23:04:28.48143194Z"
description: Install Kube Proxy.
digest: e1e5b6f98c535fa5d90469bd3f731d331bdaa3f9154157d7625b367a7023f399
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.9.tgz
version: v1.18.9
- apiVersion: v1
appVersion: v1.18.8
created: "2020-09-29T00:14:59.633896455Z"
description: Install Kube Proxy.
digest: 7765237ddc39c416178242e7a6798d679a50f466ac18d3a412207606cd0d66ed
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.8.tgz
version: v1.18.8
- apiVersion: v1
appVersion: v1.18.4
created: "2020-09-29T00:14:59.632610835Z"
description: Install Kube Proxy.
digest: b859363c5ecab8c46b53efa34d866b9c27840737ad1afec0eb9729b8968304fb
keywords:
- kube-proxy
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-kube-proxy
sources:
- https://github.com/rancher/rke2-charts
urls:
- assets/rke2-kube-proxy/rke2-kube-proxy-v1.18.4.tgz
version: v1.18.4
rke2-metrics-server:
- apiVersion: v1
appVersion: 0.3.6
created: "2021-02-19T16:11:27.477610954Z"
description: Metrics Server is a cluster-wide aggregator of resource usage data.
digest: 295435f65cc6c0c5ed8fd6b028cac5614b761789c5e09c0483170c3fd46f6e59
home: https://github.com/kubernetes-incubator/metrics-server
keywords:
- metrics-server
maintainers:
- email: o.with@sportradar.com
name: olemarkus
- email: k.aasan@sportradar.com
name: kennethaasan
name: rke2-metrics-server
sources:
- https://github.com/kubernetes-incubator/metrics-server
urls:
- assets/rke2-metrics-server/rke2-metrics-server-2.11.100.tgz
version: 2.11.100
- apiVersion: v1
appVersion: 0.3.6
created: "2021-02-24T21:41:48.739850734Z"
description: Metrics Server is a cluster-wide aggregator of resource usage data.
digest: a7cbec2f4764c99db298fb4e1f5297246253a3228daf2747281c953059160fc9
home: https://github.com/kubernetes-incubator/metrics-server
keywords:
- metrics-server
maintainers:
- email: o.with@sportradar.com
name: olemarkus
- email: k.aasan@sportradar.com
name: kennethaasan
name: rke2-metrics-server
sources:
- https://github.com/kubernetes-incubator/metrics-server
urls:
- assets/rke2-metrics-server/rke2-metrics-server-2.11.100-build2021022300.tgz
version: 2.11.100-build2021022300
generated: "2021-02-24T21:42:02.60300284Z"

charts/README.md Normal file

@ -0,0 +1,3 @@
## Charts
This folder contains the unarchived Helm charts that are currently being served at rke2-charts.rancher.io.


@ -1,13 +1,13 @@
-apiVersion: v1
-name: rke2-canal
-description: Install Canal Network Plugin.
-version: v3.13.300-build20210223
-appVersion: v3.13.3
-home: https://www.projectcalico.org/
-keywords:
-- canal
-sources:
-- https://github.com/rancher/rke2-charts
-maintainers:
-- name: Rancher Labs
-  email: charts@rancher.com
+apiVersion: v1
+appVersion: v3.13.3
+description: Install Canal Network Plugin.
+home: https://www.projectcalico.org/
+keywords:
+- canal
+maintainers:
+- email: charts@rancher.com
+  name: Rancher Labs
+name: rke2-canal
+sources:
+- https://github.com/rancher/rke2-charts
+version: v3.13.300-build20210223


@ -0,0 +1,13 @@
apiVersion: v1
appVersion: v3.13.3
description: Install Canal Network Plugin.
home: https://www.projectcalico.org/
keywords:
- canal
maintainers:
- email: charts@rancher.com
name: Rancher Labs
name: rke2-canal
sources:
- https://github.com/rancher/rke2-charts
version: v3.13.3


@ -0,0 +1,3 @@
Canal network plugin has been installed.
NOTE: It may take a few minutes until the Canal image installs the CNI files and the node becomes ready.


@ -0,0 +1,7 @@
{{- define "system_default_registry" -}}
{{- if .Values.global.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
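
As a worked example of this helper (the registry value here is made up): with `global.systemDefaultRegistry: "registry.example.com"`, an image reference built the way the templates below do it would render as shown; with the default empty string, the prefix disappears and the bare repository is used.

```yaml
# Illustrative rendering only; registry.example.com is a hypothetical override.
image: {{ template "system_default_registry" . }}{{ .Values.calico.nodeImage.repository }}:{{ .Values.calico.nodeImage.tag }}
# => image: registry.example.com/rancher/hardened-calico:v3.13.3
```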


@ -0,0 +1,67 @@
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Canal installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Release.Name }}-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: {{ .Values.calico.typhaServiceName | quote }}
# The interface used by canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
canal_iface: {{ .Values.flannel.iface | quote }}
# Whether or not to masquerade traffic to destinations not within
# the pod network.
masquerade: {{ .Values.calico.masquerade | quote }}
# Configure the MTU to use
veth_mtu: {{ .Values.calico.vethuMTU | quote }}
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
# Flannel network configuration. Mounted into the flannel container.
net-conf.json: |
{
"Network": {{ .Values.podCidr | quote }},
"Backend": {
"Type": {{ .Values.flannel.backend | quote }}
}
}


@ -0,0 +1,197 @@
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset


@ -0,0 +1,262 @@
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the canal container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: {{ .Release.Name | quote }}
namespace: kube-system
labels:
k8s-app: canal
spec:
selector:
matchLabels:
k8s-app: canal
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: canal
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure canal gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: canal
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: {{ template "system_default_registry" . }}{{ .Values.calico.cniImage.repository }}:{{ .Values.calico.cniImage.tag }}
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-canal.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: {{ .Release.Name }}-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: {{ .Release.Name }}-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: {{ template "system_default_registry" . }}{{ .Values.calico.flexvolImage.repository }}:{{ .Values.calico.flexvolImage.tag }}
command: ['/usr/local/bin/flexvol.sh', '-s', '/usr/local/bin/flexvol', '-i', 'flexvoldriver']
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs canal container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
command:
- "start_runit"
image: {{ template "system_default_registry" . }}{{ .Values.calico.nodeImage.repository }}:{{ .Values.calico.nodeImage.tag }}
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: {{ .Values.calico.datastoreType | quote }}
# Configure route aggregation based on pod CIDR.
- name: USE_POD_CIDR
value: {{ .Values.calico.usePodCIDR | quote }}
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: {{ .Values.calico.waitForDatastore | quote }}
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Don't enable BGP.
- name: CALICO_NETWORKING_BACKEND
value: {{ .Values.calico.networkingBackend | quote }}
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: {{ .Values.calico.clusterType | quote}}
# Period, in seconds, at which felix re-applies all iptables state
- name: FELIX_IPTABLESREFRESHINTERVAL
value: {{ .Values.calico.felixIptablesRefreshInterval | quote}}
- name: FELIX_IPTABLESBACKEND
value: {{ .Values.calico.felixIptablesBackend | quote}}
# No IP address needed.
- name: IP
value: ""
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: {{ .Values.calico.felixDefaultEndpointToHostAction | quote }}
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: {{ .Values.calico.felixIpv6Support | quote }}
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: {{ .Values.calico.felixLogSeverityScreen | quote }}
- name: FELIX_HEALTHENABLED
value: {{ .Values.calico.felixHealthEnabled | quote }}
# Enable Prometheus metrics
- name: FELIX_PROMETHEUSMETRICSENABLED
value: {{ .Values.calico.felixPrometheusMetricsEnabled | quote }}
- name: FELIX_XDPENABLED
value: {{ .Values.calico.felixXDPEnabled | quote }}
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
httpGet:
path: /readiness
port: 9099
host: localhost
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
# This container runs flannel using the kube-subnet-mgr backend
# for allocating subnets.
- name: kube-flannel
image: {{ template "system_default_registry" . }}{{ .Values.flannel.image.repository }}:{{ .Values.flannel.image.tag }}
command:
- "/opt/bin/flanneld"
{{- range .Values.flannel.args }}
- {{ . | quote }}
{{- end }}
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: FLANNELD_IFACE
valueFrom:
configMapKeyRef:
name: {{ .Release.Name }}-config
key: canal_iface
- name: FLANNELD_IP_MASQ
valueFrom:
configMapKeyRef:
name: {{ .Release.Name }}-config
key: masquerade
volumeMounts:
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
# Used by canal.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used by flannel.
- name: flannel-cfg
configMap:
name: {{ .Release.Name }}-config
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds


@ -0,0 +1,163 @@
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
# Pod CIDR auto-detection on kubeadm needs access to config maps.
- apiGroups: [""]
resources:
- configmaps
verbs:
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# These permissions are only required for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
---
# Flannel ClusterRole
# Pulled from https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- apiGroups: [""]
resources:
- nodes
verbs:
- list
- watch
- apiGroups: [""]
resources:
- nodes/status
verbs:
- patch
---
# Bind the flannel ClusterRole to the canal ServiceAccount.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: canal-flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: canal
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: canal-calico
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: canal
namespace: kube-system


@ -0,0 +1,6 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: canal
namespace: kube-system


@ -0,0 +1,74 @@
---
# The IPv4 cidr pool to create on startup if none exists. Pod IPs will be
# chosen from this range.
podCidr: "10.42.0.0/16"
flannel:
# kube-flannel image
image:
repository: rancher/hardened-flannel
tag: v0.13.0-rancher1
# The interface used by canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
iface: ""
# kube-flannel command arguments
args:
- "--ip-masq"
- "--kube-subnet-mgr"
# Backend for kube-flannel. Backend should not be changed
# at runtime.
backend: "vxlan"
calico:
# CNI installation image.
cniImage:
repository: rancher/hardened-calico
tag: v3.13.3
# Canal node image.
nodeImage:
repository: rancher/hardened-calico
tag: v3.13.3
# Flexvol Image.
flexvolImage:
repository: rancher/hardened-calico
tag: v3.13.3
# Datastore type for canal. It can be either kubernetes or etcd.
datastoreType: kubernetes
# Wait for datastore to initialize.
waitForDatastore: true
# Configure route aggregation based on pod CIDR.
usePodCIDR: true
# Disable BGP routing.
networkingBackend: none
# Cluster type to identify the deployment type.
clusterType: "k8s,canal"
# Disable file logging so `kubectl logs` works.
disableFileLogging: true
# Disable IPv6 on Kubernetes.
felixIpv6Support: false
# Period, in seconds, at which felix re-applies all iptables state
felixIptablesRefreshInterval: 60
# iptables backend to use for felix, defaults to auto but can also be set to nft or legacy
felixIptablesBackend: auto
# Set Felix logging to "info".
felixLogSeverityScreen: info
# Enable felix healthcheck.
felixHealthEnabled: true
# Enable prometheus metrics
felixPrometheusMetricsEnabled: true
# Disable XDP Acceleration as we do not support it with our ubi7 base image
felixXDPEnabled: false
# Whether or not to masquerade traffic to destinations not within
# the pod network.
masquerade: true
# Set Felix endpoint to host default action to ACCEPT.
felixDefaultEndpointToHostAction: ACCEPT
# Configure the MTU to use.
vethuMTU: 1450
# Typha is disabled.
typhaServiceName: none
global:
systemDefaultRegistry: ""
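
As a sketch of overriding these defaults at install time (the interface name and registry below are made-up examples; the keys are the ones defined in this file):

```console
$ helm install --name rke2-canal ./rke2-canal \
    --namespace kube-system \
    --set flannel.iface=eth1 \
    --set global.systemDefaultRegistry=registry.example.com
```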


@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
OWNERS


@ -1,7 +1,6 @@
 apiVersion: v1
 appVersion: 1.6.9
-description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS
-  Services
+description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services
 home: https://coredns.io
 icon: https://coredns.io/images/CoreDNS_Colour_Horizontal.png
 keywords:


@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
OWNERS


@ -0,0 +1,23 @@
apiVersion: v1
appVersion: 1.6.9
description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS
Services
home: https://coredns.io
icon: https://coredns.io/images/CoreDNS_Colour_Horizontal.png
keywords:
- coredns
- dns
- kubedns
maintainers:
- email: hello@acale.ph
name: Acaleph
- email: shashidhara.huawei@gmail.com
name: shashidharatd
- email: andor44@gmail.com
name: andor44
- email: manuel@rueg.eu
name: mrueg
name: rke2-coredns
sources:
- https://github.com/coredns/coredns
version: 1.10.101


@ -0,0 +1,138 @@
# CoreDNS
[CoreDNS](https://coredns.io/) is a DNS server that chains plugins and provides DNS Services
# TL;DR;
```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```
## Introduction
This chart bootstraps a [CoreDNS](https://github.com/coredns/coredns) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. This chart provides DNS services and can be deployed in multiple configurations to support the various scenarios listed below:
- CoreDNS as a cluster DNS service and a drop-in replacement for Kube/SkyDNS. This is the default mode: CoreDNS is deployed as a cluster service in the kube-system namespace. This mode is chosen by setting `isClusterService` to true.
- CoreDNS as an external DNS service. In this mode CoreDNS is deployed as a regular Kubernetes app in a user-specified namespace. The CoreDNS service can be exposed outside the cluster using either the NodePort or LoadBalancer type of service. This mode is chosen by setting `isClusterService` to false.
- CoreDNS as an external DNS provider for Kubernetes federation. This is a sub-case of the 'external dns service' scenario, which uses the etcd plugin as the CoreDNS backend. This deployment mode has a dependency on the `etcd-operator` chart, which needs to be pre-installed.
## Prerequisites
- Kubernetes 1.10 or later
## Installing the Chart
The chart can be installed as follows:
```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```
The command deploys CoreDNS on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists various ways to override default configuration during deployment.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `coredns` deployment:
```console
$ helm delete coredns
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
| Parameter | Description | Default |
|:----------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------|
| `image.repository` | The image repository to pull from | coredns/coredns |
| `image.tag` | The image tag to pull from | `v1.6.9` |
| `image.pullPolicy` | Image pull policy | IfNotPresent |
| `replicaCount` | Number of replicas | 1 |
| `resources.limits.cpu` | Container maximum CPU | `100m` |
| `resources.limits.memory` | Container maximum memory | `128Mi` |
| `resources.requests.cpu` | Container requested CPU | `100m` |
| `resources.requests.memory` | Container requested memory | `128Mi` |
| `serviceType` | Kubernetes Service type | `ClusterIP` |
| `prometheus.monitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
| `prometheus.monitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | {} |
| `prometheus.monitor.namespace` | Selector to select which namespaces the Endpoints objects are discovered from. | `""` |
| `service.clusterIP` | IP address to assign to service | `""` |
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `service.externalTrafficPolicy` | Enable client source IP preservation | `[]` |
| `service.annotations` | Annotations to add to service | `{prometheus.io/scrape: "true", prometheus.io/port: "9153"}`|
| `serviceAccount.create` | If true, create & use serviceAccount | false |
| `serviceAccount.name` | If not set & create is true, use template fullname | |
| `rbac.create` | If true, create & use RBAC resources | true |
| `rbac.pspEnable` | Specifies whether a PodSecurityPolicy should be created. | `false` |
| `isClusterService` | Specifies whether chart should be deployed as cluster-service or normal k8s app. | true |
| `priorityClassName` | Name of Priority Class to assign pods | `""` |
| `servers` | Configuration for CoreDNS and plugins | See values.yml |
| `affinity` | Affinity settings for pod assignment | {} |
| `nodeSelector` | Node labels for pod assignment | {} |
| `tolerations` | Tolerations for pod assignment | [] |
| `zoneFiles` | Configure custom Zone files | [] |
| `extraSecrets` | Optional array of secrets to mount inside the CoreDNS container | [] |
| `customLabels` | Optional labels for Deployment(s), Pod, Service, ServiceMonitor objects | {} |
| `podDisruptionBudget` | Optional PodDisruptionBudget | {} |
| `autoscaler.enabled` | Optionally enabled a cluster-proportional-autoscaler for CoreDNS | `false` |
| `autoscaler.coresPerReplica` | Number of cores in the cluster per CoreDNS replica | `256` |
| `autoscaler.nodesPerReplica` | Number of nodes in the cluster per CoreDNS replica | `16` |
| `autoscaler.image.repository` | The image repository to pull autoscaler from | k8s.gcr.io/cluster-proportional-autoscaler-amd64 |
| `autoscaler.image.tag` | The image tag to pull autoscaler from | `1.7.1` |
| `autoscaler.image.pullPolicy` | Image pull policy for the autoscaler | IfNotPresent |
| `autoscaler.priorityClassName` | Optional priority class for the autoscaler pod. `priorityClassName` used if not set. | `""` |
| `autoscaler.affinity` | Affinity settings for pod assignment for autoscaler | {} |
| `autoscaler.nodeSelector` | Node labels for pod assignment for autoscaler | {} |
| `autoscaler.tolerations` | Tolerations for pod assignment for autoscaler | [] |
| `autoscaler.resources.limits.cpu` | Container maximum CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.limits.memory` | Container maximum memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.resources.requests.cpu` | Container requested CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.requests.memory` | Container requested memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.configmap.annotations` | Annotations to add to autoscaler config map. For example to stop CI renaming them | {} |
See `values.yaml` for configuration notes. Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install --name coredns \
--set rbac.create=false \
stable/coredns
```
The above command disables automatic creation of RBAC rules.
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm install --name coredns -f values.yaml stable/coredns
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Caveats
The chart will automatically determine which protocols to listen on based on
the protocols you define in your zones. This means that you could potentially
use both "TCP" and "UDP" on a single port.
Some cloud environments like "GCE" or "Azure container service" cannot
create external load balancers with both "TCP" and "UDP" protocols. So when
deploying CoreDNS with `serviceType="LoadBalancer"` on such cloud
environments, make sure you do not attempt to use both protocols at the same
time.
## Autoscaling
By setting `autoscaler.enabled = true` a
[cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler)
will be deployed. This will default to a coredns replica for every 256 cores, or
16 nodes in the cluster. These can be changed with `autoscaler.coresPerReplica`
and `autoscaler.nodesPerReplica`. When the cluster is using large nodes (with more
cores), `coresPerReplica` should dominate. If using small nodes,
`nodesPerReplica` should dominate.
This also creates a ServiceAccount, ClusterRole, and ClusterRoleBinding for
the autoscaler deployment.
`replicaCount` is ignored if this is enabled.
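
For example, using the parameters documented in the table above, the autoscaler can be enabled and tuned at install time:

```console
$ helm install --name coredns --namespace=kube-system stable/coredns \
    --set autoscaler.enabled=true \
    --set autoscaler.coresPerReplica=256 \
    --set autoscaler.nodesPerReplica=16
```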


@ -0,0 +1,30 @@
{{- if .Values.isClusterService }}
CoreDNS is now running in the cluster as a cluster-service.
{{- else }}
CoreDNS is now running in the cluster.
It can be accessed using the below endpoint
{{- if contains "NodePort" .Values.serviceType }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "coredns.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo "$NODE_IP:$NODE_PORT"
{{- else if contains "LoadBalancer" .Values.serviceType }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w {{ template "coredns.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "coredns.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $SERVICE_IP
{{- else if contains "ClusterIP" .Values.serviceType }}
"{{ template "coredns.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local"
from within the cluster
{{- end }}
{{- end }}
It can be tested with the following:
1. Launch a Pod with DNS tools:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
2. Query the DNS server:
/ # host kubernetes


@ -0,0 +1,158 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "coredns.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "coredns.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Generate the list of ports automatically from the server definitions
*/}}
{{- define "coredns.servicePorts" -}}
{{/* Set ports to be an empty dict */}}
{{- $ports := dict -}}
{{/* Iterate through each of the server blocks */}}
{{- range .Values.servers -}}
{{/* Capture port to avoid scoping awkwardness */}}
{{- $port := toString .port -}}
{{/* If none of the server blocks has mentioned this port yet take note of it */}}
{{- if not (hasKey $ports $port) -}}
{{- $ports := set $ports $port (dict "istcp" false "isudp" false) -}}
{{- end -}}
{{/* Retrieve the inner dict that holds the protocols for a given port */}}
{{- $innerdict := index $ports $port -}}
{{/*
Look at each of the zones and check which protocol they serve
At the moment the following are supported by CoreDNS:
UDP: dns://
TCP: tls://, grpc://
*/}}
{{- range .zones -}}
{{- if has (default "" .scheme) (list "dns://") -}}
{{/* Optionally enable tcp for this service as well */}}
{{- if eq .use_tcp true }}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end }}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- end -}}
{{- if has (default "" .scheme) (list "tls://" "grpc://") -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{- end -}}
{{/* If none of the zones specify scheme, default to dns:// on both tcp & udp */}}
{{- if and (not (index $innerdict "istcp")) (not (index $innerdict "isudp")) -}}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{/* Write the dict back into the outer dict */}}
{{- $ports := set $ports $port $innerdict -}}
{{- end -}}
{{/* Write out the ports according to the info collected above */}}
{{- range $port, $innerdict := $ports -}}
{{- if index $innerdict "isudp" -}}
{{- printf "- {port: %v, protocol: UDP, name: udp-%s}\n" $port $port -}}
{{- end -}}
{{- if index $innerdict "istcp" -}}
{{- printf "- {port: %v, protocol: TCP, name: tcp-%s}\n" $port $port -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Generate the list of ports automatically from the server definitions
*/}}
{{- define "coredns.containerPorts" -}}
{{/* Set ports to be an empty dict */}}
{{- $ports := dict -}}
{{/* Iterate through each of the server blocks */}}
{{- range .Values.servers -}}
{{/* Capture port to avoid scoping awkwardness */}}
{{- $port := toString .port -}}
{{/* If none of the server blocks has mentioned this port yet take note of it */}}
{{- if not (hasKey $ports $port) -}}
{{- $ports := set $ports $port (dict "istcp" false "isudp" false) -}}
{{- end -}}
{{/* Retrieve the inner dict that holds the protocols for a given port */}}
{{- $innerdict := index $ports $port -}}
{{/*
Look at each of the zones and check which protocol they serve
At the moment the following are supported by CoreDNS:
UDP: dns://
TCP: tls://, grpc://
*/}}
{{- range .zones -}}
{{- if has (default "" .scheme) (list "dns://") -}}
{{/* Optionally enable tcp for this service as well */}}
{{- if eq .use_tcp true }}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end }}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- end -}}
{{- if has (default "" .scheme) (list "tls://" "grpc://") -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{- end -}}
{{/* If none of the zones specify scheme, default to dns:// on both tcp & udp */}}
{{- if and (not (index $innerdict "istcp")) (not (index $innerdict "isudp")) -}}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{/* Write the dict back into the outer dict */}}
{{- $ports := set $ports $port $innerdict -}}
{{- end -}}
{{/* Write out the ports according to the info collected above */}}
{{- range $port, $innerdict := $ports -}}
{{- if index $innerdict "isudp" -}}
{{- printf "- {containerPort: %v, protocol: UDP, name: udp-%s}\n" $port $port -}}
{{- end -}}
{{- if index $innerdict "istcp" -}}
{{- printf "- {containerPort: %v, protocol: TCP, name: tcp-%s}\n" $port $port -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "coredns.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "coredns.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- define "system_default_registry" -}}
{{- if .Values.global.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
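
To make the port-collection logic above concrete, here is a hypothetical `servers` value and the lines `coredns.servicePorts` would emit for it: the `dns://` zone contributes a UDP port (no `use_tcp`, so no TCP), and the `tls://` zone contributes a TCP port.

```yaml
# Hypothetical input:
servers:
- port: 53
  zones:
  - zone: .
    scheme: dns://
- port: 853
  zones:
  - zone: .
    scheme: tls://
# Emitted by the helper:
# - {port: 53, protocol: UDP, name: udp-53}
# - {port: 853, protocol: TCP, name: tcp-853}
```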


@ -0,0 +1,35 @@
{{- if and .Values.autoscaler.enabled .Values.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","watch"]
- apiGroups: [""]
resources: ["replicationcontrollers/scale"]
verbs: ["get", "update"]
- apiGroups: ["extensions", "apps"]
resources: ["deployments/scale", "replicasets/scale"]
verbs: ["get", "update"]
# Remove the configmaps rule once below issue is fixed:
# kubernetes-incubator/cluster-proportional-autoscaler#16
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create"]
{{- end }}


@ -0,0 +1,38 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
{{- if .Values.rbac.pspEnable }}
- apiGroups:
- policy
- extensions
resources:
- podsecuritypolicies
verbs:
- use
resourceNames:
- {{ template "coredns.fullname" . }}
{{- end }}
{{- end }}


@ -0,0 +1,28 @@
{{- if and .Values.autoscaler.enabled .Values.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "coredns.fullname" . }}-autoscaler
subjects:
- kind: ServiceAccount
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
{{- end }}


@ -0,0 +1,24 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "coredns.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "coredns.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}


@ -0,0 +1,34 @@
{{- if .Values.autoscaler.enabled }}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{- toYaml .Values.customLabels | nindent 4 }}
{{- end }}
{{- if .Values.autoscaler.configmap.annotations }}
annotations:
{{- toYaml .Values.autoscaler.configmap.annotations | nindent 4 }}
{{- end }}
data:
# When the cluster is using large nodes (with more cores), "coresPerReplica" should dominate.
# If using small nodes, "nodesPerReplica" should dominate.
linear: |-
{
"coresPerReplica": {{ .Values.autoscaler.coresPerReplica | float64 }},
"nodesPerReplica": {{ .Values.autoscaler.nodesPerReplica | float64 }},
"preventSinglePointFailure": true
}
{{- end }}
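
With the default `coresPerReplica: 256` and `nodesPerReplica: 16`, cluster-proportional-autoscaler's linear mode sizes the target Deployment roughly as follows (the 20-node cluster below is a made-up example):

```text
replicas = max( ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica) )
# e.g. 20 nodes with 8 cores each (160 cores total):
#   max( ceil(160/256), ceil(20/16) ) = max(1, 2) = 2 replicas
# preventSinglePointFailure keeps at least 2 replicas on multi-node clusters
```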


@ -0,0 +1,30 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
data:
Corefile: |-
{{ range .Values.servers }}
{{- range $idx, $zone := .zones }}{{ if $idx }} {{ else }}{{ end }}{{ default "" $zone.scheme }}{{ default "." $zone.zone }}{{ else }}.{{ end -}}
{{- if .port }}:{{ .port }} {{ end -}}
{
{{- range .plugins }}
{{ .name }} {{ if .parameters }} {{if eq .name "kubernetes" }} {{ (lookup "v1" "ConfigMap" "kube-system" "cluster-dns").data.clusterDomain }} {{ end }} {{.parameters}}{{ end }}{{ if .configBlock }} {
{{ .configBlock | indent 12 }}
}{{ end }}
{{- end }}
}
{{ end }}
{{- range .Values.zoneFiles }}
{{ .filename }}: {{ toYaml .contents | indent 4 }}
{{- end }}
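
For a feel of the output, a hypothetical server entry listening on port 53 with `errors` and `forward` plugins would render a Corefile block shaped like this (plugin names and parameters here are illustrative, not this chart's defaults):

```text
.:53 {
    errors
    forward . /etc/resolv.conf
}
```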


@ -0,0 +1,77 @@
{{- if .Values.autoscaler.enabled }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
template:
metadata:
labels:
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | nindent 8 }}
{{- end }}
annotations:
checksum/configmap: {{ include (print $.Template.BasePath "/configmap-autoscaler.yaml") . | sha256sum }}
{{- if .Values.isClusterService }}
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
{{- end }}
spec:
serviceAccountName: {{ template "coredns.fullname" . }}-autoscaler
{{- $priorityClassName := default .Values.priorityClassName .Values.autoscaler.priorityClassName }}
{{- if $priorityClassName }}
priorityClassName: {{ $priorityClassName | quote }}
{{- end }}
{{- if .Values.autoscaler.affinity }}
affinity:
{{ toYaml .Values.autoscaler.affinity | indent 8 }}
{{- end }}
{{- if .Values.autoscaler.tolerations }}
tolerations:
{{ toYaml .Values.autoscaler.tolerations | indent 8 }}
{{- end }}
{{- if .Values.autoscaler.nodeSelector }}
nodeSelector:
{{ toYaml .Values.autoscaler.nodeSelector | indent 8 }}
{{- end }}
containers:
- name: autoscaler
image: {{ template "system_default_registry" . }}{{ .Values.autoscaler.image.repository }}:{{ .Values.autoscaler.image.tag }}
imagePullPolicy: {{ .Values.autoscaler.image.pullPolicy }}
resources:
{{ toYaml .Values.autoscaler.resources | indent 10 }}
command:
- /cluster-proportional-autoscaler
- --namespace={{ .Release.Namespace }}
- --configmap={{ template "coredns.fullname" . }}-autoscaler
- --target=Deployment/{{ template "coredns.fullname" . }}
- --logtostderr=true
- --v=2
{{- end }}

View File

@ -0,0 +1,127 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
{{- if not .Values.autoscaler.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 10%
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
template:
metadata:
labels:
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- if .Values.isClusterService }}
scheduler.alpha.kubernetes.io/critical-pod: ''
{{- end }}
spec:
serviceAccountName: {{ template "coredns.serviceAccountName" . }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
{{- if .Values.isClusterService }}
dnsPolicy: Default
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if or (.Values.isClusterService) (.Values.tolerations) }}
tolerations:
{{- if .Values.isClusterService }}
- key: CriticalAddonsOnly
operator: Exists
{{- end }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
containers:
- name: "coredns"
image: {{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
{{- range .Values.extraSecrets }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: true
{{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
ports:
{{ include "coredns.containerPorts" . | indent 8 }}
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
volumes:
- name: config-volume
configMap:
name: {{ template "coredns.fullname" . }}
items:
- key: Corefile
path: Corefile
{{ range .Values.zoneFiles }}
- key: {{ .filename }}
path: {{ .filename }}
{{ end }}
{{- range .Values.extraSecrets }}
- name: {{ .name }}
secret:
secretName: {{ .name }}
defaultMode: 400
{{- end }}

View File

@ -0,0 +1,28 @@
{{- if .Values.podDisruptionBudget -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{ toYaml .Values.podDisruptionBudget | indent 2 }}
{{- end }}
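Since `.Values.podDisruptionBudget` is piped through `toYaml` directly into `spec`, any valid PodDisruptionBudget fields can be supplied from values. A minimal sketch (the field value is an illustrative choice, not a chart default):

```yaml
podDisruptionBudget:
  minAvailable: 1
```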

View File

@ -0,0 +1,57 @@
{{- if .Values.rbac.pspEnable }}
{{ if .Capabilities.APIVersions.Has "policy/v1beta1" }}
apiVersion: policy/v1beta1
{{ else }}
apiVersion: extensions/v1beta1
{{ end -}}
kind: PodSecurityPolicy
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- else }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- end }}
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# Add back CAP_NET_BIND_SERVICE so that coredns can run on port 53
allowedCapabilities:
- CAP_NET_BIND_SERVICE
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'RunAsAny'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
{{- end }}

View File

@ -0,0 +1,33 @@
{{- if .Values.prometheus.monitor.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "coredns.fullname" . }}-metrics
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
app.kubernetes.io/component: metrics
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
selector:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
ports:
- name: metrics
port: 9153
targetPort: 9153
{{- end }}

View File

@ -0,0 +1,40 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
selector:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{ else }}
clusterIP: {{ (lookup "v1" "ConfigMap" "kube-system" "cluster-dns").data.clusterDNS }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
ports:
{{ include "coredns.servicePorts" . | indent 2 -}}
type: {{ default "ClusterIP" .Values.serviceType }}
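Both this Service and the Corefile ConfigMap above `lookup` a `cluster-dns` ConfigMap in `kube-system` (presumably provisioned by RKE2 itself) and read its `clusterDNS` and `clusterDomain` keys. A hypothetical example of the shape expected, with placeholder values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-dns
  namespace: kube-system
data:
  clusterDNS: 10.43.0.10       # placeholder service IP
  clusterDomain: cluster.local # placeholder cluster domain
```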

View File

@ -0,0 +1,21 @@
{{- if and .Values.autoscaler.enabled .Values.rbac.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,16 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "coredns.serviceAccountName" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- end }}

View File

@ -0,0 +1,33 @@
{{- if .Values.prometheus.monitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "coredns.fullname" . }}
{{- if .Values.prometheus.monitor.namespace }}
namespace: {{ .Values.prometheus.monitor.namespace }}
{{- end }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.prometheus.monitor.additionalLabels }}
{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
app.kubernetes.io/component: metrics
endpoints:
- port: metrics
{{- end }}

View File

@ -0,0 +1,202 @@
# Default values for coredns.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: rancher/hardened-coredns
tag: "v1.6.9"
pullPolicy: IfNotPresent
replicaCount: 1
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
serviceType: "ClusterIP"
prometheus:
monitor:
enabled: false
additionalLabels: {}
namespace: ""
service:
# clusterIP: ""
# loadBalancerIP: ""
# externalTrafficPolicy: ""
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9153"
serviceAccount:
create: true
# The name of the ServiceAccount to use
# If not set and create is true, a name is generated using the fullname template
name: coredns
rbac:
# If true, create & use RBAC resources
create: true
# If true, create and use PodSecurityPolicy
pspEnable: false
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
# name:
# isClusterService specifies whether the chart should be deployed as a cluster service or a normal k8s app.
isClusterService: true
# Optional priority class to be used for the coredns pods. Used for autoscaler if autoscaler.priorityClassName not set.
priorityClassName: "system-cluster-critical"
# Default zone is what Kubernetes recommends:
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options
servers:
- zones:
- zone: .
port: 53
plugins:
- name: errors
# Serves a /health endpoint on :8080, required for livenessProbe
- name: health
configBlock: |-
lameduck 5s
# Serves a /ready endpoint on :8181, required for readinessProbe
- name: ready
# Required to query kubernetes API for data
- name: kubernetes
parameters: cluster.local in-addr.arpa ip6.arpa
configBlock: |-
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
# Serves a /metrics endpoint on :9153, required for serviceMonitor
- name: prometheus
parameters: 0.0.0.0:9153
- name: forward
parameters: . /etc/resolv.conf
- name: cache
parameters: 30
- name: loop
- name: reload
- name: loadbalance
# Complete example with all the options:
# - zones: # the `zones` block can be left out entirely, defaults to "."
# - zone: hello.world. # optional, defaults to "."
# scheme: tls:// # optional, defaults to "" (which equals "dns://" in CoreDNS)
# - zone: foo.bar.
# scheme: dns://
# use_tcp: true # set this parameter to optionally expose the port on tcp as well as udp for the DNS protocol
# # Note that this will not work if you are also exposing tls or grpc on the same server
# port: 12345 # optional, defaults to "" (which equals 53 in CoreDNS)
# plugins: # the plugins to use for this server block
# - name: kubernetes # name of plugin, if used multiple times ensure that the plugin supports it!
# parameters: foo bar # list of parameters after the plugin
# configBlock: |- # if the plugin supports extra block style config, supply it here
# hello world
# foo bar
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
# for example:
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: foo.bar.com/role
# operator: In
# values:
# - master
affinity: {}
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
# for example:
# tolerations:
# - key: foo.bar.com/role
# operator: Equal
# value: master
# effect: NoSchedule
tolerations: []
# https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
podDisruptionBudget: {}
# configure custom zone files as per https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
zoneFiles: []
# - filename: example.db
# domain: example.com
# contents: |
# example.com. IN SOA sns.dns.icann.com. noc.dns.icann.com. 2015082541 7200 3600 1209600 3600
# example.com. IN NS b.iana-servers.net.
# example.com. IN NS a.iana-servers.net.
# example.com. IN A 192.168.99.102
# *.example.com. IN A 192.168.99.102
# optional array of secrets to mount inside the coredns container
# possible use case: a secure connection to an etcd backend
extraSecrets: []
# - name: etcd-client-certs
# mountPath: /etc/coredns/tls/etcd
# - name: some-fancy-secret
# mountPath: /etc/wherever
# Custom labels to apply to the Deployment, Pod, Service, and ServiceMonitor objects, including the autoscaler ones if enabled.
customLabels: {}
## Configure a cluster-proportional-autoscaler for coredns
# See https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
autoscaler:
# Enable the cluster-proportional-autoscaler
enabled: false
# Number of cores in the cluster per coredns replica
coresPerReplica: 256
# Number of nodes in the cluster per coredns replica
nodesPerReplica: 16
image:
repository: k8s.gcr.io/cluster-proportional-autoscaler-amd64
tag: "1.7.1"
pullPolicy: IfNotPresent
# Optional priority class to be used for the autoscaler pods. priorityClassName used if not set.
priorityClassName: ""
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
affinity: {}
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
tolerations: []
# resources for autoscaler pod
resources:
requests:
cpu: "20m"
memory: "10Mi"
limits:
cpu: "20m"
memory: "10Mi"
# Options for autoscaler configmap
configmap:
## Annotations for the coredns-autoscaler configmap
# e.g. strategy.spinnaker.io/versioned: "false" to ensure the configmap isn't renamed
annotations: {}
k8sApp: "kube-dns"
global:
systemDefaultRegistry: ""
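As an illustration, the autoscaler block above can be toggled at install time; a hypothetical invocation in the Helm 2 style used by this chart's README (release name and chart path are placeholders):

```console
$ helm install --name rke2-coredns --namespace=kube-system \
    --set autoscaler.enabled=true \
    --set autoscaler.coresPerReplica=128 \
    ./rke2-coredns
```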

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
OWNERS

View File

@ -0,0 +1,14 @@
apiVersion: v1
appVersion: 1.7.1
description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS
Services
home: https://coredns.io
icon: https://coredns.io/images/CoreDNS_Colour_Horizontal.png
keywords:
- coredns
- dns
- kubedns
name: rke2-coredns
sources:
- https://github.com/coredns/coredns
version: 1.13.800

View File

@ -0,0 +1,169 @@
# ⚠️ Repo Archive Notice
As of Nov 13, 2020, charts in this repo will no longer be updated.
For more information, see the Helm Charts [Deprecation and Archive Notice](https://github.com/helm/charts#%EF%B8%8F-deprecation-and-archive-notice), and [Update](https://helm.sh/blog/charts-repo-deprecation/).
# CoreDNS
[CoreDNS](https://coredns.io/) is a DNS server that chains plugins and provides DNS Services
## DEPRECATION NOTICE
This chart is deprecated and no longer supported.
# TL;DR;
```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```
## Introduction
This chart bootstraps a [CoreDNS](https://github.com/coredns/coredns) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. This chart provides DNS services and can be deployed in multiple configurations to support the scenarios listed below:
- CoreDNS as a cluster DNS service and a drop-in replacement for Kube/SkyDNS. This is the default mode, and CoreDNS is deployed as a cluster service in the kube-system namespace. This mode is chosen by setting `isClusterService` to true.
- CoreDNS as an external DNS service. In this mode CoreDNS is deployed like any regular Kubernetes app in a user-specified namespace. The CoreDNS service can be exposed outside the cluster by using either the NodePort or LoadBalancer type of service. This mode is chosen by setting `isClusterService` to false.
- CoreDNS as an external DNS provider for Kubernetes federation. This is a sub-case of the 'external DNS service' mode that uses the etcd plugin as the CoreDNS backend. This deployment mode has a dependency on the `etcd-operator` chart, which needs to be pre-installed.
## Prerequisites
- Kubernetes 1.10 or later
## Installing the Chart
The chart can be installed as follows:
```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```
The command deploys CoreDNS on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists various ways to override default configuration during deployment.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `coredns` deployment:
```console
$ helm delete coredns
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
| Parameter | Description | Default |
|:----------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------|
| `image.repository` | The image repository to pull from | coredns/coredns |
| `image.tag` | The image tag to pull from | `v1.7.1` |
| `image.pullPolicy` | Image pull policy | IfNotPresent |
| `replicaCount` | Number of replicas | 1 |
| `resources.limits.cpu` | Container maximum CPU | `100m` |
| `resources.limits.memory` | Container maximum memory | `128Mi` |
| `resources.requests.cpu` | Container requested CPU | `100m` |
| `resources.requests.memory` | Container requested memory | `128Mi` |
| `serviceType` | Kubernetes Service type | `ClusterIP` |
| `prometheus.service.enabled` | Set this to `true` to create Service for Prometheus metrics | `false` |
| `prometheus.service.annotations` | Annotations to add to the metrics Service | `{prometheus.io/scrape: "true", prometheus.io/port: "9153"}`|
| `prometheus.monitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
| `prometheus.monitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | {} |
| `prometheus.monitor.namespace` | Selector to select which namespaces the Endpoints objects are discovered from. | `""` |
| `service.clusterIP` | IP address to assign to service | `""` |
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `service.externalIPs` | External IP addresses | [] |
| `service.externalTrafficPolicy`          | Service external traffic policy (`Local` preserves client source IP)                   | `""`                                                         |
| `service.annotations` | Annotations to add to service | {} |
| `serviceAccount.create` | If true, create & use serviceAccount | false |
| `serviceAccount.name` | If not set & create is true, use template fullname | |
| `rbac.create` | If true, create & use RBAC resources | true |
| `rbac.pspEnable` | Specifies whether a PodSecurityPolicy should be created. | `false` |
| `isClusterService`                       | Specifies whether the chart should be deployed as a cluster service or a normal k8s app | true                                                        |
| `priorityClassName` | Name of Priority Class to assign pods | `""` |
| `servers` | Configuration for CoreDNS and plugins | See values.yml |
| `affinity` | Affinity settings for pod assignment | {} |
| `nodeSelector` | Node labels for pod assignment | {} |
| `tolerations` | Tolerations for pod assignment | [] |
| `zoneFiles` | Configure custom Zone files | [] |
| `extraVolumes` | Optional array of volumes to create | [] |
| `extraVolumeMounts` | Optional array of volumes to mount inside the CoreDNS container | [] |
| `extraSecrets` | Optional array of secrets to mount inside the CoreDNS container | [] |
| `customLabels` | Optional labels for Deployment(s), Pod, Service, ServiceMonitor objects | {} |
| `rollingUpdate.maxUnavailable` | Maximum number of unavailable replicas during rolling update | `1` |
| `rollingUpdate.maxSurge` | Maximum number of pods created above desired number of pods | `25%` |
| `podDisruptionBudget` | Optional PodDisruptionBudget | {} |
| `podAnnotations` | Optional Pod only Annotations | {} |
| `terminationGracePeriodSeconds` | Optional duration in seconds the pod needs to terminate gracefully. | 30 |
| `preStopSleep` | Definition of Kubernetes preStop hook executed before Pod termination | {} |
| `hpa.enabled`                            | Enable HPA autoscaler instead of the proportional one                                  | `false`                                                      |
| `hpa.minReplicas`                        | HPA minimum number of CoreDNS replicas                                                 | `1`                                                          |
| `hpa.maxReplicas`                        | HPA maximum number of CoreDNS replicas                                                 | `2`                                                          |
| `hpa.metrics`                            | Metrics definitions used by the HPA to scale up and down                               | {}                                                           |
| `autoscaler.enabled`                     | Optionally enable a cluster-proportional-autoscaler for CoreDNS                        | `false`                                                      |
| `autoscaler.coresPerReplica` | Number of cores in the cluster per CoreDNS replica | `256` |
| `autoscaler.nodesPerReplica` | Number of nodes in the cluster per CoreDNS replica | `16` |
| `autoscaler.min` | Min size of replicaCount | 0 |
| `autoscaler.max` | Max size of replicaCount | 0 (aka no max) |
| `autoscaler.includeUnschedulableNodes`   | Should the replicas scale based on the total number of nodes, or only schedulable ones | `false`                                                      |
| `autoscaler.preventSinglePointFailure` | If true does not allow single points of failure to form | `true` |
| `autoscaler.image.repository` | The image repository to pull autoscaler from | k8s.gcr.io/cluster-proportional-autoscaler-amd64 |
| `autoscaler.image.tag` | The image tag to pull autoscaler from | `1.7.1` |
| `autoscaler.image.pullPolicy` | Image pull policy for the autoscaler | IfNotPresent |
| `autoscaler.priorityClassName` | Optional priority class for the autoscaler pod. `priorityClassName` used if not set. | `""` |
| `autoscaler.affinity` | Affinity settings for pod assignment for autoscaler | {} |
| `autoscaler.nodeSelector` | Node labels for pod assignment for autoscaler | {} |
| `autoscaler.tolerations` | Tolerations for pod assignment for autoscaler | [] |
| `autoscaler.resources.limits.cpu` | Container maximum CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.limits.memory` | Container maximum memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.resources.requests.cpu` | Container requested CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.requests.memory` | Container requested memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.configmap.annotations` | Annotations to add to autoscaler config map. For example to stop CI renaming them | {} |
See `values.yaml` for configuration notes. Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install --name coredns \
--set rbac.create=false \
stable/coredns
```
The above command disables automatic creation of RBAC rules.
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm install --name coredns -f values.yaml stable/coredns
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Caveats
The chart will automatically determine which protocols to listen on based on
the protocols you define in your zones. This means that you could potentially
use both "TCP" and "UDP" on a single port.
Some cloud environments like "GCE" or "Azure container service" cannot
create external load balancers with both "TCP" and "UDP" protocols. So
when deploying CoreDNS with `serviceType="LoadBalancer"` on such cloud
environments, make sure you do not attempt to use both protocols at the same
time.
## Autoscaling
By setting `autoscaler.enabled = true` a
[cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler)
will be deployed. This defaults to one CoreDNS replica for every 256 cores, or
every 16 nodes in the cluster. These ratios can be changed with `autoscaler.coresPerReplica`
and `autoscaler.nodesPerReplica`. When the cluster uses large nodes (with more
cores), `coresPerReplica` should dominate. If using small nodes,
`nodesPerReplica` should dominate.
This also creates a ServiceAccount, ClusterRole, and ClusterRoleBinding for
the autoscaler deployment.
`replicaCount` is ignored if this is enabled.
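For example, with the defaults on a hypothetical cluster of 100 nodes with 8 cores each, the linear controller works out to roughly:

```text
cores: 100 * 8 = 800  ->  ceil(800 / 256) = 4
nodes: 100            ->  ceil(100 / 16)  = 7
replicas = max(4, 7)  = 7
```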
By setting `hpa.enabled = true` a [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
is enabled for the CoreDNS deployment. This can scale the number of replicas based on metrics
like CPU utilization, memory utilization, or custom ones.

View File

@ -0,0 +1,30 @@
{{- if .Values.isClusterService }}
CoreDNS is now running in the cluster as a cluster-service.
{{- else }}
CoreDNS is now running in the cluster.
It can be accessed using the endpoint below:
{{- if contains "NodePort" .Values.serviceType }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "coredns.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo "$NODE_IP:$NODE_PORT"
{{- else if contains "LoadBalancer" .Values.serviceType }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w {{ template "coredns.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "coredns.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $SERVICE_IP
{{- else if contains "ClusterIP" .Values.serviceType }}
"{{ template "coredns.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local"
from within the cluster
{{- end }}
{{- end }}
It can be tested with the following:
1. Launch a Pod with DNS tools:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
2. Query the DNS server:
/ # host kubernetes

View File

@ -0,0 +1,158 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "coredns.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "coredns.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Generate the list of ports automatically from the server definitions
*/}}
{{- define "coredns.servicePorts" -}}
{{/* Set ports to be an empty dict */}}
{{- $ports := dict -}}
{{/* Iterate through each of the server blocks */}}
{{- range .Values.servers -}}
{{/* Capture port to avoid scoping awkwardness */}}
{{- $port := toString .port -}}
{{/* If none of the server blocks has mentioned this port yet take note of it */}}
{{- if not (hasKey $ports $port) -}}
{{- $ports := set $ports $port (dict "istcp" false "isudp" false) -}}
{{- end -}}
{{/* Retrieve the inner dict that holds the protocols for a given port */}}
{{- $innerdict := index $ports $port -}}
{{/*
Look at each of the zones and check which protocol they serve
At the moment the following are supported by CoreDNS:
UDP: dns://
TCP: tls://, grpc://
*/}}
{{- range .zones -}}
{{- if has (default "" .scheme) (list "dns://") -}}
{{/* Optionally enable tcp for this service as well */}}
{{- if eq (default false .use_tcp) true }}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end }}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- end -}}
{{- if has (default "" .scheme) (list "tls://" "grpc://") -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{- end -}}
{{/* If none of the zones specify scheme, default to dns:// on both tcp & udp */}}
{{- if and (not (index $innerdict "istcp")) (not (index $innerdict "isudp")) -}}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{/* Write the dict back into the outer dict */}}
{{- $ports := set $ports $port $innerdict -}}
{{- end -}}
{{/* Write out the ports according to the info collected above */}}
{{- range $port, $innerdict := $ports -}}
{{- if index $innerdict "isudp" -}}
{{- printf "- {port: %v, protocol: UDP, name: udp-%s}\n" $port $port -}}
{{- end -}}
{{- if index $innerdict "istcp" -}}
{{- printf "- {port: %v, protocol: TCP, name: tcp-%s}\n" $port $port -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Generate the list of ports automatically from the server definitions
*/}}
{{- define "coredns.containerPorts" -}}
{{/* Set ports to be an empty dict */}}
{{- $ports := dict -}}
{{/* Iterate through each of the server blocks */}}
{{- range .Values.servers -}}
{{/* Capture port to avoid scoping awkwardness */}}
{{- $port := toString .port -}}
{{/* If none of the server blocks has mentioned this port yet take note of it */}}
{{- if not (hasKey $ports $port) -}}
{{- $ports := set $ports $port (dict "istcp" false "isudp" false) -}}
{{- end -}}
{{/* Retrieve the inner dict that holds the protocols for a given port */}}
{{- $innerdict := index $ports $port -}}
{{/*
Look at each of the zones and check which protocol they serve
At the moment the following are supported by CoreDNS:
UDP: dns://
TCP: tls://, grpc://
*/}}
{{- range .zones -}}
{{- if has (default "" .scheme) (list "dns://") -}}
{{/* Optionally enable tcp for this service as well */}}
{{- if eq (default false .use_tcp) true }}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end }}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- end -}}
{{- if has (default "" .scheme) (list "tls://" "grpc://") -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{- end -}}
{{/* If none of the zones specify scheme, default to dns:// on both tcp & udp */}}
{{- if and (not (index $innerdict "istcp")) (not (index $innerdict "isudp")) -}}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{/* Write the dict back into the outer dict */}}
{{- $ports := set $ports $port $innerdict -}}
{{- end -}}
{{/* Write out the ports according to the info collected above */}}
{{- range $port, $innerdict := $ports -}}
{{- if index $innerdict "isudp" -}}
{{- printf "- {containerPort: %v, protocol: UDP, name: udp-%s}\n" $port $port -}}
{{- end -}}
{{- if index $innerdict "istcp" -}}
{{- printf "- {containerPort: %v, protocol: TCP, name: tcp-%s}\n" $port $port -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "coredns.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "coredns.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- define "system_default_registry" -}}
{{- if .Values.global.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
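Given the chart's default single server block (zone `.`, port 53, no `scheme`), both port helpers above fall through to the "no scheme" branch and enable UDP and TCP, so `coredns.servicePorts` renders:

```yaml
- {port: 53, protocol: UDP, name: udp-53}
- {port: 53, protocol: TCP, name: tcp-53}
```

`coredns.containerPorts` emits the same pairs with `containerPort` keys.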

View File

@ -0,0 +1,35 @@
{{- if and .Values.autoscaler.enabled .Values.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","watch"]
- apiGroups: [""]
resources: ["replicationcontrollers/scale"]
verbs: ["get", "update"]
- apiGroups: ["extensions", "apps"]
resources: ["deployments/scale", "replicasets/scale"]
verbs: ["get", "update"]
# Remove the configmaps rule once below issue is fixed:
# kubernetes-incubator/cluster-proportional-autoscaler#16
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create"]
{{- end }}

View File

@ -0,0 +1,38 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
{{- if .Values.rbac.pspEnable }}
- apiGroups:
- policy
- extensions
resources:
- podsecuritypolicies
verbs:
- use
resourceNames:
- {{ template "coredns.fullname" . }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,28 @@
{{- if and .Values.autoscaler.enabled .Values.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "coredns.fullname" . }}-autoscaler
subjects:
- kind: ServiceAccount
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,24 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "coredns.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "coredns.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,37 @@
{{- if .Values.autoscaler.enabled }}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{- toYaml .Values.customLabels | nindent 4 }}
{{- end }}
{{- if .Values.autoscaler.configmap.annotations }}
annotations:
{{- toYaml .Values.autoscaler.configmap.annotations | nindent 4 }}
{{- end }}
data:
# When the cluster uses large nodes (with more cores), "coresPerReplica" should dominate.
# If using small nodes, "nodesPerReplica" should dominate.
linear: |-
{
"coresPerReplica": {{ .Values.autoscaler.coresPerReplica | float64 }},
"nodesPerReplica": {{ .Values.autoscaler.nodesPerReplica | float64 }},
"preventSinglePointFailure": {{ .Values.autoscaler.preventSinglePointFailure }},
"min": {{ .Values.autoscaler.min | int }},
"max": {{ .Values.autoscaler.max | int }},
"includeUnschedulableNodes": {{ .Values.autoscaler.includeUnschedulableNodes }}
}
{{- end }}
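With the defaults listed in the README table above (`min: 0`, `max: 0`, `includeUnschedulableNodes: false`), this newer ConfigMap renders roughly as:

```json
{
  "coresPerReplica": 256,
  "nodesPerReplica": 16,
  "preventSinglePointFailure": true,
  "min": 0,
  "max": 0,
  "includeUnschedulableNodes": false
}
```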

View File

@ -0,0 +1,30 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
data:
Corefile: |-
{{ range .Values.servers }}
{{- range $idx, $zone := .zones }}{{ if $idx }} {{ else }}{{ end }}{{ default "" $zone.scheme }}{{ default "." $zone.zone }}{{ else }}.{{ end -}}
{{- if .port }}:{{ .port }} {{ end -}}
{
{{- range .plugins }}
{{ .name }} {{ if .parameters }} {{if eq .name "kubernetes" }} {{ (lookup "v1" "ConfigMap" "kube-system" "cluster-dns").data.clusterDomain }} {{ end }} {{.parameters}}{{ end }}{{ if .configBlock }} {
{{ .configBlock | indent 12 }}
}{{ end }}
{{- end }}
}
{{ end }}
{{- range .Values.zoneFiles }}
{{ .filename }}: {{ toYaml .contents | indent 4 }}
{{- end }}

View File

@ -0,0 +1,77 @@
{{- if and (.Values.autoscaler.enabled) (not .Values.hpa.enabled) }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
template:
metadata:
labels:
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | nindent 8 }}
{{- end }}
annotations:
checksum/configmap: {{ include (print $.Template.BasePath "/configmap-autoscaler.yaml") . | sha256sum }}
{{- if .Values.isClusterService }}
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
{{- end }}
spec:
serviceAccountName: {{ template "coredns.fullname" . }}-autoscaler
{{- $priorityClassName := default .Values.priorityClassName .Values.autoscaler.priorityClassName }}
{{- if $priorityClassName }}
priorityClassName: {{ $priorityClassName | quote }}
{{- end }}
{{- if .Values.autoscaler.affinity }}
affinity:
{{ toYaml .Values.autoscaler.affinity | indent 8 }}
{{- end }}
{{- if .Values.autoscaler.tolerations }}
tolerations:
{{ toYaml .Values.autoscaler.tolerations | indent 8 }}
{{- end }}
{{- if .Values.autoscaler.nodeSelector }}
nodeSelector:
{{ toYaml .Values.autoscaler.nodeSelector | indent 8 }}
{{- end }}
containers:
- name: autoscaler
image: {{ template "system_default_registry" . }}{{ .Values.autoscaler.image.repository }}:{{ .Values.autoscaler.image.tag }}
imagePullPolicy: {{ .Values.autoscaler.image.pullPolicy }}
resources:
{{ toYaml .Values.autoscaler.resources | indent 10 }}
command:
- /cluster-proportional-autoscaler
- --namespace={{ .Release.Namespace }}
- --configmap={{ template "coredns.fullname" . }}-autoscaler
- --target=Deployment/{{ template "coredns.fullname" . }}
- --logtostderr=true
- --v=2
{{- end }}

View File

@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
- k8s-app: {{ .Chart.Name | quote }}
+ k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
@ -28,14 +28,14 @@ spec:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
- k8s-app: {{ .Chart.Name | quote }}
+ k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
template:
metadata:
labels:
{{- if .Values.isClusterService }}
- k8s-app: {{ .Chart.Name | quote }}
+ k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
@ -76,7 +76,7 @@ spec:
{{- end }}
containers:
- name: "coredns"
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ image: {{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:

View File

@ -0,0 +1,28 @@
{{- if .Values.podDisruptionBudget -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{ toYaml .Values.podDisruptionBudget | indent 2 }}
{{- end }}

View File

@ -0,0 +1,57 @@
{{- if .Values.rbac.pspEnable }}
{{ if .Capabilities.APIVersions.Has "policy/v1beta1" }}
apiVersion: policy/v1beta1
{{ else }}
apiVersion: extensions/v1beta1
{{ end -}}
kind: PodSecurityPolicy
metadata:
name: {{ template "coredns.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- else }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- end }}
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# Add back CAP_NET_BIND_SERVICE so that coredns can run on port 53
allowedCapabilities:
- CAP_NET_BIND_SERVICE
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'RunAsAny'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
{{- end }}

View File

@ -0,0 +1,33 @@
{{- if .Values.prometheus.service.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "coredns.fullname" . }}-metrics
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
app.kubernetes.io/component: metrics
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
annotations:
{{ toYaml .Values.prometheus.service.annotations | indent 4 }}
spec:
selector:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
ports:
- name: metrics
port: 9153
targetPort: 9153
{{- end }}

View File

@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
- k8s-app: {{ .Chart.Name | quote }}
+ k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
@ -21,11 +21,13 @@ spec:
selector:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
- k8s-app: {{ .Chart.Name | quote }}
+ k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
+ {{ else }}
+ clusterIP: {{ (lookup "v1" "ConfigMap" "kube-system" "cluster-dns").data.clusterDNS }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:

View File

@ -0,0 +1,21 @@
{{- if and .Values.autoscaler.enabled .Values.rbac.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "coredns.fullname" . }}-autoscaler
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name }}-autoscaler
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}-autoscaler
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,16 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "coredns.serviceAccountName" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- end }}

View File

@ -0,0 +1,33 @@
{{- if .Values.prometheus.monitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "coredns.fullname" . }}
{{- if .Values.prometheus.monitor.namespace }}
namespace: {{ .Values.prometheus.monitor.namespace }}
{{- end }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.prometheus.monitor.additionalLabels }}
{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Values.k8sApp | default .Chart.Name | quote }}
{{- end }}
app.kubernetes.io/name: {{ template "coredns.name" . }}
app.kubernetes.io/component: metrics
endpoints:
- port: metrics
{{- end }}

View File

@ -0,0 +1,259 @@
# Default values for coredns.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: rancher/hardened-coredns
tag: "v1.7.1"
pullPolicy: IfNotPresent
replicaCount: 1
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
## Create HorizontalPodAutoscaler object.
##
# autoscaling:
# minReplicas: 1
# maxReplicas: 10
# metrics:
# - type: Resource
# resource:
# name: cpu
# targetAverageUtilization: 60
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 60
rollingUpdate:
maxUnavailable: 1
maxSurge: 25%
# Under heavy load it takes more than the standard time to remove a Pod endpoint from the cluster.
# This will delay termination of our pod by `preStopSleep` seconds, to make sure kube-proxy has
# enough time to catch up.
# preStopSleep: 5
terminationGracePeriodSeconds: 30
podAnnotations: {}
# cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
serviceType: "ClusterIP"
prometheus:
service:
enabled: false
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9153"
monitor:
enabled: false
additionalLabels: {}
namespace: ""
service:
# clusterIP: ""
# loadBalancerIP: ""
# externalIPs: []
# externalTrafficPolicy: ""
annotations: {}
serviceAccount:
create: true
# The name of the ServiceAccount to use
# If not set and create is true, a name is generated using the fullname template
name: coredns
rbac:
# If true, create & use RBAC resources
create: true
# If true, create and use PodSecurityPolicy
pspEnable: false
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
# name:
# isClusterService specifies whether the chart should be deployed as a cluster-service or a normal k8s app.
isClusterService: true
# Optional priority class to be used for the coredns pods. Also used for the autoscaler if autoscaler.priorityClassName is not set.
priorityClassName: "system-cluster-critical"
# Default zone is what Kubernetes recommends:
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options
servers:
- zones:
- zone: .
port: 53
plugins:
- name: errors
# Serves a /health endpoint on :8080, required for livenessProbe
- name: health
configBlock: |-
lameduck 5s
# Serves a /ready endpoint on :8181, required for readinessProbe
- name: ready
# Required to query kubernetes API for data
- name: kubernetes
parameters: cluster.local in-addr.arpa ip6.arpa
configBlock: |-
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
# Serves a /metrics endpoint on :9153, required for serviceMonitor
- name: prometheus
parameters: 0.0.0.0:9153
- name: forward
parameters: . /etc/resolv.conf
- name: cache
parameters: 30
- name: loop
- name: reload
- name: loadbalance
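# For reference, the default server block above renders to a Corefile roughly like
# the following (a sketch, assuming the chart's standard Corefile templating):
#   .:53 {
#       errors
#       health {
#           lameduck 5s
#       }
#       ready
#       kubernetes cluster.local in-addr.arpa ip6.arpa {
#           pods insecure
#           fallthrough in-addr.arpa ip6.arpa
#           ttl 30
#       }
#       prometheus 0.0.0.0:9153
#       forward . /etc/resolv.conf
#       cache 30
#       loop
#       reload
#       loadbalance
#   }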
# Complete example with all the options:
# - zones: # the `zones` block can be left out entirely, defaults to "."
# - zone: hello.world. # optional, defaults to "."
# scheme: tls:// # optional, defaults to "" (which equals "dns://" in CoreDNS)
# - zone: foo.bar.
# scheme: dns://
# use_tcp: true # set this parameter to optionally expose the port on tcp as well as udp for the DNS protocol
# # Note that this will not work if you are also exposing tls or grpc on the same server
# port: 12345 # optional, defaults to "" (which equals 53 in CoreDNS)
# plugins: # the plugins to use for this server block
# - name: kubernetes # name of plugin, if used multiple times ensure that the plugin supports it!
# parameters: foo bar # list of parameters after the plugin
# configBlock: |- # if the plugin supports extra block style config, supply it here
# hello world
# foo bar
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
# for example:
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: foo.bar.com/role
# operator: In
# values:
# - master
affinity: {}
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
# for example:
# tolerations:
# - key: foo.bar.com/role
# operator: Equal
# value: master
# effect: NoSchedule
tolerations: []
# https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
podDisruptionBudget: {}
# configure custom zone files as per https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
zoneFiles: []
# - filename: example.db
# domain: example.com
# contents: |
# example.com. IN SOA sns.dns.icann.com. noc.dns.icann.com. 2015082541 7200 3600 1209600 3600
# example.com. IN NS b.iana-servers.net.
# example.com. IN NS a.iana-servers.net.
# example.com. IN A 192.168.99.102
# *.example.com. IN A 192.168.99.102
# optional array of extra volumes to create
extraVolumes: []
# - name: some-volume-name
# emptyDir: {}
# optional array of mount points for extraVolumes
extraVolumeMounts: []
# - name: some-volume-name
# mountPath: /etc/wherever
# optional array of secrets to mount inside coredns container
# possible usecase: need for secure connection with etcd backend
extraSecrets: []
# - name: etcd-client-certs
# mountPath: /etc/coredns/tls/etcd
# - name: some-fancy-secret
# mountPath: /etc/wherever
# Custom labels to apply to the Deployment, Pod, Service, and ServiceMonitor, including the autoscaler resources if enabled.
customLabels: {}
## Alternative configuration for HPA deployment if wanted
#
hpa:
enabled: false
minReplicas: 1
maxReplicas: 2
metrics: {}
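  # For example, to scale on CPU (a sketch; this takes the same shape as the
  # commented autoscaling.metrics block near the top of this file):
  # metrics:
  # - type: Resource
  #   resource:
  #     name: cpu
  #     targetAverageUtilization: 60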
## Configure a cluster-proportional-autoscaler for coredns
# See https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
autoscaler:
  # Enable the cluster-proportional-autoscaler
enabled: false
# Number of cores in the cluster per coredns replica
coresPerReplica: 256
# Number of nodes in the cluster per coredns replica
nodesPerReplica: 16
# Min size of replicaCount
min: 0
# Max size of replicaCount (default of 0 is no max)
max: 0
# Whether to include unschedulable nodes in the nodes/cores calculations - this requires version 1.8.0+ of the autoscaler
includeUnschedulableNodes: false
# If true does not allow single points of failure to form
preventSinglePointFailure: true
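  # With the autoscaler's default "linear" mode, the replica count is computed
  # roughly as (per the cluster-proportional-autoscaler project linked above):
  #   replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica))
  # clamped to [min, max] (a max of 0 means unbounded), and forced to at least 2
  # when preventSinglePointFailure is true and the cluster has more than one node.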
image:
repository: k8s.gcr.io/cluster-proportional-autoscaler-amd64
tag: "1.8.0"
pullPolicy: IfNotPresent
  # Optional priority class to be used for the autoscaler pods. Falls back to the top-level priorityClassName if not set.
priorityClassName: ""
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
affinity: {}
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
tolerations: []
# resources for autoscaler pod
resources:
requests:
cpu: "20m"
memory: "10Mi"
limits:
cpu: "20m"
memory: "10Mi"
# Options for autoscaler configmap
configmap:
## Annotations for the coredns-autoscaler configmap
    # e.g. strategy.spinnaker.io/versioned: "false" to ensure the configmap isn't renamed
annotations: {}
k8sApp: "kube-dns"
global:
systemDefaultRegistry: ""

View File

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,17 @@
apiVersion: v1
appVersion: 0.30.0
description: An nginx Ingress controller that uses ConfigMap to store the nginx configuration.
home: https://github.com/kubernetes/ingress-nginx
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
keywords:
- ingress
- nginx
kubeVersion: '>=1.10.0-0'
maintainers:
- name: ChiefAlexander
- email: Trevor.G.Wood@gmail.com
name: taharah
name: rke2-ingress-nginx
sources:
- https://github.com/kubernetes/ingress-nginx
version: 1.36.300

View File

@ -0,0 +1,6 @@
approvers:
- ChiefAlexander
- taharah
reviewers:
- ChiefAlexander
- taharah

View File

@ -0,0 +1,361 @@
# nginx-ingress
[nginx-ingress](https://github.com/kubernetes/ingress-nginx) is an Ingress controller that uses ConfigMap to store the nginx configuration.
To use, add the `kubernetes.io/ingress.class: nginx` annotation to your Ingress resources.
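For example (a minimal sketch; the host and backend service names are placeholders):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        backend:
          serviceName: example
          servicePort: 80
```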
## TL;DR;
```console
$ helm install stable/nginx-ingress
```
## Introduction
This chart bootstraps an nginx-ingress deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.6+
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install --name my-release stable/nginx-ingress
```
The command deploys nginx-ingress on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the nginx-ingress chart and their default values.
Parameter | Description | Default
--- | --- | ---
`controller.name` | name of the controller component | `controller`
`controller.image.repository` | controller container image repository | `quay.io/kubernetes-ingress-controller/nginx-ingress-controller`
`controller.image.tag` | controller container image tag | `0.30.0`
`controller.image.pullPolicy` | controller container image pull policy | `IfNotPresent`
`controller.image.runAsUser` | User ID of the controller process. Value depends on the Linux distribution used inside of the container image. | `101`
`controller.useComponentLabel` | Whether to add the component label so the HPA can work separately for controller and defaultBackend. *Note: don't change this if you have an already running deployment as it will need the recreation of the controller deployment* | `false`
`controller.containerPort.http` | The port that the controller container listens on for http connections. | `80`
`controller.containerPort.https` | The port that the controller container listens on for https connections. | `443`
`controller.config` | nginx [ConfigMap](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md) entries | none
`controller.hostNetwork` | If the nginx deployment / daemonset should run on the host's network namespace. Do not set this when `controller.service.externalIPs` is set and `kube-proxy` is used as there will be a port-conflict for port `80` | false
`controller.defaultBackendService` | default 404 backend service; needed only if `defaultBackend.enabled = false` and version < 0.21.0| `""`
`controller.dnsPolicy` | If using `hostNetwork=true`, change to `ClusterFirstWithHostNet`. See [pod's dns policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) for details | `ClusterFirst`
`controller.dnsConfig` | custom pod dnsConfig. See [pod's dns config](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) for details | `{}`
`controller.reportNodeInternalIp` | If using `hostNetwork=true`, setting `reportNodeInternalIp=true` will pass the flag `report-node-internal-ip-address` to nginx-ingress. This sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. | `false`
`controller.electionID` | election ID to use for the status update | `ingress-controller-leader`
`controller.extraEnvs` | any additional environment variables to set in the pods | `{}`
`controller.extraContainers` | Sidecar containers to add to the controller pod. See [LemonLDAP::NG controller](https://github.com/lemonldap-ng-controller/lemonldap-ng-controller) as example | `{}`
`controller.extraVolumeMounts` | Additional volumeMounts to the controller main container | `{}`
`controller.extraVolumes` | Additional volumes to the controller pod | `{}`
`controller.extraInitContainers` | Containers, which are run before the app containers are started | `[]`
`controller.ingressClass` | name of the ingress class to route through this controller | `nginx`
`controller.maxmindLicenseKey` | Maxmind license key to download GeoLite2 Databases. See [Accessing and using GeoLite2 database](https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/) | `""`
`controller.scope.enabled` | limit the scope of the ingress controller | `false` (watch all namespaces)
`controller.scope.namespace` | namespace to watch for ingress | `""` (use the release namespace)
`controller.extraArgs` | Additional controller container arguments | `{}`
`controller.kind` | install as Deployment, DaemonSet or Both | `Deployment`
`controller.deploymentAnnotations` | annotations to be added to deployment | `{}`
`controller.autoscaling.enabled` | If true, creates Horizontal Pod Autoscaler | false
`controller.autoscaling.minReplicas` | If autoscaling enabled, this field sets minimum replica count | `2`
`controller.autoscaling.maxReplicas` | If autoscaling enabled, this field sets maximum replica count | `11`
`controller.autoscaling.targetCPUUtilizationPercentage` | Target CPU utilization percentage to scale | `"50"`
`controller.autoscaling.targetMemoryUtilizationPercentage` | Target memory utilization percentage to scale | `"50"`
`controller.daemonset.useHostPort` | If `controller.kind` is `DaemonSet`, this will enable `hostPort` for TCP/80 and TCP/443 | false
`controller.daemonset.hostPorts.http` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the hostPort | `"80"`
`controller.daemonset.hostPorts.https` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the hostPort | `"443"`
`controller.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]`
`controller.affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}`
`controller.terminationGracePeriodSeconds` | how many seconds to wait before terminating a pod | `60`
`controller.minReadySeconds` | how many seconds a pod needs to be ready before killing the next, during update | `0`
`controller.nodeSelector` | node labels for pod assignment | `{}`
`controller.podAnnotations` | annotations to be added to pods | `{}`
`controller.deploymentLabels` | labels to add to the deployment metadata | `{}`
`controller.podLabels` | labels to add to the pod container metadata | `{}`
`controller.podSecurityContext` | Security context policies to add to the controller pod | `{}`
`controller.replicaCount` | desired number of controller pods | `1`
`controller.minAvailable` | minimum number of available controller pods for PodDisruptionBudget | `1`
`controller.resources` | controller pod resource requests & limits | `{}`
`controller.priorityClassName` | controller priorityClassName | `nil`
`controller.lifecycle` | controller pod lifecycle hooks | `{}`
`controller.service.annotations` | annotations for controller service | `{}`
`controller.service.labels` | labels for controller service | `{}`
`controller.publishService.enabled` | if true, the controller will set the endpoint records on the ingress objects to reflect those on the service | `false`
`controller.publishService.pathOverride` | override of the default publish-service name | `""`
`controller.service.enabled` | if disabled, no service will be created. This is especially useful when `controller.kind` is set to `DaemonSet` and `controller.daemonset.useHostPort` is `true` | true
`controller.service.clusterIP` | internal controller cluster service IP (set to `"-"` to pass an empty value) | `nil`
`controller.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the controller service | `false`
`controller.service.externalIPs` | controller service external IP addresses. Do not set this when `controller.hostNetwork` is set to `true` and `kube-proxy` is used as there will be a port-conflict for port `80` | `[]`
`controller.service.externalTrafficPolicy` | If `controller.service.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable [source IP preservation](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport) | `"Cluster"`
`controller.service.sessionAffinity` | Enables client IP based session affinity. Must be `ClientIP` or `None` if set. | `""`
`controller.service.healthCheckNodePort` | If `controller.service.type` is `NodePort` or `LoadBalancer` and `controller.service.externalTrafficPolicy` is set to `Local`, set this to [the managed health-check port the kube-proxy will expose](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport). If blank, a random port in the `NodePort` range will be assigned | `""`
`controller.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""`
`controller.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]`
`controller.service.enableHttp` | if port 80 should be opened for service | `true`
`controller.service.enableHttps` | if port 443 should be opened for service | `true`
`controller.service.targetPorts.http` | Sets the targetPort that maps to the Ingress' port 80 | `80`
`controller.service.targetPorts.https` | Sets the targetPort that maps to the Ingress' port 443 | `443`
`controller.service.ports.http` | Sets service http port | `80`
`controller.service.ports.https` | Sets service https port | `443`
`controller.service.type` | type of controller service to create | `LoadBalancer`
`controller.service.nodePorts.http` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 80 | `""`
`controller.service.nodePorts.https` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 443 | `""`
`controller.service.nodePorts.tcp` | Sets the nodePort for an entry referenced by its key from `tcp` | `{}`
`controller.service.nodePorts.udp` | Sets the nodePort for an entry referenced by its key from `udp` | `{}`
`controller.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 10
`controller.livenessProbe.periodSeconds` | How often to perform the probe | 10
`controller.livenessProbe.timeoutSeconds` | When the probe times out | 5
`controller.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1
`controller.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 3
`controller.livenessProbe.port` | The port number that the liveness probe will listen on. | 10254
`controller.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 10
`controller.readinessProbe.periodSeconds` | How often to perform the probe | 10
`controller.readinessProbe.timeoutSeconds` | When the probe times out | 1
`controller.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1
`controller.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 3
`controller.readinessProbe.port` | The port number that the readiness probe will listen on. | 10254
`controller.metrics.enabled` | if `true`, enable Prometheus metrics | `false`
`controller.metrics.service.annotations` | annotations for Prometheus metrics service | `{}`
`controller.metrics.service.clusterIP` | cluster IP address to assign to service (set to `"-"` to pass an empty value) | `nil`
`controller.metrics.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the metrics service | `false`
`controller.metrics.service.externalIPs` | Prometheus metrics service external IP addresses | `[]`
`controller.metrics.service.labels` | labels for metrics service | `{}`
`controller.metrics.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""`
`controller.metrics.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]`
`controller.metrics.service.servicePort` | Prometheus metrics service port | `9913`
`controller.metrics.service.type` | type of Prometheus metrics service to create | `ClusterIP`
`controller.metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false`
`controller.metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}`
`controller.metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels. | `false`
`controller.metrics.serviceMonitor.namespace` | namespace where servicemonitor resource should be created | `the same namespace as nginx ingress`
`controller.metrics.serviceMonitor.namespaceSelector` | [namespaceSelector](https://github.com/coreos/prometheus-operator/blob/v0.34.0/Documentation/api.md#namespaceselector) to configure what namespaces to scrape | `will scrape the helm release namespace only`
`controller.metrics.serviceMonitor.scrapeInterval` | interval between Prometheus scraping | `30s`
`controller.metrics.prometheusRule.enabled` | Set this to `true` to create prometheusRules for Prometheus operator | `false`
`controller.metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}`
`controller.metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | `the same namespace as nginx ingress`
`controller.metrics.prometheusRule.rules` | Prometheus [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be deployed, in YAML format; check values for an example. | `[]`
`controller.admissionWebhooks.enabled` | Create Ingress admission webhooks. Validating webhook will check the ingress syntax. | `false`
`controller.admissionWebhooks.failurePolicy` | Failure policy for admission webhooks | `Fail`
`controller.admissionWebhooks.port` | Admission webhook port | `8080`
`controller.admissionWebhooks.service.annotations` | Annotations for admission webhook service | `{}`
`controller.admissionWebhooks.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the admission webhook service | `false`
`controller.admissionWebhooks.service.clusterIP` | cluster IP address to assign to admission webhook service (set to `"-"` to pass an empty value) | `nil`
`controller.admissionWebhooks.service.externalIPs` | Admission webhook service external IP addresses | `[]`
`controller.admissionWebhooks.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""`
`controller.admissionWebhooks.service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | `[]`
`controller.admissionWebhooks.service.servicePort` | Admission webhook service port | `443`
`controller.admissionWebhooks.service.type` | Type of admission webhook service to create | `ClusterIP`
`controller.admissionWebhooks.patch.enabled` | If true, will use a pre and post install hooks to generate a CA and certificate to use for validating webhook endpoint, and patch the created webhooks with the CA. | `true`
`controller.admissionWebhooks.patch.image.repository` | Repository to use for the webhook integration jobs | `jettech/kube-webhook-certgen`
`controller.admissionWebhooks.patch.image.tag` | Tag to use for the webhook integration jobs | `v1.0.0`
`controller.admissionWebhooks.patch.image.pullPolicy` | Image pull policy for the webhook integration jobs | `IfNotPresent`
`controller.admissionWebhooks.patch.priorityClassName` | Priority class for the webhook integration jobs | `""`
`controller.admissionWebhooks.patch.podAnnotations` | Annotations for the webhook job pods | `{}`
`controller.admissionWebhooks.patch.nodeSelector` | Node selector for running admission hook patch jobs | `{}`
`controller.customTemplate.configMapName` | configMap containing a custom nginx template | `""`
`controller.customTemplate.configMapKey` | configMap key containing the nginx template | `""`
`controller.addHeaders` | configMap key:value pairs containing [custom headers](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers) added before sending response to the client | `{}`
`controller.proxySetHeaders` | configMap key:value pairs containing [custom headers](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-set-headers) added before sending request to the backends| `{}`
`controller.headers` | DEPRECATED. Use `controller.proxySetHeaders` instead. | `{}`
`controller.updateStrategy` | allows setting of RollingUpdate strategy | `{}`
`controller.configMapNamespace` | The nginx-configmap namespace name | `""`
`controller.tcp.configMapNamespace` | The tcp-services-configmap namespace name | `""`
`controller.udp.configMapNamespace` | The udp-services-configmap namespace name | `""`
`defaultBackend.enabled` | Use default backend component | `true`
`defaultBackend.name` | name of the default backend component | `default-backend`
`defaultBackend.image.repository` | default backend container image repository | `k8s.gcr.io/defaultbackend-amd64`
`defaultBackend.image.tag` | default backend container image tag | `1.5`
`defaultBackend.image.pullPolicy` | default backend container image pull policy | `IfNotPresent`
`defaultBackend.image.runAsUser` | User ID of the controller process. Value depends on the Linux distribution used inside of the container image. By default uses nobody user. | `65534`
`defaultBackend.useComponentLabel` | Whether to add component label so the HPA can work separately for controller and defaultBackend. *Note: don't change this if you have an already running deployment as it will need the recreation of the defaultBackend deployment* | `false`
`defaultBackend.extraArgs` | Additional default backend container arguments | `{}`
`defaultBackend.extraEnvs` | any additional environment variables to set in the defaultBackend pods | `[]`
`defaultBackend.port` | Http port number | `8080`
`defaultBackend.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30
`defaultBackend.livenessProbe.periodSeconds` | How often to perform the probe | 10
`defaultBackend.livenessProbe.timeoutSeconds` | When the probe times out | 5
`defaultBackend.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1
`defaultBackend.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 3
`defaultBackend.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 0
`defaultBackend.readinessProbe.periodSeconds` | How often to perform the probe | 5
`defaultBackend.readinessProbe.timeoutSeconds` | When the probe times out | 5
`defaultBackend.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1
`defaultBackend.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6
`defaultBackend.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]`
`defaultBackend.affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}`
`defaultBackend.nodeSelector` | node labels for pod assignment | `{}`
`defaultBackend.podAnnotations` | annotations to be added to pods | `{}`
`defaultBackend.deploymentLabels` | labels to add to the deployment metadata | `{}`
`defaultBackend.podLabels` | labels to add to the pod container metadata | `{}`
`defaultBackend.replicaCount` | desired number of default backend pods | `1`
`defaultBackend.minAvailable` | minimum number of available default backend pods for PodDisruptionBudget | `1`
`defaultBackend.resources` | default backend pod resource requests & limits | `{}`
`defaultBackend.priorityClassName` | default backend priorityClassName | `nil`
`defaultBackend.podSecurityContext` | Security context policies to add to the default backend | `{}`
`defaultBackend.service.annotations` | annotations for default backend service | `{}`
`defaultBackend.service.clusterIP` | internal default backend cluster service IP (set to `"-"` to pass an empty value) | `nil`
`defaultBackend.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the default backend service | `false`
`defaultBackend.service.externalIPs` | default backend service external IP addresses | `[]`
`defaultBackend.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""`
`defaultBackend.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]`
`defaultBackend.service.type` | type of default backend service to create | `ClusterIP`
`defaultBackend.serviceAccount.create` | if `true`, create a backend service account. Only useful if you need a pod security policy to run the backend. | `true`
`defaultBackend.serviceAccount.name` | The name of the backend service account to use. If not set and `create` is `true`, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. | ``
`imagePullSecrets` | name of Secret resource containing private registry credentials | `nil`
`rbac.create` | if `true`, create & use RBAC resources | `true`
`rbac.scope` | if `true`, do not create & use clusterrole and -binding. Set to `true` in combination with `controller.scope.enabled=true` to disable load-balancer status updates and scope the ingress entirely. | `false`
`podSecurityPolicy.enabled` | if `true`, create & use Pod Security Policy resources | `false`
`serviceAccount.create` | if `true`, create a service account for the controller | `true`
`serviceAccount.name` | The name of the controller service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | ``
`revisionHistoryLimit` | The number of old history to retain to allow rollback. | `10`
`tcp` | TCP service key:value pairs. The value is evaluated as a template. | `{}`
`udp` | UDP service key:value pairs. The value is evaluated as a template. | `{}`
`releaseLabelOverride` | If provided, the value will be used as the `release` label instead of .Release.Name | `""`
These parameters can be passed via Helm's `--set` option
```console
$ helm install stable/nginx-ingress --name my-release \
--set controller.metrics.enabled=true
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install stable/nginx-ingress --name my-release -f values.yaml
```
A useful trick for debugging ingress issues is to increase the logLevel, as described [here](https://github.com/kubernetes/ingress-nginx/blob/master/docs/troubleshooting.md#debug):
```console
$ helm install stable/nginx-ingress --set controller.extraArgs.v=2
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## PodDisruptionBudget
Note that the PodDisruptionBudget resource will only be defined if the replicaCount is greater than one;
otherwise it would make it impossible to evacuate a node. See [gh issue #7127](https://github.com/helm/charts/issues/7127) for more info.
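For example, the following values produce a PodDisruptionBudget (a minimal sketch using parameters from the table above):
```yaml
controller:
  replicaCount: 2
  minAvailable: 1
```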
## Prometheus Metrics
The Nginx ingress controller can export Prometheus metrics.
```console
$ helm install stable/nginx-ingress --name my-release \
--set controller.metrics.enabled=true
```
You can add Prometheus annotations to the metrics service using `controller.metrics.service.annotations`. Alternatively, if you use the Prometheus Operator, you can enable ServiceMonitor creation using `controller.metrics.serviceMonitor.enabled`.
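For example (a sketch combining the options above; `release: prometheus` is a placeholder for whatever label your Prometheus Operator selects on, and `10254` is the controller's metrics port):
```yaml
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
    serviceMonitor:
      enabled: true
      additionalLabels:
        release: prometheus
```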
## nginx-ingress nginx\_status page/stats server
Previous versions of this chart had a `controller.stats.*` configuration block, which is now obsolete due to the following changes in nginx ingress controller:
* in [0.16.1](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0161), the vts (virtual host traffic status) dashboard was removed
* in [0.23.0](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0230), the status page at port 18080 is now a unix socket webserver only available at localhost.
You can use `curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status` inside the controller container to access it locally, or use the snippet from the [nginx-ingress changelog](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0230) to re-enable the HTTP server.
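For example, from outside the pod (a sketch; the namespace and pod name are placeholders):
```console
$ kubectl -n <namespace> exec <controller-pod> -- curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status
```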
## ExternalDNS Service configuration
Add an [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) annotation to the LoadBalancer service:
```yaml
controller:
service:
annotations:
external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.
```
## AWS L7 ELB with SSL Termination
Annotate the controller as shown in the [nginx-ingress l7 patch](https://github.com/kubernetes/ingress-nginx/blob/master/deploy/aws/l7/service-l7.yaml):
```yaml
controller:
service:
targetPorts:
http: http
https: http
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:XX-XXXX-X:XXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
```
## AWS L4 NLB with SSL Redirection
The `ssl-redirect` and `force-ssl-redirect` flags do not work with an AWS Network Load Balancer. You need to turn them off and expose an additional port via `server-snippet` to make redirection work.
NLB port `80` will be mapped to nginx container port `80`, and NLB port `443` will be mapped to nginx container port `8000` (the `special` port). We then use `$server_port` to manage redirection on port `80`:
```yaml
controller:
config:
ssl-redirect: "false" # we use `special` port to control ssl redirection
server-snippet: |
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
containerPort:
http: 80
https: 443
special: 8000
service:
targetPorts:
http: http
https: special
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your-arn"
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```
## AWS route53-mapper
To configure the LoadBalancer service with the [route53-mapper addon](https://github.com/kubernetes/kops/tree/master/addons/route53-mapper), add the `domainName` annotation and `dns` label:
```yaml
controller:
service:
labels:
dns: "route53"
annotations:
domainName: "kubernetes-example.com"
```
## Ingress Admission Webhooks
With nginx-ingress-controller version 0.25+, the nginx ingress controller pod exposes an endpoint that will integrate with the `validatingwebhookconfiguration` Kubernetes feature to prevent bad ingress from being added to the cluster.
Note that nginx-ingress-controller 0.25.* works only with Kubernetes 1.14+; 0.26 fixes [this issue](https://github.com/kubernetes/ingress-nginx/pull/4521).
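To enable the validating webhook, a minimal sketch using the parameters listed in the configuration table:
```yaml
controller:
  admissionWebhooks:
    enabled: true
    failurePolicy: Fail
```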
## Helm error when upgrading: spec.clusterIP: Invalid value: ""
If you are upgrading this chart from a version between 0.31.0 and 1.2.2 then you may get an error like this:
```
Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```
Detail of how and why are in [this issue](https://github.com/helm/charts/pull/13646) but to resolve this you can set `xxxx.service.omitClusterIP` to `true` where `xxxx` is the service referenced in the error.
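For example, if the error references the controller service (a minimal sketch):
```yaml
controller:
  service:
    omitClusterIP: true
```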
As of version `1.26.0` of this chart, simply not providing any clusterIP value means `clusterIP: ""` will not be rendered, so the `invalid: spec.clusterIP: Invalid value: "": field is immutable` error will no longer occur.

View File

@ -0,0 +1,4 @@
controller:
kind: DaemonSet
config:
use-proxy-protocol: "true"

View File

@ -0,0 +1,15 @@
controller:
kind: DaemonSet
service:
type: NodePort
nodePorts:
tcp:
9000: 30090
udp:
9001: 30091
tcp:
9000: "default/test:8080"
udp:
9001: "default/test:8080"

View File

@ -0,0 +1,6 @@
controller:
kind: DaemonSet
addHeaders:
X-Frame-Options: deny
proxySetHeaders:
X-Forwarded-Proto: https

View File

@ -0,0 +1,4 @@
controller:
kind: DaemonSet
service:
type: NodePort

View File

@ -0,0 +1,14 @@
controller:
kind: DaemonSet
service:
type: ClusterIP
tcp:
configMapNamespace: default
udp:
configMapNamespace: default
tcp:
9000: "default/test:8080"
udp:
9001: "default/test:8080"

View File

@ -0,0 +1,10 @@
controller:
kind: DaemonSet
service:
type: ClusterIP
tcp:
9000: "default/test:8080"
udp:
9001: "default/test:8080"

View File

@ -0,0 +1,6 @@
controller:
kind: DaemonSet
tcp:
9000: "default/test:8080"
9001: "default/test:8080"

Some files were not shown because too many files have changed in this diff.