Commit Graph

2 Commits (2dd35d89bb9d8060b628e8ae2dbf39071590a598)

Author SHA1 Message Date
Arvind Iyengar 8dabbb441c Validate that CRDs exist only on a helm install
This commit introduces a slight change to the CRD chart templates so that the check for whether CRDs exist in the cluster only runs when a user runs `helm install`. On a `helm template`, no error is ever thrown, and on a `helm install --dry-run`, an error is only thrown if the CRD is required as part of the chart installation (which is the default behavior of `--dry-run` either way).

It accomplishes this by using the Helm `lookup` function; per the [Helm 3 docs](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/), `lookup` is never executed against the cluster on a `helm install --dry-run` or a `helm template`, so its output is always empty for those commands (i.e. the number of ClusterRoles returned is always 0).

However, real Kubernetes clusters ship with default ClusterRoles, so on an actual `helm install` the lookup returns at least one ClusterRole and the chart then validates that the CRDs are installed (i.e. this covers the most common setup).
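
A minimal sketch of what such a `lookup`-gated validation template can look like, assuming a single required CRD; the CRD name, API versions, and error message below are placeholders, not the exact templates introduced by this commit:

```yaml
# Illustrative sketch only; "examples.crd.example.com" and the message are placeholders.
{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
  {{- /* lookup returns an empty result on `helm template` and `helm install --dry-run`,
         so the validation below only runs against a real cluster on `helm install`. */ -}}
  {{- if not (lookup "apiextensions.k8s.io/v1" "CustomResourceDefinition" "" "examples.crd.example.com") -}}
    {{- required "Required CRDs are missing. Please install the corresponding CRD chart before installing this chart." "" -}}
  {{- end -}}
{{- end -}}
```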
2020-09-16 10:32:12 -07:00
Arvind Iyengar 9b28515507 Add generateCRDChart.assumeOwnershipOfCRDs flag
This commit adds a new flag to the experimental CRD-chart generation feature for charts that need to assume ownership of any existing CRDs within a cluster. It also modifies the existing `prepare-crd` script to use template files stored in the `./scripts/chart-templates/` directory instead of numerous `cat` commands that achieved the same result.
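
The dotted flag name implies a nested YAML setting along these lines (a sketch only; where the flag lives and any surrounding keys depend on the chart's packaging configuration and are assumptions here):

```yaml
# Hypothetical packaging configuration snippet; only the flag name comes from this commit.
generateCRDChart:
  enabled: true                 # assumed companion key for the experimental CRD chart feature
  assumeOwnershipOfCRDs: true   # new flag: take ownership of CRDs that already exist in the cluster
```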

Feature charts with this flag enabled will differ from the normal CRD chart in the following ways:
- Instead of having CRDs from `crd/` in `templates/`, they will be relocated to `crd-manifest/`.
- On render, the CRDs in `crd-manifest` are placed into a ConfigMap that will be deployed on the cluster.
- On install / upgrade / rollback, a pre-install / pre-upgrade / pre-rollback hook Job runs `kubectl apply -f` on the manifest within the crd-manifest ConfigMap (with appropriate RBAC credentials via a ServiceAccount, ClusterRoleBinding, and ClusterRole) to install the CRDs onto the cluster; see the sketch after this list.
- On uninstall, a delete hook Job runs `kubectl delete -f` on the manifest within the crd-manifest ConfigMap (with the same RBAC credentials) to remove the CRDs from the cluster.
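
A minimal sketch of what the apply-side hook Job can look like (resource names and the kubectl image are placeholders; the ServiceAccount, ClusterRole, and ClusterRoleBinding it relies on are generated alongside it and omitted here):

```yaml
# Illustrative sketch of the pre-install/pre-upgrade/pre-rollback hook Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Chart.Name }}-create-crds
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 3
  template:
    spec:
      serviceAccountName: {{ .Chart.Name }}-crd-manager   # placeholder ServiceAccount name
      restartPolicy: OnFailure
      containers:
        - name: apply-crds
          image: rancher/kubectl:v1.20.2                  # placeholder kubectl image
          command: ["kubectl", "apply", "-f", "/etc/config/"]
          volumeMounts:
            - name: crd-manifest
              mountPath: /etc/config
      volumes:
        - name: crd-manifest
          configMap:
            name: {{ .Chart.Name }}-crd-manifest          # placeholder ConfigMap name
```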

At the moment, this will only be used by the `rancher-monitoring` chart.

Related Issue: https://github.com/rancher/rancher/issues/28326
2020-09-09 15:25:13 -07:00