# Kubecost helm chart

Helm chart for the Kubecost project, which monitors and manages Kubernetes resource spend. Please contact team@kubecost.com or visit kubecost.com for more info.
While Helm is the recommended install path, these resources can also be deployed with the following command:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/master/kubecost.yaml --namespace kubecost
```
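For the recommended Helm path, an install typically looks like the following sketch. The release name and namespace are illustrative; check the Kubecost documentation for the current repository URL and options:

```shell
# Add the Kubecost chart repository and refresh the local index
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update

# Install the chart into its own namespace (release name "kubecost" is illustrative)
helm install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace
```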
The following table lists the commonly used configurable parameters of the Kubecost Helm chart and their default values.
| Parameter | Description | Default |
|---|---|---|
| `global.prometheus.enabled` | If false, use an existing Prometheus install. | `true` |
| `prometheus.kube-state-metrics.disabled` | If false, deploy kube-state-metrics for Kubernetes metrics | `false` |
| `prometheus.kube-state-metrics.resources` | Set kube-state-metrics resource requests and limits. | `{}` |
| `prometheus.server.persistentVolume.enabled` | If true, the Prometheus server creates a Persistent Volume Claim. | `true` |
| `prometheus.server.persistentVolume.size` | Prometheus server data Persistent Volume size. Default is sized to retain ~6000 samples per second for 15 days. | `32Gi` |
| `prometheus.server.retention` | Determines when to remove old data. | `15d` |
| `prometheus.server.resources` | Prometheus server resource requests and limits. | `{}` |
| `prometheus.nodeExporter.resources` | Node exporter resource requests and limits. | `{}` |
| `prometheus.nodeExporter.enabled`, `prometheus.serviceAccounts.nodeExporter.create` | If false, do not create the node exporter DaemonSet. | `true` |
| `prometheus.alertmanager.persistentVolume.enabled` | If true, Alertmanager creates a Persistent Volume Claim. | `true` |
| `prometheus.pushgateway.persistentVolume.enabled` | If true, Prometheus Pushgateway creates a Persistent Volume Claim. | `true` |
| `persistentVolume.enabled` | If true, Kubecost creates a Persistent Volume Claim for product config data. | `true` |
| `persistentVolume.size` | PVC size for cost-analyzer | `32.0Gi` |
| `persistentVolume.dbSize` | PVC size for cost-analyzer's flat-file database | `32.0Gi` |
| `ingress.enabled` | If true, an Ingress is created | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.className` | Ingress class name | `{}` |
| `ingress.paths` | Ingress paths | `["/"]` |
| `ingress.hosts` | Ingress hostnames | `[cost-analyzer.local]` |
| `ingress.tls` | Ingress TLS configuration (YAML) | `[]` |
| `networkPolicy.enabled` | If true, create a NetworkPolicy to deny egress | `false` |
| `networkPolicy.costAnalyzer.enabled` | If true, create a network policy for cost-analyzer | `false` |
| `networkPolicy.costAnalyzer.annotations` | Annotations to add to the network policy | `{}` |
| `networkPolicy.costAnalyzer.additionalLabels` | Additional labels to add to the network policy | `{}` |
| `networkPolicy.costAnalyzer.ingressRules` | A list of network policy ingress rules | `null` |
| `networkPolicy.costAnalyzer.egressRules` | A list of network policy egress rules | `null` |
| `networkCosts.enabled` | If true, collect network allocation metrics | `false` |
| `networkCosts.podMonitor.enabled` | If true, a PodMonitor for the network-cost DaemonSet is created | `false` |
| `serviceMonitor.enabled` | Set to true to create a ServiceMonitor for Prometheus Operator | `false` |
| `serviceMonitor.additionalLabels` | Additional labels so the ServiceMonitor is discovered by Prometheus | `{}` |
| `prometheusRule.enabled` | Set to true to create a PrometheusRule for Prometheus Operator | `false` |
| `prometheusRule.additionalLabels` | Additional labels so the PrometheusRule is discovered by Prometheus | `{}` |
| `grafana.resources` | Grafana resource requests and limits. | `{}` |
| `grafana.sidecar.datasources.defaultDatasourceEnabled` | Set to false to disable creation of the Prometheus datasource in Grafana | `true` |
| `serviceAccount.create` | Set to false if you want to create the `kubecost-cost-analyzer` service account on your own | `true` |
| `tolerations` | Node taints to tolerate | `[]` |
| `affinity` | Pod affinity | `{}` |
| `kubecostProductConfigs.productKey.mountPath` | Use instead of `kubecostProductConfigs.productKey.secretname` to declare the path at which the product key file is mounted (e.g. by a secrets provisioner) | N/A |
| `kubecostFrontend.api.fqdn` | Customize the upstream API FQDN | Computed from the service name and namespace |
| `kubecostFrontend.model.fqdn` | Customize the upstream model FQDN | Computed from the service name and namespace |
| `clusterController.fqdn` | Customize the upstream cluster controller FQDN | Computed from the service name and namespace |
| `global.grafana.fqdn` | Customize the upstream Grafana FQDN | Computed from the release name and namespace |
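Any parameter in the table above can be overridden at install or upgrade time with `--set` (or a values file passed via `-f`). A sketch, with illustrative values rather than recommendations:

```shell
# Override retention, PVC size, and ingress on an existing or new release
# (release name "kubecost" and the chosen values are illustrative)
helm upgrade --install kubecost kubecost/cost-analyzer \
  --namespace kubecost \
  --set prometheus.server.retention=30d \
  --set persistentVolume.size=64Gi \
  --set ingress.enabled=true
```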
## Testing

To perform local testing:

- Install kind locally, following its documentation.
- Install ct locally, following its documentation.
- Create a local cluster using kind, with an image tag from the kind Docker registry:

```shell
kind create cluster --image kindest/node:<set-image-tag>
```

- Run ct:

```shell
ct install --chart-dirs="." --charts="." --helm-repo-extra-args="--set=global.prometheus.enabled=false --set=global.grafana.enabled=false"
```