# Kubecost Helm chart
This is the official Helm chart for Kubecost, an enterprise-grade application to monitor and manage Kubernetes spend. Please see the website for more details on what Kubecost can do for you and refer to the official documentation, or contact team@kubecost.com for assistance.
To install via Helm, run the following command.

```sh
helm upgrade --install kubecost -n kubecost --create-namespace \
  --repo https://kubecost.github.io/cost-analyzer/ cost-analyzer \
  --set kubecostToken="aGVsbUBrdWJlY29zdC5jb20=xm343yadf98"
```
Alternatively, add the Helm repository first and scan for updates.

```sh
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update
```

Next, install the chart.

```sh
helm install kubecost kubecost/cost-analyzer -n kubecost --create-namespace \
  --set kubecostToken="aGVsbUBrdWJlY29zdC5jb20=xm343yadf98"
```
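Once the release is installed, a quick sanity check that everything came up is shown below. These are plain Helm and kubectl commands; the deployment name and port assume chart defaults and may differ in your environment.

```sh
# Confirm the release deployed and watch the pods become Ready.
helm status kubecost -n kubecost
kubectl get pods -n kubecost

# Assumes the default release name; the cost-analyzer UI is then at localhost:9090.
kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090
```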
While Helm is the recommended install path for Kubecost, especially in production, Kubecost can alternatively be deployed with a single-file manifest using the following command. Keep in mind that with this method, Kubecost is installed from a development branch and may include unreleased changes.

```sh
kubectl apply -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml
```
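Should you need to remove an install created this way, the same manifest can be passed back to `kubectl delete` (standard kubectl behavior, not specific to Kubecost):

```sh
kubectl delete -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml
```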
The following table lists commonly used configuration parameters for the Kubecost Helm chart and their default values. Please see the values file for the complete set of definable values.
| Parameter | Description | Default |
| --- | --- | --- |
| `global.prometheus.enabled` | If false, use an existing Prometheus install. More info. | `true` |
| `prometheus.kube-state-metrics.disabled` | If false, deploy kube-state-metrics for Kubernetes metrics | `false` |
| `prometheus.kube-state-metrics.resources` | Set kube-state-metrics resource requests and limits. | `{}` |
| `prometheus.server.persistentVolume.enabled` | If true, Prometheus server will create a Persistent Volume Claim. | `true` |
| `prometheus.server.persistentVolume.size` | Prometheus server data Persistent Volume size. Default set to retain ~6000 samples per second for 15 days. | `32Gi` |
| `prometheus.server.retention` | Determines when to remove old data. | `15d` |
| `prometheus.server.resources` | Prometheus server resource requests and limits. | `{}` |
| `prometheus.nodeExporter.resources` | Node exporter resource requests and limits. | `{}` |
| `prometheus.nodeExporter.enabled` `prometheus.serviceAccounts.nodeExporter.create` | If false, do not create the node-exporter daemonset. | `true` |
| `prometheus.alertmanager.persistentVolume.enabled` | If true, Alertmanager will create a Persistent Volume Claim. | `true` |
| `prometheus.pushgateway.persistentVolume.enabled` | If true, Prometheus Pushgateway will create a Persistent Volume Claim. | `true` |
| `persistentVolume.enabled` | If true, Kubecost will create a Persistent Volume Claim for product config data. | `true` |
| `persistentVolume.size` | Define PVC size for cost-analyzer | `32.0Gi` |
| `persistentVolume.dbSize` | Define PVC size for cost-analyzer's flat file database | `32.0Gi` |
| `ingress.enabled` | If true, Ingress will be created | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.className` | Ingress class name | `{}` |
| `ingress.paths` | Ingress paths | `["/"]` |
| `ingress.hosts` | Ingress hostnames | `[cost-analyzer.local]` |
| `ingress.tls` | Ingress TLS configuration (YAML) | `[]` |
| `networkPolicy.enabled` | If true, create a NetworkPolicy to deny egress | `false` |
| `networkPolicy.costAnalyzer.enabled` | If true, create a network policy for cost-analyzer | `false` |
| `networkPolicy.costAnalyzer.annotations` | Annotations to be added to the network policy | `{}` |
| `networkPolicy.costAnalyzer.additionalLabels` | Additional labels to be added to the network policy | `{}` |
| `networkPolicy.costAnalyzer.ingressRules` | A list of network policy ingress rules | `null` |
| `networkPolicy.costAnalyzer.egressRules` | A list of network policy egress rules | `null` |
| `networkCosts.enabled` | If true, collect network allocation metrics. More info. | `false` |
| `networkCosts.podMonitor.enabled` | If true, a PodMonitor for the network-costs daemonset is created | `false` |
| `serviceMonitor.enabled` | Set this to true to create a ServiceMonitor for the Prometheus operator | `false` |
| `serviceMonitor.additionalLabels` | Additional labels that can be used so the ServiceMonitor will be discovered by Prometheus | `{}` |
| `serviceMonitor.relabelings` | Sets Prometheus relabel_configs on the scrape job | `[]` |
| `serviceMonitor.metricRelabelings` | Sets Prometheus metric_relabel_configs on the scrape job | `[]` |
| `prometheusRule.enabled` | Set this to true to create a PrometheusRule for the Prometheus operator | `false` |
| `prometheusRule.additionalLabels` | Additional labels that can be used so the PrometheusRule will be discovered by Prometheus | `{}` |
| `grafana.resources` | Grafana resource requests and limits. | `{}` |
| `grafana.sidecar.datasources.defaultDatasourceEnabled` | Set this to false to disable creation of the Prometheus datasource in Grafana | `true` |
| `serviceAccount.create` | Set this to false if you want to create the service account `kubecost-cost-analyzer` on your own | `true` |
| `tolerations` | Node taints to tolerate | `[]` |
| `affinity` | Pod affinity | `{}` |
| `kubecostProductConfigs.productKey.mountPath` | Use instead of `kubecostProductConfigs.productKey.secretname` to declare the path at which the product key file is mounted (e.g. by a secrets provisioner) | `N/A` |
| `kubecostFrontend.api.fqdn` | Customize the upstream api FQDN | computed in terms of the service name and namespace |
| `kubecostFrontend.model.fqdn` | Customize the upstream model FQDN | computed in terms of the service name and namespace |
| `clusterController.fqdn` | Customize the upstream cluster controller FQDN | computed in terms of the service name and namespace |
| `global.grafana.fqdn` | Customize the upstream Grafana FQDN | computed in terms of the release name and namespace |
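To illustrate how these values compose, below is a hypothetical override file; the hostname and sizes are placeholders rather than recommendations, and every key comes from the table above.

```yaml
# values-custom.yaml -- illustrative overrides only
ingress:
  enabled: true
  hosts:
    - kubecost.example.com   # placeholder hostname
  paths:
    - /

prometheus:
  server:
    retention: 30d           # keep samples twice as long as the 15d default
    persistentVolume:
      size: 64Gi             # sized up from the 32Gi default to match retention

persistentVolume:
  size: 32.0Gi               # Kubecost product config PVC
```

It would then be applied with `helm upgrade --install kubecost kubecost/cost-analyzer -n kubecost -f values-custom.yaml`.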
## Adjusting Log Output

The log output can be customized during deployment by using the `LOG_LEVEL` and/or `LOG_FORMAT` environment variables.
### Adjusting Log Level

Adjusting the log level increases or decreases the verbosity written to the logs. To set the log level to `trace`, the following flag can be added to the `helm` command.

```sh
--set 'kubecostModel.extraEnv[0].name=LOG_LEVEL,kubecostModel.extraEnv[0].value=trace'
```
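The same setting can equivalently live in a values file; this is just the standard Helm mapping of the `--set` flag above.

```yaml
kubecostModel:
  extraEnv:
    - name: LOG_LEVEL
      value: trace
```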
### Adjusting Log Format

Adjusting the log format changes the format in which the logs are output, making it easier for log aggregators to parse and display logged messages. The `LOG_FORMAT` environment variable accepts the values `JSON`, for structured output, and `pretty`, for human-readable output.

| Value | Output |
| --- | --- |
| `JSON` | `{"level":"info","time":"2006-01-02T15:04:05.999999999Z07:00","message":"Starting cost-model (git commit \"1.91.0-rc.0\")"}` |
| `pretty` | `2006-01-02T15:04:05.999999999Z07:00 INF Starting cost-model (git commit "1.91.0-rc.0")` |
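Following the same `extraEnv` indexing pattern as the log-level example, both variables can be combined on one `helm` command; for example, to get trace-level JSON logs:

```sh
--set 'kubecostModel.extraEnv[0].name=LOG_LEVEL,kubecostModel.extraEnv[0].value=trace' \
--set 'kubecostModel.extraEnv[1].name=LOG_FORMAT,kubecostModel.extraEnv[1].value=JSON'
```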
## Testing

To perform local testing, do the following:

- Install kind locally according to its documentation.
- Install ct (chart-testing) locally according to its documentation.
- Create a local cluster using `kind`, with a node image from https://github.com/kubernetes-sigs/kind/releases, e.g. `kindest/node:v1.25.11@sha256:227fa11ce74ea76a0474eeefb84cb75d8dad1b08638371ecf0e86259b35be0c8`:

```sh
kind create cluster --image kindest/node:v1.25.11@sha256:227fa11ce74ea76a0474eeefb84cb75d8dad1b08638371ecf0e86259b35be0c8
```

- Run chart-testing:

```sh
ct install --chart-dirs="." --charts="."
```
- Run the StatefulSet chart-testing scenario:

```sh
# create a multi-node kind config
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# create a kind cluster with the kind config
kind create cluster --name kubecost-statefulset --config kind-config.yaml --image kindest/node:v1.25.11@sha256:227fa11ce74ea76a0474eeefb84cb75d8dad1b08638371ecf0e86259b35be0c8

# deploy an object storage for our testing purposes (https://min.io/docs/minio/kubernetes/upstream/index.html)
curl --silent https://raw.githubusercontent.com/minio/docs/master/source/extra/examples/minio-dev.yaml | sed -e "s/kubealpha.local/kubecost-statefulset-worker/" -e "s%minio server /data%mkdir -p /data/kubecost; minio server /data%" | kubectl apply -f -

# create a headless service to the minio S3 API port
kubectl create service clusterip -n minio-dev minio --tcp=9000:9000 --clusterip="None"

# create our testing namespace
kubectl create namespace kubecost-statefulset

# create the bucket config
cat > etlBucketConfigSecret.yaml <<EOF
type: s3
config:
  bucket: kubecost
  endpoint: minio.minio-dev:9000
  insecure: true
  access_key: minioadmin
  secret_key: minioadmin
EOF

# create the secret with the object-store.yaml
kubectl create secret generic -n kubecost-statefulset object-store --from-file=object-store.yaml=etlBucketConfigSecret.yaml

# start our chart-testing
ct install --namespace kubecost-statefulset --chart-dirs="." --charts="." --helm-extra-set-args="--set=global.prometheus.enabled=true --set=global.grafana.enabled=true --set=kubecostDeployment.leaderFollower.enabled=true --set=kubecostDeployment.statefulSet.enabled=true --set=kubecostDeployment.replicas=2 --set=kubecostModel.etlBucketConfigSecret=object-store"

# cleanup
kind delete cluster --name kubecost-statefulset
```
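Before kicking off `ct install`, it can save a failed run to sanity-check that the object-store secret round-tripped intact. This is plain kubectl, using the secret name and namespace created in the steps above.

```sh
# Decode the secret payload and compare it against etlBucketConfigSecret.yaml.
kubectl get secret object-store -n kubecost-statefulset \
  -o jsonpath='{.data.object-store\.yaml}' | base64 -d
```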