description:"Specify CSI Driver Snapshotter image tag. Leave blank to autodetect."
type:string
label:Longhorn CSI Driver Snapshotter Image Tag
group:"Longhorn CSI Driver Images"
- variable:privateRegistry.registryUrl
label:Private registry URL
description:"URL of private registry. Leave blank to apply system default registry."
group:"Private Registry Settings"
type:string
default:""
- variable:privateRegistry.registrySecret
label:Private registry secret name
description:"If create a new private registry secret is true, create a Kubernetes secret with this name; else use the existing secret of this name. Use it to pull images from your private registry."
group:"Private Registry Settings"
type:string
default:""
- variable:privateRegistry.createSecret
default:"true"
description:"Create a new private registry secret"
type:boolean
group:"Private Registry Settings"
label:Create Secret for Private Registry Settings
show_subquestion_if:true
subquestions:
- variable:privateRegistry.registryUser
label:Private registry user
description:"User used to authenticate to private registry."
type:string
default:""
- variable:privateRegistry.registryPasswd
label:Private registry password
description:"Password used to authenticate to private registry."
type:password
default:""
- variable:longhorn.default_setting
default:"false"
description:"Customize the default settings before installing Longhorn for the first time. This option will only work if the cluster hasn't installed Longhorn."
label:"Customize Default Settings"
type:boolean
show_subquestion_if:true
group:"Longhorn Default Settings"
subquestions:
- variable:csi.kubeletRootDir
default:
description:"Specify kubelet root-dir. Leave blank to autodetect."
type:string
label:Kubelet Root Directory
group:"Longhorn CSI Driver Settings"
- variable:csi.attacherReplicaCount
type:int
default:3
min:1
max:10
description:"Specify replica count of CSI Attacher. By default 3."
label:Longhorn CSI Attacher replica count
group:"Longhorn CSI Driver Settings"
- variable:csi.provisionerReplicaCount
type:int
default:3
min:1
max:10
description:"Specify replica count of CSI Provisioner. By default 3."
label:Longhorn CSI Provisioner replica count
group:"Longhorn CSI Driver Settings"
- variable:csi.resizerReplicaCount
type:int
default:3
min:1
max:10
description:"Specify replica count of CSI Resizer. By default 3."
label:Longhorn CSI Resizer replica count
group:"Longhorn CSI Driver Settings"
- variable:csi.snapshotterReplicaCount
type:int
default:3
min:1
max:10
description:"Specify replica count of CSI Snapshotter. By default 3."
label:Longhorn CSI Snapshotter replica count
group:"Longhorn CSI Driver Settings"
- variable:defaultSettings.backupTarget
label:Backup Target
description:"The endpoint used to access the backupstore. NFS and S3 are supported."
label:Allow Recurring Job While Volume Is Detached
description:'If this setting is enabled, Longhorn automatically attaches the volume and takes a snapshot/backup when it is time to run a recurring snapshot/backup job.
Note that the volume is not ready for workloads while it is automatically attached; workloads will have to wait until the recurring job finishes.'
description:'Create default Disk automatically only on Nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist. If disabled, the default disk will be created on all new nodes when each node is first added.'
group:"Longhorn Default Settings"
type:boolean
default:"false"
- variable:defaultSettings.defaultDataPath
label:Default Data Path
description:'Default path to use for storing data on a host. By default "/var/lib/longhorn/"'
group:"Longhorn Default Settings"
type:string
default:"/var/lib/longhorn/"
- variable:defaultSettings.defaultDataLocality
label:Default Data Locality
description:'We say a Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.
This setting specifies the default data locality when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `dataLocality` parameter in the StorageClass.
The available modes are:
- **disabled**. This is the default option. There may or may not be a replica on the same node as the attached volume (workload).
- **best-effort**. This option instructs Longhorn to try to keep a replica on the same node as the attached volume (workload). Longhorn will not stop the volume, even if it cannot keep a replica local to the attached volume (workload) due to environment limitations, e.g. not enough disk space, incompatible disk tags, etc.'
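# Illustrative StorageClass (the class name is a placeholder) showing how data
# locality can be requested for volumes provisioned outside the Longhorn UI:
#   apiVersion: storage.k8s.io/v1
#   kind: StorageClass
#   metadata:
#     name: longhorn-best-effort
#   provisioner: driver.longhorn.io
#   parameters:
#     dataLocality: "best-effort"
#     numberOfReplicas: "3"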
description:"If the minimum available disk capacity exceeds the actual percentage of available disk capacity, the disk becomes unschedulable until more space is freed up. By default 25."
group:"Longhorn Default Settings"
type:int
min:0
max:100
default:25
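# Worked example: with the default of 25, a disk with 1000 GiB of total capacity
# stops accepting new replicas once its available space drops below
# 1000 GiB * 25% = 250 GiB.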
- variable:defaultSettings.upgradeChecker
label:Enable Upgrade Checker
description:'Upgrade Checker will check for new Longhorn version periodically. When there is a new version available, a notification will appear in the UI. By default true.'
group:"Longhorn Default Settings"
type:boolean
default:"true"
- variable:defaultSettings.defaultReplicaCount
label:Default Replica Count
description:"The default number of replicas when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3."
description:"The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. By default 'longhorn-static'."
description:"In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups. Set to 0 to disable the polling. By default 300."
description:"In minutes. This setting determines how long Longhorn will keep the backup resource that was failed. Set to 0 to disable the auto-deletion.
Failed backups will be checked and cleaned up during backupstore polling which is controlled by **Backupstore Poll Interval** setting.
Hence this value determines the minimal wait interval of the cleanup. And the actual cleanup interval is multiple of **Backupstore Poll Interval**.
Disabling **Backupstore Poll Interval** also means to disable failed backup auto-deletion."
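# Worked example (assuming a Backupstore Poll Interval of 300 seconds and this TTL
# set to 1440 minutes): a failed backup becomes eligible for deletion after 1440
# minutes and is actually removed on one of the subsequent 300-second polls, so the
# cleanup moment is always a multiple of the poll interval.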
description:"If enabled, volumes will be automatically salvaged when all the replicas become faulty e.g. due to network disconnection.Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true."
label:Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly
description:'If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc.) when a Longhorn volume is detached unexpectedly (e.g. during a Kubernetes upgrade, Docker reboot, or network disconnect). Deleting the pod lets its controller restart it, and Kubernetes then handles volume reattachment and remounting.
If disabled, Longhorn will not delete the workload pod that is managed by a controller. You will have to manually restart the pod to reattach and remount the volume.
**Note:** This setting does not apply to workload pods that do not have a controller; Longhorn never deletes them.'
description:"Allow scheduling new Replicas of Volume to the Nodes in the same Zone as existing healthy Replicas. Nodes don't belong to any Zone will be treated as in the same Zone. Notice that Longhorn relies on label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone. By default true."
description:"Defines the Longhorn action when a Volume is stuck with a StatefulSet/Deployment Pod on a node that is down.
- **do-nothing** is the default Kubernetes behavior of never force deleting StatefulSet/Deployment terminating pods. Since the pod on the node that is down isn't removed, Longhorn volumes are stuck on nodes that are down.
- **delete-statefulset-pod**: Longhorn will force delete StatefulSet terminating pods on nodes that are down to release Longhorn volumes so that Kubernetes can spin up replacement pods.
- **delete-deployment-pod**: Longhorn will force delete Deployment terminating pods on nodes that are down to release Longhorn volumes so that Kubernetes can spin up replacement pods.
- **delete-both-statefulset-and-deployment-pod**: Longhorn will force delete StatefulSet/Deployment terminating pods on nodes that are down to release Longhorn volumes so that Kubernetes can spin up replacement pods."
label:Allow Node Drain with the Last Healthy Replica
description:"By default, Longhorn will block `kubectl drain` action on a node if the node contains the last healthy replica of a volume.
If this setting is enabled, Longhorn will **not** block `kubectl drain` action on a node even if the node contains the last healthy replica of a volume."
group:"Longhorn Default Settings"
type:boolean
default:"false"
- variable:defaultSettings.mkfsExt4Parameters
label:Custom mkfs.ext4 parameters
description:"Allows setting additional filesystem creation parameters for ext4. For older host kernels it might be necessary to disable the optional ext4 metadata_csum feature by specifying `-O ^64bit,^metadata_csum`."
group:"Longhorn Default Settings"
type:string
- variable:defaultSettings.disableReplicaRebuild
label:Disable Replica Rebuild
description:"This setting disable replica rebuild cross the whole cluster, eviction and data locality feature won't work if this setting is true. But doesn't have any impact to any current replica rebuild and restore disaster recovery volume."
description:"In seconds. The interval determines how long Longhorn will wait at least in order to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume.
Warning:This option works only when there is a failed replica in the volume. And this option may block the rebuilding for a while in the case."
description:"This setting controls how many replicas on a node can be rebuilt simultaneously.
Typically, Longhorn can block the replica starting once the current rebuilding count on a node exceeds the limit. But when the value is 0, it means disabling the replica rebuilding.
WARNING:
- The old setting \"Disable Replica Rebuild\" is replaced by this setting.
- Different from relying on replica starting delay to limit the concurrent rebuilding, if the rebuilding is disabled, replica object replenishment will be directly skipped.
- When the value is 0, the eviction and data locality feature won't work. But this shouldn't have any impact to any current replica rebuild and backup restore."
group:"Longhorn Default Settings"
type:int
min:0
default:5
- variable:defaultSettings.disableRevisionCounter
label:Disable Revision Counter
description:"This setting is only for volumes created by UI. By default, this is false meaning there will be a reivision counter file to track every write to the volume. During salvage recovering Longhorn will pick the replica with largest reivision counter as candidate to recover the whole volume. If revision counter is disabled, Longhorn will not track every write to the volume. During the salvage recovering, Longhorn will use the 'volume-head-xxx.img' file last modification time and file size to pick the replica candidate to recover the whole volume."
description:"This setting defines the Image Pull Policy of Longhorn system managed pods, e.g. instance manager, engine image, CSI driver, etc. The new Image Pull Policy will only apply after the system managed pods restart."
label:Concurrent Automatic Engine Upgrade Per Node Limit
description:"This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to default version."
description:"This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when there is no replica in the disk using it."
description:"This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown.
WARNING:
- This recovery only works for the backing image of which the creation type is \"download\".
- File state \"unknown\" means the related manager pods on the pod is not running or the node itself is down/disconnected."
description:"This integer value indicates how many percentage of the total allocatable CPU on each node will be reserved for each engine manager Pod. For example, 10 means 10% of the total CPU on a node will be allocated to each engine manager pod on this node. This will help maintain engine stability during high node workload.
In order to prevent unexpected volume engine crash as well as guarantee a relative acceptable IO performance, you can use the following formula to calculate a value for this setting:
Guaranteed Engine Manager CPU = The estimated max Longhorn volume engine count on a node * 0.1 / The total allocatable CPUs on the node * 100.
The result of above calculation doesn't mean that's the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
If it's hard to estimate the usage now, you can leave it with the default value, which is 12%. Then you can tune it when there is no running workload using Longhorn volumes.
WARNING:
- Value 0 means unsetting CPU requests for engine manager pods.
- Considering the possible new instance manager pods in the further system upgrade, this integer value is range from 0 to 40. And the sum with setting 'Guaranteed Engine Manager CPU' should not be greater than 40.
- One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If current available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. And the new pods with the latest instance manager image will be launched then.
- This global setting will be ignored for a node if the field \"EngineManagerCPURequest\" on the node is set.
- After this setting is changed, all engine manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
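# Worked example (illustrative numbers): a node with 8 allocatable CPUs that is
# expected to run at most 16 volume engines gives
#   16 * 0.1 / 8 * 100 = 20
# so a value of 20 would reserve 20% of the node's CPU for each engine manager pod.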
description:"This integer value indicates how many percentage of the total allocatable CPU on each node will be reserved for each replica manager Pod. 10 means 10% of the total CPU on a node will be allocated to each replica manager pod on this node. This will help maintain replica stability during high node workload.
In order to prevent unexpected volume replica crash as well as guarantee a relative acceptable IO performance, you can use the following formula to calculate a value for this setting:
Guaranteed Replica Manager CPU = The estimated max Longhorn volume replica count on a node * 0.1 / The total allocatable CPUs on the node * 100.
The result of above calculation doesn't mean that's the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
If it's hard to estimate the usage now, you can leave it with the default value, which is 12%. Then you can tune it when there is no running workload using Longhorn volumes.
WARNING:
- Value 0 means unsetting CPU requests for replica manager pods.
- Considering the possible new instance manager pods in the further system upgrade, this integer value is range from 0 to 40. And the sum with setting 'Guaranteed Replica Manager CPU' should not be greater than 40.
- One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If current available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. And the new pods with the latest instance manager image will be launched then.
- This global setting will be ignored for a node if the field \"ReplicaManagerCPURequest\" on the node is set.
- After this setting is changed, all replica manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
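# Worked example (illustrative numbers): the same node with an estimated maximum of
# 16 replicas gives 16 * 0.1 / 8 * 100 = 20; combined with a Guaranteed Engine
# Manager CPU of 20, the sum is 40, which is the upper bound allowed by these settings.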
description:"Enabling this setting will notify Longhorn that the cluster is using Kubernetes Cluster Autoscaler.
Longhorn prevents data loss by only allowing the Cluster Autoscaler to scale down a node that meets all of the following conditions:
- No volume is attached to the node.
- It is not the last node containing a replica of any volume.
- It is not running any backing image component pods.
- It is not running any share manager component pods."
group:"Longhorn Default Settings"
type:boolean
default:false
- variable:defaultSettings.orphanAutoDeletion
label:Orphaned Data Cleanup
description:"This setting allows Longhorn to delete the orphan resource and its corresponding orphaned data automatically like stale replicas. Orphan resources on down or unknown nodes will not be cleaned up automatically."
group:"Longhorn Default Settings"
type:boolean
default:false
- variable:defaultSettings.storageNetwork
label:Storage Network
description:"Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network.
To segregate the storage network, input the pre-existing NetworkAttachmentDefinition in \"<namespace>/<name>\" format.
WARNING:
- The cluster must have Multus pre-installed, and the NetworkAttachmentDefinition IPs must be reachable between nodes.
- DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES. Longhorn will try to block this setting update when there are attached volumes.
- When applying the setting, Longhorn will restart all manager, instance-manager, and backing-image-manager pods."
group:"Longhorn Default Settings"
type:string
default:
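# Illustrative value (namespace and NetworkAttachmentDefinition name are placeholders)
# for segregating Longhorn data traffic onto a Multus-managed network:
#   defaultSettings:
#     storageNetwork: "kube-system/storage-network"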
- variable:persistence.defaultClass
default:"true"
description:"Set as default StorageClass for Longhorn"
label:Default Storage Class
group:"Longhorn Storage Class Settings"
required:true
type:boolean
- variable:persistence.reclaimPolicy
label:Storage Class Retain Policy
description:"Define reclaim policy (Retain or Delete)"
group:"Longhorn Storage Class Settings"
required:true
type:enum
options:
- "Delete"
- "Retain"
default:"Delete"
- variable:persistence.defaultClassReplicaCount
description:"Set replica count for Longhorn StorageClass"
label:Default Storage Class Replica Count
group:"Longhorn Storage Class Settings"
type:int
min:1
max:10
default:3
- variable:persistence.defaultDataLocality
description:"Set data locality for Longhorn StorageClass"
description:'Recurring job selector list for Longhorn StorageClass. Be careful with the quoting of the input. e.g., [{"name":"backup", "isGroup":true}]'
label:Storage Class Recurring Job Selector List
group:"Longhorn Storage Class Settings"
type:string
default:
- variable:persistence.backingImage.enable
description:"Set backing image for Longhorn StorageClass"
group:"Longhorn Storage Class Settings"
label:Default Storage Class Backing Image
type:boolean
default:false
show_subquestion_if:true
subquestions:
- variable:persistence.backingImage.name
description:'Specify a backing image that will be used by Longhorn volumes in the Longhorn StorageClass. If it does not exist, the backing image data source type and data source parameters should be specified so that Longhorn can create the backing image before using it.'
description:'Specify the data source type for the backing image used in Longhorn StorageClass.
If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
WARNING:
- If the backing image name is not specified, setting this field is meaningless.
- For backing image creation with data source type \"upload\", it is recommended to do it via the UI rather than the StorageClass here. Uploading requires sending file data to the Longhorn backend after the object is created, which is complicated to handle manually.'
label:Storage Class Backing Image Data Source Type
description:"Specify the data source parameters for the backing image used in Longhorn StorageClass.
If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
This option accepts a json string of a map. e.g., '{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'.
WARNING:
- If the backing image name is not specified, setting this field is meaningless.
- Be careful of the quotes here."
label:Storage Class Backing Image Data Source Parameters
group:"Longhorn Storage Class Settings"
type:string
default:
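# Illustrative values snippet (the image name is a placeholder; the URL reuses the
# placeholder from the description above) that lets Longhorn create the backing image
# from a download source if it does not exist yet:
#   persistence:
#     backingImage:
#       enable: true
#       name: "example-backing-image"
#       dataSourceType: "download"
#       dataSourceParameters: '{"url": "https://backing-image-example.s3-region.amazonaws.com/test-backing-image"}'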
- variable:ingress.enabled
default:"false"
description:"Expose app using Layer 7 Load Balancer - ingress"
type:boolean
group:"Services and Load Balancing"
label:Expose app using Layer 7 Load Balancer
show_subquestion_if:true
subquestions:
- variable:ingress.host
default:"xip.io"
description:"layer 7 Load Balancer hostname"
type:hostname
required:true
label:Layer 7 Load Balancer Hostname
- variable:ingress.path
default:"/"
description:"If ingress is enabled you can set the default ingress path"
type:string
required:true
label:Ingress Path
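# Illustrative ingress values (the hostname is a placeholder) exposing the Longhorn UI
# through a Layer 7 load balancer:
#   ingress:
#     enabled: true
#     host: "longhorn.example.com"
#     path: "/"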
- variable:service.ui.type
default:"Rancher-Proxy"
description:"Define Longhorn UI service type"
type:enum
options:
- "ClusterIP"
- "NodePort"
- "LoadBalancer"
- "Rancher-Proxy"
label:Longhorn UI Service
show_if:"ingress.enabled=false"
group:"Services and Load Balancing"
show_subquestion_if:"NodePort"
subquestions:
- variable:service.ui.nodePort
default:""
description:"NodePort port number(to set explicitly, choose port between 30000-32767)"