Introduction¶
A Container Storage Interface (CSI) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider (CSP) to perform data management operations on storage resources. The architecture of the CSI driver allows block storage vendors to implement a CSP that follows the CSI specification.
The CSI driver architecture allows a complete separation of concerns between upstream Kubernetes core, SIG Storage (CSI owners), CSI driver author (HPE) and the backend CSP developer.
Tip
The HPE CSI Driver for Kubernetes is vendor agnostic. Any entity may leverage the driver and provide their own Container Storage Provider.
Features and Capabilities¶
CSI gradually matures features and capabilities in the specification at the pace of the community. HPE keeps a close watch on differentiating features that the primary storage family of products may be suitable for implementing in CSI and Kubernetes. HPE experiments early and often. That's why it's sometimes possible to observe a certain feature being available in the CSI driver although it hasn't been announced or isn't documented.
Below is the official table of CSI features we track and deem readily available for use after we've officially tested and validated them in the platform matrix.
Feature | K8s maturity | Since K8s version | HPE CSI Driver |
---|---|---|---|
Dynamic Provisioning | Stable | 1.13 | 1.0.0 |
Volume Expansion | Stable | 1.24 | 1.1.0 |
Volume Snapshots | Stable | 1.20 | 1.1.0 |
PVC Data Source | Stable | 1.18 | 1.1.0 |
Raw Block Volume | Stable | 1.18 | 1.2.0 |
Inline Ephemeral Volumes | Beta | 1.16 | 1.2.0 |
Volume Limits | Stable | 1.17 | 1.2.0 |
Volume Mutator1 | N/A | 1.15 | 1.3.0 |
Generic Ephemeral Volumes | GA | 1.23 | 1.3.0 |
Volume Groups1 | N/A | 1.17 | 1.4.0 |
Snapshot Groups1 | N/A | 1.17 | 1.4.0 |
NFS Server Provisioner1 | N/A | 1.17 | 1.4.0 |
Volume Encryption1 | N/A | 1.18 | 2.0.0 |
Basic Topology3 | Stable | 1.17 | 2.5.0 |
Advanced Topology3 | Stable | 1.17 | Future |
Storage Capacity Tracking | Stable | 1.24 | Future |
Volume Expansion From Source | Stable | 1.27 | Future |
ReadWriteOncePod | Stable | 1.29 | Future |
Volume Populator | Beta | 1.24 | Future |
Volume Health | Alpha | 1.21 | Future |
Cross Namespace Snapshots | Alpha | 1.26 | Future |
Upstream Volume Group Snapshot | Alpha | 1.27 | Future |
Volume Attribute Classes | Alpha | 1.29 | Future |
1 = HPE CSI Driver for Kubernetes specific CSI sidecar. CSP support may vary.
2 = Alpha features are enabled by Kubernetes feature gates and are not formally supported by HPE.
3 = Topology information can only be used to describe accessibility relationships between a set of nodes and a single backend using a `StorageClass`.
Depending on the CSP, it may support a number of different snapshotting, cloning and restoring operations by taking advantage of `StorageClass` parameter overloading. Please see the respective CSP for additional functionality.
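As an illustration, cloning an existing claim uses the standard Kubernetes `dataSource` construct on a new `PersistentVolumeClaim`; the `StorageClass` and claim names below are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-clone
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
  storageClassName: hpe-standard   # hypothetical StorageClass name
  dataSource:
    kind: PersistentVolumeClaim
    name: my-pvc                   # existing claim to clone from
```

Restoring from a snapshot follows the same pattern with `kind: VolumeSnapshot` as the `dataSource`.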
Refer to the official table of feature gates in the Kubernetes docs to find the availability of beta and alpha features. HPE provides limited support on non-GA CSI features. Please file any issues, questions or feature requests here. You may also join our Slack community to chat with HPE folks close to this project. We hang out in #Alletra, #NimbleStorage, #3par-primera and #Kubernetes. Sign up at slack.hpedev.io and log in at hpedev.slack.com.
Tip
Familiarize yourself with the basic requirements below for running the CSI driver on your Kubernetes cluster. It's then highly recommended to continue installing the CSI driver with either a Helm chart or an Operator.
Compatibility and Support¶
These are the combinations HPE has tested and can provide official support services around for each of the CSI driver releases. Each Container Storage Provider has its own requirements in terms of storage platform OS and may have other constraints not listed here.
Note
For Kubernetes 1.12 and earlier, please see the legacy FlexVolume drivers. Note that the FlexVolume drivers are deprecated.
HPE CSI Driver for Kubernetes 2.5.0¶
Release highlights:
- Support for Kubernetes 1.30 and OpenShift 4.16
- Introducing CSI Topology support for `StorageClasses`
- A "Node Monitor" has been added to improve device management
- Support for attempting automatic filesystem repairs in the event of failed mounts (`fsRepair` `StorageClass` parameter)
- Improved handling of iSCSI CHAP credentials
- Added `nfsNodeSelector`, `nfsResourceRequestsCpuM` and `nfsResourceRequestsMemoryMi` `StorageClass` parameters
- New Helm Chart parameters to control resource requests and limits for node, controller and CSP containers
- Reworked image handling in the Helm Chart to improve supportability
- Various improvements in `accessMode` handling
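A `StorageClass` exercising a few of the new 2.5.0 parameters might look like the sketch below. The parameter names come from the release highlights above, but the values, the selector label and the class name are illustrative assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard-repair                     # hypothetical name
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  fsRepair: "true"                              # attempt automatic filesystem repair on failed mounts
  nfsNodeSelector: "csi.hpe.com/hpe-nfs=true"   # hypothetical node label for NFS server placement
  nfsResourceRequestsCpuM: "500"                # NFS server CPU request, in millicores
  nfsResourceRequestsMemoryMi: "512"            # NFS server memory request, in MiB
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Consult the user documentation for the exact value formats accepted by each parameter.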
Upgrade considerations:
- Existing claims provisioned with the NFS Server Provisioner need to be upgraded.
- Current users of CHAP need to review the iSCSI CHAP Considerations
- The `importVol` parameter has been renamed to `importVolumeName` for HPE Alletra Storage MP and Alletra 9000/Primera/3PAR
Note
HPE CSI Driver v2.5.0 is deployed with v2.5.1 of the Helm chart and Operator.
Kubernetes | 1.27-1.301 |
---|---|
Helm Chart | v2.5.1 on ArtifactHub |
Operators | v2.5.1 on OperatorHub, v2.5.1 via OpenShift console |
Worker OS | Red Hat Enterprise Linux2 7.x, 8.x, 9.x; Red Hat CoreOS 4.14-4.16; Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04; SUSE Linux Enterprise Server 15 SP4, SP5, SP6 and SLE Micro4 equivalents |
Platforms3 | Alletra Storage MP5 10.2.x - 10.4.x; Alletra OS 9000 9.3.x - 9.5.x; Alletra OS 5000/6000 6.0.0.x - 6.1.2.x; Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x; Primera OS 4.3.x - 4.5.x; 3PAR OS 3.3.x |
Data protocols | Fibre Channel, iSCSI |
Filesystems | XFS, ext3/ext4, btrfs, NFSv4* |
Release notes | v2.5.0 on GitHub |
Blogs | HPE CSI Driver for Kubernetes 2.5.0: Improved stateful workload resilience and robustness |
* = The HPE CSI Driver for Kubernetes is primarily a block storage driver. It includes an NFS Server Provisioner that allows "ReadWriteMany" `PersistentVolumeClaims` for `volumeMode: Filesystem`.
1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE. While RHEL 7 and its derivatives will work, the host OS has been EOL'd and support is limited.
3 = Learn about each data platform's team support commitment.
4 = SLE Micro nodes may need to be conformed manually; run `transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils` and reboot if the CSI node driver doesn't start.
5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
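Requesting a "ReadWriteMany" volume through the NFS Server Provisioner is an ordinary claim against an NFS-enabled `StorageClass`; the class name below is a hypothetical example:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-scratch
spec:
  accessModes:
    - ReadWriteMany            # served by the NFS Server Provisioner
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: hpe-nfs    # hypothetical StorageClass configured for NFS
```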
HPE CSI Driver for Kubernetes 2.4.2¶
Release highlights:
- Patch release
Kubernetes | 1.26-1.291 |
---|---|
Helm Chart | v2.4.2 on ArtifactHub |
Operators | v2.4.2 on OperatorHub, v2.4.2 via OpenShift console |
Worker OS | Red Hat Enterprise Linux2 7.x, 8.x, 9.x; Red Hat CoreOS 4.12-4.15; Ubuntu 16.04, 18.04, 20.04, 22.04; SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro4 equivalents |
Platforms3 | Alletra Storage MP5 10.2.x - 10.4.x; Alletra OS 9000 9.3.x - 9.5.x; Alletra OS 5000/6000 6.0.0.x - 6.1.2.x; Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x; Primera OS 4.3.x - 4.5.x; 3PAR OS 3.3.x |
Data protocols | Fibre Channel, iSCSI |
Filesystems | XFS, ext3/ext4, btrfs, NFSv4* |
Release notes | v2.4.2 on GitHub |
* = The HPE CSI Driver for Kubernetes is primarily a block storage driver. It includes an NFS Server Provisioner that allows "ReadWriteMany" `PersistentVolumeClaims`.
1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
3 = Learn about each data platform's team support commitment.
4 = SLE Micro nodes may need to be conformed manually; run `transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils` and reboot if the CSI node driver doesn't start.
5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
HPE CSI Driver for Kubernetes 2.4.1¶
Release highlights:
- HPE Alletra Storage MP support
- Kubernetes 1.29 support
- Full KubeVirt, OpenShift Virtualization and SUSE Harvester support for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR
- Full ARM64 support for HPE Alletra 5000/6000 and Nimble Storage
- Support for foreign `StorageClasses` with the NFS Server Provisioner
- SUSE Linux Enterprise Micro OS (SLE Micro) support
Upgrade considerations:
- Existing claims provisioned with the NFS Server Provisioner need to be upgraded.
Kubernetes | 1.26-1.291 |
---|---|
Helm Chart | v2.4.1 on ArtifactHub |
Operators | v2.4.1 on OperatorHub, v2.4.1 via OpenShift console |
Worker OS | Red Hat Enterprise Linux2 7.x, 8.x, 9.x; Red Hat CoreOS 4.12-4.15; Ubuntu 16.04, 18.04, 20.04, 22.04; SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro4 equivalents |
Platforms3 | Alletra Storage MP5 10.2.x - 10.3.x; Alletra OS 9000 9.3.x - 9.5.x; Alletra OS 5000/6000 6.0.0.x - 6.1.2.x; Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x; Primera OS 4.3.x - 4.5.x; 3PAR OS 3.3.x |
Data protocols | Fibre Channel, iSCSI |
Filesystems | XFS, ext3/ext4, btrfs, NFSv4* |
Release notes | v2.4.1 on GitHub |
Blogs | Introducing HPE Alletra Storage MP to HPE CSI Driver for Kubernetes |
* = The HPE CSI Driver for Kubernetes is primarily a block storage driver. It includes an NFS Server Provisioner that allows "ReadWriteMany" `PersistentVolumeClaims`.
1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
3 = Learn about each data platform's team support commitment.
4 = SLE Micro nodes may need to be conformed manually; run `transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils` and reboot if the CSI node driver doesn't start.
5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
HPE CSI Driver for Kubernetes 2.4.0¶
Release highlights:
- Kubernetes 1.27 and 1.28 support
- KubeVirt and OpenShift Virtualization support for Nimble/Alletra 5000/6000
- Enhanced scheduling for the NFS Server Provisioner
- Multiarch images (Linux ARM64/AMD64) for the CSI driver components and Alletra 9000 CSP
- Major updates to SIG Storage images
Upgrade considerations:
- Existing claims provisioned with the NFS Server Provisioner need to be upgraded.
Kubernetes | 1.25-1.281 |
---|---|
Helm Chart | v2.4.0 on ArtifactHub |
Operators | v2.4.0 on OperatorHub, v2.4.0 via OpenShift console |
Worker OS | RHEL2 7.x, 8.x, 9.x; RHCOS 4.12-4.14; Ubuntu 16.04, 18.04, 20.04, 22.04; SLES 15 SP3, SP4, SP5 |
Platforms3 | Alletra OS 9000 9.3.x - 9.5.x; Alletra OS 5000/6000 6.0.0.x - 6.1.1.x; Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x; Primera OS 4.3.x - 4.5.x; 3PAR OS 3.3.x |
Data protocols | Fibre Channel, iSCSI |
Filesystems | XFS, ext3/ext4, btrfs, NFSv4* |
Release notes | v2.4.0 on GitHub |
Blogs | Introduction to new workload paradigms with HPE CSI Driver for Kubernetes |
* = The HPE CSI Driver for Kubernetes is primarily a block storage driver. It includes an NFS Server Provisioner that allows "ReadWriteMany" `PersistentVolumeClaims`.
1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
3 = Learn about each data platform's team support commitment.
Release Archive¶
HPE currently supports up to three minor releases of the HPE CSI Driver for Kubernetes.
Known Limitations¶
- Always check with the Kubernetes vendor distribution which CSI features are available for use and supported by the vendor.
- When using Kubernetes in virtual machines on VMware vSphere, OpenStack or similar, iSCSI is the only supported data protocol for the HPE CSI Driver when using block storage. The CSI driver does not support NPIV.
- Ephemeral, transient or non-persistent Kubernetes nodes are not supported unless the `/etc/hpe-storage` directory persists across node upgrades or reboots. The path is relocatable using a custom Helm chart or deployment manifest by altering the `mountPath` parameter for the directory.
- The CSI driver supports a fixed number of volumes per node. Inspect the current limit by running `kubectl get csinodes -o yaml` and inspecting `.spec.drivers.allocatable` for "csi.hpe.com". The "count" element contains how many volumes the node can attach from the HPE CSI Driver (default is 100).
- The HPE CSI Driver uses host networking for the node driver. Some CNIs have flaky implementations which prevent the CSI driver components from communicating properly. Especially notorious is Flannel on K3s. Use Calico if possible for the widest compatibility.
- The NFS Server Provisioner and each of the CSPs have known limitations listed separately.
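The volume limit can be read straight off the `CSINode` object. The relevant portion of the `kubectl get csinodes -o yaml` output has roughly this shape (node name illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: worker-1                # illustrative node name
spec:
  drivers:
    - name: csi.hpe.com
      allocatable:
        count: 100              # maximum volumes this node can attach from the HPE CSI Driver
```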
iSCSI CHAP Considerations¶
If iSCSI CHAP is being used in the environment, consider the following.
Existing PVs and iSCSI sessions¶
It's not recommended to retrofit CHAP into an existing environment where `PersistentVolumes` are already provisioned and attached. If necessary, all iSCSI sessions need to be logged out from, and the CSI driver Helm chart needs to be installed with cluster-wide iSCSI CHAP credentials for iSCSI CHAP to be effective; otherwise existing non-authenticated sessions will be reused.
CSI driver 2.5.0 and Above¶
In 2.5.0 and later the CHAP credentials must be supplied by a separate `Secret`. The `Secret` may be supplied when installing the Helm Chart (the `Secret` must exist prior to the install) or referenced in the `StorageClass`.
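A sketch of what this could look like. The `Secret` key names and the `StorageClass` parameter names below are hypothetical placeholders; consult the user documentation for the exact schema:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hpe-chap-credentials          # hypothetical Secret name
  namespace: hpe-storage
stringData:
  chapUser: my-chap-user              # hypothetical key names
  chapPassword: at-least-twelve-chars # CHAP secrets should be twelve characters or more
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard-chap
provisioner: csi.hpe.com
parameters:
  chapSecretName: hpe-chap-credentials    # hypothetical parameter names
  chapSecretNamespace: hpe-storage
```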
Upgrade Considerations¶
When using CHAP with 2.4.2 or older, the CHAP credentials were provided in clear text in the Helm Chart. To continue to use CHAP for those existing `PersistentVolumes`, a CHAP `Secret` needs to be created and referenced in the Helm Chart install.
New `StorageClasses` may reference the same `Secret`, but it's recommended to use a different `Secret` to distinguish legacy and new `PersistentVolumes`.
Enable iSCSI CHAP¶
How to enable iSCSI CHAP in the current version of the HPE CSI Driver is available in the user documentation.
CSI driver 1.3.0 to 2.4.2¶
CHAP is an optional part of the initial deployment of the driver, with parameters passed to Helm or the Operator. For object definitions, the `CHAP_USER` and `CHAP_PASSWORD` need to be supplied to the `csi-node-driver`. The CHAP username and secret are picked up in the `hpenodeinfo` Custom Resource Definition (CRD). The CSP is under contract to create the user if it doesn't exist on the backend.
CHAP is a good measure to prevent unauthorized access to iSCSI targets, but it does not encrypt data on the wire. CHAP secrets should be at least twelve characters in length.
CSI driver 1.2.1 and Below¶
In version 1.2.1 and below, the CSI driver did not support CHAP natively. CHAP must be enabled manually on the worker nodes before deploying the CSI driver on the cluster. This also needs to be applied to new worker nodes before they join the cluster.
Kubernetes Feature Gates¶
Different features mature at different rates. Refer to the official table of feature gates in the Kubernetes docs.
The following guidelines apply to the feature gates introduced as alpha in the corresponding version of Kubernetes. For example, `ExpandCSIVolumes` was introduced in 1.14 but is still alpha in 1.15, hence you need to enable that feature gate in 1.15 as well if you want to use it.
Kubernetes 1.13¶
- `--allow-privileged` flag must be set to true for the API server
Kubernetes 1.14¶
- `--allow-privileged` flag must be set to true for the API server
- `--feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true` feature gate flags must be set to true for both the API server and kubelet for resize support
Kubernetes 1.15¶
- `--allow-privileged` flag must be set to true for the API server
- `--feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true` feature gate flags must be set to true for both the API server and kubelet for resize support
- `--feature-gates=CSIInlineVolume=true` feature gate flag must be set to true for both the API server and kubelet for pod inline volumes (Ephemeral Local Volumes) support
- `--feature-gates=VolumePVCDataSource=true` feature gate flag must be set to true for both the API server and kubelet for Volume cloning support
Kubernetes 1.19¶
- `--feature-gates=GenericEphemeralVolume=true` feature gate flag needs to be passed to api-server, scheduler, controller-manager and kubelet to enable Generic Ephemeral Volumes
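A Generic Ephemeral Volume is declared inline in the pod spec using the upstream `ephemeral.volumeClaimTemplate` construct; the `StorageClass` name is a hypothetical example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: hpe-standard   # hypothetical StorageClass name
            resources:
              requests:
                storage: 10Gi
```

The claim is created and deleted together with the pod, unlike a standalone `PersistentVolumeClaim`.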