Introduction¶
The HPE Alletra Storage MP B10000, Alletra 9000, Primera and 3PAR Storage Container Storage Provider (CSP) for Kubernetes is part of the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes.
Attention
The HPE Alletra Storage MP B10000 File Service has its own CSP and does not share the same features and capabilities as the block protocol CSP on this page.
- Introduction
- Platform Requirements
- VLUN Templates
- StorageClass Parameters
- VolumeSnapshotClass Parameters
- VolumeGroupClass Parameters
- SnapshotGroupClass Parameters
- Static Provisioning
- Active Peer Persistence Limitations
- Classic Peer Persistence Limitations
- Support
Note
For help getting started with deploying the HPE CSI Driver using HPE Alletra Storage MP B10000, Alletra 9000, Primera or 3PAR storage, check out the tutorial over at HPE Developer.
Platform Requirements¶
Check the corresponding CSI driver version in the compatibility and support table for the latest updates on supported Kubernetes versions, orchestrators and host OSes.
Network Port Requirements¶
The HPE Alletra Storage MP B10000, Alletra 9000, Primera and 3PAR Container Storage Provider requires the following TCP ports to be open inbound to the array from the Kubernetes cluster worker nodes running the HPE CSI Driver for Kubernetes.
Port | Protocol | Description |
---|---|---|
443 | HTTPS | WSAPI (HPE Alletra Storage MP B10000, Alletra 9000 and Primera) |
8080 | HTTPS | WSAPI (HPE 3PAR) |
22 | SSH | Array communication (HPE 3PAR) |
HPE 3PAR
From HPE CSI Driver v2.5.2 onwards, it's recommended to specify "<IP Addr>:443" in the backend Secret for any HPE Alletra Storage MP B10000 derived platform except 3PAR, to avoid using SSH. See Deployment for more information.
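For illustration, a backend Secret using the recommended port notation might look like the sketch below (the service name, address and credentials are assumptions; see Deployment for the authoritative format):
apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-storage
stringData:
  serviceName: primera3par-csp-svc  # assumption: CSP service name from Deployment
  servicePort: "8080"
  backend: "192.168.1.10:443"       # ":443" selects WSAPI and avoids SSH
  username: my-array-user
  password: my-array-password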
User Role Requirements¶
The CSP requires access to a user with either the edit or the super role. It's recommended to use the edit role for security best practices.
Note
LDAP accounts may be used from HPE CSI Driver v2.5.2 onwards.
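For reference, creating a dedicated CSP user with the edit role might look like this on the array CLI (the user name, password and the "all" domain scope are illustrative assumptions):
cli% createuser -c my-csp-password my-csp-user all edit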
Virtual Domains¶
Virtual Domains are not yet fully supported by the CSP. From HPE CSI Driver v2.5.0, it's possible to manually create the Kubernetes hosts connecting to storage within the Virtual Domain. Once the hosts have been created, deploy the CSI driver with the Helm chart using the "disableHostDeletion" parameter set to "true". The Virtual Domain user may create the hosts through the Virtual Domain if the "AllowDomainUsersAffectNoDomain" parameter is set to either "hostonly" or "yes" on the array.
Detailed steps to use Virtual Domains¶
These steps assume access to the storage platform with privileges to create domains and change settings.
Log in to the storage platform with SSH. Create a new domain:
cli% createdomain -comment "This is a test domain." my-kubernetes-domain-0
Then, create a new user and assign it to the domain. These credentials will be used by the CSI driver.
cli% createuser -c my-password-0 domain-user-0 my-kubernetes-domain-0 edit
Next, make sure domain users are allowed to create hosts outside the domain.
cli% setsys AllowDomainUsersAffectNoDomain hostonly
Tip
Hosts can be created manually at any point. Make sure the name of the host matches the name of each of the compute (worker) nodes in the Kubernetes cluster.
The next steps involve installing the HPE CSI Driver for Kubernetes with disableHostDeletion set to true. The steps to supply the parameter depend on whether the Helm chart or the Operator is being used (see the example after the list).
- Helm chart install from ArtifactHub.io.
- Operator install for OpenShift.
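For a Helm chart install, the parameter can be supplied on the command line, for example:
helm install --create-namespace -n hpe-storage my-hpe-csi-driver \
  --set disableHostDeletion=true hpe-storage/hpe-csi-driver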
Once the CSI driver is installed and running, add an HPE storage backend with the credentials provided in the steps above.
Note
Remote Copy Groups managed by the CSP have not been tested with Virtual Domains at this time.
Limitations¶
These are the generally known limitations of the CSP.
- The CSP has been tested using iSCSI with up to 250 VolumeAttachments per compute node. HPE recommends not exceeding 200 VolumeAttachments per node to leave headroom for emergencies. It's always recommended to test the upper bounds before deploying to production. Increasing the "maxVolumesPerNode" parameter from the default of 100 is explained in the Helm chart; see the example below.
- Compute node hostnames may not exceed 27 characters. The storage platform limitation is 31 characters, and since HPE CSI Driver 3.0.0 the node name carries a 4 character protocol prefix such as "iqn-" or "wwn-". Further, the CSP truncates the domain name from the Kubernetes node name.
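As a hedged example, raising the limit at install time (assuming "maxVolumesPerNode" is exposed as a chart value as described in the Helm chart documentation; the value shown is illustrative):
helm upgrade --install --create-namespace -n hpe-storage my-hpe-csi-driver \
  --set maxVolumesPerNode=200 hpe-storage/hpe-csi-driver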
VLUN Templates¶
A VLUN template enables the export of a virtual volume as a VLUN to hosts. For more information, see the HPE Primera OS Command Line Interface - Installation and Reference Guide.
The CSP supports the following types of VLUN templates:
Template | Description |
---|---|
Matched set | The default VLUN template. The VLUN is visible to initiators with the host's WWNs only on the specified port(s). |
Host sees | The VLUN is visible to the initiators with any of the host's WWNs. |
The boolean string "hostSeesVLUN" StorageClass parameter controls which VLUN template to use.
Recommendation
In most scenarios, "hostSeesVLUN" should be set to "true".
Change VLUN Template for existing PVCs¶
To modify an existing PVC, "hostSeesVLUN" needs to be specified with the "allowMutations" parameter along with adding the PVC annotation "csi.hpe.com/hostSeesVLUN" with the string value of either "true" or "false". The HPE CSI Driver creates the VLUN template based upon the hostSeesVLUN parameter during the volume publish operation. For the change to take effect, the Pod will need to be scheduled on another node by either deleting the Pod or draining the node.
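As an illustration, assuming "hostSeesVLUN" is listed in "allowMutations" in the StorageClass, the annotation can be applied non-interactively:
kubectl annotate pvc/my-pvc csi.hpe.com/hostSeesVLUN="true"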
StorageClass Parameters¶
All parameters enumerated reflect the current version and may contain unannounced features and capabilities.
Example default StorageClass (download):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
# There can only be one default StorageClass per cluster
storageclass.kubernetes.io/is-default-class: "true"
name: hpe-standard
parameters:
csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
csi.storage.k8s.io/node-publish-secret-name: hpe-backend
csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
csi.storage.k8s.io/node-stage-secret-name: hpe-backend
csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
csi.storage.k8s.io/provisioner-secret-name: hpe-backend
csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
csi.storage.k8s.io/fstype: xfs
accessProtocol: iscsi
description: Volume created by the HPE CSI Driver for Kubernetes
cpg: SSD_r6
snapCpg: SSD_r6
hostSeesVLUN: "true"
provisioningType: tpvv
# compression: "true" # For 3PAR only
provisioner: csi.hpe.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
Hint
If all mutable parameters have values provided during provisioning of the PersistentVolumes, the Volume Mutator will later allow changes if needed.
Common Provisioning Parameters¶
Parameter | Option | Description |
---|---|---|
accessProtocol (Required) | fc or iscsi | The access protocol to use when attaching the persistent volume. |
cpg 1 | Text | The name of existing CPG to be used for volume provisioning. If the cpg parameter is not specified, the CSP will select a CPG available to the array. |
snapCpg 1 | Text | The name of the snapshot CPG to be used for volume provisioning. Defaults to value of cpg if not specified. |
compression 1 | Boolean | Indicates that the volume should be compressed. (3PAR only) |
provisioningType 1 | tpvv | Default. Indicates Thin provisioned volume type. |
 | full 3 | Indicates Full provisioned volume type. |
 | dedup 3 | Indicates Thin Deduplication volume type. |
 | reduce 4 | Indicates Data Reduction volume type. |
hostSeesVLUN | Boolean | Enable "host sees" VLUN template. |
importVolumeName | Text | Name of the volume to import. |
importVolAsClone | Text | Name of the volume to clone and import. |
cloneOf 2 | Text | Name of the PersistentVolumeClaim to clone. |
virtualCopyOf 2 | Text | Name of the PersistentVolumeClaim to snapshot. |
qosName | Text | Name of the volume set which has QoS rules applied. |
iscsiPortalIps | Text | Comma separated list of the array iSCSI port IPs. |
fcPortsList | Text | Comma separated list of available FC ports. Example: "0:5:1,1:4:2,2:4:1,3:4:2" Default: Use all available ports. |
Restrictions applicable when using the CSI volume mutator:
1 = Parameters that are editable after provisioning.
2 = Volumes with snapshots/clones can't be modified.
3 = HPE 3PAR only parameter.
4 = HPE Primera/Alletra 9000 only parameter.
Please see using the HPE CSI Driver for additional StorageClass examples like CSI snapshots and clones.
Important
The HPE CSI Driver allows the PersistentVolumeClaim to override the StorageClass parameters by annotating the PersistentVolumeClaim. Please see Using PVC Overrides for more details.
Cloning Parameters¶
The CSP supports two modes of cloning. Either use cloneOf and reference a PersistentVolumeClaim in the current namespace to clone, or use importVolAsClone and reference an array volume name to clone and import into the Kubernetes cluster. Volumes with clones are immutable once created.
Parameter | Option | Description |
---|---|---|
cloneOf | Text | The name of the PersistentVolumeClaim to be cloned. cloneOf and importVolAsClone are mutually exclusive. |
importVolAsClone | Text | The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. |
accessProtocol | fc or iscsi | The access protocol to use when attaching the cloned volume. |
Important
• No other parameters are required in the StorageClass while cloning outside of those parameters listed in the table above.
• Cloning using the above parameters is independent of snapshot CRD availability on Kubernetes and can be performed on any supported Kubernetes version.
• Support for importVolAsClone and cloneOf is available from HPE CSI Driver 1.3.0+.
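For illustration, a minimal StorageClass that clones an existing PVC in the current namespace might look like this sketch (names are assumptions; the backend Secret parameters from the default example above are omitted for brevity):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-clone-my-pvc
provisioner: csi.hpe.com
reclaimPolicy: Delete
parameters:
  accessProtocol: iscsi
  cloneOf: my-source-pvc  # PVC in the same Namespace as the new PVC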
Array Snapshot Parameters¶
During the snapshotting process, any existing PersistentVolumeClaim defined in the virtualCopyOf parameter within a StorageClass will be snapped as a PersistentVolumeClaim, exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Volumes with snapshots are immutable once created.
Parameter | Option | Description |
---|---|---|
accessProtocol | fc or iscsi | The access protocol to use when attaching the snapshot volume. |
virtualCopyOf | Text | The name of the existing PersistentVolumeClaim to be snapped. |
Important
• No other parameters are required in the StorageClass when snapshotting a volume outside of those parameters listed in the table above.
• Snapshotting using virtualCopyOf is independent of snapshot CRD availability on Kubernetes and can be performed on any supported Kubernetes version.
• Support for virtualCopyOf is available from HPE CSI Driver 1.3.0+.
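Similarly, a sketch of a StorageClass that snapshots an existing PVC (names are assumptions; Secret parameters omitted for brevity):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-snapshot-my-pvc
provisioner: csi.hpe.com
reclaimPolicy: Delete
parameters:
  accessProtocol: iscsi
  virtualCopyOf: my-source-pvc  # PVC in the current Namespace to snapshot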
Import Parameters¶
During the import volume process, any legacy (non-container) volume defined in the importVolumeName parameter within a StorageClass will be renamed to match the PersistentVolumeClaim that leverages the StorageClass. The new volumes will be exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Note: All previous Access Control Records and Initiator Groups will be removed from the volume when it is imported.
Parameter | Option | Description |
---|---|---|
accessProtocol | fc or iscsi | The access protocol to use when importing the volume. |
importVolumeName | Text | The name of the array volume to import. |
Important
• No other parameters are required in the StorageClass when importing a volume outside of those parameters listed in the table above.
• Support for importVolumeName is available from HPE CSI Driver 1.2.0+.
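A sketch of a StorageClass importing a legacy array volume (the volume name is an assumption; Secret parameters omitted for brevity):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-import-my-volume
provisioner: csi.hpe.com
reclaimPolicy: Delete
parameters:
  accessProtocol: fc
  importVolumeName: my-legacy-volume  # existing virtual volume on the array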
Peer Persistence Configuration¶
The HPE Alletra Storage MP B10000 CSP supports two modes of performing synchronous replication between two arrays, Active Peer Persistence (APP) and Classic Peer Persistence (CPP), using Remote Copy Groups (RCGs).
Mode | Supported Platforms | Description |
---|---|---|
APP | HPE Alletra Storage MP B10000 | Fully automated disaster recovery and workload failover with symmetric topology up to campus distance while requiring a third site for quorum. Restrictions apply, see Active Peer Persistence Limitations |
CPP | HPE Alletra 9000, HPE Primera, HPE 3PAR | Data path resilience only, no workload failover. See Classic Peer Persistence Limitations |
To enable replication within the HPE CSI Driver, the following steps must be completed:
- Create Secrets for both the primary and target array. Refer to Configuring Additional Storage Backends.
- Create a replication HPEReplicationDeviceInfos CRD.
- Create a replication enabled StorageClass.
A CustomResourceDefinition (CRD) of type hpereplicationdeviceinfos.storage.hpe.com must be created to define the target array information. The CRD resource name will be used to define the StorageClass parameter "replicationDevices".
apiVersion: storage.hpe.com/v2
kind: HPEReplicationDeviceInfo
metadata:
name: my-peer-persistence-target
namespace: hpe-storage
spec:
target_array_details:
- targetCpg: <CPG name>
targetSnapCpg: <Snap CPG name> # optional
targetName: <Target array name>
targetSecret: <Target Secret name>
targetSecretNamespace: <Target Secret Namespace>
Info
The "targetCpg" and "targetSnapCpg" names might be difficult to find on newer systems. On those systems the default name is "SSD_r6", if multiple CPGs are present on the system, use showcpg
in the CLI to list the CPGs. The "targetName" can be listed on the primary using showrcopy targets
. The "targetSnapCpg" parameter is not applicable for HPE Alletra Storage MP B10000 and should be omitted.
Next, review and perform the prerequisites for either Active Peer Persistence or Classic Peer Persistence:
Active Peer Persistence Prerequisites¶
Explaining all the requirements for using Active Peer Persistence is beyond the scope of this document. Be familiar with the Active Peer Persistence limitations with the HPE CSI Driver before proceeding.
When using Active Peer Persistence, the CSP can't be allowed to manage the hosts on the backend arrays. While it's capable of creating hosts, the hosts need to be admitted to RCGs manually to ensure the correct parameters are applied (see Manual Host and RCG Creation).
Important
The rest of this section assumes familiarity with the HPE Active Peer Persistence technical white paper and the terminology used therein.
HPE CSI Driver installation parameters¶
To prevent the CSP from deleting unused hosts, the HPE CSI Driver needs to be installed with the Helm chart or Operator using the "disableHostDeletion" parameter set to "true".
Helm chart install example:
helm install --create-namespace -n hpe-storage my-hpe-csi-driver --set disableHostDeletion=true hpe-storage/hpe-csi-driver
Tip
When installing using the Operator, the parameter is part of the HPECSIDriver manifest (see the sketch below).
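A sketch of the relevant portion of the HPECSIDriver resource (the surrounding fields and resource name are assumptions; consult the Operator documentation for the full manifest):
apiVersion: storage.hpe.com/v1
kind: HPECSIDriver
metadata:
  name: hpecsidriver-sample
  namespace: hpe-storage
spec:
  disableHostDeletion: true  # prevent the CSP from deleting unused hosts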
Manual Host and RCG Creation¶
The CSP is at this time unable to add hosts with the correct proximity to RCGs. This is due to an API limitation that will be removed in a future version of the HPE CSI Driver.
Create the hosts (iSCSI example):
createhost -iscsi iqn-my-compute-node-1 iqn.1994-05.com.redhat:my-compute-node-1
createhost -iscsi iqn-my-compute-node-2 iqn.1994-05.com.redhat:my-compute-node-2
createhost -iscsi iqn-my-compute-node-3 iqn.1994-05.com.redhat:my-compute-node-3
Important
Since HPE CSI Driver 3.0.0, initiator host names need to be prefixed with the protocol, i.e. "iqn-" for iSCSI and "wwn-" for Fibre Channel.
Create the RCG and set the correct policy:
creatercopygroup -usr_cpg SSD_r6 my-target-FC:SSD_r6 my-csi-rcg my-target-FC:sync
setrcopygroup pol active_active my-csi-rcg
Admit the hosts to the RCG:
admitrcopyhost -proximity all my-csi-rcg iqn-my-compute-node-1
admitrcopyhost -proximity all my-csi-rcg iqn-my-compute-node-2
admitrcopyhost -proximity all my-csi-rcg iqn-my-compute-node-3
Verify that the hostset exists on both the primary and target arrays:
showhostset RH2_my-csi-rcg*
Example output:
Id Name Members
102 RH2_my-csi-rcg iqn-my-compute-node-1
iqn-my-compute-node-2
iqn-my-compute-node-3
----------------------------------------
1 total 3
Id Name Members
374 RH2_my-csi-rcg.r123352 iqn-my-compute-node-1
iqn-my-compute-node-2
iqn-my-compute-node-3
----------------------------------------
1 total 3
It's highly desirable for an Active Peer Persistence protected workload to be rescheduled during a site outage. To allow Kubernetes to reschedule the workloads that ran on a partitioned host, the HPE CSI Driver Pod Monitor needs to remove the VolumeAttachment from the partitioned host.
Learn how to label Pods to be monitored by the HPE CSI Driver (a sketch follows after the hint):
Hint
This construct also applies to KubeVirt virtual machines, not just containers.
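A sketch of a Deployment Pod template labeled for the Pod Monitor; the "monitored-by: hpe-csi" label key is an assumption drawn from the Pod Monitor documentation, and the image is hypothetical:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        monitored-by: hpe-csi  # assumption: label watched by the HPE CSI Driver Pod Monitor
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest  # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-replicated-pvc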
StorageClass Parameters for Active Peer Persistence¶
Due to an API limitation and the necessary manual steps to pre-create all the storage resources prior to creating PVCs, it's not currently recommended to add replication parameters directly in the StorageClass. Instead, PVCs are annotated with the necessary parameters, either during creation or afterwards.
This section will be updated in a future revision; meanwhile, see Add Non-Replicated Volume to Remote Copy Group.
Classic Peer Persistence Prerequisites¶
If using existing RCGs for replication, the RCGs need to have the correct sync mode policy applied, "path_management". Periodic sync mode is not supported at this time.
setrcopygroup pol path_management my-csi-rcg
Be familiar with the limitations of the Classic Peer Persistence integration with the HPE CSI Driver before proceeding.
Hint
Learn about Remote Copy and Peer Persistence mode in the HPE Alletra 9000: Getting started with data replication using Remote Copy and the CLI document.
For a tutorial on how to enable Classic Peer Persistence, check out the blog Enabling Remote Copy using the HPE CSI Driver for Kubernetes on HPE Primera.
StorageClass Parameters for Classic Peer Persistence¶
These StorageClass parameters are applicable only for replication; "remoteCopyGroup" and "replicationDevices" are mandatory. If the RCG defined in "remoteCopyGroup" doesn't exist on the array, a new RCG will be created.
Parameter | Option | Description |
---|---|---|
remoteCopyGroup | Text | Name of new or existing RCG 1 on the array. |
replicationDevices | Text | Indicates name of hpereplicationdeviceinfos Custom Resource Definition (CRD). |
allowBatchReplicatedVolumeCreation | Boolean | Enable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group. (Optional) During this process, the Remote Copy group is stopped and started once. |
oneRcgPerPvc | Boolean | Creates a dedicated Remote Copy group per persistent volume. (Optional) |
1 = Existing RCGs must have local CPG and target CPG configured along with the correct policy, "path_management".
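For illustration, a replication enabled StorageClass would add the following to the common parameters shown earlier (the names reference the HPEReplicationDeviceInfo example above):
...
parameters:
  accessProtocol: iscsi
  remoteCopyGroup: my-csi-rcg
  replicationDevices: my-peer-persistence-target
...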
Important
RCGs created by the HPE CSI Driver apply the correct policies according to the replication mode supported by the system. Any changes to those policies must be made manually by a storage administrator and are highly discouraged.
Add Non-Replicated Volume to Remote Copy Group¶
In order to add an existing PVC to an RCG, the StorageClass utilizes the Volume Mutator. PVC overrides may be used for creating PVCs with replication.
Important
Using either mutations or overrides is the recommended workflow for Active Peer Persistence.
In the "parameter" section of the StorageClass
, add the following (example):
...
parameters:
allowOverrides: replicationPolicy,remoteCopyGroup,replicationDevices
allowMutations: replicationPolicy,remoteCopyGroup,replicationDevices
...
Tip
It's entirely possible to only use "allowOverrides" to prevent users from tampering with existing PVCs.
Description of the parameters:
Parameter | Option | Description |
---|---|---|
replicationPolicy | Text | Set to "active" for Active Peer Persistence and omit or change to empty string if using Classic Peer Persistence. |
remoteCopyGroup | Text | Name of existing RCG. |
replicationDevices | Text | Name of HPEReplicationDeviceInfo CRD. |
oneRcgPerPvc | Boolean | Creates a dedicated RCG per PVC. (Optional) |
Note
"remoteCopyGroup" and "oneRcgPerPvc" parameters are mutually exclusive and cannot be added together when editing a PVC
.
Learn in the using section how to apply overrides and mutations in more detail. As an example for replication, this PVC, when either created or edited, adds the underlying PersistentVolume to the "my-csi-rcg" RCG using Active Peer Persistence with the "my-peer-persistence-target" HPEReplicationDeviceInfo.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-replicated-pvc
annotations:
csi.hpe.com/replicationPolicy: active
csi.hpe.com/remoteCopyGroup: my-csi-rcg
csi.hpe.com/replicationDevices: my-peer-persistence-target
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 374Gi
storageClassName: hpe-standard
The same "edit" operation may be performed non-interactive with kubectl
:
kubectl annotate pvc/my-replicated-pvc \
csi.hpe.com/replicationPolicy=active \
csi.hpe.com/remoteCopyGroup=my-csi-rcg \
csi.hpe.com/replicationDevices=my-peer-persistence-target
The underlying PersistentVolume is now being replicated.
Verification
The "VolumeEditSuccessful" event will be visible on the PVC if the mutation is successful, kubectl events --for pvc/my-replicated-pvc
.
VolumeSnapshotClass Parameters¶
These parameters are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster; this is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable.
How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots.
Parameter | Option | Description |
---|---|---|
read_only | Boolean | Indicates if the snapshot is writable on the array. |
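A minimal sketch of a VolumeSnapshotClass, assuming the standard external-snapshotter secret parameters and the backend Secret names used in the examples above:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  read_only: "false"
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage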
VolumeGroupClass Parameters¶
In HPE CSI Driver version 1.4.0+, a volume set with QoS settings can be created dynamically using the QoS parameters for the VolumeGroupClass. The following parameters are available for a VolumeGroup on the array. Learn more about VolumeGroups in the provisioning concepts documentation.
Parameter | String | Description |
---|---|---|
description | Text | An identifier to describe the VolumeGroupClass . Example: "My VolumeGroupClass" |
domain | Text | The array Virtual Domain with which the volume group and related objects are associated. Example: "sample_domain" |
bwMaxLimitKb | Text | Bandwidth maximum limit in kilobytes per second for the target volume set. Example: "30000" |
priority 1 | Text | The priority level for the target volume set. Example: "low", "normal", "high" |
ioMinGoal 1 | Text | IOPS minimum goal for the target volume set. Example: "300" |
ioMaxLimit | Text | IOPS maximum limit for the target volume set. Example: "10000" |
bwMinGoalKb 1 | Text | Bandwidth minimum goal in kilobytes per second for the target volume set. Example: "300" |
latencyGoal 1 | Text | Latency goal in milliseconds (ms) or microseconds (us) for the target volume set. Example: "300ms" or "500us" |
1 = Parameter is deprecated and has no effect on HPE Alletra Storage MP B10000 10.5 and later.
Important
All QoS parameters supported by the platform are mandatory when creating a VolumeGroupClass.
Example for HPE Primera:
apiVersion: storage.hpe.com/v1
kind: VolumeGroupClass
metadata:
name: my-volume-group-class
provisioner: csi.hpe.com
deletionPolicy: Delete
parameters:
description: "HPE CSI Driver for Kubernetes Volume Group"
csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage
priority: normal
ioMinGoal: "300"
ioMaxLimit: "10000"
bwMinGoalKb: "3000"
bwMaxLimitKb: "30000"
latencyGoal: "300ms"
Important
In certain situations where VolumeGroups are used, the StorageClass needs to have parameters.fsCreateOptions: "-K" set to work around a data path issue. A symptom of this issue is prolonged staging of newly provisioned PersistentVolumes.
SnapshotGroupClass Parameters¶
These parameters are for SnapshotGroupClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster; this is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable.
How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots.
Parameter | Option | Description |
---|---|---|
read_only | Boolean | Indicates if the snapshot is writable on the array. |
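A minimal sketch of a SnapshotGroupClass; the field naming is an assumption mirroring VolumeGroupClass and VolumeSnapshotClass, and the backend Secret parameters are omitted (see using CSI snapshots for the authoritative keys):
apiVersion: storage.hpe.com/v1
kind: SnapshotGroupClass
metadata:
  name: my-snapshot-group-class
snapshotter: csi.hpe.com  # assumption: mirrors the VolumeSnapshotClass "driver" field
deletionPolicy: Delete
parameters:
  read_only: "false"
  # backend Secret parameters omitted; consult using CSI snapshots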
Static Provisioning¶
Static provisioning of PVs and PVCs may be used when absolute control over physical volumes is required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass.
Prerequisites¶
The CSP expects a certain naming convention for PersistentVolumes and Virtual Volumes on the array.
- Persistent Volume: pvc-00000000-0000-0000-0000-000000000000
- Virtual Volume: pvc-00000000-0000-0000-0000-000
Note
The zeroes are used as examples. They can be replaced with any hexadecimal character from 0 to f. Establishing a scheme may be important if static provisioning is going to be the main method of providing persistent storage to workloads.
The following example uses the above scheme as a naming convention. Have a storage administrator rename the existing Virtual Volume on the array:
setvv -name pvc-00000000-0000-0000-0000-000 my-existing-virtual-volume
HPEVolumeInfo¶
Create a new HPEVolumeInfo resource.
apiVersion: storage.hpe.com/v2
kind: HPEVolumeInfo
metadata:
name: pvc-00000000-0000-0000-0000-000000000000
spec:
record:
Id: pvc-00000000-0000-0000-0000-000000000000
Name: pvc-00000000-0000-0000-0000-000
uuid: pvc-00000000-0000-0000-0000-000000000000
Persistent Volume¶
Create a PV referencing the HPEVolumeInfo resource.
Warning
If a filesystem can't be detected on the device, a new filesystem will be created. If the volume contains data, make sure the data resides in a whole device filesystem.
apiVersion: v1
kind: PersistentVolume
metadata:
name: pvc-00000000-0000-0000-0000-000000000000
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 16Gi
csi:
volumeHandle: pvc-00000000-0000-0000-0000-000000000000
driver: csi.hpe.com
fsType: xfs
volumeAttributes:
volumeAccessMode: mount
fsType: xfs
controllerPublishSecretRef:
name: hpe-backend
namespace: hpe-storage
nodePublishSecretRef:
name: hpe-backend
namespace: hpe-storage
controllerExpandSecretRef:
name: hpe-backend
namespace: hpe-storage
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
Tip
Remove .spec.csi.controllerExpandSecretRef to disallow volume expansion.
Persistent Volume Claim¶
Now, a user may claim the static PV by creating a PVC referencing the PV name in .spec.volumeName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 16Gi
volumeName: pvc-00000000-0000-0000-0000-000000000000
storageClassName: ""
Active Peer Persistence Limitations¶
These are the current limitations of Active Peer Persistence when used with the HPE CSI Driver.
- Active Peer Persistence was introduced in version 3.0.0 of the CSI driver. Prior versions won't work.
- Three sites are required to provide a redundant Kubernetes control-plane and Quorum Witness for the RCGs.
- RCGs, volumes and VLUNs may not be managed outside of the CSI driver.
- Pods (containers or VMs) need to be labeled accordingly to be monitored by the HPE CSI Driver Pod Monitor to prevent multi-attach errors.
- Only intra-cluster disaster recovery is supported.
- Only symmetric host proximity is supported and running Active Peer Persistence beyond a campus distance (around 1km) is not recommended.
- Only manual host and RCG creation is supported (this limitation will be removed in the future).
- Once an automatic failover has occurred, the recovered workloads need to be manually restarted when the previous primary is restored. This will resume full redundancy with VLUNs created on both arrays for the workloads (a future platform update will address this workaround).
Classic Peer Persistence Limitations¶
These are the current limitations of the Remote Copy Classic Peer Persistence integration with the HPE CSI Driver.
- Classic Peer Persistence does not provide disaster recovery for workloads running on Kubernetes; it provides disaster recovery for the storage system.
- Classic Peer Persistence only provides data path resilience. If the primary array is unreachable for the CSP, or the role of the Remote Copy group has changed due to disaster recovery operations (manual or automatic switchover/failover), all CSI operations will cease to function until the primary array comes back up and the role of the Remote Copy groups has returned to the original state.
- When the primary array is unavailable for the Kubernetes cluster and remote copy group has failed over to the target array successfully, running workloads will continue to run if the host the workload was running on has redundant data paths to the target array (current primary array).
- It's possible to access volumes from the target array by statically provisioning PersistentVolumes without renaming the volume on the array. This is only safe if it has been determined that no active hosts are accessing the volume on the primary array.
Support¶
Please refer to the HPE Alletra Storage MP B10000, Alletra 9000, Primera and 3PAR Storage CSP support statement.