Overview
The HPE GreenLake for File Storage CSI Driver is deployed using industry-standard methods: either a Helm chart or an Operator.
Helm
Helm is the package manager for Kubernetes. Software is delivered in a packaging format called a "chart". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG file.
The official Helm chart for the HPE GreenLake for File Storage CSI Driver is hosted on Artifact Hub. In an effort to avoid duplicate documentation, please see the chart for instructions on how to deploy the CSI driver using Helm.
- Go to the chart on Artifact Hub.
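For orientation, a Helm-based install typically follows the shape below. The repository URL and chart name are assumptions for illustration only; defer to the chart's page on Artifact Hub for the authoritative instructions and values.

```shell
# Assumed repository URL and chart name; verify on Artifact Hub before use
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
helm repo update
helm install hpe-greenlake-file-csi-driver hpe-storage/hpe-greenlake-file-csi-driver \
  --create-namespace -n hpe-storage
```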
Note
It's possible to use the HPE CSI Driver for Kubernetes steps for v2.4.2 or later to mirror the required images to an internal registry for installing into an air-gapped environment.
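As a hypothetical sketch of what mirroring entails, each image referenced by the driver deployment can be copied to an internal registry with a tool such as skopeo. The internal registry hostname below is a placeholder.

```shell
# Placeholder internal registry; repeat for every image the driver references
skopeo copy \
  docker://quay.io/hpestorage/filex-csi-driver:v1.0.0-beta \
  docker://registry.internal.example/hpestorage/filex-csi-driver:v1.0.0-beta
```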
Operator
The Operator pattern is based on the idea that software should be instantiated and run with a set of custom controllers in Kubernetes. It creates a native experience for any software running on Kubernetes.
Red Hat OpenShift Container Platform
During the beta, it's only possible to sideload the HPE GreenLake for File Storage CSI Operator using the Operator SDK.
The installation procedure assumes the "hpe-storage" Namespace exists:
oc create ns hpe-storage
First, deploy or download the SCC (SecurityContextConstraints):
oc apply -f https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml
Install the Operator:
operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/filex-csi-driver-operator-bundle-ocp:v1.0.0-beta
The next step is to create an HPEGreenLakeFileCSIDriver resource; this can also be done in the OpenShift cluster console.
# oc apply -n hpe-storage -f https://scod.hpedev.io/filex_csi_driver/examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml
apiVersion: storage.hpe.com/v1
kind: HPEGreenLakeFileCSIDriver
metadata:
  name: hpegreenlakefilecsidriver-sample
spec:
  # Default values copied from <project_dir>/helm-charts/hpe-greenlake-file-csi-driver/values.yaml
  controller:
    affinity: {}
    labels: {}
    nodeSelector: {}
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    tolerations: []
  disableNodeConformance: false
  imagePullPolicy: IfNotPresent
  images:
    csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
    csiControllerDriver: quay.io/hpestorage/filex-csi-driver:v1.0.0-beta
    csiNodeDriver: quay.io/hpestorage/filex-csi-driver:v1.0.0-beta
    csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
    csiNodeInit: quay.io/hpestorage/filex-csi-init:v1.0.0-beta
    csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
    csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
    csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
  kubeletRootDir: /var/lib/kubelet
  node:
    affinity: {}
    labels: {}
    nodeSelector: {}
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    tolerations: []
For reference, this is how the Operator is uninstalled:
operator-sdk cleanup hpe-filex-csi-operator -n hpe-storage
Add a Storage Backend
Once the CSI driver is deployed, two additional resources need to be created to get started with dynamic provisioning of persistent storage: a Secret and a StorageClass.
Tip
Naming the Secret and StorageClass is entirely up to the user; however, to keep up with the examples on SCOD, it's highly recommended to use the names illustrated here.
Secret Parameters
All parameters are mandatory and described below.
| Parameter | Description |
| --- | --- |
| endpoint | The management hostname or IP address of the backend storage system. |
| username | Backend storage system username with the correct privileges to perform storage management. |
| password | Backend storage system password. |
Example:
apiVersion: v1
kind: Secret
metadata:
  name: hpe-file-backend
  namespace: hpe-storage
stringData:
  endpoint: 192.168.1.1
  username: my-csi-user
  password: my-secret-password
Create the Secret using kubectl:
kubectl create -f secret.yaml
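Alternatively, the same Secret can be created imperatively, without a manifest file; this is functionally equivalent to the YAML above:

```shell
# Same placeholder values as in the manifest example
kubectl create secret generic hpe-file-backend -n hpe-storage \
  --from-literal=endpoint=192.168.1.1 \
  --from-literal=username=my-csi-user \
  --from-literal=password=my-secret-password
```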
Tip
In a real world scenario it's more practical to name the Secret something that makes sense for the organization. It could be the hostname of the backend or the role it carries, e.g. "hpe-greenlake-file-sanjose-prod".
The next step involves creating a default StorageClass.
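As a rough sketch of what such a StorageClass might look like: the provisioner name below is an assumption, and the parameters use the standard CSI external-provisioner secret keys; consult the driver documentation for the exact set.

```yaml
# Sketch only: the provisioner name is an assumption; verify it against the
# deployed driver (kubectl get csidriver) before use.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-file-standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: filex.csi.hpe.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: hpe-file-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
```

The `is-default-class` annotation is what makes this the cluster-wide default; omit it if another default StorageClass already exists.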