Azure Lustre (azurelustre) Storage CSI driver development guide

 

Clone repo and build locally

 

  • Clone repo
$ mkdir -p $GOPATH/src/sigs.k8s.io
$ git clone https://github.com/kubernetes-sigs/azurelustre-csi-driver $GOPATH/src/sigs.k8s.io/azurelustre-csi-driver

 

  • Build azurelustre Storage CSI driver
$ cd $GOPATH/src/sigs.k8s.io/azurelustre-csi-driver
$ make azurelustre

 

  • Run verification before sending PR
$ make verify

 

  • Build container image and push it to Docker Hub
$ export REGISTRY_NAME=<dockerhub-alias>
$ make push-latest
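Before moving on, it can be worth confirming that the local build produced the plugin binary that the test steps below expect:

$ ls -l $GOPATH/src/sigs.k8s.io/azurelustre-csi-driver/_output/azurelustreplugin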

   

Test locally using csc tool

 

  • Install CSC

Install the csc tool by following https://github.com/rexray/gocsi/tree/master/csc:

$ mkdir -p $GOPATH/src/github.com/rexray
$ cd $GOPATH/src/github.com/rexray
$ git clone https://github.com/rexray/gocsi.git
$ cd gocsi/csc
$ make build
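The steps below invoke csc directly, so make sure the freshly built binary is on your PATH. Assuming make build leaves the csc binary in the current directory, one way is:

$ export PATH="$PATH:$GOPATH/src/github.com/rexray/gocsi/csc"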

 

  • Set up variables
$ readonly volname="testvolume-$(date +%s)"
$ readonly cap="MULTI_NODE_MULTI_WRITER,mount,,,"
$ readonly target_path="/tmp/lustre-pv"
$ readonly endpoint="tcp://127.0.0.1:10000"

$ readonly lustre_fs_name=""
$ readonly lustre_fs_ip=""
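Fill in the last two variables with the name and MGS IP address of your Lustre filesystem rather than leaving them empty; for example (purely hypothetical values):

$ readonly lustre_fs_name="lustrefs"   # hypothetical filesystem name
$ readonly lustre_fs_ip="10.0.0.4"     # hypothetical MGS IP address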

 

  • Start CSI driver locally
$ cd $GOPATH/src/sigs.k8s.io/azurelustre-csi-driver
$ ./_output/azurelustreplugin --endpoint $endpoint --nodeid CSINode -v=5 &

Note: before running the CSI driver, create an "/etc/kubernetes/azure.json" file on the test machine (ideally, copy the azure.json from a Kubernetes cluster that has a service principal configured correctly) and point AZURE_CREDENTIAL_FILE at it:

$ export AZURE_CREDENTIAL_FILE=/etc/kubernetes/azure.json
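If there is no existing cluster to copy azure.json from, a minimal file can be written by hand. The sketch below assumes service-principal authentication and uses placeholder values only; adjust the fields to match your environment:

$ sudo tee /etc/kubernetes/azure.json <<'EOF'
{
  "cloud": "AzurePublicCloud",
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "aadClientId": "<service-principal-app-id>",
  "aadClientSecret": "<service-principal-secret>",
  "resourceGroup": "<resource-group>",
  "location": "<location>"
}
EOF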

 

1. Get plugin info

$ csc identity plugin-info --endpoint $endpoint

 

2. Create an azurelustre volume

$ csc controller new --endpoint $endpoint --cap $cap --req-bytes 2147483648 --params "fs-name=$lustre_fs_name,mgs-ip-address=$lustre_fs_ip" $volname

 

3. Publish volume

$ mkdir -p "$target_path"
$ volumeid=$(csc node publish --endpoint $endpoint --cap $cap --target-path $target_path --vol-context "fs-name=$lustre_fs_name,mgs-ip-address=$lustre_fs_ip" $volname)
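If publishing succeeds, the Lustre filesystem should now be mounted at the target path; a quick sanity check:

$ mount | grep "$target_path"   # should show a mount of type lustre
$ df -h "$target_path"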

 

4. Unpublish volume

$ csc node unpublish --endpoint $endpoint --target-path $target_path $volname
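Optionally, confirm the target path is no longer mounted:

$ mount | grep "$target_path" || echo "not mounted"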

 

5. Delete azurelustre volume

$ csc controller del --endpoint $endpoint "$volumeid"

 

6. Validate volume capabilities

$ csc controller validate-volume-capabilities --endpoint $endpoint --cap $cap "$volumeid"

 

7. Get NodeID

$ csc node get-info --endpoint $endpoint
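When testing is finished, stop the driver that was started in the background earlier (assuming it is the only background job in this shell):

$ kill %1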