Improve ControllerUnpublishVolume #67

Merged · 6 commits · Mar 21, 2024
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,6 @@
 ## unreleased
+* Take into account `ControllerUnpublishVolumeRequest.NodeId` in `ControllerUnpublishVolume` to prevent undesired detach operations during overlapping CSI calls.
+* Update base image to newer alpine minor version.
 
 ## v3.5.4 - 2024.01.05
 * Update base image to newer alpine minor version.
2 changes: 1 addition & 1 deletion cmd/cloudscale-csi-plugin/Dockerfile
@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-FROM alpine:3.17.6
+FROM alpine:3.17.7
 
 # e2fsprogs-extra is required for resize2fs used for the resize operation
 # blkid: block device identification tool from util-linux
4 changes: 4 additions & 0 deletions deploy/README.md
@@ -35,6 +35,10 @@ You can **either** install the driver from your working directory
 # Install a specific Chart version or latest if --version is omitted
 helm install -n kube-system -g csi-cloudscale/csi-cloudscale [ --version v1.0.0 ]
 
+You can verify that the csi-driver has been installed by running the following command and checking if the csi-cloudscale pods are running:
+
+kubectl get pods -n kube-system
+
 Then execute the test suite:
 
 make test-integration
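
For reference, the verification step added to the README above would produce output roughly like the following on a healthy installation (editor's illustration, not part of this diff; pod names, container counts, and ages are hypothetical):

    $ kubectl get pods -n kube-system
    NAME                          READY   STATUS    RESTARTS   AGE
    csi-cloudscale-controller-0   3/3     Running   0          2m
    csi-cloudscale-node-7xk2p     2/2     Running   0          2m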
23 changes: 22 additions & 1 deletion driver/controller.go
@@ -296,7 +296,7 @@ func (d *Driver) ControllerUnpublishVolume(ctx context.Context, req *csi.ControllerUnpublishVolumeRequest
 	ll.Info("controller unpublish volume called")
 
 	// check if volume exist before trying to detach it
-	_, err := d.cloudscaleClient.Volumes.Get(ctx, req.VolumeId)
+	volume, err := d.cloudscaleClient.Volumes.Get(ctx, req.VolumeId)
 	if err != nil {
 		errorResponse, ok := err.(*cloudscale.ErrorResponse)
 		if ok {
@@ -308,6 +308,27 @@ func (d *Driver) ControllerUnpublishVolume(ctx context.Context, req *csi.ControllerUnpublishVolumeRequest
 		return nil, err
 	}
 
+	isAttachedToNode := false
+	for _, serverUUID := range *volume.ServerUUIDs {
+		if serverUUID == req.NodeId {
+			isAttachedToNode = true
+		}
+	}
+
+	ll = ll.WithFields(logrus.Fields{
+		"volume_api_uuid":     volume.UUID,
+		"volume_api_name":     volume.Name,
+		"volume_api_server":   volume.ServerUUIDs,
+		"is_attached_to_node": isAttachedToNode,
+	})
+
+	if req.NodeId != "" && !isAttachedToNode {
+		ll.Warn("Volume is not attached to node given in request.")
+		return &csi.ControllerUnpublishVolumeResponse{}, nil
+	}
+
+	ll.Info("Volume is attached to node given in request or NodeID in request is not set.")
+
 	detachRequest := &cloudscale.VolumeRequest{
 		ServerUUIDs: &[]string{},
 	}
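
To see the new guard in isolation, here is a minimal self-contained Go sketch of the same check (editor's illustration, not part of the diff; the cloudscale volume type is replaced by a plain string slice, and the UUIDs are hypothetical):

    package main

    import "fmt"

    // isAttachedToNode reports whether nodeID is among the servers the
    // volume is currently attached to, mirroring the loop added above.
    func isAttachedToNode(serverUUIDs []string, nodeID string) bool {
    	for _, uuid := range serverUUIDs {
    		if uuid == nodeID {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	attached := []string{"srv-1111"} // hypothetical server UUID
    	// Attached to the requested node: the driver proceeds with the detach call.
    	fmt.Println(isAttachedToNode(attached, "srv-1111")) // true
    	// Attached elsewhere, e.g. an overlapping CSI call already moved the
    	// volume: the driver returns success without detaching anything.
    	fmt.Println(isAttachedToNode(attached, "srv-2222")) // false
    }

Returning an empty `ControllerUnpublishVolumeResponse` rather than an error in the not-attached case keeps the operation idempotent, as the CSI spec expects: a stale unpublish for a node the volume has already left succeeds without detaching it from its current server.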
2 changes: 1 addition & 1 deletion helpers/bootstrap-cluster
@@ -6,7 +6,7 @@ set -euo pipefail
 
 # Default values
 RANDOM_NUMBER=$((RANDOM % 8193))
-K8TEST_SHA="da91af6"
+K8TEST_SHA="1190654"
 ZONE="lpg1"
 CLUSTER_PREFIX="csi-test-$RANDOM_NUMBER"
 KUBERNETES="latest"