Feature Request - Support For thin provisioned block storage (iSCSI, FC... /SAN) #645
Comments
Hi, the GFS2 SR from XenServer is not production ready: it's based on SMAPIv3, with no migration path from SMAPIv1, no live storage migration, no backup/delta capability, and so on. But we have plans on our side: improving SMAPIv3 and building a driver for shared block storage. Maybe GFS2 is an option, maybe not.
Is there any schedule for that? Even a rough one?
First, getting one SMAPIv3 local driver available (a matter of weeks now), then backup capabilities, then a migration path, then a shared file-based SR (NFS), and finally shared block storage (the hardest part). No ETA, because each step depends on the previous ones.
I see that this was discussed about 7 months ago. As you know, support for block storage on Fibre Channel and elimination of the 2 TB limit would be a huge deal in supplanting other vendors' dominance in the enterprise marketplace. I just wanted to check how things are going in moving towards this feature?
I'm working on submitting some code as this is high priority for me as well. Hoping I've thought this through properly. Anyone willing to test in an obviously non-production environment? Should have my branch ready next week. |
Please post when you get something ready with some details on the implementation and testing so far. |
How do you handle locks and synchronization between hosts? That's the main challenge with shared block storage. |
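One common answer to this question is a lease: a record with an expiry timestamp written to a location all hosts can read, so a crashed holder loses the lock automatically once the lease expires. The sketch below is purely illustrative and is not the actual SMAPIv3 design; the `DiskLease` class, the JSON on-disk layout, and the 30-second TTL are all assumptions. It uses an ordinary file to stand in for a reserved region of the shared LUN.

```python
# Hedged sketch of lease-based cross-host locking on shared storage.
# All names and the on-disk layout are illustrative assumptions, not the
# real SMAPIv3 mechanism. A plain file stands in for the shared device.
import json
import os
import time

LEASE_TTL = 30.0  # seconds a lease stays valid without renewal (assumed)

class DiskLease:
    """Best-effort lease stored in a small region on shared storage."""

    def __init__(self, path, host_id):
        self.path = path      # e.g. a reserved metadata area on the shared LUN
        self.host_id = host_id

    def _read(self):
        try:
            with open(self.path) as f:
                return json.load(f)
        except (FileNotFoundError, ValueError):
            return None  # no lease written yet, or torn/garbage record

    def try_acquire(self):
        """Take the lease if it is free, expired, or already ours."""
        lease = self._read()
        if lease and lease["expires"] > time.time() and lease["host"] != self.host_id:
            return False  # another live host holds it
        record = {"host": self.host_id, "expires": time.time() + LEASE_TTL}
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(record, f)
            f.flush()
            os.fsync(f.fileno())  # on a real SR this would be an O_DIRECT write
        os.replace(tmp, self.path)  # atomic publish of the new lease record
        return True

    def holder(self):
        """Return the host id of the current live holder, or None."""
        lease = self._read()
        if lease and lease["expires"] > time.time():
            return lease["host"]
        return None
```

Note that this read-then-write sequence can race between two hosts, which is exactly why real shared-block implementations lean on atomic primitives such as SCSI-3 persistent reservations or a lease manager like sanlock rather than plain reads and writes.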
While XenServer got its proprietary GFS2, XCP-ng still lacks an option for thin-provisioned, modern block storage.
Many storage systems are connected via iSCSI or (probably less often than iSCSI) Fibre Channel.
The current LVM/VHD combo still lacks:
For performance reasons, thin provisioning was less attractive with pure-HDD storage, as it can easily lead to heavy fragmentation; but with the steady shift towards hybrid or even all-flash storage, that is barely a concern anymore.
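The thin-provisioning idea itself is simple to demonstrate: a sparse file advertises a large virtual size while consuming almost no real space until data is actually written, which is the same principle a thin SR driver applies to a LUN. A minimal sketch (paths and sizes are arbitrary examples):

```python
# Minimal illustration of thin provisioning using a sparse file:
# the file reports a large virtual size, but blocks are only
# allocated where data has actually been written.
import os

path = "/tmp/thin_disk_demo.img"
VIRTUAL_SIZE = 10 * 1024**3  # advertise 10 GiB to the consumer

with open(path, "wb") as f:
    f.truncate(VIRTUAL_SIZE)  # allocates no data blocks on most filesystems

st = os.stat(path)
print("virtual size:", st.st_size)       # the full 10 GiB
print("allocated:", st.st_blocks * 512)  # close to 0 bytes so far

# Writing data allocates real space only where needed:
with open(path, "r+b") as f:
    f.seek(1024**3)        # 1 GiB into the virtual disk
    f.write(b"x" * 4096)   # touch a single 4 KiB region

allocated = os.stat(path).st_blocks * 512
os.remove(path)
```

Fragmentation enters the picture because those on-demand allocations land wherever the backing store has free space, which is what made spinning disks suffer and why flash makes the trade-off far more palatable.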
Still something to catch up on with VMware and (now again) XenServer.