Storage on Aperture #11
The large volumes of storage on each host can be used for bulk file storage such as user VM ISOs, video files, etc., while the NFS share can be used for container files (application data), backups (with another copy of the backups kept offsite in something like Backblaze), and user home directories.
A Nomad NFS CSI driver exists (link) which seems promising. Using it would allow containers to be allocated on any host, with their application data living on the dedicated storage host for the network. This shrinks the redundancy slightly; however, it allows for a much simpler ecosystem, something I think is vital in Redbrick, considering the knowledge gained by one set of admins is usually lost within two years (because people graduate). A rough sketch of how the volumes might look is below.
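A minimal sketch of what this could look like with Nomad's CSI volume support. Everything here is illustrative: the volume name, plugin id, server address, and export path are assumptions, and the exact `context` fields depend on which NFS CSI driver ends up deployed.

```hcl
# volume.hcl -- registered once with `nomad volume register volume.hcl`.
# All ids, addresses, and paths are hypothetical placeholders.
type      = "csi"
id        = "app-data"
name      = "app-data"
plugin_id = "nfs"                # must match the id of the deployed CSI plugin

capability {
  access_mode     = "multi-node-multi-writer"   # several allocations may mount it at once
  attachment_mode = "file-system"
}

context {
  server = "10.0.0.10"           # hypothetical address of the NFS storage host
  share  = "/exports/app-data"   # hypothetical export on that host
}
```

A job then claims the volume in its group, so the allocation can land on any client while the data stays on the storage host:

```hcl
group "app" {
  volume "data" {
    type            = "csi"
    source          = "app-data"
    access_mode     = "multi-node-multi-writer"
    attachment_mode = "file-system"
  }

  task "web" {
    driver = "docker"

    config {
      image = "example/app:latest"   # placeholder image
    }

    volume_mount {
      volume      = "data"
      destination = "/var/lib/app"   # where the NFS-backed data appears in the task
    }
  }
}
```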
In the current state of things, another host could be acquired from the old pool of servers (ones on the 136.206.15.0/24 range), or another server could be purchased for the sole purpose of storage. The first option currently seems like the saner choice.
Anything that wants to be fast should be replicated across the boxes and use local storage. Backups, and anything that needs to be persistent (i.e. lives on a filesystem and can't sanely be replicated any other way), go onto NFS. Databases in particular can replicate data themselves, and three instances of the same data are more reliable than one instance on an NFS share.
As Redbrick offers a wide range of services (and will offer more in the future), a good storage base is vital to ensuring things work well. Storage should not be the issue that prevents something from working.
The new servers have 3.4 TB of usable space per host. While this could be used for a distributed file system (Ceph, Gluster), the maintainability of such a solution rules these out completely.
Something much simpler, in the form of a separate NFS host (or two) with a CSI driver for Nomad, would make maintenance and usage far easier. A sketch of how the plugin itself might be run is below.
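For completeness, a hedged sketch of running the node half of an NFS CSI plugin as a Nomad system job. The image name, datacenter, and arguments are placeholders, not the real artifacts of any particular driver; the actual values come from whichever plugin is chosen.

```hcl
# nfs-csi-node.nomad -- sketch only; image and args are assumptions.
job "nfs-csi-node" {
  datacenters = ["dc1"]   # assumption: default datacenter name
  type        = "system"  # one node plugin per Nomad client

  group "node" {
    task "plugin" {
      driver = "docker"

      config {
        image      = "example/nfs-csi-plugin:latest"  # placeholder image
        privileged = true   # node plugins need this to perform mounts
                            # (requires docker.privileged.enabled on clients)
        args = [
          "--endpoint=unix://csi/csi.sock",
          "--node-id=${node.unique.name}",
        ]
      }

      csi_plugin {
        id        = "nfs"    # matches plugin_id in the volume spec above
        type      = "node"
        mount_dir = "/csi"   # where Nomad mounts the CSI socket directory
      }
    }
  }
}
```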