Who said they're sequential? Defining fragmentation (or "sequential", beyond the logical presentation of the data you gave it) on a filesystem where you never overwrite in place and allocations can come from multiple disks gets very complicated very quickly. Preallocation is a no-op on ZFS, except for dropping any existing allocations in that range (if you do it with PUNCH_HOLE) and updating the metadata. Since how much space things actually take up varies with what you write, which top-level vdevs it gets written to, and how much contiguous free space there is, among other things, you would rapidly wind up with surprising outcomes like "preallocating" 500 GiB, then writing incompressible data that took up more than 500 GiB once metadata and other overheads are counted, and getting ENOSPC early. (Depending on your goal, you could conceivably use a ... Also, as a nit on myself: technically, changing the logical file length would affect how the metadata is encoded if you wrote, say, 4k at offset 100G into a 0B file versus 4k at offset 100G into a 4T file, but that's pretty academic for most use cases; and if it's not, I have serious questions about your use case.)
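
To make the fallocate(2) discussion concrete, here is a minimal sketch of the two calls involved, assuming a Linux system and a made-up file path; the comments restate the claims above rather than documenting guaranteed behavior, since the actual return values depend on the OpenZFS version and platform.

```c
/* Sketch: exercise the two fallocate(2) paths discussed above.
 * The path, offsets, and sizes are made up for illustration. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tank/testfile", O_RDWR | O_CREAT, 0644); /* hypothetical dataset path */
    if (fd < 0) { perror("open"); return 1; }

    /* "Preallocate" 500 GiB: per the comment above, ZFS does not reserve
     * physical space here, so writing incompressible data plus metadata
     * can still hit ENOSPC later despite this call succeeding. */
    if (fallocate(fd, 0, 0, 500ULL << 30) != 0)
        fprintf(stderr, "fallocate(mode=0): %s\n", strerror(errno));

    /* Punch a hole: this is the variant that actually does something,
     * dropping existing allocations in the range and updating metadata. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  0, 1ULL << 30) != 0)
        fprintf(stderr, "fallocate(PUNCH_HOLE): %s\n", strerror(errno));

    close(fd);
    return 0;
}
```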
Was playing around with rclone and ZFS and stumbled upon this thread:
https://forum.rclone.org/t/multiple-write-streams-not-causing-fragmentation-on-zfs/44521/5
Had to try it out. Even 16 simultaneous write streams do not seem to cause any fragmentation on ZFS.
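
For anyone who wants to reproduce a similar test without rclone, a rough sketch of N concurrent write streams is below; the stream count, file names, and sizes are arbitrary, and whether the result is fragmented still has to be judged separately.

```c
/* Sketch: N concurrent write streams, each to its own file, loosely
 * mirroring the multi-stream transfer described above.
 * Build with: cc -pthread streams.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STREAMS 16
#define CHUNK   (1 << 20)   /* 1 MiB per write */
#define CHUNKS  256         /* 256 MiB per stream */

static void *writer(void *arg)
{
    int id = (int)(long)arg;
    char path[64];
    snprintf(path, sizeof path, "stream-%02d.dat", id); /* placeholder names */

    FILE *f = fopen(path, "wb");
    if (!f) { perror("fopen"); return NULL; }

    char *buf = malloc(CHUNK);
    if (!buf) { fclose(f); return NULL; }
    memset(buf, id, CHUNK); /* compressible filler; use random data for an incompressible test */

    for (int i = 0; i < CHUNKS; i++)
        fwrite(buf, 1, CHUNK, f);

    free(buf);
    fclose(f);
    return NULL;
}

int main(void)
{
    pthread_t t[STREAMS];
    for (long i = 0; i < STREAMS; i++)
        pthread_create(&t[i], NULL, writer, (void *)i);
    for (int i = 0; i < STREAMS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```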
From rclone/rclone#3066:

> while transferring data to a ZFS dataset with rclone, both `du` and `du --apparent-size` show the file size as the currently transferred amount, so the files are not sparse either.

Where does the magic happen here? What causes the writes to align sequentially?
From #10408: