sending through SSH un-mounts the host dataset #76

Open
farzadha2 opened this issue Nov 3, 2020 · 6 comments
@farzadha2

farzadha2 commented Nov 3, 2020

Hi,
I'm currently testing sending snapshots to another pool over SSH, but I realized it messes up my host pool.

For example, on host 1:

root@prometheus4:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.77T  2.43T   104K  /rpool
rpool/ROOT         479G  2.43T    96K  /rpool/ROOT
rpool/ROOT/pve-1   479G  2.43T   479G  /
rpool/data        1.31T  2.43T  7.85G  -

But in rpool/data I have another VM disk that has somehow disappeared from the listing, yet the VMs are still working; I'm not sure how.

This happened after I ran this config:

# create backup snapshot first
# backup snapshots
[rpool/data/vm-109-disk-0]
frequent = 10
snap = yes
clean = yes
dest = ssh:22:[email protected]:rpool/data/vm-109-disk-0
compress = gzip
dest_auto_create = yes

#cleanup remotely
[ssh:22:[email protected]:rpool/data]

I rebooted the server, but it still shows the same datasets, so I'm not sure what to do.

On the host it seems the dataset is still there, but it somehow does not show up:

root@prometheus4:~# zfs get all rpool/data/vm-109-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-109-disk-0  type                  volume                 -
rpool/data/vm-109-disk-0  creation              Sat Jul 25 11:22 2020  -
rpool/data/vm-109-disk-0  used                  7.90G                  -
rpool/data/vm-109-disk-0  available             2.43T                  -
rpool/data/vm-109-disk-0  referenced            7.85G                  -
rpool/data/vm-109-disk-0  compressratio         1.24x                  -
rpool/data/vm-109-disk-0  reservation           none                   default
rpool/data/vm-109-disk-0  volsize               128G                   local
rpool/data/vm-109-disk-0  volblocksize          8K                     default
rpool/data/vm-109-disk-0  checksum              on                     default
rpool/data/vm-109-disk-0  compression           on                     inherited from rpool
rpool/data/vm-109-disk-0  readonly              off                    default
rpool/data/vm-109-disk-0  createtxg             18095                  -
rpool/data/vm-109-disk-0  copies                1                      default
rpool/data/vm-109-disk-0  refreservation        none                   default
rpool/data/vm-109-disk-0  guid                  13967327577356581275   -
rpool/data/vm-109-disk-0  primarycache          all                    default
rpool/data/vm-109-disk-0  secondarycache        all                    default
rpool/data/vm-109-disk-0  usedbysnapshots       51.5M                  -
rpool/data/vm-109-disk-0  usedbydataset         7.85G                  -
rpool/data/vm-109-disk-0  usedbychildren        0B                     -
rpool/data/vm-109-disk-0  usedbyrefreservation  0B                     -
rpool/data/vm-109-disk-0  logbias               latency                default
rpool/data/vm-109-disk-0  dedup                 off                    default
rpool/data/vm-109-disk-0  mlslabel              none                   default
rpool/data/vm-109-disk-0  sync                  disabled               inherited from rpool
rpool/data/vm-109-disk-0  refcompressratio      1.24x                  -
rpool/data/vm-109-disk-0  written               27.5M                  -
rpool/data/vm-109-disk-0  logicalused           9.80G                  -
rpool/data/vm-109-disk-0  logicalreferenced     9.71G                  -
rpool/data/vm-109-disk-0  volmode               default                default
rpool/data/vm-109-disk-0  snapshot_limit        none                   default
rpool/data/vm-109-disk-0  snapshot_count        none                   default
rpool/data/vm-109-disk-0  snapdev               hidden                 default
rpool/data/vm-109-disk-0  context               none                   default
rpool/data/vm-109-disk-0  fscontext             none                   default
rpool/data/vm-109-disk-0  defcontext            none                   default
rpool/data/vm-109-disk-0  rootcontext           none                   default
rpool/data/vm-109-disk-0  redundant_metadata    all                    default

root@prometheus4:~# zfs get all rpool/data
NAME        PROPERTY              VALUE                  SOURCE
rpool/data  type                  volume                 -
rpool/data  creation              Fri Jul 24 10:09 2020  -
rpool/data  used                  1.31T                  -
rpool/data  available             2.43T                  -
rpool/data  referenced            7.85G                  -
rpool/data  compressratio         1.20x                  -
rpool/data  reservation           none                   default
rpool/data  volsize               128G                   local
rpool/data  volblocksize          8K                     default
rpool/data  checksum              on                     default
rpool/data  compression           on                     inherited from rpool
rpool/data  readonly              off                    default
rpool/data  createtxg             9                      -
rpool/data  copies                1                      default
rpool/data  refreservation        none                   default
rpool/data  guid                  8037406478648268761    -
rpool/data  primarycache          all                    default
rpool/data  secondarycache        all                    default
rpool/data  usedbysnapshots       12.9M                  -
rpool/data  usedbydataset         7.85G                  -
rpool/data  usedbychildren        1.30T                  -
rpool/data  usedbyrefreservation  0B                     -
rpool/data  logbias               latency                default
rpool/data  dedup                 off                    default
rpool/data  mlslabel              none                   default
rpool/data  sync                  disabled               inherited from rpool
rpool/data  refcompressratio      1.24x                  -
rpool/data  written               0                      -
rpool/data  logicalused           1.57T                  -
rpool/data  logicalreferenced     9.71G                  -
rpool/data  volmode               default                default
rpool/data  snapshot_limit        none                   default
rpool/data  snapshot_count        none                   default
rpool/data  snapdev               hidden                 default
rpool/data  context               none                   default
rpool/data  fscontext             none                   default
rpool/data  defcontext            none                   default
rpool/data  rootcontext           none                   default
rpool/data  redundant_metadata    all                    default

Thank you

@yboetz
Owner

yboetz commented Nov 3, 2020

pyznap unmounts the destination datasets before sending, see #44. You can easily remount them with zfs mount path/to/dataset. I haven't had time to look into the issue more. It shouldn't be too much of a problem, since backup datasets are usually rarely accessed.
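
For example, to remount after a send run (a sketch, assuming the affected dataset is a filesystem and not a volume):

zfs mount path/to/dataset    # remount a single filesystem dataset
zfs mount -a                 # or remount all filesystems with canmount=on and a valid mountpoint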

@farzadha2
Author

Thanks for the reply.
I tried the following:

root@prometheus4:~# zfs mount rpool/data/vm-109-disk-0 
cannot open 'rpool/data/vm-109-disk-0': operation not applicable to datasets of this type

@farzadha2
Author

What's odd is that I can't seem to remount it. Or is it maybe possible to change the value?


root@prometheus4:~# zfs get mounted,mountpoint rpool/data
NAME        PROPERTY    VALUE       SOURCE
rpool/data  mounted     -           -
rpool/data  mountpoint  -           -

On another Proxmox box I get this:

root@prometheus2:~# zfs get mounted,mountpoint rpool/data
NAME        PROPERTY    VALUE        SOURCE
rpool/data  mounted     no           -
rpool/data  mountpoint  /rpool/data  default
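
For reference, on a filesystem dataset the mountpoint value would normally be changed with zfs set, along these lines (a sketch; it will not help if the dataset is not a filesystem):

zfs set mountpoint=/rpool/data rpool/data    # only valid for filesystem datasets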

@yboetz
Owner

yboetz commented Nov 4, 2020

What do you get for zfs get type rpool/data/vm-109-disk-0?

@farzadha2
Author

Thanks for the reply

root@prometheus4:~# zfs get type rpool/data/vm-109-disk-0
NAME                      PROPERTY  VALUE   SOURCE
rpool/data/vm-109-disk-0  type      volume  -

@yboetz
Owner

yboetz commented Nov 4, 2020

Hm ok I'm not sure how volumes work with mounting/unmounting, since I don't use them...
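
As far as I can tell, though, a volume (zvol) is a block device rather than a filesystem, so it has no mountpoint and zfs mount/unmount does not apply to it; the VM uses the device node directly. A quick check, assuming the usual /dev/zvol layout on Linux:

zfs get type,volmode rpool/data/vm-109-disk-0    # type 'volume' means it is not mountable
ls -l /dev/zvol/rpool/data/vm-109-disk-0         # block device the VM attaches to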
