Not backing up mounted subfolders since tag "1.8.2-1.2.6" #125
What errors are you seeing in the logs? Are you able to see the folders when you enter the container?
None, the backup finishes without errors. That's also pretty nasty behavior: it's why I didn't realize that my backups have been unusable since mid-September.
Yes, I already checked that. I can see the folders themselves, with all the files within.
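For reference, checking both points from the host could look like the sketch below, assuming the container name Borgmatic from the compose file posted further down; the exact commands are not from the thread:

```sh
# Confirm the bind-mounted subfolders are visible inside the running container
docker exec Borgmatic ls -la /mnt/source

# Spot-check the newest archive for files below the mounted subfolders
# (Borg stores paths without the leading slash, hence the pattern)
docker exec Borgmatic borgmatic list --archive latest | grep 'mnt/source/'
```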
It may be worthwhile posting on the original repo so the project maintainer (Witten and co.) can have a look. This is mostly a fork with a few custom bits, so I can really only answer the Docker problems rather than the Borg(matic) issues themselves. I've also just updated to the latest 1.8.5, so it may be worthwhile to see if the issue has been resolved.
Unfortunately, the problem still exists with the new tag. I will have a look at the original container and open an issue there as well.
I just tested my environment with the latest container from the original repo. I suspect that the problem exists only with your container, but I have no real explanation for it. I don't want to rule out that I'm making a mistake, but I can't see the forest for the trees. Let me prepare a minimal example; maybe we can use it to better understand the problem.
Oops, I accidentally closed this issue... sorry!
Sounds like a plan. If we can figure out a basic compose file with volume mounts, we can figure out potential issues, especially as we'll be bringing a lot of the changes in this repo to the upstream (like s6), so it'd be good to understand why this is occurring.
Just for transparency, this is my config:

```yaml
borgmatic:
image: modem7/borgmatic-docker
# image: borgmatic:test
container_name: Borgmatic
environment:
TZ: $TZ
BORG_PASSPHRASE: $BORG_PASSPHRASE
BORG_SOURCE_1: $BORG_SOURCE_1
BORG_SOURCE_2: $BORG_SOURCE_2
BORG_REPO: $BORG_REPO
BORG_HEALTHCHECK_URL: $BORG_HEALTHCHECK_URL
CRON: $BORG_CRON
DOCKERCLI: true
CRON_COMMAND: $BORG_CRON_COMMAND
# EXTRA_CRON: |-
# 0 5 2 * * command1
# 0 7 1 * * command2
logging:
driver: "local"
options:
max-size: 10m
max-file: "3"
volumes:
- $BORGHOMESOURCEDIR:/mnt/source/
# - $CRONTAB:/mnt/source/Cron
# - Pihole:/mnt/source/Pihole/Pihole
# - Dnsmasq:/mnt/source/Pihole/Dnsmasq
- $BORGSERVBACKUPDIR/Database:/mnt/borg-DBrepository
- $BORGSERVBACKUPDIR/Docker:/mnt/borg-repository
- $RAMDRIVEBACKUP/borg:/mnt/ramdrive
- $USERDIR:/mnt/source/DockerApps/
- $USERDIR/Borgmatic/borgmatic.d/:/etc/borgmatic.d/
- $USERDIR/Borgmatic/.config/borg/:/root/.config/borg
- $USERDIR/Borgmatic/.ssh/:/root/.ssh
- $USERDIR/Borgmatic/.state/:/root/.borgmatic
- $USERDIR/Borgmatic/.cache/borg/:/root/.cache/borg
- $BORGSCRIPTS:/borgscripts
- /var/run/docker.sock:/var/run/docker.sock # So we can run scripts
networks:
isonet:
isolated:
restart: always
```

`.env`:

```sh
# Borgmatic
BORGSERVBACKUPDIR="/mnt/oldhd/ServerBackup/"
BORGHOMESOURCEDIR="/home/alex/"
CRONTAB="/var/spool/cron/"
BORGSCRIPTS="/home/alex/DockerApps/Borgmatic/scripts/"
BORG_PASSPHRASE="passphrase"
BORG_RESTORE="/mnt/downloads/"
#BORG_RESTORE="/var/hda/files/drives/drive12/downloads/"
BORG_REPO="ssh://reponame.repo.borgbase.com/./repo"
BORG_SOURCE_1="/mnt/source/DockerApps"
BORG_SOURCE_2="/mnt/source/Cron"
BORG_HEALTHCHECK_URL="https://hc-ping.com/uuid"
BORG_CRON="0 5 * * *"
BORG_CRON_COMMAND="borgmatic --stats -v 0"
```

borgmatic config:

```yaml
source_directories:
- ${BORG_SOURCE_1}
- ${BORG_SOURCE_2}
repositories:
- path: ${BORG_REPO}
label: Borgbase
one_file_system: true
exclude_caches: true
#storage:
# Passphrase is set in variable $BORG_PASSPHRASE
compression: lz4
archive_name_format: 'backup-{now}'
keep_hourly: 0
keep_daily: 7
keep_weekly: 4
keep_monthly: 12
keep_yearly: 1
checks:
- name: repository
frequency: 2 weeks
- name: archives
frequency: always
- name: extract
frequency: 2 weeks
- name: data
frequency: 1 month
before_everything:
- borgmatic break-lock
- echo "Starting a backup job."
- echo "Stopping containers."
- exec /borgscripts/docker-stop.sh
after_everything:
- echo "Starting containers."
- exec /borgscripts/docker-start.sh
- echo "Backup created."
on_error:
- echo "Error while creating a backup."
- exec /borgscripts/docker-start.sh
# https://torsion.org/borgmatic/docs/how-to/backup-your-databases/
healthchecks:
ping_url: ${BORG_HEALTHCHECK_URL}
```
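As a side note on the config above: since the suspicion that follows in the thread is about mount-point boundaries, one way to see how the bind mounts under /mnt/source look from inside the container is to compare their device numbers. This is just a diagnostic sketch (container name taken from the compose file above), not something suggested in the thread:

```sh
# Print the device number and name for /mnt/source and each subdirectory;
# differing device numbers mean Borg may treat them as separate filesystems
# when one_file_system is enabled.
docker exec Borgmatic sh -c 'stat -c "%d %n" /mnt/source /mnt/source/*'
```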
Creating a minimal example was a wild ride, but I was able to gain a few insights. My example can be found here: https://github.com/mrclschstr/docker-borgmatic-issue-125. After cloning, the borg repository must first be initialized. Attention: you have to delete the …

When creating the example, I was able to determine that this behavior only occurs when a database hook is added to the borgmatic config. According to the documentation, the options …

What surprises me most is that this behavior has only been occurring since … If there are any questions about the minimal example, please let me know.
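For clarity, "database hook" here means a block like the following in the borgmatic config; PostgreSQL and the credentials are placeholders, not taken from the minimal example:

```yaml
# Hypothetical database hook (borgmatic 1.8 flattened config); the comment
# above reports that adding a hook of this kind triggers the behavior.
postgresql_databases:
  - name: exampledb
    hostname: db
    username: example
    password: example
```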
The more documentation I read on the subject, the more I believe that the problem is related to the … I still don't understand why the problems only occur since the tag 1.8.2-1.2.6, though.
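If the setting in question is one_file_system (the config posted above enables it, and Borg's --one-file-system keeps it from descending into other mount points), an explicit override in the borgmatic config might look like the sketch below; whether that is actually the fix is an assumption at this point in the thread:

```yaml
# Assumption: disable one_file_system so Borg also descends into the
# bind-mounted subfolders under /mnt/source.
one_file_system: false
```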
@witten @toastie89 @grantbevis - As we're going to probably be moving to S6 at some point in the near future (mostly to sort out the annoying sigterm issue, but also to allow better scaling), do you guys have any input regarding the above? I want to see if we can either rule out S6 as the issue, or figure out what may be causing this and see what the potential resolution is (if any). Even more so as the logs aren't helping or flagging any issues. @mrclschstr Does it work without the database hook in the config?
Yes, as long as … I just recognized that you are missing a …
Aye, I'm pretty sure that change was due to borgmatic-collective/docker-borgmatic#216, especially as @kaechele made a good point regarding … As long as you declare the volume in your compose file, that part shouldn't be an issue.
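As a general Docker note (not specific to this thread): when a Dockerfile declares a VOLUME, Docker creates an anonymous volume at that path unless the compose file binds something there explicitly. A minimal sketch, using /root/.borgmatic purely as an example because the compose file above already binds it:

```yaml
volumes:
  # Explicit bind so the path is not hidden in an anonymous volume created by
  # a VOLUME directive (host path is a placeholder).
  - ./borgmatic/state:/root/.borgmatic
```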
Well, my guess was not bad at all. If you add the statement …

See also: mrclschstr@b413ee5

It is still unclear to me what the exact cause of the problem is. What is the best way to solve it now?
Yeah, as long as …
I can think of the following solutions off the top of my head: …

I think solutions 2, 3 and 4 should at least be pointed out in the readme. Do you see any other approaches?
I think, given the way Borgmatic is (currently) designed to work, option 2 is the "right" approach, more complicated or not.

Regarding the volume mounts (options 1 and 3), I think that's a red herring in this case. Having an anonymous mount wouldn't really solve anything and, if anything, may complicate matters (basically, more cons than pros in this case).

Option 4 is again doable and arguably better for certain things, although again a bit more complex in terms of configs.

Now, there is an option that hasn't been discussed, and that is using variables within the container itself for configs, which may make the configs less complicated (arguably). So within the config, one can define:

```yaml
source_directories:
- ${BORG_SOURCE_1}
- ${BORG_SOURCE_2}
```

Then within your compose file:

```yaml
BORG_SOURCE_1: /mnt/source/DockerApps
BORG_SOURCE_2: /mnt/source/Cron
```

This would allow for easy modifications without modifying the config file directly, and for easily rebuilding the container when changes are made (rather than having to bring the container down/up). This would certainly simplify options 2 and 4.
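As a usage note on the variable approach: new BORG_SOURCE_* values then only require recreating the service (service name assumed from the compose excerpt earlier in the thread):

```sh
# Recreate just the borgmatic service so updated environment values apply
docker compose up -d --force-recreate borgmatic
```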
Personally, I will probably opt for option 4, but basically I'm happy with either approach; after all, you are the maintainer 😃

What I still find problematic, however, is that the default behavior (let's call it that) changed between two versions of the container. There was no breaking-changes notice (or similar), and since borgmatic did not throw any errors or other messages, I naturally assumed that everything would work as usual after the update. Unfortunately, as I wrote at the beginning, I have had invalid backups since mid-September. Of course, this is partly my "fault", as I didn't check the backups after the update!

I happen to know of a project called Mailcow, which also mounts the volumes as subfolders in its documentation, see: https://docs.mailcow.email/third_party/borgmatic/third_party-borgmatic/. If the statement …

Please don't get me wrong: this is not meant as blame; we all do this in our spare time. I would, however, like borgmatic to emit a message when it recognizes a mount-point boundary, or for there to be a chapter in the readme on "do's and don'ts" for using this container.
I actually happen to be the original author of that Borgmatic guide for Mailcow and the person who suggested dropping the …

Also, I don't believe it was your fault. Generally, I don't believe in surprising users with breaking changes and then blaming them for not checking things after updating. In an ideal world you'd either know what to check or not need to check at all. But, as we all know, software doesn't usually work that way.

It does, however, strike me as odd that creating a volume for …

EDIT: Fun fact: the Mailcow use case actually led me to suggest dropping the …
Starting with version 1.8.10 of the official borgmatic Docker container, the bug described above is also present. Admittedly, I have not tested it yet, but since the path …

Should I also open a bug for this in the official repo?
I'd recommend opening a bug in the official repo, as there are better resources (people) there who will be able to investigate. Once a resolution is found, I can migrate the solution downstream. If you open the bug and reference it here, we can keep monitoring it.
@kaechele Just to be clear: if people back up their Mailcow installation with the official borgmatic container (see: https://docs.mailcow.email/third_party/borgmatic/third_party-borgmatic/), they will probably get empty backups with the new version! EDIT: At least if they follow the instructions linked above.
@kaechele It seems that the Mailcow community already figured this out in 2022: https://community.mailcow.email/d/1796-borgmatic-does-not-backup-vmail

See also: mailcow/mailcow-dockerized-docs#700
I'll close this one so that it can be dealt with in one location. This way we're not splitting focus, especially if 1.9.0 resolves it.
Yeah, borgmatic 1.9.0+ no longer turns on …
My answer still stands: …
I'll try to describe my problem, which unfortunately is not easy for me, but I'll try my best 😃

I back up my Nextcloud instance with the borgmatic container and have packed the important folders into individual volumes. I mount these volumes in individual subfolders under the folder `/mnt/source` in the borgmatic container. Here is an excerpt from the `docker-compose.yml`:

…

Previously it was sufficient to simply specify the `/mnt/source` folder in `config.yaml`, here is an excerpt:

…

Starting with the image tag `1.8.2-1.2.6` this unfortunately no longer works. The files in the individual subfolders are no longer backed up, and the backups are therefore unusable. As a workaround, I now back up each subfolder individually (excerpt from `config.yaml`):

…

As I have already written, the problem only exists since the image tag `1.8.2-1.2.6`; with `1.8.2-1.2.5` everything still works as expected. I have not found anything in the Borg release notes that would indicate this bug, and since the borgmatic version has not changed between the releases, I would rule out a bug there. Can you help me identify the error? Is this intended behavior?
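As a rough illustration of the per-subfolder workaround described above, with placeholder volume names rather than the actual ones from the setup:

```yaml
# Hypothetical sketch: list each bind-mounted subfolder explicitly instead of
# relying on the parent /mnt/source directory alone.
source_directories:
  - /mnt/source/nextcloud-data
  - /mnt/source/nextcloud-config
```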