Large pushes fail because they take too long #11
I think the hard part about this is that, to do it right, we need to update the client to work properly with async jobs: anticipate that the job may still be running, and check whether it has finished after the last push completes. Otherwise the client thinks the job failed and tells the user as much, when it didn't actually fail, and the user is left very confused.
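For illustration only, here's a minimal sketch (in Python, with a hypothetical endpoint and field names; the real client isn't shown here) of what that client-side change could look like: after the last chunk, poll a status endpoint instead of treating a slow response as failure.

```python
import time

import requests  # assumed HTTP client, for illustration only

# Hypothetical status endpoint -- the real hgresume API may differ.
STATUS_URL = "https://example.org/api/transactions/{tx_id}/status"

def wait_for_server_apply(tx_id: str, timeout_s: int = 900, poll_s: int = 5) -> None:
    """Poll the server until the async job (hg incoming / hg unbundle)
    reaches a final state, instead of assuming a slow last chunk failed."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = requests.get(STATUS_URL.format(tx_id=tx_id), timeout=30).json()["state"]
        if state == "complete":
            return
        if state == "failed":
            raise RuntimeError("server reported the push failed")
        time.sleep(poll_s)  # still running; keep waiting rather than erroring out
    raise TimeoutError("gave up waiting for the server-side apply")
```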
In most cases, you won't have as many media files to deal with initially, although with sound recording available in The Combine it will likely become more of an issue. Once the repo is set up with all of those media files, the subsequent changes will be much smaller (unless someone has to redo the repo from scratch for some reason) and probably won't cause these issues. Are there other ways of initially populating a repo apart from the S/R process? It might be more efficient to have a human with special privileges do this than to fix it. And this discussion brings up the need for a media server that works with Combine, FLEx, LF, etc. :)
If you can, I'd try to work off the docker branch I made; the commit history is kinda a mess atm.
We recently had a push of almost 2GB (surprisingly it only took about 3 min for the client to upload it).
Here's an overview of when things broke:
It failed on the last chunk, because that's when the server actually starts doing the heavy lifting: trying to apply the commit.
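To make that concrete, here's a rough sketch (hypothetical URL and parameters, not the actual hgresume protocol) of the chunked push loop: every chunk except the last is a cheap write, and the final one blocks while the server assembles and applies the bundle.

```python
import requests  # assumed HTTP client, for illustration only

UPLOAD_URL = "https://example.org/api/transactions/some-tx-id/chunk"  # hypothetical
CHUNK_SIZE = 1 << 20  # 1 MiB

def push_bundle(path: str) -> None:
    """Upload a bundle chunk by chunk. Only the last chunk's request is
    slow, because that's when the server assembles the bundle and tries
    to apply it -- which is where the timeout hits."""
    offset = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            requests.post(UPLOAD_URL, params={"offset": offset}, data=chunk, timeout=300)
            offset += len(chunk)
```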
And here's what I think/know happened:
- The server kicks off `hg incoming`, which takes ~3.5m (it gets logged to `/var/cache/hgresume/<transaction-ID>.bundle.incoming.async_run`)
- The request times out rather than waiting for the `hg incoming` command to finish (which does finish, because it's in its own process)
- The server therefore never gets as far as running `hg unbundle` and creating a lockfile for that command (a lockfile is created for `hg incoming`, but that's a separate file)
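As a sketch of the pattern described above (the log path follows the file mentioned earlier; the function name and everything else are my assumptions), the server detaches `hg incoming` into its own process and captures its output in the `.async_run` file, which is why the command finishes even after the request that started it has died:

```python
import subprocess
from pathlib import Path

CACHE_DIR = Path("/var/cache/hgresume")

def start_incoming(tx_id: str, repo_path: str, bundle: Path) -> None:
    """Run hg incoming in its own detached process, logging its output to
    the .async_run file, so it survives the HTTP request that started it."""
    log_path = CACHE_DIR / f"{tx_id}.bundle.incoming.async_run"
    with open(log_path, "wb") as log:
        subprocess.Popen(
            ["hg", "incoming", str(bundle)],
            cwd=repo_path,
            stdout=log,
            stderr=subprocess.STDOUT,
        )  # no wait(): the request handler can return while hg keeps working
```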
Do we want to allow big pushes like this? I think so! So how? We either need to:
I think option 3 sounds like the best bet. Something like:
- Replace the `isComplete` check with something that anticipates this scenario (sketched below):
  - check whether `hg incoming` already ran, and if so:
    - kick off `hg unbundle`
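Here's a rough sketch of that idea, assuming a hypothetical sentinel file for "hg incoming finished" and a made-up lockfile name; the real hgresume state files may look different:

```python
import subprocess
from pathlib import Path

CACHE_DIR = Path("/var/cache/hgresume")

def check_push_progress(tx_id: str, repo_path: str, bundle: Path) -> str:
    """Replacement for the isComplete check: instead of treating a slow
    apply as a failure, look at what has actually happened so far and,
    once hg incoming is done, kick off hg unbundle."""
    incoming_log = CACHE_DIR / f"{tx_id}.bundle.incoming.async_run"
    incoming_done = CACHE_DIR / f"{tx_id}.bundle.incoming.done"   # hypothetical sentinel
    unbundle_lock = CACHE_DIR / f"{tx_id}.bundle.unbundle.lock"   # hypothetical lockfile

    if not incoming_log.exists():
        return "not-started"
    if not incoming_done.exists():
        return "incoming-running"  # hg incoming still going: not a failure
    if not unbundle_lock.exists():
        # hg incoming finished but unbundle was never started -- the exact
        # gap described above -- so start it now.
        unbundle_lock.touch()
        subprocess.Popen(["hg", "unbundle", str(bundle)], cwd=repo_path)
    return "unbundle-running"  # client should keep polling until complete
```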