Decide on HTTP chunked-data transfer protocol #290
You forgot the one used by ownCloud ;-) ("chunking NG") |
@guruz I've updated the issue description with the two protocols that I remember, thanks for the heads up! |
@labkode https://github.com/cernbox/smashbox/blob/master/protocol/chunking.md is chunking NG. the old chunking is https://github.com/owncloud/core/wiki/spec:-big-file-chunking and not worth putting in your list :-) |
@guruz links updated :) we still use the old chunking ... and we definitely want to get rid of it with whatever we choose for the new APIs |
Yes, I have a strong opinion which one we should use. |
The current data service will go away once we decide on the protocol; it is there for the MVP and it does the job: files can be uploaded and downloaded, and the current desktop sync client protocol is implemented in its own service. Once we decide on the direction we want to go, we'll change it, because as you mentioned it is not the most efficient way to transfer files and is not resumable.

tus.io and ownCloud rely solely on HTTP headers, while the rest rely on some encoded payload. I've done a quick reading of tus.io and I like that it is well documented, the intention is nice, it comes with nice examples and with extension support. It also offers data integrity via an extension, however to be compatible with that extension you need to support a minimum feature set. I've also found some major drawbacks compared with other protocols:

For non-chunked uploads (not too much of a drawback): you need a POST to create the upload resource plus a PATCH to send the data, instead of a single request.
For chunked and non-parallel uploads (not too much of a drawback): after the initial creation POST, the chunks are sent as sequential PATCH requests to the same upload resource.
For chunked and parallel uploads (major drawback): this is where the protocol is really inefficient and major consideration is needed for the sync client. For every single chunk, you need to create a new partial upload with a POST request and then send its data with a PATCH request.
Once all the chunks have been uploaded, the client (which has to remember the URLs of all the previous chunks) needs to send a final POST that concatenates the chunks via the Upload-Concat header of the concatenation extension, as sketched below.
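To make the request overhead concrete, here is a minimal sketch of that flow in Go, assuming a hypothetical tus endpoint at https://example.org/files/; the headers (Tus-Resumable, Upload-Concat, Upload-Length, Upload-Offset) are the ones defined by the tus core protocol and its concatenation extension:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"strings"
)

const endpoint = "https://example.org/files/" // hypothetical tus creation URL

// uploadPartial creates one partial upload (POST) and sends its data (PATCH).
// Under the concatenation extension every chunk costs these two requests.
func uploadPartial(c *http.Client, chunk []byte) (string, error) {
	req, _ := http.NewRequest("POST", endpoint, nil)
	req.Header.Set("Tus-Resumable", "1.0.0")
	req.Header.Set("Upload-Concat", "partial")
	req.Header.Set("Upload-Length", fmt.Sprint(len(chunk)))
	res, err := c.Do(req)
	if err != nil {
		return "", err
	}
	res.Body.Close()
	loc := res.Header.Get("Location") // URL of the new partial upload

	patch, _ := http.NewRequest("PATCH", loc, bytes.NewReader(chunk))
	patch.Header.Set("Tus-Resumable", "1.0.0")
	patch.Header.Set("Upload-Offset", "0")
	patch.Header.Set("Content-Type", "application/offset+octet-stream")
	res, err = c.Do(patch)
	if err != nil {
		return "", err
	}
	res.Body.Close()
	return loc, nil
}

func main() {
	c := &http.Client{}
	var urls []string
	for _, chunk := range [][]byte{[]byte("aaaa"), []byte("bbbb")} {
		u, err := uploadPartial(c, chunk) // in practice these run in parallel
		if err != nil {
			panic(err)
		}
		urls = append(urls, u)
	}
	// Final POST: the client must remember every partial URL and send them,
	// in order, in the Upload-Concat header to assemble the file.
	final, _ := http.NewRequest("POST", endpoint, nil)
	final.Header.Set("Tus-Resumable", "1.0.0")
	final.Header.Set("Upload-Concat", "final;"+strings.Join(urls, " "))
	res, err := c.Do(final)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println("assembled upload at:", res.Header.Get("Location"))
}
```

Every chunk costs two round trips before the final concatenation request, which is the overhead criticised here.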
Some side notes: if a URL for a chunk looks like this:

It would be nice if more people could take a look at the protocol and give their opinion @guruz @moscicki @dragotin @diocas @glpatcern |
It seems the tus.io page has not been updated with the latest protocol spec: https://github.com/tus/tus-resumable-upload-protocol/blob/master/protocol.md
The creation-with-upload extension allows sending a body with the initial POST. That being said, we currently have a

Regarding `termination`: As above, |
Not quite, from the spec:
The Response of the
As above, the
This has been discussed in the original issue. Think different: you don't divide the file into blocks of equal size, but instead decide how many parallel uploads you want to execute. The chunk size is not limited and you can resume each chunk. That is why the authors did not see the use case for thousands of chunks. It is currently not allowed, but we can easily propose a specification that accepts a list of chunks, either in the body, or inside a chunk? The protocol is open and we can extend it if necessary.

For our use case (syncing many files) I think concatenation is not a high priority. We can fill the bandwidth with parallel upload requests for other files. The client needs to try to detect stalled uploads and use HEAD+PATCH to start where it left off anyway (see the sketch after this comment); whether we do that with one chunk or multiple is an optimization (one we should use, of course).

I do acknowledge that the concatenation of hundreds of chunks does not allow us to map the oc "chunking NG" algorithm 'nicely', because we have to send chunks in the correct order to accommodate different chunk sizes, which means we may have to store a few chunks. However, AFAICT the client does not send multiple chunks in parallel, so this is a corner case for people that implemented their own client. A way to solve this on the protocol level would be a new
allows:
The current

Or allowing the
The beauty is that we can specify this part as an extension in their repo and maybe get feedback from @Acconut, @kvz or @evert who already gave comments on the protocol ;-) |
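To illustrate the HEAD+PATCH resume cycle described above, a minimal sketch in Go; the upload URL is hypothetical, the headers are from the core tus protocol:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"strconv"
)

// resume asks the server how many bytes it already has (HEAD, Upload-Offset)
// and then PATCHes the remainder of data from that offset.
func resume(c *http.Client, uploadURL string, data []byte) error {
	head, _ := http.NewRequest("HEAD", uploadURL, nil)
	head.Header.Set("Tus-Resumable", "1.0.0")
	res, err := c.Do(head)
	if err != nil {
		return err
	}
	res.Body.Close()
	offset, err := strconv.Atoi(res.Header.Get("Upload-Offset"))
	if err != nil {
		return fmt.Errorf("bad Upload-Offset: %w", err)
	}

	patch, _ := http.NewRequest("PATCH", uploadURL, bytes.NewReader(data[offset:]))
	patch.Header.Set("Tus-Resumable", "1.0.0")
	patch.Header.Set("Upload-Offset", strconv.Itoa(offset))
	patch.Header.Set("Content-Type", "application/offset+octet-stream")
	res, err = c.Do(patch)
	if err != nil {
		return err
	}
	res.Body.Close()
	return nil
}

func main() {
	// hypothetical upload resource created earlier with a POST
	err := resume(&http.Client{}, "https://example.org/files/24e533e", []byte("file contents"))
	fmt.Println("resume result:", err)
}
```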
Tus is most advanced and our best option. Good research! |
If you need help with tus, we are happy to help you at https://tus.io/support.html! |
All of the protocols discussed here so far assume that we want to assemble chunks on the server, so that the storage eventually represents the end-user file as a storage file. This (understandably) leads to complications with temp files and whatnot when range-PUT or PATCH requests are not supported by the storage. I'd like to consider an alternative: let's assume we store chunks as individual chunk files on the storage.
Rough sketch here, please let me know what I am missing. |
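Purely as an illustration of the idea, a file could be described by a manifest of content-addressed chunk files; a minimal Go sketch, with all names and layout hypothetical:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// ChunkRef points at an immutable chunk file on the storage, addressed by
// the hash of its content (which also gives deduplication for free).
type ChunkRef struct {
	Hash string // e.g. stored under "chunks/<hash>" on disk
	Size int64
}

// Manifest is the per-file metadata: an ordered list of chunk references.
// Chunk files are never rewritten; a new file version is a new manifest.
type Manifest struct {
	Path   string
	Chunks []ChunkRef
}

func main() {
	data := [][]byte{[]byte("first chunk"), []byte("second chunk")}
	m := Manifest{Path: "/home/alice/big.iso"}
	for _, c := range data {
		h := sha256.Sum256(c)
		m.Chunks = append(m.Chunks, ChunkRef{Hash: hex.EncodeToString(h[:]), Size: int64(len(c))})
	}
	fmt.Printf("%+v\n", m)
}
```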
@jnweiger Something I long for, actually. Another benefit you did not mention explicitly is the Write Once Read Many (WORM) and deduplication property of chunk-based storage. It comes at the cost of maintaining the file-to-chunk relationship, which, if you think about it, ends up implementing a journaling file system, just with block sizes in the range of tens or hundreds of megabytes instead of kilobytes.

The main reason for not pursuing that direction was the fragility of the old PHP / request-driven architecture. Introducing a chunk-based storage implementation is possible there, but messing with the way the chunks are passed through the stack to leave them intact, and implementing the chunk-to-file relationships on top of the old filecache ... is alienating. Furthermore, if you break up files into individual chunks you lose the comparatively easy recoverability: with chunk-based storage you need to back up the file-to-chunk relationship, which is now metadata in addition to the bytes on disk.

I totally agree that when we can assume total control over the storage there are better / more flexible / interesting ways of storing bytes there than just as a file. However, we currently need to be able to trigger a workflow when a file has been uploaded, e.g. to encrypt it, scan it for viruses or send it to another storage via whatever API. For that we need to have a quarantine or ingress area where files that are processed / filtered reside before allowing them to move to the actual storage. Sometimes there are requirements that a file has to physically be kept on a different storage or even in RAM until it has been accepted by whatever workflow, sometimes even a manual one. That is one of the reasons why the CS3 API delegates file up- and download to an external API. Which brings me back to tus.io. It allows us to do both:
The other reason is that we want to use all storage capabilities as well as possible. That allows us to use the WORM property of certain S3 implementations or other hardware appliances that internally store the files using chunks on disk. Two things to keep in mind:
EOS has an API that allows range-based PUT requests: http://eos-docs.web.cern.ch/eos-docs/restapi/putrange.html which we can use to stream uploads that arrive via tus (a rough sketch follows this comment).

To summarize, I'd love to implement a storage with WORM and deduplication that is then mounted via FUSE ... but the main goal for this issue and CS3 is to agree on a protocol that is flexible enough to handle different storage backends, yet easy to implement. I think tus.io is best suited for that task: not only does it already support all the requirements we bolted on top of the ownCloud-style WebDAV uploads, like checksums or chunking, it also supports resuming, and extensions are built into the protocol. Based on the underlying storage technology we could actually expose different storage capabilities as tus extensions, something which fits the requirement of integrating whatever storage technology is used very well. I hope that clarifies my reasoning. |
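A minimal sketch of that streaming idea in Go: the body of an incoming tus PATCH is streamed straight to the offset given in Upload-Offset, without buffering the whole chunk. A local file's WriteAt stands in here for any offset-capable backend such as EOS's range-based PUT; the handler layout and file path are hypothetical:

```go
package main

import (
	"io"
	"net/http"
	"os"
	"strconv"
)

// patchHandler streams a tus PATCH body to the requested offset.
func patchHandler(w http.ResponseWriter, r *http.Request) {
	offset, err := strconv.ParseInt(r.Header.Get("Upload-Offset"), 10, 64)
	if err != nil {
		http.Error(w, "missing or bad Upload-Offset", http.StatusBadRequest)
		return
	}
	f, err := os.OpenFile("/tmp/upload.bin", os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer f.Close()

	// io.Copy streams without buffering the full chunk in memory.
	n, err := io.Copy(io.NewOffsetWriter(f, offset), r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Tus-Resumable", "1.0.0")
	w.Header().Set("Upload-Offset", strconv.FormatInt(offset+n, 10))
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/files/", patchHandler)
	http.ListenAndServe(":8080", nil)
}
```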
Wow, glad to read all the discussions here. I'm not an expert in chunked-data transfer protocols but with this issue I can, at least, get an overview of all the possible solutions we could implement. I've seen that tus.io is one of the favourite candidates and after researching a bit, I found that it also has an Android client in their official GH repository. I would need to try it out but well, it looks good at first sight. |
I was reading through the whole discussion, and as one of the authors of the Chunking NG proposal I take the freedom to comment here. First, I think tus is a great approach. IMHO tus and Chunking NG share the same basic ideas, which is cool, but tus is way further on the way to becoming a real standard: it has many supporters, people who verified its concepts, wrote code and put it in production. That alone makes it the top candidate, and we should for sure rather contribute than implement our own. I think ownCloud's requirements can mostly be covered with the current feature set of tus. However, there are a few scenarios where I currently can not see a good solution:

**When is the upload finished?**

Think of the following scenario: a client wants to upload a very big file to a place that is on an external storage which is slow.
Now the tus page says in the FAQ that the upload is finished after the last PATCH has succeeded. That is not true for ownCloud, as it might wanna do virus checks on the whole file, move it to the slow external storage, do versioning foo, whatever. All that takes time, and I assume it would happen during the last PATCH request in the logic of tus. That is dangerous, because the client can not display a status during that time, the last PATCH takes an unpredictably longer time than all the other PATCH requests, and finally it might time out, which leaves the upload behind in a bad state. We know that situation in ownCloud.

That is why Chunking NG has the final MOVE at the end, which tus does not have. Between the last PATCH and the final MOVE, the client could regularly send HEAD requests to the upload URL to learn about the processing status on the server side. And once all the processing is done, the client should send the MOVE to make the file finally available. That last MOVE would actually not be needed, but it gives another chance to implement a decision whether the client really wants to finalize the upload.

**Upload as much as you can**

tus says in the FAQ: "Ideally, this PATCH request should contain as much upload content as possible". Yes, of course. But as a client developer I can tell that it is incredibly hard to decide on what the right size is. That depends on so many parameters outside of the client that I believe the client can't decide at runtime (except with super clever algorithms maybe). So it is left to the client developer to make a decision that should work in most circumstances, which can have an unpredictable result. To avoid that, we decided to set the size of the upload chunks to a fixed value (at least back in the days, maybe @ogoffart or @guruz changed that meanwhile). Ideally that size is returned from the server as a configuration value (in OPTIONS in tus).

**Delta Sync**

Delta sync is also something that benefits very much from fixed chunks, because it makes it easier to do what is described in part three of the initial Chunking NG post.

**Let other clients begin early**

Something that is not so much about this part of the protocol, but should still be considered: during an upload of a huge file, other clients should already be able to start downloading the same file, well knowing that it is not yet valid.

**Result**

Again, I think tus is the way to go, but maybe these challenges should be discussed. Maybe @Acconut can comment on how to solve the "when is the upload finished" problem? |
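To make the "final MOVE" idea concrete, a sketch of the client side as I read the Chunking NG proposal, in Go: HEAD and the WebDAV MOVE with its Destination header are given, but the polling header ("X-Processing") and the URLs are made up, since how the server reports processing status is exactly the open question here:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// finalize polls the upload URL until server-side processing (virus scan,
// move to slow storage, ...) is done, then issues the final MOVE that
// makes the file visible to other clients.
func finalize(c *http.Client, uploadURL, destination string) error {
	for {
		head, _ := http.NewRequest("HEAD", uploadURL, nil)
		res, err := c.Do(head)
		if err != nil {
			return err
		}
		res.Body.Close()
		if res.Header.Get("X-Processing") == "" { // hypothetical status header
			break
		}
		time.Sleep(2 * time.Second) // client can show progress in the UI meanwhile
	}
	mv, _ := http.NewRequest("MOVE", uploadURL, nil)
	mv.Header.Set("Destination", destination) // standard WebDAV MOVE target
	res, err := c.Do(mv)
	if err != nil {
		return err
	}
	res.Body.Close()
	fmt.Println("final MOVE status:", res.Status)
	return nil
}

func main() {
	finalize(&http.Client{},
		"https://example.org/uploads/24e533e", // hypothetical upload resource
		"https://example.org/files/alice/big.iso")
}
```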
Latest experiments: owncloud/core#31851, owncloud/core#32576, owncloud/client#6614. Dynamic chunk sizes (up to 100 MB) were introduced with 2.4.0 (owncloud/client#5852) |
Ok, so the AI for dynamic chunk sizes is already there. Of course 😄 |
@dragotin AWESOME input! Thx a lot. Let me try to explain how I understand the points you raised.

**When is the upload finished?**

There are multiple levels of a finished upload:
As @dragotin pointed out: in oc10 we learned that a slow storage might cause the 2nd step to take very long. We might run into timeouts when assembling chunks ... or when running the virus scanning. This is why we added async processing. In this case the clients can poll the server for the upload progress. Having a background process and a stable upload ID makes implementing this a lot clearer. With tus.io we can rely on a dedicated service that can execute arbitrary workflows before making the file available in the storage.

**Upload as much as you can**

The main feature of tus IMO is resumable uploads. The clients can upload as much as possible without having to think about a chunk size first. If the upload fails (because a proxy in between only allows e.g. 40 MB) the client can ask the server how many bytes it has received for the upload. I don't know if the tus.io client libs already take these special cases into account, but it should be easy to detect when bytes don't reach the server and fall back to fixed-size chunking (which is specified as a tus extension); see the sketch below. This should cover all our use cases.

**Delta Sync**

We should specify this as a tus extension. Protocol first. Since tus specifies how PATCH can be used to write to a certain offset, a lot of the necessary HTTP calls should already be there.

**Let other clients begin early**

One thing I'd like to see is the status of an upload in the UI. The upload ID allows us to fetch the progress from the dataprovider service using the tus protocol. The tus spec does not contain any GET requests, so we could contribute an extension that allows clients to download the bytes that are available. However, downloads may have to be deferred until server-side processing has happened, in order to e.g. prevent downloading of malicious content that has not yet been virus-scanned. |
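A minimal sketch of that fallback strategy in Go: try one big PATCH, then use HEAD / Upload-Offset to see what actually arrived, and drop to fixed-size chunks if bytes were lost. The upload URL and fallback size are hypothetical; the headers are core tus:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"strconv"
)

// upload first tries to send everything in a single PATCH; if the server
// reports via HEAD that bytes got lost on the way (e.g. a proxy capped
// the request), it falls back to fixed-size chunks from the last offset.
func upload(c *http.Client, uploadURL string, data []byte, fallbackChunk int) error {
	offset := 0
	chunk := len(data) // first attempt: as much as possible
	for offset < len(data) {
		end := offset + chunk
		if end > len(data) {
			end = len(data)
		}
		req, _ := http.NewRequest("PATCH", uploadURL, bytes.NewReader(data[offset:end]))
		req.Header.Set("Tus-Resumable", "1.0.0")
		req.Header.Set("Upload-Offset", strconv.Itoa(offset))
		req.Header.Set("Content-Type", "application/offset+octet-stream")
		if res, err := c.Do(req); err == nil {
			res.Body.Close()
		}

		// Whatever happened above, ask the server what actually arrived.
		head, _ := http.NewRequest("HEAD", uploadURL, nil)
		head.Header.Set("Tus-Resumable", "1.0.0")
		res, err := c.Do(head)
		if err != nil {
			return err
		}
		res.Body.Close()
		newOffset, err := strconv.Atoi(res.Header.Get("Upload-Offset"))
		if err != nil {
			return fmt.Errorf("bad Upload-Offset: %w", err)
		}
		if newOffset == offset && chunk == fallbackChunk {
			return fmt.Errorf("upload stalled at offset %d", offset)
		}
		if newOffset < end {
			chunk = fallbackChunk // bytes were dropped in between: go smaller
		}
		offset = newOffset
	}
	return nil
}

func main() {
	data := bytes.Repeat([]byte("x"), 1<<20) // 1 MiB of dummy data
	err := upload(&http.Client{}, "https://example.org/files/24e533e", data, 256<<10)
	fmt.Println("upload:", err)
}
```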
That's only working if the proxy did forward the first 40 MB to the ownCloud backend. Some proxies would just buffer the request, and if the buffer is full, or if there is a disconnection or another error, then the request is dropped, the ownCloud server will not know about it, and the client will have to restart from scratch. |
I 100% agree. AFAICT using a single PATCH request might break in various unpredictable ways that would prevent the upload from ever finishing. There actually is a PR for the tus.io protocol to let the server recommend a max chunk size: tus/tus-resumable-upload-protocol#93 With this the server admin can configure a max chunk size which is exposed to a tus client as |
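The client side of that would be tiny; a sketch in Go, where "Tus-Max-Chunk-Size" is a placeholder name since the actual header is whatever the linked PR ends up specifying:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// maxChunkSize asks the server (tus OPTIONS request) for its advertised
// chunk-size limit and falls back to a client default if none is given.
func maxChunkSize(c *http.Client, endpoint string, fallback int64) int64 {
	req, _ := http.NewRequest("OPTIONS", endpoint, nil)
	res, err := c.Do(req)
	if err != nil {
		return fallback
	}
	defer res.Body.Close()
	// hypothetical header, pending the outcome of tus PR #93
	if v, err := strconv.ParseInt(res.Header.Get("Tus-Max-Chunk-Size"), 10, 64); err == nil {
		return v
	}
	return fallback // server does not advertise a limit
}

func main() {
	size := maxChunkSize(&http.Client{}, "https://example.org/files/", 10<<20)
	fmt.Println("using chunk size:", size)
}
```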
@ogoffart would it help to put a timeout on the answer of the first PATCH (or actually on all...), so that if it is not answered properly within that time, the upload is considered bogus and indeed started from scratch? |
@dragotin @Hugo_Gonzalez_Labrador I think we can close this? I could add the resumable upload specified by msgraph, but we currently have tus.io implemented, so that would be an enhancement. |
Yes, the decision is that we use tus. |
Aaah, ok, maybe not my call ;-) |
Compare different chunking data transfer protocols and choose the one that everyone should implement.
The protocol needs to allow uploading data in chunks (files are split into small chunks) and be resumable: if the transfer is cut, the client should be able to discover which chunks are still missing and upload them.
For the latter, should the client remember the chunks that it has already uploaded, or should it ask the server for the list? Different protocols handle this differently (a toy illustration of the client-side model follows).
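Purely to contrast the two models, a minimal Go sketch of the client-remembers approach; everything here is hypothetical, and tus takes the opposite ask-the-server route, with a single byte offset instead of a chunk list:

```go
package main

import "fmt"

// memLedger is a toy client-side ledger of acknowledged chunks. The
// alternative model keeps no local state and asks the server instead,
// at the cost of an extra request before resuming.
type memLedger map[int]bool

func (m memLedger) MarkUploaded(chunk int) { m[chunk] = true }

// Missing returns the chunks that still have to be uploaded after a cut.
func (m memLedger) Missing(total int) []int {
	var out []int
	for i := 0; i < total; i++ {
		if !m[i] {
			out = append(out, i)
		}
	}
	return out
}

func main() {
	l := memLedger{}
	l.MarkUploaded(0)
	l.MarkUploaded(2)
	fmt.Println("still missing:", l.Missing(4)) // [1 3]
}
```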
Proposed protocols: