Upload is 0 KB; does the API need the continue header configured on the server? #20
@CNBroderick What is the issue you see here? I see code, but no description of the problem or of what is expected to happen.
After uploading a file by running this code, Nextcloud shows that 1.txt has a size of 0 KB, and the file cannot be opened.
Hey there, I overloaded the …
Yes, please send a merge request; I will then review it.
@TobiWineKenobi What works in your environment: continue = false, or continue = true?
Oh, sorry, I forgot. We need to set the continue header to false; then it works for us.
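For context, a minimal sketch of what that setting looks like, assuming the "continue header" maps to the `expectContinue` parameter of Sardine's `put` overload (the URL and credentials are placeholders):

```java
import com.github.sardine.Sardine;
import com.github.sardine.SardineFactory;

import java.io.FileInputStream;
import java.io.InputStream;

public class SardinePutExample {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials and URL; adjust for your Nextcloud instance.
        Sardine sardine = SardineFactory.begin("user", "password");
        try (InputStream in = new FileInputStream("1.txt")) {
            // expectContinue = false disables the "Expect: 100-continue"
            // handshake, which is the fix described in this comment.
            sardine.put("https://cloud.example.com/remote.php/dav/files/user/1.txt",
                        in, "text/plain", false);
        }
    }
}
```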
I have merged your patch request.
Thanks for your fast response.
I am seeing the same effect using Sardine 5.9 and release 11.2 with Nextcloud 18.04. The upload methods return without an exception, both with the header parameter set to true and to false, but the uploaded file is 0 KB in both cases. Is there any other information that could help me track down the issue, @a-schild?
Just for the record, it seems to be related to chunking, as @TobiWineKenobi already mentioned. I found quite a few users having this problem, and while there are workarounds by means of configuration for NGINX, there is nothing for Apache. Here is an alternative upload using the Apache Commons HTTP client and Jackrabbit WebDAV:
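A minimal sketch of such an upload, assuming Jackrabbit's HttpClient 3.x-based client methods (`PutMethod` with `InputStreamRequestEntity`); the server URL and credentials are placeholders:

```java
import java.io.File;
import java.io.FileInputStream;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.InputStreamRequestEntity;
import org.apache.commons.httpclient.methods.RequestEntity;
import org.apache.jackrabbit.webdav.client.methods.PutMethod;

public class JackrabbitUploadExample {
    public static void main(String[] args) throws Exception {
        File source = new File("1.txt");

        HttpClient client = new HttpClient();
        client.getState().setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("user", "password"));

        PutMethod put = new PutMethod(
                "https://cloud.example.com/remote.php/dav/files/user/1.txt");
        // Passing the file length up front makes the request non-chunked:
        // Content-Length is set instead of Transfer-Encoding: chunked.
        RequestEntity entity = new InputStreamRequestEntity(
                new FileInputStream(source), source.length(),
                "application/octet-stream");
        put.setRequestEntity(entity);
        try {
            int status = client.executeMethod(put);
            System.out.println("HTTP status: " + status);
        } finally {
            put.releaseConnection();
        }
    }
}
```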
So far, this seems to run fine with small and large files, independent of the web server serving Nextcloud. Would it make sense to make it the default for file uploads in nextcloud-java-api?
If you can give me a non-working configuration of a web server, we could do real tests with it. I don't wish to integrate yet another WebDAV library, so fixing the current one is more important for me.
I totally agree.
Yes, I have tried all combinations and they make no difference for me. I verified this behaviour using two self-hosted Nextcloud instances and one Nextcloud instance provided by a hosting company. All show the same issue.
What specifically do you need? I can set up an account for you on my instance and send you the credentials: is that what you mean? I am going to send the credentials to your email address ae@s.ws. (this is not a regex, just trying not to put your address here :-) )
And one more thing: I wonder if your workaround would also work with a normal InputStream and not a FileInputStream.
@a-schild thanks for looking into this. Yes, I agree it seems like a server problem, because the nextcloud-java-api implementation seems to work for some and not for others. I think having a client-side workaround would help more users and would be less invasive, but I also totally understand that adding additional libraries (that even serve the same purpose) is not ideal. The only valid way forward might be to really understand / track down what the workaround I posted above does differently, maybe by analyzing the exchanged request / response headers. @a-schild, what exactly do you mean by "normal InputStream"? I just tried the workaround with a ByteArrayInputStream instead of a FileInputStream, with both a small 5 KB file and a larger 22 MB file. Both were uploaded fine.
The Jackrabbit library uses non-chunked uploads with a known length. I am not sure where the length comes from, since an InputStream can be of "indefinite" length.
I see. So, I think it's just "wait till the Nextcloud guys solve the problem, and leave nextcloud-java-api unchanged" for now, to not clutter it with competing / similar libraries. Anyone with an upload requirement can use the workaround. It's just that the library has an API with a somewhat unpredictable behaviour.
The problem is not on the Nextcloud side, but rather in the webserver <-> PHP connection. The only possible fix on the server side would be to configure the fpm/fcgi connection so it buffers all chunks until it has all parts. The workaround you mention also does not work in all cases: as soon as you have a streaming input source, the data gets sent as chunks and we have the same problem as with Sardine. In the Apache HttpClient implementation there is a workaround using a retryable PUT, but in that case the InputStream must be readable multiple times...
I think this could be documented as a "known limitation", and users can deal with it, e.g. by fully retrieving the source and then doing the upload. Not ideal, but workable. My knowledge of Sardine is limited; I might dig into the sources to figure out whether they have some conditional logic for deciding between chunked and non-chunked PUTs. If that were configurable / controllable by a parameter, I'd vote for not doing chunking at all, which would make the workaround (and the need for additional libraries) unnecessary.
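For illustration, a minimal sketch of the "fully retrieve, then upload" approach: buffer the stream to a temporary file so the length is known and the PUT can be sent non-chunked (the helper name is hypothetical):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class BufferedUploadExample {

    // Copy an arbitrary (possibly unbounded) stream to a temp file first,
    // so the subsequent upload can use a known Content-Length.
    static Path bufferToTempFile(InputStream in) throws Exception {
        Path tmp = Files.createTempFile("upload-", ".bin");
        Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        return tmp;
    }

    public static void main(String[] args) throws Exception {
        // Example: buffer stdin, then report the now-known length.
        Path tmp = bufferToTempFile(System.in);
        System.out.println("Buffered " + Files.size(tmp) + " bytes to " + tmp);
    }
}
```

The temp file's length can then be handed to whichever client performs the PUT, avoiding chunked transfer encoding entirely.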
I have added a method to upload a … You can use the 11.3.0-SNAPSHOT to test this.
@a-schild thanks for the change. I locally built 11.3.0-SNAPSHOT and ran some tests, basically uploading files of various sizes and taking MD5 checksums locally and on the server once they were uploaded. All of the uploads were successful, and the checksums always matched. The largest file I tried was around 900 MB. Is there a need for me to really try 2 GB? No issue with doing it, but it will take a while given my bandwidth. Thanks for all your support! Is there a projected date for the 11.3.0 release?
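For reference, the local half of that checksum comparison needs only the JDK; a minimal sketch (the file name is a placeholder):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.DigestInputStream;
import java.security.MessageDigest;

public class Md5Check {
    public static void main(String[] args) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        // Read the whole file through a digesting stream, discarding the bytes.
        try (InputStream in = new DigestInputStream(
                Files.newInputStream(Path.of("bigfile.bin")), md5)) {
            in.transferTo(OutputStream.nullOutputStream());
        }
        // Print the digest as lowercase hex, comparable to md5sum output.
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex);
    }
}
```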
The 11.3.0 release deprecates the …
Upload results in 0 KB. Following is my test code; can anyone help me?
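A minimal sketch of an upload of this kind with nextcloud-java-api, assuming its `NextcloudConnector` class and `uploadFile(InputStream, String)` method (host name and credentials are placeholders):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.aarboard.nextcloud.api.NextcloudConnector;

public class UploadTest {
    public static void main(String[] args) throws Exception {
        // Placeholder host and credentials; adjust for your instance.
        NextcloudConnector nc = new NextcloudConnector(
                "cloud.example.com", true, 443, "user", "password");
        try (InputStream in = new ByteArrayInputStream(
                "hello nextcloud".getBytes(StandardCharsets.UTF_8))) {
            // InputStream-based upload: the code path reported to
            // produce 0 KB files on some server configurations.
            nc.uploadFile(in, "/1.txt");
        }
    }
}
```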
Uploaded websites:
Shared problem link: https://xn--9kq.xn--fx1at73c.com/index.php/s/2KbrqgBK4peKA8t