-
Hello, I have seen issue #2555 and that discussion is related but not quite the same problem. I need to upload a large set of data that I retrieve from an external API in chunks; the data is flexible and I don't know its size in advance. This is why I use upload_fileobj() with the same chunk size in the multipart_chunksize parameter (5 MiB minimum, as the docs say). For the Fileobj parameter I provide an io.BytesIO object with the read() function implemented, and I set seekable() to return False as stated in the issue. The read() function works fine, but once the API has returned everything (e.g. the length of the last chunk is lower than the chunk size) the reading should stop and the file should be sent to S3. Instead it doesn't stop, and I am left with an infinite loop re-reading that last chunk. After some reverse engineering using a local file and io.FileIO, I can see the file is sent to S3 once the length of the chunk received is lower than the chunk size. If anyone has an idea of what I am missing, I would be grateful. Stéphane.
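To make it concrete, here is a simplified sketch of the kind of reader I am describing (the class name and the fetch call are placeholders, not my actual code):

```python
import io

class ApiStream(io.BytesIO):
    """Simplified sketch of the reader described above; names are placeholders."""

    def __init__(self, fetch_chunk):
        super().__init__()
        self._fetch_chunk = fetch_chunk  # pulls the next chunk from the external API

    def seekable(self):
        # Set to False as suggested in issue #2555.
        return False

    def read(self, size=-1):
        # Hands back whatever the API returns for this call. The last chunk is
        # shorter than `size`, but read() never returns b"", so upload_fileobj()
        # keeps calling it and the upload never finishes.
        return self._fetch_chunk(size)
```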
-
Hi @S-Boutot, thanks for reaching out and sorry to hear you're having issues. In order for me to fully understand and reproduce the issue, it would be helpful to see the debug logs (by adding …
-
Since you are working with non-seekable data, you should return b"" in your stop-criteria code:
if self.upload_chunk_number == 3: return b""