
java.lang.IllegalStateException: The service request was not made within 10 seconds of doBlockingWrite being invoked. Make sure to invoke the service request BEFORE invoking doBlockingWrite if your caller is single-threaded. #4893

Open
ramakrishna-g1 opened this issue Feb 6, 2024 · 19 comments
Labels
bug This issue is a bug. p2 This is a standard priority issue

Comments

@ramakrishna-g1

Describe the bug

The service request was not made within 10 seconds of doBlockingWrite being invoked. Make sure to invoke the service request BEFORE invoking doBlockingWrite if your caller is single-threaded.
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.waitForSubscriptionIfNeeded(BlockingInputStreamAsyncRequestBody.java:110) ~[sdk-core-2.22.2.jar!/:na]
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.writeInputStream(BlockingInputStreamAsyncRequestBody.java:74) ~[sdk-core-2.22.2.jar!/:na]

Expected Behavior

We are experiencing these failures very often, even after using the latest AWS CRT client ("aws-crt-client") 2.23.12.
We expect this to wait for a longer time, or to have an option to increase the timeout, which would be helpful when there is a huge amount of data with large files.

Current Behavior

We are trying to stream a large number of files from a source system to AWS S3 using the Transfer Manager, reading the stream with HttpURLConnection. Below is sample code:

URL targetURL = new URL("URL");
HttpURLConnection urlConnection = (HttpURLConnection) targetURL.openConnection();
urlConnection.setRequestMethod(HttpMethod.GET.toString());
urlConnection.setRequestProperty(HttpHeaders.ACCEPT, MediaType.ALL_VALUE);

if (urlConnection.getResponseCode() == HttpStatus.OK.value()) {
    BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null);

    Upload upload = transferManager.upload(builder -> builder
            .requestBody(body)
            .addTransferListener(UploadProcessListener.create(fileTracker.getPath()))
            .putObjectRequest(req -> req.bucket(s3BucketName).key("v3/" + s3Key + "/" + fileTracker.getPath()))
            .build());

    long totalBytes = body.writeInputStream(urlConnection.getInputStream());
}
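One note on the snippet above: it never waits for the upload itself to complete. A sketch of the missing tail, using the upload variable from the snippet (illustrative only; CompletedUpload is the Transfer Manager's completed-upload type):

    // After body.writeInputStream(...) returns, block until the transfer finishes
    // so that upload failures are surfaced here instead of being lost.
    CompletedUpload completed = upload.completionFuture().join();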

Reproduction Steps

Same code as shown under Current Behavior above.

Possible Solution

No response

Additional Information/Context

Last week I created a ticket (awslabs/aws-crt-java#754) under aws-crt-java; as per the suggestion in the comments there, I am creating this ticket here.

AWS Java SDK version used

2.23.12

JDK version used

11

Operating System and version

Windows / Linux

@debora-ito
Member

Hi @ramakrishna-g1, apologies for the silence.

We identified an issue with multipart uploads using BlockingInputStream where the client enters a bad state and doesn't recover from it. We are working on a fix.

We'll also consider creating a timeout configuration so this default value can be customized.

We'll keep this issue updated with the progress of the fix.

@debora-ito debora-ito added p2 This is a standard priority issue and removed needs-triage This issue or PR still needs to be triaged. labels Mar 4, 2024
@benarnao
Contributor

benarnao commented Mar 13, 2024

Would this apply when using S3AsyncClient, e.g. S3AsyncClient.crtBuilder().build()?

I am running into a similar issue

Also, any ETA on a fix? Thanks.

@nredhefferprovidertrust

nredhefferprovidertrust commented Mar 13, 2024

Running into this issue with BlockingInputStreamAsyncRequestBody instead of the Output body.

Default S3Async setup and creds.

BlockingInputStreamAsyncRequestBody body =
    AsyncRequestBody.forBlockingInputStream(null); // 'null' indicates a stream will be provided later.

CompletableFuture<PutObjectResponse> responseFuture =
    _s3AsyncClient.putObject(r -> r.bucket(bucketName).key(key), body);
body.writeInputStream(inputStream); // <- fails here
return responseFuture.get();

@zoewangg
Contributor

Hey all, we've exposed an option that allows users to configure subscribeTimeout via #5000. Could you try it out?

BlockingOutputStreamAsyncRequestBody.builder()
                                    .contentLength(1024L)
                                    .subscribeTimeout(Duration.ofSeconds(30))
                                    .build();
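For reference, a minimal sketch of how a body configured this way might be wired into a request (s3AsyncClient, bucket, and key are placeholder names; this assumes an SDK version where subscribeTimeout is available):

    BlockingOutputStreamAsyncRequestBody body =
        BlockingOutputStreamAsyncRequestBody.builder()
                                            .contentLength(1024L)
                                            .subscribeTimeout(Duration.ofSeconds(30))
                                            .build();

    // Start the request first, then write; the body now waits up to 30 seconds
    // for the service request to subscribe before timing out.
    CompletableFuture<PutObjectResponse> future =
        s3AsyncClient.putObject(r -> r.bucket(bucket).key(key), body);
    try (OutputStream out = body.outputStream()) {
        out.write(new byte[1024]);
    }
    future.join();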

@vswamy3

vswamy3 commented Mar 14, 2024

In which version is the fix available?

@nredhefferprovidertrust

In which version is the fix available?

2.25.8

@nredhefferprovidertrust

This issue describes the timeout problem in the BlockingInputStreamAsyncRequestBody, but the change made by #5000 adds the configuration option to BlockingOutputStreamAsyncRequestBody. Is a similar configuration option going to be exposed for the BlockingInputStreamAsyncRequestBody as well?

@zoewangg
Contributor

Hi @nredhefferprovidertrust, #4893 was created to add the same config for BlockingInputStreamAsyncRequestBody.

@vswamy3

vswamy3 commented Mar 15, 2024

What is the best fail-safe value for .subscribeTimeout() in a PROD environment where we are uploading thousands of messages per minute?

@mohithm2

Hi @zoewangg, I see that you have provided an option to extend the timeout, which is good. But it still doesn't solve the original issue of the client going into an unhealthy state.

So is there going to be a fix for that?

@debora-ito
Member

Yes, we have a task in our backlog to fix the issue. No ETA to share at the moment.

@tcerda95

tcerda95 commented Jul 5, 2024

Hello! Are there any updates on this issue, or any recommendations on how to avoid it?

I am using the CRT client along with BlockingInputStreamAsyncRequestBody, passing the file size and with multipart uploads disabled. Although disabling multipart uploads reduced the occurrence of this, it is still happening.

I notice it happens when S3 starts returning backoff and throttling responses. The client enters an unrecoverable bad state, and everything after that throws the timeout error described by the OP.

For now, I replaced the implementation with AsyncRequestBody.fromInputStream and there are no more errors:

AsyncRequestBody.fromInputStream(inputStream, fileSize, executorService)

This, however, requires a separate executor service, which does not make much sense for us, as we will just block the calling thread while uploading the file anyway.


We are using S3TransferManager. This is the whole code:

    private void copyFile(URL source, String destination, long fileSize) {
        try (InputStream inputStream = urlStreamReader.read(source)) {
            BlockingInputStreamAsyncRequestBody requestBody = 
                 AsyncRequestBody.forBlockingInputStream(fileSize);

            Upload upload = transferManager.upload(
                    UploadRequest.builder()
                            .putObjectRequest(PutObjectRequest.builder()
                                    .bucket(s3BucketName)
                                    .expectedBucketOwner(s3BucketOwner)
                                    .key(destination)
                                    .build())
                            .requestBody(requestBody)
                            .build());

            // Blocks calling thread
            requestBody.writeInputStream(inputStream);

            upload.completionFuture().join();
        }
    }
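For comparison, the same method with the fromInputStream workaround looks roughly like this (a sketch; the single-thread executor and the method name are just illustrations, the other names come from the code above):

    private void copyFileWithWorkaround(URL source, String destination, long fileSize) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try (InputStream inputStream = urlStreamReader.read(source)) {
            // fromInputStream reads the stream on the supplied executor, so there is
            // no subscribe timeout to hit on the calling thread.
            AsyncRequestBody requestBody =
                 AsyncRequestBody.fromInputStream(inputStream, fileSize, executor);

            Upload upload = transferManager.upload(
                    UploadRequest.builder()
                            .putObjectRequest(PutObjectRequest.builder()
                                    .bucket(s3BucketName)
                                    .expectedBucketOwner(s3BucketOwner)
                                    .key(destination)
                                    .build())
                            .requestBody(requestBody)
                            .build());

            upload.completionFuture().join();
        } finally {
            executor.shutdown();
        }
    }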

@varianytsia

Hi @debora-ito, are there any updates on the fix for BlockingInputStreamAsyncRequestBody?
If not, are there any workarounds for this issue, maybe by using a TransferManager created without an S3AsyncClient (if that is possible at all)?

@varianytsia

varianytsia commented Aug 16, 2024

I was able to bypass the issue using this approach (Scala):

val body: BlockingInputStreamAsyncRequestBody = BlockingInputStreamAsyncRequestBody.builder().subscribeTimeout(Duration.ofSeconds(30)).build()
val upload: Upload = transferManager.upload(UploadRequest.builder().requestBody(body).putObjectRequest(request).build())
body.writeInputStream(inputStream)
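The same workaround in Java, for anyone following along (a sketch; request, inputStream, and transferManager are assumed to already exist, and the final join is added here to wait for the transfer):

    BlockingInputStreamAsyncRequestBody body =
        BlockingInputStreamAsyncRequestBody.builder()
                                           .subscribeTimeout(Duration.ofSeconds(30))
                                           .build();
    Upload upload = transferManager.upload(UploadRequest.builder()
            .requestBody(body)
            .putObjectRequest(request)
            .build());
    body.writeInputStream(inputStream);
    upload.completionFuture().join();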

@javafanboy

javafanboy commented Oct 4, 2024

I also get this exception seemingly "randomly" with code that looks like this (using the CRT client, which as I understand it is required for streaming with unknown size):

            final BlockingInputStreamAsyncRequestBody body =
                    AsyncRequestBody.forBlockingInputStream(null);
            this.upload = s3TransferManager.upload(builder -> builder
                    .requestBody(body)
                    .putObjectRequest(req -> req.bucket(bucketName).key(path))
                    .build());
            // Start the streaming run by a separate thread...
            this.streamingThread = ThreadUtils.createAndStartDeamonThread(() -> {
                // Transfer data from the piped stream pair to the SDK stream that uploads the data to S3...
                body.writeInputStream(pipedInputStream); // Exception thrown here
            });

and I would really like to find a workaround, as this is too fragile to use in production...

For me, specifying the size in advance is not an option (the size being unknown is the whole point of using streaming, as the data is generated over time and the total data size may be far larger than I could hold in memory).

@xwh1108

xwh1108 commented Oct 9, 2024

(Quoting @tcerda95's comment above about the AsyncRequestBody.fromInputStream workaround and the S3TransferManager code.)

I used your method, and it still has this problem.

@xwh1108

xwh1108 commented Oct 9, 2024

Yes, we have a task in our backlog to fix the issue. No ETA to share at the moment.

Is this still in progress?

@javafanboy

Yes, we have a task in our backlog to fix the issue. No ETA to share at the moment.

Is this still in progress?

Unless my code pasted above has some error, it seems to me there is still an intermittent problem with the SDK's implementation of S3 streaming...

@JHBaik

JHBaik commented Oct 21, 2024

It seems that when an S3 API exception such as a 403 happens, the async client never tries to consume (subscribe to) the BlockingInputStreamAsyncRequestBody.
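If that is what is happening, one way to see the underlying error instead of only the timeout is to check the request's future when writeInputStream fails. A sketch, reusing the body and responseFuture variables from the earlier putObject snippet in this thread:

    try {
        body.writeInputStream(inputStream);
    } catch (RuntimeException e) {
        // If the request failed early (e.g. a 403), the body was never subscribed to,
        // so surface the real cause from the response future rather than the timeout.
        if (responseFuture.isCompletedExceptionally()) {
            responseFuture.join(); // throws a CompletionException wrapping the S3 error
        }
        throw e;
    }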
