
IO Exception when listing s3 objects in bucket #1214

Closed
Sebo11 opened this issue Feb 12, 2024 · 31 comments · Fixed by smithy-lang/smithy-kotlin#1171
Labels
bug This issue is a bug. p2 This is a standard priority issue

Comments

@Sebo11

Sebo11 commented Feb 12, 2024

Describe the bug

When executing the code below, the call to S3 randomly fails with an IO Exception (the exception is inserted below). I cannot put a precise number on it, but I would guess this happens once every 20 requests or so. It can be "fixed" by raising the number of attempts above 5, because when this happens the call always fails exactly 5 times, with the 6th attempt somehow going through.

I cannot explain why this occurs, nor why it always fails 5 times with the 6th attempt going through.

import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.model.ListObjectsV2Request
import aws.sdk.kotlin.services.s3.model.Object
import io.micronaut.context.annotation.Property
import jakarta.inject.Singleton
import java.util.UUID

@Singleton
class Service(private val s3: S3Client) : Logger {
    @Property(name = "bucket-name")
    private lateinit var awsbucket: String

    suspend fun listObjects(uuid: UUID): List<Object>? {
        val request = ListObjectsV2Request {
            bucket = awsbucket
            prefix = "documents/$uuid"
        }
        return s3.listObjectsV2(request).contents
    }
}

with this factory

import aws.sdk.kotlin.services.s3.S3Client
import io.micronaut.context.annotation.Factory
import io.micronaut.context.annotation.Property
import jakarta.inject.Singleton

@Factory
class S3Factory {
    @Singleton
    fun getS3Client(): S3Client = S3Client {
        region = "eu-central-1"
    }
}
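
For reference, the workaround of allowing more attempts is configured through the client's retry strategy; a minimal sketch (the function name and the limit of 10 are illustrative):

import aws.sdk.kotlin.services.s3.S3Client

// Sketch: more than 5 attempts works around the failure described above.
fun getS3ClientWithMoreAttempts(): S3Client = S3Client {
    region = "eu-central-1"
    retryStrategy {
        maxAttempts = 10
    }
}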

The resulting exception:

15:41:51.721 [DefaultDispatcher-worker-17] ERROR i.m.http.server.RouteExecutor - Unexpected error occurred: java.io.IOException: unexpected end of stream on https://example-bucket.s3.eu-central-1.amazonaws.com/...
aws.smithy.kotlin.runtime.http.HttpException: java.io.IOException: unexpected end of stream on https://example-bucket.s3.eu-central-1.amazonaws.com/...
        at aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine.roundTrip(OkHttpEngine.kt:158)
        at aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine$roundTrip$1.invokeSuspend(OkHttpEngine.kt)
        at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
        at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
        at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
Caused by: java.io.IOException: unexpected end of stream on https://example-bucket.s3.eu-central-1.amazonaws.com/...
        at okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:209)
        at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:111)
        at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:95)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:84)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:65)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at aws.smithy.kotlin.runtime.http.engine.okhttp.MetricsInterceptor.intercept(MetricsInterceptor.kt:30)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:205)
        at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:537)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.io.EOFException: \n not found: limit=0 content=
        at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:335)
        at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29)
        at okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:179)
        ... 18 common frames omitted

Expected behavior

The request should not fail randomly exactly 5 times whenever this error occurs.

Current behavior

As explained above, this is the error that is thrown:

15:41:51.721 [DefaultDispatcher-worker-17] ERROR i.m.http.server.RouteExecutor - Unexpected error occurred: java.io.IOException: unexpected end of stream on https://example-bucket.s3.eu-central-1.amazonaws.com/...
aws.smithy.kotlin.runtime.http.HttpException: java.io.IOException: unexpected end of stream on https://example-bucket.s3.eu-central-1.amazonaws.com/...
        at aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine.roundTrip(OkHttpEngine.kt:158)
        at aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine$roundTrip$1.invokeSuspend(OkHttpEngine.kt)
        at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
        at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
        at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
Caused by: java.io.IOException: unexpected end of stream on https://example-bucket.s3.eu-central-1.amazonaws.com/...
        at okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:209)
        at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:111)
        at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:95)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:84)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:65)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at aws.smithy.kotlin.runtime.http.engine.okhttp.MetricsInterceptor.intercept(MetricsInterceptor.kt:30)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:205)
        at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:537)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.io.EOFException: \n not found: limit=0 content=
        at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:335)
        at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29)
        at okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:179)
        ... 18 common frames omitted

Here is a run with more attempts configured, where the request eventually goes through:

09:29:19.565 [default-nioEventLoopGroup-1-3] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(i
d=aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
09:29:19.566 [default-nioEventLoopGroup-1-3] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 004111cbb877796271fe307182ce84a0aa2ce6414c9a1e5a74ba3cd038d64e0b
09:29:19.574 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 2
09:29:19.574 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=a
ws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
09:29:19.574 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 8bc5844c65f86333f4cb21e7206b3ae7e471c8973789db9f86605d02ddc5f0a4
09:29:19.588 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 3
09:29:19.588 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=a
ws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
09:29:19.588 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 9d2db5c865b79490bb54de1f21d43909e23a0ab9611ebd4f488ad00af09c8f56
09:29:19.609 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 4
09:29:19.609 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=a
ws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
09:29:19.609 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 74a7dffb53a0ab23be7e32814c42dbe526e96b768e71ba406e13b4a374bb3814
09:29:19.621 [DefaultDispatcher-worker-18] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 5
09:29:19.621 [DefaultDispatcher-worker-18] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=
aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
09:29:19.621 [DefaultDispatcher-worker-18] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 549aae8fd9ed87f1ffdd3117dbfcf69b9522946925a2a97356a31daac8b58aa0
09:29:19.653 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 6
09:29:19.653 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=a
ws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
09:29:19.653 [DefaultDispatcher-worker-2] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: c1e9643a2ffa94e87ad7d1828c7aaf215374963703785314552aefceb0a4c357
09:29:19.832 [DefaultDispatcher-worker-18] DEBUG httpTraceMiddleware - HttpResponse: 200: OK

Steps to Reproduce

The framework in use is Micronaut, version 4.2.2. The S3 bucket itself contains several objects and returns 10 elements for the prefix key.

The code was started locally via Gradle (version 8.4) and invoked through the Micronaut controller via Postman. Note that the same behavior is present in the cloud environment.

import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.model.ListObjectsV2Request
import aws.sdk.kotlin.services.s3.model.Object
import io.micronaut.context.annotation.Property
import jakarta.inject.Singleton
import java.util.UUID

@Singleton
class Service(private val s3: S3Client) : Logger {
    @Property(name = "bucket-name")
    private lateinit var awsbucket: String

    suspend fun listObjects(uuid: UUID): List<Object>? {
        val request = ListObjectsV2Request {
            bucket = awsbucket
            prefix = "documents/$uuid"
        }
        return s3.listObjectsV2(request).contents
    }
}

with this factory

import aws.sdk.kotlin.services.s3.S3Client
import io.micronaut.context.annotation.Factory
import io.micronaut.context.annotation.Property
import jakarta.inject.Singleton

@Factory
class S3Factory {
    @Singleton
    fun getS3Client(): S3Client = S3Client {
        region = "eu-central-1"
    }
}

with this build.gradle.kts file

plugins {
    id("org.jetbrains.kotlin.jvm") version "1.9.22"
    id("org.jetbrains.kotlin.plugin.allopen") version "1.9.22"
    id("com.google.devtools.ksp") version "1.9.22-1.0.17"
    id("com.github.johnrengelman.shadow") version "8.1.1"
    id("io.micronaut.application") version "4.3.2"
    id("io.micronaut.aot") version "4.3.2"
}

version = "0.1"
group = "com.example"

val kotlinVersion="1.9.22"
repositories {
    mavenCentral()
}

dependencies {
    ksp("io.micronaut:micronaut-http-validation")
    ksp("io.micronaut.serde:micronaut-serde-processor")
    implementation("io.micronaut.kotlin:micronaut-kotlin-runtime")
    implementation("io.micronaut.serde:micronaut-serde-jackson")
    implementation("org.jetbrains.kotlin:kotlin-reflect:${kotlinVersion}")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8:${kotlinVersion}")
    compileOnly("io.micronaut:micronaut-http-client")
    runtimeOnly("ch.qos.logback:logback-classic")
    runtimeOnly("com.fasterxml.jackson.module:jackson-module-kotlin")
    testImplementation("io.micronaut:micronaut-http-client")
    implementation("aws.sdk.kotlin:s3:1.0.54")

}


application {
    mainClass.set("com.example.ApplicationKt")
}
java {
    sourceCompatibility = JavaVersion.toVersion("21")
}


graalvmNative.toolchainDetection.set(false)
micronaut {
    runtime("netty")
    testRuntime("junit5")
    processing {
        incremental(true)
        annotations("com.example.*")
    }
    aot {
    // Please review carefully the optimizations enabled below
    // Check https://micronaut-projects.github.io/micronaut-aot/latest/guide/ for more details
        optimizeServiceLoading.set(false)
        convertYamlToJava.set(false)
        precomputeOperations.set(true)
        cacheEnvironment.set(true)
        optimizeClassLoading.set(true)
        deduceEnvironment.set(true)
        optimizeNetty.set(true)
    }
}
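
The controller that triggers the call is not shown above; a minimal hypothetical version wired to the Service could look like this (the route and names are illustrative, not part of the original setup):

import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get
import java.util.UUID

// Hypothetical controller used to invoke the service via Postman.
@Controller("/documents")
class DocumentController(private val service: Service) {

    @Get("/{uuid}")
    suspend fun list(uuid: UUID) = service.listObjects(uuid)
}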

Possible Solution

No response

Context

I want to show the user the s3 objects which the user has saved in the cloud via a web app. However at some point the request fails randomly (locally and in the cloud).

AWS Kotlin SDK version used

1.0.54

Platform (JVM/JS/Native)

Java 21.0.2 with gradle 8.4

Operating System and version

Windows 10 Pro

@Sebo11 Sebo11 added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Feb 12, 2024
@Sebo11
Author

Sebo11 commented Feb 14, 2024

Hello, I created a new Micronaut project (version 4.3.1) and tested the behavior again. It still fails occasionally with the same error. However, in the fresh application it only needs 2 attempts instead of 6. I can't explain why exactly as the dependencies and the setup are the same in my opinion.

I don't know if it is worth looking into.

@Sebo11 Sebo11 closed this as completed Feb 14, 2024

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

@Sebo11 Sebo11 reopened this Feb 14, 2024
@lauzadis
Member

Hi, thanks for the report! I've set up a local application using Micronaut, but I can't replicate this behavior.

Can you please configure logMode = LogMode.LogRequest on the client and share some logs (with sensitive information redacted) demonstrating the problematic requests being retried and then finally succeeding? It will help determine what is changing in the request to make it succeed.
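
A minimal sketch of that configuration, applied to the factory from the issue description (the function name is illustrative):

import aws.sdk.kotlin.services.s3.S3Client
import aws.smithy.kotlin.runtime.client.LogMode

// Sketch: enable request logging on the client; other LogMode values exist if responses are needed too.
fun getS3ClientWithRequestLogging(): S3Client = S3Client {
    region = "eu-central-1"
    logMode = LogMode.LogRequest
}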

@lauzadis lauzadis added response-requested Waiting on additional info and feedback. Will move to 'closing-soon' in 5 days. and removed needs-triage This issue or PR still needs to be triaged. labels Feb 19, 2024
@Sebo11
Author

Sebo11 commented Feb 26, 2024

Hello, sorry for the late response. Here are the requested logs :)

As always, the sixth attempt somehow gets through. However, as mentioned before, it does not happen often, roughly every 20 requests or so.

15:24:29.029 [default-nioEventLoopGroup-1-3] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(i
d=aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
15:24:29.029 [default-nioEventLoopGroup-1-3] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 416437ca1204259b13badc097e54b87aab915e307ddc825c48a9d4efa74b20e3
15:24:29.030 [default-nioEventLoopGroup-1-3] DEBUG httpTraceMiddleware - HttpRequest:
GET /?list-type=2&prefix=documents%2Ffc6cb449-326e-4926-b6a7-58599791a0d7
Host: https://example-bucket.s3.eu-central-1.amazonaws.com
User-Agent: aws-sdk-kotlin/1.0.54 ua/2.0 api/s3#1.0.54 os/windows#10.0 lang/kotlin#1.9.22 md/javaVersion#21.0.2 md/jvmName#Java_HotSpotTM_64-Bit_Server_VM md/jvmVersion#21.0.2+13-LTS-58
x-amz-user-agent: aws-sdk-kotlin/1.0.54
amz-sdk-invocation-id: 551be614-830f-4bad-a4b8-56e242692085
amz-sdk-request: attempt=1; max=10
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240226T142429Z
X-Amz-Security-Token: securityToken
Authorization: AWS4-HMAC-SHA256 Credential=AAAAAAAAAAAAAAAAAAAA/20240226/eu-central-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=416437ca1204259b13badc097e54b87aab915e307ddc825c48a9d4efa74b20e3       


15:24:29.449 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 2
15:24:29.449 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=
aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
15:24:29.450 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 10f692f13983e1b42442bb12f138fbde4ee6f8ea0e1c0fac80bc77b66c6653e1
15:24:29.450 [DefaultDispatcher-worker-13] DEBUG httpTraceMiddleware - HttpRequest:
GET /?list-type=2&prefix=documents%2Ffc6cb449-326e-4926-b6a7-58599791a0d7
Host: https://example-bucket.s3.eu-central-1.amazonaws.com
User-Agent: aws-sdk-kotlin/1.0.54 ua/2.0 api/s3#1.0.54 os/windows#10.0 lang/kotlin#1.9.22 md/javaVersion#21.0.2 md/jvmName#Java_HotSpotTM_64-Bit_Server_VM md/jvmVersion#21.0.2+13-LTS-58
x-amz-user-agent: aws-sdk-kotlin/1.0.54
amz-sdk-invocation-id: 551be614-830f-4bad-a4b8-56e242692085
amz-sdk-request: attempt=2; max=10
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240226T142429Z
X-Amz-Security-Token: securityToken
Authorization: AWS4-HMAC-SHA256 Credential=AAAAAAAAAAAAAAAAAAAA/20240226/eu-central-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=10f692f13983e1b42442bb12f138fbde4ee6f8ea0e1c0fac80bc77b66c6653e1       


15:24:29.795 [DefaultDispatcher-worker-20] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 3
15:24:29.796 [DefaultDispatcher-worker-20] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=
aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
15:24:29.796 [DefaultDispatcher-worker-20] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: f3b55ed40eefbb74e0e41ca9496e1029f8ff351d68fdfaa9cea3c37b1ec427b7
15:24:29.797 [DefaultDispatcher-worker-20] DEBUG httpTraceMiddleware - HttpRequest:
GET /?list-type=2&prefix=documents%2Ffc6cb449-326e-4926-b6a7-58599791a0d7
Host: https://example-bucket.s3.eu-central-1.amazonaws.com
User-Agent: aws-sdk-kotlin/1.0.54 ua/2.0 api/s3#1.0.54 os/windows#10.0 lang/kotlin#1.9.22 md/javaVersion#21.0.2 md/jvmName#Java_HotSpotTM_64-Bit_Server_VM md/jvmVersion#21.0.2+13-LTS-58
x-amz-user-agent: aws-sdk-kotlin/1.0.54
amz-sdk-invocation-id: 551be614-830f-4bad-a4b8-56e242692085
amz-sdk-request: attempt=3; max=10
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240226T142429Z
X-Amz-Security-Token: securityToken
Authorization: AWS4-HMAC-SHA256 Credential=AAAAAAAAAAAAAAAAAAAA/20240226/eu-central-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=f3b55ed40eefbb74e0e41ca9496e1029f8ff351d68fdfaa9cea3c37b1ec427b7       


15:24:30.066 [DefaultDispatcher-worker-20] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 4
15:24:30.067 [DefaultDispatcher-worker-20] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=
aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
15:24:30.068 [DefaultDispatcher-worker-20] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: c46c59d8f3bc4990336f52ebc36fcc8e3b70ce8e037cdf05fcca13aa7ff81937
15:24:30.068 [DefaultDispatcher-worker-20] DEBUG httpTraceMiddleware - HttpRequest:
GET /?list-type=2&prefix=documents%2Ffc6cb449-326e-4926-b6a7-58599791a0d7
Host: https://example-bucket.s3.eu-central-1.amazonaws.com
User-Agent: aws-sdk-kotlin/1.0.54 ua/2.0 api/s3#1.0.54 os/windows#10.0 lang/kotlin#1.9.22 md/javaVersion#21.0.2 md/jvmName#Java_HotSpotTM_64-Bit_Server_VM md/jvmVersion#21.0.2+13-LTS-58
x-amz-user-agent: aws-sdk-kotlin/1.0.54
amz-sdk-invocation-id: 551be614-830f-4bad-a4b8-56e242692085
amz-sdk-request: attempt=4; max=10
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240226T142430Z
X-Amz-Security-Token: securityToken
Authorization: AWS4-HMAC-SHA256 Credential=AAAAAAAAAAAAAAAAAAAA/20240226/eu-central-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=c46c59d8f3bc4990336f52ebc36fcc8e3b70ce8e037cdf05fcca13aa7ff81937       


15:24:30.315 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)
15:24:30.315 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Fill pool skipped, pool has sufficient level or currently being filled.
15:24:30.804 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 5
15:24:30.804 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=
aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
15:24:30.805 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: 001bf3ba8b261c022ce2f040ad2c5073cf30d5841e7ee312a89842c28c1d2657
15:24:30.805 [DefaultDispatcher-worker-13] DEBUG httpTraceMiddleware - HttpRequest:
GET /?list-type=2&prefix=documents%2Ffc6cb449-326e-4926-b6a7-58599791a0d7
Host: https://example-bucket.s3.eu-central-1.amazonaws.com
User-Agent: aws-sdk-kotlin/1.0.54 ua/2.0 api/s3#1.0.54 os/windows#10.0 lang/kotlin#1.9.22 md/javaVersion#21.0.2 md/jvmName#Java_HotSpotTM_64-Bit_Server_VM md/jvmVersion#21.0.2+13-LTS-58
x-amz-user-agent: aws-sdk-kotlin/1.0.54
amz-sdk-invocation-id: 551be614-830f-4bad-a4b8-56e242692085
amz-sdk-request: attempt=5; max=10
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240226T142430Z
X-Amz-Security-Token: securityToken
Authorization: AWS4-HMAC-SHA256 Credential=AAAAAAAAAAAAAAAAAAAA/20240226/eu-central-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=001bf3ba8b261c022ce2f040ad2c5073cf30d5841e7ee312a89842c28c1d2657       


15:24:31.931 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.h.middleware.RetryMiddleware - retrying request, attempt 6
15:24:31.933 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.http.operation.AuthHandler - resolved endpoint: Endpoint(uri=https://https://example-bucket.s3.eu-central-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=
aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=eu-central-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})
15:24:31.934 [DefaultDispatcher-worker-13] DEBUG a.s.k.r.a.a.DefaultAwsSignerImpl - Calculated signature: b443cc2ec0581a82ab2cc410678b2e67eef0a617a718837ac31908d5562f7235
15:24:31.935 [DefaultDispatcher-worker-13] DEBUG httpTraceMiddleware - HttpRequest:
GET /?list-type=2&prefix=documents%2Ffc6cb449-326e-4926-b6a7-58599791a0d7
Host: https://example-bucket.s3.eu-central-1.amazonaws.com
User-Agent: aws-sdk-kotlin/1.0.54 ua/2.0 api/s3#1.0.54 os/windows#10.0 lang/kotlin#1.9.22 md/javaVersion#21.0.2 md/jvmName#Java_HotSpotTM_64-Bit_Server_VM md/jvmVersion#21.0.2+13-LTS-58
x-amz-user-agent: aws-sdk-kotlin/1.0.54
amz-sdk-invocation-id: 551be614-830f-4bad-a4b8-56e242692085
amz-sdk-request: attempt=6; max=10
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240226T142431Z
X-Amz-Security-Token: securityToken
Authorization: AWS4-HMAC-SHA256 Credential=AAAAAAAAAAAAAAAAAAAA/20240226/eu-central-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=b443cc2ec0581a82ab2cc410678b2e67eef0a617a718837ac31908d5562f7235       


15:24:32.099 [DefaultDispatcher-worker-20] DEBUG httpTraceMiddleware - HttpResponse: 200: OK

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to 'closing-soon' in 5 days. label Feb 26, 2024
@ianbotsf ianbotsf added the p2 This is a standard priority issue label Apr 4, 2024
@cloudshiftchris

We're experiencing the same exception with S3 (in our case it's on GetObject; we aren't doing a ListObjects in this use case).

Context:

  • Fargate job run on demand that does a bunch of processing, orchestrating various AWS services (S3, Dynamo, RDS, Secrets Manager). Only the S3 client is experiencing these retries.
  • The S3 client (and other AWS clients) use default configurations (S3Client.fromEnvironment() etc.), i.e. 3 retries. They are singletons established at the start of the job and reused.
  • While there is some coroutine parallelism, the retries occur on most S3.GetObject requests even in the initial, sequential sections before hitting the parallelism.
  • Sometimes it retries once, sometimes twice, and it fails when it exceeds the 3-attempt limit.
  • Latest AWS Kotlin SDK 1.3.x (we upgrade this weekly via Dependabot and have been having this issue for a few months now).

Below is the entire request that was retried (TRACE-level output); you can see the initial request, the "call failed" event (there is no response, just the EOFException), the retry setup, the establishment of a new connection, and a second attempt. It seems like the initial connection has somehow been closed.

2024-10-09T15:09:44.204-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.http.operation.OperationHandler","message":"operation started","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.205-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.http.operation.AuthHandler","message":"resolved endpoint: Endpoint(uri=https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=us-east-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.210-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.auth.awssigning.DefaultAwsSignerImpl","message":"Canonical request:\nGET\n\/mssql\/5.0.1-alpha.0.20241009194137961%2Bdev\/mssql-fingerprint.zip\nx-id=GetObject\namz-sdk-invocation-id:ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7\namz-sdk-request:attempt=1; max=3\nhost:marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\nx-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nx-amz-date:20241009T210944Z\nx-amz-security-token:***\nx-amz-user-agent:aws-sdk-kotlin\/1.3.47\n\namz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent\ne3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.214-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.auth.awssigning.DefaultAwsSignerImpl","message":"String to sign:\nAWS4-HMAC-SHA256\n20241009T210944Z\n20241009\/us-east-1\/s3\/aws4_request\n6d82ce770911a8184749db5e349dcca5a2a7c709de9cb8c0ccb39add9a62b390","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.214-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.auth.awssigning.DefaultAwsSignerImpl","message":"Calculated signature: cc5865ae501a7f9cb5b0fbd991a9ef0940e332ba1372f373e3c507afbabb692b","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.215-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"httpTraceMiddleware","message":"HttpRequest:\nGET \/mssql\/5.0.1-alpha.0.20241009194137961%2Bdev\/mssql-fingerprint.zip?x-id=GetObject\r\nHost: marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\r\nUser-Agent: aws-sdk-kotlin\/1.3.47 ua\/2.1 api\/s3#1.3.47 os\/linux#5.10.226-214.879.amzn2.x86_64 lang\/kotlin#2.0.20 md\/javaVersion#21.0.4 md\/jvmName#OpenJDK_64-Bit_Server_VM md\/jvmVersion#21.0.4+7-LTS exec-env\/AWS_ECS_FARGATE m\/E\r\nx-amz-user-agent: aws-sdk-kotlin\/1.3.47\r\namz-sdk-invocation-id: ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7\r\namz-sdk-request: attempt=1; max=3\r\nX-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\r\nX-Amz-Date: 20241009T210944Z\r\nX-Amz-Security-Token: ***\r\nAuthorization: AWS4-HMAC-SHA256 Credential=ASIAY6YOVYHTRPQJL243\/20241009\/us-east-1\/s3\/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=cc5865ae501a7f9cb5b0fbd991a9ef0940e332ba1372f373e3c507afbabb692b\r\n\r\n","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.220-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"call started","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.220-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"connection acquired: conn(id=374268970)=Connection{marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com:443, proxy=DIRECT hostAddress=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/54.231.224.138:443 cipherSuite=TLS_AES_128_GCM_SHA256 protocol=http\/1.1}; connPool: total=2, idle=1","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.220-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"sending request headers","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.221-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"finished sending request headers","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.222-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"response failed","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"},"stacktrace":"java.io.IOException: unexpected end of stream on https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...\n\tat okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:220)\n\tat okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:114)\n\tat okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:93)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:72)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat aws.smithy.kotlin.runtime.http.engine.okhttp.MetricsInterceptor.intercept(MetricsInterceptor.kt:32)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:203)\n\tat okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:527)\n\tat java.base\/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base\/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base\/java.lang.Thread.run(Thread.java:1583)\nCaused by: java.io.EOFException: \\n not found: limit=0 content=…\n\tat okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:341)\n\tat okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29)\n\tat okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:188)\n\t... 18 common frames omitted\n"}
2024-10-09T15:09:44.222-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"connection released: conn(id=374268970)=Connection{marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com:443, proxy=DIRECT hostAddress=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/54.231.224.138:443 cipherSuite=TLS_AES_128_GCM_SHA256 protocol=http\/1.1}; connPool: total=1, idle=1","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.223-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"call failed","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"},"stacktrace":"java.io.IOException: unexpected end of stream on https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...\n\tat okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:220)\n\tat okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:114)\n\tat okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:93)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:72)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat aws.smithy.kotlin.runtime.http.engine.okhttp.MetricsInterceptor.intercept(MetricsInterceptor.kt:32)\n\tat okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126)\n\tat okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:203)\n\tat okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:527)\n\tat java.base\/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base\/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base\/java.lang.Thread.run(Thread.java:1583)\nCaused by: java.io.EOFException: \\n not found: limit=0 content=…\n\tat okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:341)\n\tat okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29)\n\tat okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:188)\n\t... 18 common frames omitted\n"}
2024-10-09T15:09:44.224-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.awsprotocol.ClockSkewInterceptor","message":"service did not return \"Date\" header, skipping skew calculation","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.314-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.http.middleware.RetryMiddleware","message":"retrying request, attempt 2","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.315-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.http.operation.AuthHandler","message":"resolved endpoint: Endpoint(uri=https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com, headers=null, attributes={AttributeKey(aws.smithy.kotlin#endpointAuthSchemes)=[AuthOptionImpl(schemeId=AuthSchemeId(id=aws.auth#sigv4), attributes={AttributeKey(aws.smithy.kotlin.signing#AwsSigningRegion)=us-east-1, AttributeKey(aws.smithy.kotlin.signing#AwsSigningService)=s3, AttributeKey(aws.smithy.kotlin.signing#UseDoubleUriEncode)=false})]})","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.317-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.auth.awssigning.DefaultAwsSignerImpl","message":"Canonical request:\nGET\n\/mssql\/5.0.1-alpha.0.20241009194137961%2Bdev\/mssql-fingerprint.zip\nx-id=GetObject\namz-sdk-invocation-id:ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7\namz-sdk-request:attempt=2; max=3\nhost:marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\nx-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nx-amz-date:20241009T210944Z\nx-amz-security-token:***\nx-amz-user-agent:aws-sdk-kotlin\/1.3.47\n\namz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent\ne3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.318-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.auth.awssigning.DefaultAwsSignerImpl","message":"String to sign:\nAWS4-HMAC-SHA256\n20241009T210944Z\n20241009\/us-east-1\/s3\/aws4_request\n8593f46cf4df7a5ddff7fd66638b2a5a4a17f1985761725ba771cd852b00b4ea","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.325-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"aws.smithy.kotlin.runtime.auth.awssigning.DefaultAwsSignerImpl","message":"Calculated signature: 441fe3e2d3b0ac6ec81c66da5158737494bbc04586c815ec0705f622f2c34543","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.328-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-4","logger":"httpTraceMiddleware","message":"HttpRequest:\nGET \/mssql\/5.0.1-alpha.0.20241009194137961%2Bdev\/mssql-fingerprint.zip?x-id=GetObject\r\nHost: marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\r\nUser-Agent: aws-sdk-kotlin\/1.3.47 ua\/2.1 api\/s3#1.3.47 os\/linux#5.10.226-214.879.amzn2.x86_64 lang\/kotlin#2.0.20 md\/javaVersion#21.0.4 md\/jvmName#OpenJDK_64-Bit_Server_VM md\/jvmVersion#21.0.4+7-LTS exec-env\/AWS_ECS_FARGATE m\/E\r\nx-amz-user-agent: aws-sdk-kotlin\/1.3.47\r\namz-sdk-invocation-id: ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7\r\namz-sdk-request: attempt=2; max=3\r\nX-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\r\nX-Amz-Date: 20241009T210944Z\r\nX-Amz-Security-Token: ***\r\nAuthorization: AWS4-HMAC-SHA256 Credential=ASIAY6YOVYHTRPQJL243\/20241009\/us-east-1\/s3\/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-user-agent, Signature=441fe3e2d3b0ac6ec81c66da5158737494bbc04586c815ec0705f622f2c34543\r\n\r\n","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.333-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-6","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"call started","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.334-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"proxy select start: url=https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.334-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"proxy select end: url=https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/; proxies=[DIRECT]","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.334-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"dns query: domain=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.404-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"dns resolved: domain=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com; records=[marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/54.231.224.138, marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/52.217.163.226, marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/52.216.210.242, marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/52.216.77.88, marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/16.15.185.14, marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/52.217.100.40, marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/3.5.24.120, marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/3.5.17.103]","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.404-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp connect https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"starting connection: addr=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/54.231.224.138:443; proxy=DIRECT","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.406-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"initiating TLS connection","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.435-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"TLS connect end: handshake=Handshake{tlsVersion=TLS_1_3 cipherSuite=TLS_AES_128_GCM_SHA256 peerCertificates=[CN=s3.amazonaws.com, CN=Amazon RSA 2048 M01, O=Amazon, C=US, CN=Amazon Root CA 1, O=Amazon, C=US] localCertificates=[]}","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.438-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"connection established: addr=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/54.231.224.138:443; proxy=DIRECT; protocol=http\/1.1","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.439-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"connection acquired: conn(id=1940407935)=Connection{marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com:443, proxy=DIRECT hostAddress=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/54.231.224.138:443 cipherSuite=TLS_AES_128_GCM_SHA256 protocol=http\/1.1}; connPool: total=3, idle=0","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.439-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"sending request headers","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.439-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"finished sending request headers","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.573-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"response headers start","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.574-06:00[America/Denver] {"level":"TRACE","thread":"OkHttp https:\/\/marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/...","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"response headers end: contentLengthHeader=0","keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.579-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-2","logger":"httpTraceMiddleware","message":"HttpResponse:\nHTTP 200: OK\r\naccept-ranges: bytes\r\ncontent-length: 276044\r\ncontent-type: application\/octet-stream\r\ndate: Wed, 09 Oct 2024 21:09:45 GMT\r\netag: \"23537892b77b791940be4cbb8e45592e\"\r\nlast-modified: Wed, 09 Oct 2024 20:11:36 GMT\r\nserver: AmazonS3\r\nx-amz-id-2: MQLG0jGFfbzBsW8Jdmgr+5HaBu3MM9f+NaNtq6D0UWcZYg2w1MC395d1ineBuEPQ4+haaOTBRIM=\r\nx-amz-request-id: D4E03NJPDJSP3QDJ\r\nx-amz-server-side-encryption: AES256\r\nx-amz-version-id: sjBG4eRd67vhnynCfxT8DbDP_uOhF8z_\r\n\r\n","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.580-06:00[America/Denver] {"level":"DEBUG","thread":"DefaultDispatcher-worker-2","logger":"aws.smithy.kotlin.runtime.awsprotocol.ClockSkewInterceptor","message":"service did not return \"Date\" header, skipping skew calculation","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.581-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-2","logger":"aws.smithy.kotlin.runtime.http.operation.OperationHandler","message":"operation completed successfully","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.582-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-2","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"response body available","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.605-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-2","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"response body finished: bytesConsumed=276044","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.607-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-2","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"connection released: conn(id=1940407935)=Connection{marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com:443, proxy=DIRECT hostAddress=marko-codepipeline-poc-test.s3.us-east-1.amazonaws.com\/54.231.224.138:443 cipherSuite=TLS_AES_128_GCM_SHA256 protocol=http\/1.1}; connPool: total=3, idle=3","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}
2024-10-09T15:09:44.608-06:00[America/Denver] {"level":"TRACE","thread":"DefaultDispatcher-worker-2","logger":"aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine","message":"call complete","mdc":{"tenantId":"8e08ebca-16d4-49fa-baac-cc29a9790305"},"keyValues":{"rpc":"S3.GetObject","sdkInvocationId":"ce8b1d40-2a69-4d91-9c4f-ea883ec72ee7"}}

@cloudshiftchris

Well, took a bit to find a minimal reproducer, and boy is it minimal - this snippet causes retries due to the connection being closed.

import aws.sdk.kotlin.runtime.auth.credentials.DefaultChainCredentialsProvider
import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.model.GetObjectRequest
import aws.smithy.kotlin.runtime.content.writeToFile
import kotlinx.coroutines.delay
import java.io.File
import java.util.UUID
import kotlin.time.Duration.Companion.seconds

suspend fun main() {

    val s3 = S3Client.fromEnvironment {
        // CUSTOM: for running locally
        region = "us-east-1"
        credentialsProvider = DefaultChainCredentialsProvider(profileName = "***")

        retryStrategy {
            maxAttempts = 1 // make retries fail faster for diagnostic purposes
        }
    }

    repeat(7) {
        val getObjectRequest = GetObjectRequest {
            bucket = "***"
            key = "***"  // ~262k ZIP file
        }

        val outputFile = File(System.getProperty("java.io.tmpdir")).resolve("${UUID.randomUUID()}.zip")
        s3.getObject(getObjectRequest) { resp -> resp.body?.writeToFile(outputFile) }
        outputFile.delete()
        delay(15.seconds) // also fails at 10s, 15s, not at 5s ???
    }
}

@ianbotsf
Contributor

I've been able to reproduce the issue with the code you've given @cloudshiftchris and I've begun investigating possible causes and solutions.

@cloudshiftchris

cloudshiftchris commented Oct 12, 2024

Additional context (perhaps you've seen this already; it may not be related) - the application never exits. Even with a single request (no retry / failure) there is a thread stuck in the background.

EDIT: ok, it does exit - after about a minute or so, presumably when some timeout happens.


suspend fun main() {

    val s3 = S3Client.fromEnvironment {
        // CUSTOM: for running locally
        region = "us-east-1"
        credentialsProvider = DefaultChainCredentialsProvider(profileName = "***")

        retryStrategy {
            maxAttempts = 1 // make retries fail faster for diagnostic purposes
        }
    }

    s3.use {
        val getObjectRequest = GetObjectRequest {
            bucket = "***"
            key = "***"  // ~262k ZIP file
        }

        val outputFile = File(System.getProperty("java.io.tmpdir")).resolve("${UUID.randomUUID()}.zip")
        s3.getObject(getObjectRequest) { resp -> resp.body?.writeToFile(outputFile) }
        outputFile.delete()

        println("Done")
    }
    /* Process never exits; the below thread is blocked waiting for a response
        "OkHttp portal.sso.us-east-1.amazonaws.com@4152" tid=0x26 nid=NA runnable
          java.lang.Thread.State: RUNNABLE
              at sun.nio.ch.Net.poll(Net.java:-1)
              at sun.nio.ch.NioSocketImpl.park(NioSocketImpl.java:191)
              at sun.nio.ch.NioSocketImpl.park(NioSocketImpl.java:201)
              at sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:309)
              at sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
              at sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
              at java.net.Socket$SocketInputStream.read(Socket.java:1099)
              at sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
              at sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
              at sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
              at sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
              at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
              at okio.InputStreamSource.read(JvmOkio.kt:93)
              at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:153)
              at okio.RealBufferedSource.request(RealBufferedSource.kt:63)
              at okio.RealBufferedSource.require(RealBufferedSource.kt:56)
              at okhttp3.internal.http2.Http2Reader.nextFrame(Http2Reader.kt:94)
              at okhttp3.internal.http2.Http2Connection$ReaderRunnable.invoke(Http2Connection.kt:636)
              at okhttp3.internal.http2.Http2Connection$ReaderRunnable.invoke(Http2Connection.kt:627)
              at okhttp3.internal.concurrent.TaskQueue$execute$1.runOnce(TaskQueue.kt:119)
              at okhttp3.internal.concurrent.TaskRunner.runTask(TaskRunner.kt:124)
              at okhttp3.internal.concurrent.TaskRunner.access$runTask(TaskRunner.kt:44)
              at okhttp3.internal.concurrent.TaskRunner$runnable$1.run(TaskRunner.kt:73)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
              at java.lang.Thread.runWith(Thread.java:1596)
              at java.lang.Thread.run(Thread.java:1583)
     */
}

@ianbotsf
Contributor

ianbotsf commented Oct 15, 2024

Update: S3 appears to be closing idle connections faster than the default SDK idle timeout. In my testing, I've observed S3 closes unused connections after ~5 seconds. The SDK's default idle timeout is 60 seconds. The OkHttp engine should be able to handle that gracefully but it appears not to be. I'm still performing tests to verify the exact sequence of connection events to determine if there is a bug in OkHttp, the SDK, or S3.

That said, there are a few workarounds I've been able to verify:

  • Reduce the SDK's idle timeout to below what S3's service-side timeout appears to be:
    S3Client.fromEnvironment {
        httpClient {
            connectionIdleTimeout = 3.seconds
        }
    }
  • Use the CRT HTTP engine instead of OkHttp:
    S3Client.fromEnvironment {
        httpClient(CrtHttpEngine)
    }

Can you give these a try in your environment and verify whether they work for you?

@cloudshiftchris

cloudshiftchris commented Oct 15, 2024

Awesome, thanks @ianbotsf. We'll redeploy this morning (with an idle timeout of 3.seconds) and validate.

@cloudshiftchris

cloudshiftchris commented Oct 15, 2024

@ianbotsf we tested with the Java SDK v2 (it doesn't fail). Looking through the debug logs & Apache HttpClient code, there's the below check that HttpClient performs when validating a connection before borrowing it from the pool; on the second attempt (after waiting 10s) it hits return bytesRead < 0; with bytesRead == -1, at which point the connection is closed and a new one is established.

org.apache.http.impl.BHttpConnectionBase#isStale

    public boolean isStale() {
        if (!isOpen()) {
            return true;
        }
        try {
            final int bytesRead = fillInputBuffer(1);
            return bytesRead < 0;
        } catch (final SocketTimeoutException ex) {
            return false;
        } catch (final IOException ex) {
            return true;
        }
    }
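For comparison, here is a minimal Kotlin sketch of the same kind of staleness probe. The isStale helper name and the 1 ms timeout are assumptions for illustration only (not SDK, OkHttp, or Apache HttpClient API), and unlike Apache HttpClient this version discards any byte it happens to read, so it would only be safe on a connection that is known to be idle:

import java.io.IOException
import java.net.Socket
import java.net.SocketTimeoutException

// Sketch only: probe an idle pooled socket the way BHttpConnectionBase.isStale() does.
// A read returning -1 means the peer already closed the connection; a timeout means no
// data and no EOF arrived within 1 ms, so the connection is presumed still usable.
fun isStale(socket: Socket): Boolean {
    if (socket.isClosed || !socket.isConnected) return true
    val originalTimeout = socket.soTimeout
    return try {
        socket.soTimeout = 1 // keep the probe from blocking noticeably
        socket.getInputStream().read() < 0 // -1 => server sent FIN / closed its side
    } catch (e: SocketTimeoutException) {
        false // nothing to read yet => treat the connection as healthy
    } catch (e: IOException) {
        true // any other I/O failure => treat the connection as stale
    } finally {
        runCatching { socket.soTimeout = originalTimeout } // socket may already be closed
    }
}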

...will have further results from adjusting the idle timeout shortly.

@cloudshiftchris

@ianbotsf using the below compensates for the stale connection w/o retries:

S3Client.fromEnvironment {
    httpClient {
        connectionIdleTimeout = 3.seconds
    }
}

We've put that in one use case (component) - don't plan to propagate that elsewhere atm (we've discovered that other components have the same issue) as the 'default' S3Client configuration should just work (once this issue is fixed).

@sandrine-bedard

sandrine-bedard commented Oct 15, 2024

@cloudshiftchris : We're having the same issue (java.io.IOException: unexpected end of stream), but for PutObject. Is this the workaround?

import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.model.GetObjectRequest
import aws.sdk.kotlin.services.s3.model.PutObjectRequest
import aws.smithy.kotlin.runtime.content.ByteStream
import aws.smithy.kotlin.runtime.content.toByteArray

  private val amazonS3ClientAsync: S3Client = S3Client {
    region = Regions.DEFAULT_REGION.getName()
    httpClient {
      connectionIdleTimeout = 3.seconds
    }
  }

We're using AWS Kotlin SDK 1.3.43

@cloudshiftchris

@sandrine-bedard that isn't a fix, it's a workaround (largely for diagnostic purposes) until the SDK team can determine what an appropriate fix looks like. The SDK should "just work" in a default configuration.

@sandrine-bedard

sandrine-bedard commented Oct 15, 2024

@sandrine-bedard that isn't a fix, it's a workaround (largely for diagnostic purposes) until the SDK team can determine what an appropriate fix looks like. The SDK should "just work" in a default configuration.

@cloudshiftchris : Sorry, I didn't use the correct word. I meant "workaround". Can you confirm this is the correct workaround?

@cloudshiftchris

Hard to know if it is correct for your use case & preferences; we encountered situations where an S3 client that hasn't been used in > 5s would retry on subsequent requests (and, under load, the number of retries would exceed the default of 3 and throw an exception). Aside from confirming that the workaround does address our issue, we've opted not to widely deploy it, as we have many different components that would need to be adjusted (most aren't materially impacted by this issue; the retry limit masks it); an updated SDK (when available with a fix) will roll through via Dependabot.

@cloudshiftchris

cloudshiftchris commented Oct 16, 2024

@ianbotsf you've likely gotten well into this already - this code in aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngineConfig

        // Transient connection errors are handled by retry strategy (exceptions are wrapped and marked retryable
        // appropriately internally). We don't want inner retry logic that inflates the number of retries.
        retryOnConnectionFailure(false)

...is problematic.

It doesn't seem practical to handle these outside of the client; for example, these two scenarios are indistinguishable:

  1. Connection pool has a stale connection:
  • request is sent to server (goes nowhere);
  • client attempts to read response headers, discovers the connection is closed, throws exception
  • retry logic retries the request
  2. Connection pool has an available (non-stale) connection (or a new connection):
  • request is sent to server;
  • server/software receives request and crashes in a manner that closes the connection before response is written (i.e. it got partway into processing the request)
  • client attempts to read response headers, throws exception
  • retry logic retries the request (Oops! this can result in the same request being processed twice)

...the point being that dealing with a connection-closed exception after the fact is problematic - we can't distinguish between "connection closed because it was stale before we sent the request" and "connection closed after we sent the request", hence a "connection closed" failure cannot safely be treated as retryable for non-idempotent requests.

It isn't clear how OkHttpClient detects/retries these internally (when retryOnConnectionFailure is enabled) - maybe it has the same challenge, if it isn't precisely detecting stale connections before use.

imo, the OkHttp connection pool should skip over stale connections (as that isn't really a connection attempt) with retries kicking in after a connection is borrowed/established.

There's related discussion here: square/okhttp#7007, but the premise is flawed:

After some brief investigation, I think it’s best to fix this on the server. If it’s going to close a socket within 10 seconds of the HTTP response, it should send the Connection: close response header. That way OkHttp won’t pool the connection.

...like the server knows it may crash in 10 seconds! Servers/networks can/will do all kinds of weird and wonderful things - the client should never trust that any arbitrary server behaves correctly. In this case - if a pooled connection is closed, well, don't attempt to send anything on it - simply grab another connection from the pool (and validate that) or establish a new connection.

EDIT: there's some connection validation logic in OkHttp's connection pool but it has flawed premises:

okhttp3.internal.connection.RealConnection#isHealthy

    val idleDurationNs = synchronized(this) { nowNs - idleAtNs }
    if (idleDurationNs >= IDLE_CONNECTION_HEALTHY_NS && doExtensiveChecks) {
      return socket.isHealthy(source)
    }

okhttp3.internal.connection.CallConnectionUser#doExtensiveHealthChecks

  override fun doExtensiveHealthChecks(): Boolean {
    return chain.request.method != "GET"
  }

Why only validate connections that have been idle for more than ten seconds? The subsequent validations aren't expensive relative to dealing with exceptions later. Apache HttpClient uses a 2s default (configurable).
Why not validate GET requests - are they not worthy??? Apache HttpClient doesn't make this distinction.

(it's a sad day when comparing to Apache HttpClient...)

And the kicker is that all that mess repeats for the SDK retry count - if there are multiple connections in the pool that have gone stale (server has closed them), each one of those counts as a retry attempt, until you're out of attempts. Doesn't bode well for real-world scenarios where the server aggressively closes idle connections (S3) or a server/load balancer/network blip that drops some/all connections.
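A blunt stopgap for that failure mode (not a fix) is to give the retry strategy a larger budget, so that a few stale pooled connections in a row cannot exhaust it. A sketch, assuming the standard retry strategy shown in the reproducers above and a purely illustrative attempt count:

import aws.sdk.kotlin.services.s3.S3Client

// Sketch only: a bigger retry budget masks stale-connection failures at the cost of extra
// latency on affected requests; it does not address the root cause discussed in this issue.
suspend fun s3ClientWithLargerRetryBudget(): S3Client =
    S3Client.fromEnvironment {
        retryStrategy {
            maxAttempts = 10 // default is 3; value chosen only for illustration
        }
    }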

@cloudshiftchris

@ianbotsf as suspected in the above, there's an idempotency issue - s3.PutObject retries on HttpErrorCode(CONNECTION_CLOSED), which shouldn't happen because the timing of the connection close is unknown (is it a stale connection, or was it closed after the request was already sent?).

Reproducer:

suspend fun main() {
    testKotlinClient()
//    testJavaClient()
}

private suspend fun testKotlinClient() {
    try {
        S3Client.fromEnvironment {
            // CUSTOM: for running locally
            region = s3Region
            credentialsProvider = DefaultChainCredentialsProvider(profileName = s3Profile)

            retryStrategy {
                maxAttempts = 1 // make retries fail faster for diagnostic purposes
            }
        }.use { s3 ->
            repeat(2) {
                val getObjectRequest = GetObjectRequest {
                    bucket = s3Bucket
                    key = s3Key  // ~262k ZIP file
                }

                val testFile = File(System.getProperty("java.io.tmpdir")).resolve("test-file.txt")
                testFile.writeText("AWS Kotlin SDK S3Client test")
                s3.putObject {
                    bucket = s3Bucket
                    key = "_test/test-file.txt"
                    body = ByteStream.fromFile(testFile)
                }
                testFile.delete()

//                val outputFile = File(System.getProperty("java.io.tmpdir")).resolve("${UUID.randomUUID()}.zip")
//                s3.getObject(getObjectRequest) { resp -> resp.body?.writeToFile(outputFile) }
//                outputFile.delete()
//
                delay(7.seconds)
            }

            println("S3 request successful (AWS Kotlin SDK)")
        }
    } catch (e: Exception) {
        println("Error using AWS Kotlin SDK S3Client: ${e.message}")
        e.printStackTrace()
    }
}

@ianbotsf
Contributor

Latest updates from my investigation:

  • I can confirm that S3 is sending TLS close_notify and TCP FIN to the client after a connection's been idle for ~5 seconds
  • OkHttp does not appear to be responding to those events in real time. Only after "sending" a request to a local buffer and attempting to read the response does the condition surface.
  • Discussing with OkHttp maintainers about potential solutions inside OkHttp. They believe retryOnConnectionFailure(true) (which is the default but disabled by the SDK) is safe but cannot say for certain that non-idempotent requests would be handled properly in all cases. I'm working on a low-level repro to verify.
  • I'm also looking at implementing a workaround in the SDK involving continual connection monitoring.

I'll update this issue when I get more info or results.

@ianbotsf
Contributor

Latest updates:

  • OkHttp treats GETs and non-GETs differently. Specifically, the repro code shows the failure on GETs but changing to a non-GET (e.g., PutObject) causes the failure to disappear, even with retryOnConnectionFailure disabled.
  • I've been able to verify a POC implementation of an SDK-hosted connection monitor, which successfully detects when the connection is closed remotely and then prevents it from being grabbed from the pool (see the sketch below)
  • I'm working on another possible fix by special-casing the connection closure exceptions in the SDK's retry strategy such that they don't count against maxAttempts and are also eligible for retry immediately

Once I make progress on the SDK-level retry approach, we'll decide among the available options for the fix.
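For readers wondering what an "SDK-hosted connection monitor" might look like, below is a rough sketch built on OkHttp's public EventListener hooks. The class name, the polling interval, the raw-socket probe, and the assumption of idle HTTP/1.1 connections are illustrative choices for this example, not the SDK's actual implementation:

import java.io.IOException
import java.net.SocketTimeoutException
import java.util.concurrent.ConcurrentHashMap
import kotlin.time.Duration
import kotlin.time.Duration.Companion.milliseconds
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.Job
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import okhttp3.Call
import okhttp3.Connection
import okhttp3.EventListener

// Sketch only: watch connections while they sit idle in OkHttp's pool and close them as
// soon as the server closes its side, so the pool never hands out a half-closed socket.
// Assumes idle HTTP/1.1 connections with no other reader consuming the socket.
class IdleConnectionMonitor(
    private val pollInterval: Duration = 200.milliseconds,
    private val scope: CoroutineScope = CoroutineScope(Dispatchers.IO),
) : EventListener() {
    private val monitors = ConcurrentHashMap<Connection, Job>()

    override fun connectionReleased(call: Call, connection: Connection) {
        // The connection is going back to the pool; start polling it for a remote close.
        monitors[connection] = scope.launch {
            val socket = connection.socket()
            while (!socket.isClosed) {
                val closedByPeer = try {
                    socket.soTimeout = 1
                    socket.getInputStream().read() < 0 // -1 => FIN / close_notify received
                } catch (e: SocketTimeoutException) {
                    false // still open, just idle
                } catch (e: IOException) {
                    true
                }
                if (closedByPeer) {
                    socket.close() // a closed socket is discarded by the pool instead of reused
                    break
                }
                delay(pollInterval)
            }
        }
    }

    override fun connectionAcquired(call: Call, connection: Connection) {
        // The connection is in use again; stop probing so we don't steal response bytes.
        monitors.remove(connection)?.cancel()
    }
}

Something along these lines could be registered via OkHttpClient.Builder.eventListener(...); the mechanism that actually shipped in the SDK is the connectionIdlePollingInterval option described later in this thread.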

@cloudshiftchris

cloudshiftchris commented Oct 21, 2024

thanks @ianbotsf.

On this point:

OkHttp treats GETs and non-GETs differently. Specifically, the repro code shows the failure on GETs but changing to a non-GET (e.g., PutObject) causes the failure to disappear, even with retryOnConnectionFailure disabled.

There's a reproducer above (also included below) that shows S3.PutObject failing w/ a 7s delay - greater than the 5s for S3 to close the connection, less than OkHttp's 10s minimum idle time for stale connection checking (that only applies to non-GET requests).


suspend fun main() {
    testKotlinClient()
//    testJavaClient()
}

private suspend fun testKotlinClient() {
    try {
        S3Client.fromEnvironment {
            // CUSTOM: for running locally
            region = s3Region
            credentialsProvider = DefaultChainCredentialsProvider(profileName = s3Profile)

            retryStrategy {
                maxAttempts = 1 // make retries fail faster for diagnostic purposes
            }
        }.use { s3 ->
            repeat(2) {
                val getObjectRequest = GetObjectRequest {
                    bucket = s3Bucket
                    key = s3Key  // ~262k ZIP file
                }

                val testFile = File(System.getProperty("java.io.tmpdir")).resolve("test-file.txt")
                testFile.writeText("AWS Kotlin SDK S3Client test")
                s3.putObject {
                    bucket = s3Bucket
                    key = "_test/test-file.txt"
                    body = ByteStream.fromFile(testFile)
                }
                testFile.delete()
                delay(7.seconds)
            }

            println("S3 request successful (AWS Kotlin SDK)")
        }
    } catch (e: Exception) {
        println("Error using AWS Kotlin SDK S3Client: ${e.message}")
        e.printStackTrace()
    }
}

On this point:

I'm working on another possible fix by special-casing the connection closure exceptions in the SDK's retry strategy such that they don't count against maxAttempts and are also eligible for retry immediately

The 'connection closed' exceptions are caught after the request is sent, making it impossible to distinguish between a connection that was closed while processing the request versus a stale connection from the pool. This has implications for idempotency of retries (esp. when combined with above reproducer showing mutating operations failing).

Glad to hear that it may be possible to use an SDK-hosted connection monitor! This would be cleaner if OkHttp made the connection pool validation logic more correct / configurable.

@sandrine-bedard

@ianbotsf and @cloudshiftchris : My team implemented a 30s timeout, and we still see java.io.IOException: unexpected end of stream errors. My client is below:

private val amazonS3ClientAsync: S3Client = S3Client {
    region = Regions.DEFAULT_REGION.getName()
    httpClient {
      connectionIdleTimeout = 30.seconds
    }
}

Anything else we can try, or should we just roll back this non-blocking client entirely (and revert to the Java SDK)?

@ianbotsf
Contributor

ianbotsf commented Oct 22, 2024

@sandrine-bedard, setting the connection idle timeout to 30 seconds will not resolve the issue with S3, which automatically closes connections as quickly as ~5 seconds after they go idle. The present workarounds are still the ones listed above:

  • Reduce the SDK's idle timeout to below what S3's service-side timeout appears to be:
    S3Client.fromEnvironment {
        httpClient {
            connectionIdleTimeout = 3.seconds
        }
    }
  • Use the CRT HTTP engine instead of OkHttp:
    S3Client.fromEnvironment {
        httpClient(CrtHttpEngine)
    }

We're continuing to investigate proper fixes for this so that workarounds are no longer needed.

@sandrine-bedard

@ianbotsf : Thanks for the quick reply. I can't set httpEngine (not available). Using AWS Kotlin SDK 1.3.43, and referring to these docs

@ianbotsf
Contributor

Apologies, httpEngine is a typo. It should be httpClient. I've corrected the code snippets above.

@ianbotsf
Contributor

Latest updates:

  • My efforts to special-case the exception within the SDK retry strategy were unsuccessful, since at that level we only have vague heuristics about the underlying problem.
  • I've also attempted and failed to implement a solution that simply checks connection health on connection acquisition. The problem with that approach is that the listener event only fires after OkHttp has decided to use a connection and performed all its health checks. Consequently, the solution simply causes different errors under the same conditions.
  • The most promising SDK-based solution remains connection monitoring. The most promising OkHttp-based solution seems to be doing deeper health checks during the acquisition phase which include an attempt to read from the socket. I'll PR both ideas in the next day or so.

@ianbotsf
Contributor

A connection idle monitoring solution to address this is now in PR and being reviewed.


⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

@ianbotsf
Contributor

The PR has been approved and merged and will be available in an upcoming release. A new configuration parameter for polling idle connections has been added to OkHttp config:

S3Client.fromEnvironment {
    httpClient(OkHttpEngine) {
        connectionIdlePollingInterval = 200.milliseconds
    }
}

When set to a non-null value, this will poll connections in the pool for data at the given interval. Connections closed remotely will be detected by the poller before OkHttp goes to retrieve them from the pool. This connectionIdlePollingInterval setting defaults to null (i.e., disabled) for now to preserve existing behavior and resource utilization. We may change that default or replace this mechanism in a future minor version bump.

@ianbotsf
Contributor

The new poller was released today in SDK version 1.3.66.

@mi-kas

mi-kas commented Nov 8, 2024

@ianbotsf Even with the latest release (1.3.69), I don't see a connectionIdlePollingInterval property that is directly exposed or configurable through the SDK, as mentioned in the code block above. Could you clarify this and provide an example of how to configure an S3 client using this property?

I was able to get it working with the following dependencies in build.gradle.kts:

dependencies {
    implementation("aws.sdk.kotlin:s3:1.3.69")
    implementation("aws.smithy.kotlin:http-client-engine-okhttp:1.3.20")
}

And here's the code that sets up the S3 client:

import aws.sdk.kotlin.services.s3.S3Client
import aws.smithy.kotlin.runtime.http.engine.okhttp.OkHttpEngine
import kotlin.time.Duration.Companion.milliseconds

fun main() {
    val s3Client = S3Client {
        httpClient(OkHttpEngine) {
            connectionIdlePollingInterval = 200.milliseconds
        }
    }
}
