feat: emit metrics from CRT HTTP engine #1017
base: main
Conversation
    }
}

private fun emitConnections() {
nit / naming: emitConnectionsMetrics?
Since the class already has the word "connection" in it, renaming to simply emitMetrics.
if (sendEnd != null && receiveStart != null) {
    val ttfb = receiveStart - sendEnd
    if (ttfb.isPositive()) {
question: have you seen any instances where this is negative?
Not directly, but event streams may begin receiving traffic before sending has concluded since the communication is bidirectional and duplexed. It seemed prudent not to report TTFB in that situation.
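For illustration only, a minimal sketch of that guard under the same assumptions as the diff above; the helper name is hypothetical, and sendEnd/receiveStart stand in for the nullable timestamps the engine captures:

```kotlin
import aws.smithy.kotlin.runtime.time.Instant
import kotlin.time.Duration

// Hypothetical helper: returns time-to-first-byte only when it is meaningful.
// For duplexed event streams the response may start arriving before the
// request body finishes sending, yielding a negative value we don't report.
fun ttfbOrNull(sendEnd: Instant?, receiveStart: Instant?): Duration? {
    if (sendEnd == null || receiveStart == null) return null
    val ttfb = receiveStart - sendEnd // uses the Instant.minus operator added in this PR
    return if (ttfb.isPositive()) ttfb else null
}
```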
@@ -3,17 +3,23 @@
 * SPDX-License-Identifier: Apache-2.0
 */

@file:OptIn(ExperimentalApi::class)
question: Can this just be an opt-in on the test class?
It can actually be removed. I forgot why I added this but it's unused now!
comment: These new body types are missing tests
 * the resulting duration will be negative.
 * @param other The [Instant] marking the end of the duration
 */
public operator fun minus(other: Instant): Duration
tests?
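As a rough illustration of the kind of test being asked for (a sketch only, assuming the runtime's Instant companion factory and kotlin.test; not taken from the PR):

```kotlin
import aws.smithy.kotlin.runtime.time.Instant
import kotlin.test.Test
import kotlin.test.assertEquals
import kotlin.time.Duration.Companion.seconds

class InstantMinusOperatorTest {
    @Test
    fun testLaterMinusEarlierIsPositive() {
        val start = Instant.fromEpochSeconds(100)
        val end = Instant.fromEpochSeconds(130)
        assertEquals(30.seconds, end - start)
    }

    @Test
    fun testEarlierMinusLaterIsNegative() {
        val start = Instant.fromEpochSeconds(100)
        val end = Instant.fromEpochSeconds(130)
        assertEquals((-30).seconds, start - end)
    }
}
```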
@@ -102,4 +109,7 @@ public fun Instant.Companion.fromEpochMilliseconds(milliseconds: Long): Instant
    return fromEpochSeconds(secs, ns.toInt())
}

public fun Instant.Companion.fromEpochNanoseconds(ns: Long): Instant =
tests?
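Similarly, a hedged sketch of a test for the new conversion; the epochSeconds and nanosecondsOfSecond accessors are assumed from the existing Instant API:

```kotlin
import aws.smithy.kotlin.runtime.time.Instant
import kotlin.test.Test
import kotlin.test.assertEquals

class FromEpochNanosecondsTest {
    @Test
    fun testSplitsSecondsAndNanoseconds() {
        // 1_500_000_001 ns = 1 full second + 500_000_001 ns
        val instant = Instant.fromEpochNanoseconds(1_500_000_001L)
        assertEquals(1L, instant.epochSeconds)
        assertEquals(500_000_001, instant.nanosecondsOfSecond)
    }
}
```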
@@ -61,16 +66,22 @@ internal class ConnectionManager(
val manager = getManagerForUri(request.uri, proxyConfig)
var leaseAcquired = false

metrics.queuedRequests = pending.incrementAndGet()
correctness: If we are trying to acquire a connection then it's not queued, right? I think queued would be before we hit the requestLimiter.

metrics.queuedRequests = pending.incrementAndGet()
requestLimiter.withPermit {
    metrics.queuedRequests = pending.decrementAndGet()
}
OK, if "queued" means "before we attempt to acquire a connection" then I'm guessing that the requestsQueuedDuration measurement below is also wrong. I'll move it too.
Had to look at the definitions I gave them again and I think this would be in line with what it says:
queued=waiting to be executed (e.g. waiting for thread to be available), in-flight=actively processing
If we are to the point of acquiring a connection we are "actively processing" the request. I can see where the definition could be interpreted differently though as in "I have all the resources needed at this point to execute the request" but I think establishing a connection (or waiting on one to be available) is part of overall request processing. Curious if others disagree.
This still doesn't look correct; I don't think queuedRequests is calculated in the connection manager.
Ah, I think I was confusing the semaphore inside ConnectionManager with the one inside CrtHttpEngine. Switching.
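For illustration only, a minimal self-contained sketch of the placement the thread converges on, with queuing measured around the engine-level request limiter; the wrapper function, metrics holder, and duration field are hypothetical stand-ins, not the engine's actual types:

```kotlin
import java.util.concurrent.atomic.AtomicLong
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit
import kotlin.time.Duration
import kotlin.time.TimeSource

// Hypothetical stand-in for the engine's metrics holder.
class EngineMetricsSketch {
    var queuedRequests: Long = 0
    var requestsQueuedDuration: Duration = Duration.ZERO
}

// A request counts as "queued" only while it waits on the limiter, before any
// attempt to acquire a connection; once a permit is held it is in flight.
suspend fun <T> withQueueMetrics(
    requestLimiter: Semaphore,
    pending: AtomicLong,
    metrics: EngineMetricsSketch,
    block: suspend () -> T,
): T {
    metrics.queuedRequests = pending.incrementAndGet()
    val waitStart = TimeSource.Monotonic.markNow()
    return requestLimiter.withPermit {
        metrics.queuedRequests = pending.decrementAndGet()
        metrics.requestsQueuedDuration = waitStart.elapsedNow()
        block()
    }
}
```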
    }
}

private fun emitConnections() {
    val idleConnections = leases.availablePermits.toLong()
correctness: this semaphore isn't tied in any way to actual connections in CRT; you could have 100 max connections configured but that doesn't mean you have 100 idle connections already connected and waiting to be used. I'm thinking this would have to come from the actual CRT connection manager.
Oh jeez, good point. Yes, this is clearly flawed.
That brings up an interesting wrinkle, though. We don't have just one CRT connection manager, we have one for each host. Does the Smithy metric for acquired connections equal the sum of acquired connections across all underlying CRT connection managers?
Moreover, we have a CRT config param called maxConnections. Contrary to its name, it's used as the maximum number of connections for each underlying CRT connection manager. Meaning if maxConnections is 100 and we connect to 3 hosts, there are actually 3 underlying CRT connection managers which each have a max of 100 connections for a total of 300. How then do we measure the Smithy metric of idle connections? Is it the sum of idle connections across all underlying CRT connection managers (which may be more than maxConnections)? Or is it maxConnections minus the sum of acquired connections across all underlying CRT connection managers (which may be less than the actual number of open/idle connections)?
The fact that CRT uses a connection manager per host is an implementation detail; the config setting is at the engine level. That's what the leases semaphore is for: to control max connections across multiple managers (albeit it likely isn't perfect at ensuring we never cross the maxConnections threshold due to idle connections).
The reason I allowed each connection manager to be configured with maxConnections is that we don't know how many hosts an engine will be used to connect to; it may be 1 or it may be split across many hosts, which will require many connection managers.
I see this as an area that may cause confusion later, especially if users begin correlating the idle connections metric with connections reported by the OS or JVM. I don't have a better suggestion though given how the CRT connection manager works. I'll switch the implementation to calculate the idle connections metric as maxConnections minus the sum of acquired connections across all underlying CRT connection managers.
Oh and interrogating the number of active connections from the CRT connection managers requires another revision of aws-crt-kotlin: awslabs/aws-crt-kotlin#88
private fun emitMetrics() {
    val acquiredConnections = connManagers.values.sumOf { it.managerMetrics.leasedConcurrency }
    metrics.acquiredConnections = acquiredConnections
    metrics.idleConnections = config.maxConnections.toLong() - acquiredConnections
fix: This still isn't right; idle connections are connections established with the server that are not currently in use. If I'm not mistaken, this is just availableConcurrency, which would require the same summing across managers.
OK, sum of availableConcurrency sounds doable, although it will exceed max connections under some circumstances.
Quality Gate passed: The SonarCloud Quality Gate passed, but some issues were introduced. 1 new issue.
Issue #893
Description of changes
This change emits metrics from the CRT HTTP engine similar to the ones emitted by the OkHttp engine.
Companion PRs: awslabs/aws-crt-java#738, awslabs/aws-crt-kotlin#86, awslabs/aws-crt-kotlin#88
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.